Entity import

Every entity includes a CSV importer that validates rows client-side, uploads files, and processes data in batches with duplicate detection. The importer is accessible from the list page actions menu.

Import flow

  1. User navigates to /{entity-name}/importer (or clicks "Import" in the list actions).
  2. Uploads a CSV file, or first downloads a blank CSV template to fill in.
  3. The CSV is parsed client-side with PapaParse. Each row is validated against a Zod schema.
  4. Invalid rows are marked as errors with field-level messages. Valid rows are marked as pending.
  5. If the entity has file or image fields, a file upload step runs before the rows are sent: the importer downloads each file from the URLs in the CSV and re-uploads it to your storage.
  6. User clicks "Import". Rows are sent to the backend in chunks of 50 (configurable).
  7. The backend validates each row, checks for duplicates via an MD5 hash, and creates the entity.
  8. Results are displayed in a table with per-row success/error status.

CSV format

The CSV header row must match the field names defined in your entity schema. A downloadable template CSV is available on the import page with all expected columns pre-filled.
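Generating the blank template amounts to emitting a single header row built from the entity's field names. A minimal sketch (the field list below is illustrative; the real logic lives in ImporterForm.tsx):

```typescript
// Sketch: build a blank CSV template from an entity's field names.
// The field names passed in are hypothetical examples; the real
// importer derives them from the entity schema.
function buildTemplateCsv(fieldNames: string[]): string {
  // Quote headers that contain commas, quotes, or newlines (RFC 4180).
  const escape = (v: string) =>
    /[",\n]/.test(v) ? `"${v.replace(/"/g, '""')}"` : v;
  return fieldNames.map(escape).join(",") + "\n";
}

const template = buildTemplateCsv(["name", "email", "status"]);
// template === "name,email,status\n"
```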

Field type            | CSV format              | Example
Text                  | Plain string            | John Doe
Integer               | Number string           | 42
Decimal               | Number string           | 19.99
Date                  | ISO date                | 2024-06-15
Datetime              | ISO datetime            | 2024-06-15T14:30:00
Boolean               | true or TRUE            | true
Enumerator            | Enum value key          | active
Enumerator (multiple) | Space-separated values  | admin member
Tags                  | Space-separated strings | urgent important
Files / Images        | Space-separated URLs    | https://example.com/a.pdf https://example.com/b.pdf
Relationship (one)    | Entity ID (UUID)        | 550e8400-e29b-41d4-a716-446655440000
Relationship (many)   | Space-separated IDs     | id1 id2 id3
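Converting the formats above into typed values can be sketched as a per-field-type switch. This is a simplified stand-in for the schema-driven parsing; the field-type names are illustrative:

```typescript
// Sketch: convert a raw CSV cell into a typed value based on the
// field type. Type names are illustrative; the real importer drives
// this from the entity schema.
type FieldType =
  | "text" | "integer" | "decimal" | "date" | "datetime"
  | "boolean" | "enumerator" | "enumeratorMultiple"
  | "tags" | "files" | "relationshipOne" | "relationshipMany";

function parseCell(type: FieldType, raw: string): unknown {
  const value = raw.trim();
  switch (type) {
    case "integer":
      return parseInt(value, 10);
    case "decimal":
      return parseFloat(value);
    case "boolean":
      // Accepts "true" or "TRUE", per the table above.
      return value.toLowerCase() === "true";
    case "date":
    case "datetime":
      return new Date(value);
    case "enumeratorMultiple":
    case "tags":
    case "files":
    case "relationshipMany":
      // Multi-value columns are space-separated.
      return value === "" ? [] : value.split(/\s+/);
    default:
      // Text, single enumerators, and single relationship IDs
      // pass through as plain strings.
      return value;
  }
}

parseCell("tags", "urgent important"); // ["urgent", "important"]
parseCell("boolean", "TRUE");          // true
```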

Validation

Validation runs at two stages:

  • Client-side — each row is parsed against the entity's Zod schema before any network request. Validation errors (required fields, invalid formats, min/max constraints) are displayed inline in the import table.
  • Server-side — the backend re-validates each row and checks unique constraints. If a row fails, it's marked as an error without affecting other rows in the batch.
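The client-side stage produces field-level error messages per row. A minimal sketch of that contract, hand-rolled here instead of Zod to stay dependency-free (the field rules are illustrative, not the entity's actual schema):

```typescript
// Sketch: per-row validation returning field-level error messages,
// mirroring the shape of a schema "safe parse" result. The rules
// below are illustrative examples only.
interface RowError { field: string; message: string; }

function validateRow(row: Record<string, string>): RowError[] {
  const errors: RowError[] = [];
  if (!row.name?.trim()) {
    errors.push({ field: "name", message: "Name is required" });
  }
  if (row.email && !/^[^@\s]+@[^@\s]+$/.test(row.email)) {
    errors.push({ field: "email", message: "Invalid email format" });
  }
  // An empty array means the row is marked "pending";
  // any errors mark it as "error" with inline messages.
  return errors;
}
```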

Duplicate detection

Each row gets an MD5 hash generated from its CSV content. The backend stores this hash on the created entity. If a row with the same hash already exists in the organization, it's rejected as a duplicate. This prevents accidentally importing the same CSV twice.
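The hashing scheme can be sketched with Node's crypto module. The exact hash input the backend uses is not specified here; joining values with a separator (an assumption in this sketch) keeps ["ab", "c"] distinct from ["a", "bc"]:

```typescript
import { createHash } from "node:crypto";

// Sketch: derive a duplicate-detection hash from a row's raw CSV
// values. The null-byte separator is an assumption of this sketch;
// the backend's exact hash input may differ.
function rowHash(values: string[]): string {
  return createHash("md5").update(values.join("\u0000")).digest("hex");
}

// The backend rejects a row whose hash already exists in the
// organization; a Set stands in for that lookup here.
const seen = new Set<string>();
function isDuplicate(values: string[]): boolean {
  const h = rowHash(values);
  if (seen.has(h)) return true;
  seen.add(h);
  return false;
}
```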

File and image uploads

When the entity has file or image fields, the importer adds an upload step before processing:

  1. For each row, the importer downloads files from the URLs listed in the CSV.
  2. Files are uploaded to the configured storage backend.
  3. The CSV URL values are replaced with the uploaded file metadata.
  4. Already-uploaded files are cached to avoid duplicate uploads.

If some files in a row fail to download or upload, that row is marked as an error.
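The upload step with URL-level caching can be sketched as follows. The download and upload functions are injected so the real HTTP and storage clients (not shown here) can be swapped in; both names and the Uploaded shape are illustrative:

```typescript
// Sketch: download files listed in a row's CSV cell and re-upload
// them to storage, caching by URL so repeated URLs across rows are
// only transferred once. Names and types are illustrative.
type Uploaded = { url: string; storageKey: string };

async function uploadRowFiles(
  urls: string[],
  download: (url: string) => Promise<Uint8Array>,
  upload: (data: Uint8Array) => Promise<Uploaded>,
  cache: Map<string, Uploaded>,
): Promise<Uploaded[]> {
  const results: Uploaded[] = [];
  for (const url of urls) {
    const cached = cache.get(url);
    if (cached) { results.push(cached); continue; }
    // Any failure here rejects the promise, which marks the
    // whole row as an error.
    const data = await download(url);
    const uploaded = await upload(data);
    cache.set(url, uploaded);
    results.push(uploaded);
  }
  return results;
}
```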

Batch processing

Import runs in configurable chunks (default: 50 rows per request). This provides:

  • A progress bar showing completion percentage
  • The ability to pause and resume the import at any time
  • Error isolation — a failed chunk doesn't block subsequent ones
  • TanStack Query cache invalidation after each chunk for live list updates
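The chunking itself is straightforward; a sketch, with the default size mirroring the documented 50 rows per request:

```typescript
// Sketch: split validated rows into fixed-size chunks. Each chunk
// becomes one request to the backend; pausing simply stops issuing
// the next chunk, and a failed chunk is recorded while the loop
// continues with the rest.
function chunk<T>(rows: T[], size = 50): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    out.push(rows.slice(i, i + size));
  }
  return out;
}

chunk(Array.from({ length: 120 }, (_, i) => i)).length; // 3 chunks: 50, 50, 20
```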

Permissions

Import requires the import permission for the entity. The import button only appears in the list actions if the current user's role grants this permission.

Key files

File                                                                   | Purpose
src/features/{entity}/pages/{Entity}ImporterPage.tsx                   | Import page with field and file configuration
src/shared/components/importer/Importer.tsx                            | Core importer with state machine and chunk processing
src/shared/components/importer/ImporterForm.tsx                        | CSV upload, parsing, and template download
src/shared/components/importer/ImporterTable.tsx                       | Results table with row status
src/shared/components/importer/ImporterFileUploadStep.tsx              | File/image download and re-upload
src/shared/lib/csvReader.ts                                            | PapaParse wrapper for CSV parsing
backend/src/features/{entity}/controllers/{entity}ImporterController.ts | Backend import endpoint with duplicate detection