
Bulk Operations API

The Bulk Operations API enables asynchronous import of contacts and organizations in batches, with job tracking and detailed error reporting.

How Bulk Import Works

  1. Submit an array of records to import
  2. Receive a job_id immediately
  3. Poll the job status endpoint to track progress
  4. Get detailed results including successes and failures

Bulk operations are processed asynchronously with a maximum of 1,000 records per request. Records are processed in batches of 100.
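
The snippet below sketches steps 1 and 2 in TypeScript. It assumes a fetch-capable runtime; the endpoint URL and YOUR_API_KEY placeholder follow the examples later on this page, and the function name is illustrative, not part of the API.

// Illustrative helper: submit a bulk contact import and capture the job_id.
// `records` is assumed to be an array of contact objects you have prepared.
async function submitContacts(records: unknown[]): Promise<string> {
  const response = await fetch(
    'https://your-instance.supabase.co/functions/v1/api-v1-bulk/contacts',
    {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer YOUR_API_KEY',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ records })
    }
  );
  const { data } = await response.json();
  return data.job_id; // Poll /api-v1-bulk/jobs/:id with this value (steps 3-4)
}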

Job Status Values

| Status | Description |
| --- | --- |
| queued | Job is waiting to be processed |
| processing | Job is currently being processed |
| completed | Job finished processing (some or all records may still have failed) |
| failed | Job failed completely (all records failed) |
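
These values map naturally onto a small TypeScript union; the type and helper below are illustrative (not part of the API) and show when polling can stop.

// Illustrative types: a job stops changing once it reaches a terminal status.
type BulkJobStatus = 'queued' | 'processing' | 'completed' | 'failed';

function isTerminal(status: BulkJobStatus): boolean {
  return status === 'completed' || status === 'failed';
}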

Endpoints

Bulk Import Contacts

Import multiple contacts in a single request.

POST /api-v1-bulk/contacts

Request Body

{
  "records": [
    {
      "first_name": "John",
      "last_name": "Doe",
      "job_title": "CEO",
      "organization_id": "123e4567-e89b-12d3-a456-426614174000",
      "emails": [
        {
          "email": "john@example.com",
          "type": "work",
          "is_primary": true
        }
      ],
      "phones": [
        {
          "phone_number": "+1234567890",
          "type": "mobile",
          "is_primary": true
        }
      ],
      "metadata": {
        "source": "trade_show",
        "campaign": "Q4_2025"
      }
    }
  ]
}

Contact Record Fields

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| first_name | string | Yes | Contact’s first name |
| last_name | string | No | Contact’s last name |
| job_title | string | No | Job title or position |
| organization_id | UUID | No | ID of related organization |
| emails | array | No | Array of email objects |
| emails[].email | string | Yes (if emails present) | Email address |
| emails[].type | string | No | Email type (default: work) |
| emails[].is_primary | boolean | No | Primary email flag (first is default) |
| phones | array | No | Array of phone objects |
| phones[].phone_number | string | Yes (if phones present) | Phone number |
| phones[].type | string | No | Phone type (default: work) |
| phones[].is_primary | boolean | No | Primary phone flag (first is default) |
| metadata | object | No | Custom metadata |
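
As a rough guide, the contact record can be described with the TypeScript interfaces below. The interface names are illustrative (they are not part of the API), and the optional markers mirror the Required column above.

// Illustrative shapes for a bulk contact record.
interface BulkEmail {
  email: string;            // Required when the emails array is present
  type?: string;            // Defaults to "work"
  is_primary?: boolean;     // First entry is treated as primary by default
}

interface BulkPhone {
  phone_number: string;     // Required when the phones array is present
  type?: string;            // Defaults to "work"
  is_primary?: boolean;     // First entry is treated as primary by default
}

interface BulkContactRecord {
  first_name: string;       // Required
  last_name?: string;
  job_title?: string;
  organization_id?: string; // UUID of an existing organization
  emails?: BulkEmail[];
  phones?: BulkPhone[];
  metadata?: Record<string, unknown>;
}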

Example Request

curl -X POST "https://your-instance.supabase.co/functions/v1/api-v1-bulk/contacts" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {
        "first_name": "John",
        "last_name": "Doe",
        "job_title": "CEO",
        "emails": [{"email": "john@example.com", "type": "work", "is_primary": true}]
      },
      {
        "first_name": "Jane",
        "last_name": "Smith",
        "job_title": "CTO",
        "emails": [{"email": "jane@example.com"}]
      }
    ]
  }'

Example Response

{
  "data": {
    "job_id": "456e7890-e89b-12d3-a456-426614174001",
    "status": "completed",
    "total_records": 2,
    "successful_records": 2,
    "failed_records": 0,
    "errors": []
  }
}

Bulk Import Organizations

Import multiple organizations in a single request.

POST /api-v1-bulk/organizations

Request Body

{
  "records": [
    {
      "name": "Acme Corp",
      "status": "active",
      "organization_type": "customer",
      "address": "123 Main St",
      "city": "San Francisco",
      "state": "CA",
      "postal_code": "94102",
      "country": "US",
      "domain": "acme.com",
      "metadata": {
        "industry": "technology",
        "employee_count": 500
      }
    }
  ]
}

Organization Record Fields

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Organization name |
| status | string | No | Status: lead, active, inactive, lost (default: lead) |
| organization_type | string | No | Type of organization |
| address | string | No | Street address |
| city | string | No | City |
| state | string | No | State or province |
| postal_code | string | No | Postal/ZIP code |
| country | string | No | Country code |
| domain | string | No | Company website domain |
| metadata | object | No | Custom metadata |
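
Similarly, an organization record can be sketched as the following TypeScript interface (the name is illustrative and mirrors the table above):

// Illustrative shape for a bulk organization record.
interface BulkOrganizationRecord {
  name: string;                                      // Required
  status?: 'lead' | 'active' | 'inactive' | 'lost';  // Defaults to "lead"
  organization_type?: string;
  address?: string;
  city?: string;
  state?: string;
  postal_code?: string;
  country?: string;                                  // Country code
  domain?: string;
  metadata?: Record<string, unknown>;
}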

Example Request

curl -X POST "https://your-instance.supabase.co/functions/v1/api-v1-bulk/organizations" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {
        "name": "Acme Corp",
        "status": "active",
        "city": "San Francisco",
        "state": "CA"
      },
      {
        "name": "TechStart Inc",
        "status": "lead",
        "domain": "techstart.com"
      }
    ]
  }'

Example Response

{
  "data": {
    "job_id": "567e8901-e89b-12d3-a456-426614174002",
    "status": "completed",
    "total_records": 2,
    "successful_records": 2,
    "failed_records": 0,
    "errors": []
  }
}

Get Job Status

Check the status and progress of a bulk import job.

GET /api-v1-bulk/jobs/:id

Path Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| id | UUID | Job ID returned from the import request |

Example Request

curl "https://your-instance.supabase.co/functions/v1/api-v1-bulk/jobs/456e7890-e89b-12d3-a456-426614174001" \
-H "Authorization: Bearer YOUR_API_KEY"

Example Response (In Progress)

{
  "data": {
    "id": "456e7890-e89b-12d3-a456-426614174001",
    "type": "import_contacts",
    "status": "processing",
    "total_records": 1000,
    "processed_records": 450,
    "successful_records": 430,
    "failed_records": 20,
    "progress_percent": 45,
    "errors": [
      {
        "row": 5,
        "field": "first_name",
        "error": "First name is required"
      },
      {
        "row": 23,
        "field": "organization_id",
        "error": "Organization not found"
      }
    ],
    "created_at": "2025-12-09T10:00:00Z",
    "started_at": "2025-12-09T10:00:01Z",
    "completed_at": null
  }
}

Example Response (Completed)

{
  "data": {
    "id": "456e7890-e89b-12d3-a456-426614174001",
    "type": "import_contacts",
    "status": "completed",
    "total_records": 1000,
    "processed_records": 1000,
    "successful_records": 985,
    "failed_records": 15,
    "progress_percent": 100,
    "errors": [
      {
        "row": 5,
        "field": "first_name",
        "error": "First name is required"
      },
      {
        "row": 23,
        "field": "organization_id",
        "error": "Organization not found"
      }
    ],
    "created_at": "2025-12-09T10:00:00Z",
    "started_at": "2025-12-09T10:00:01Z",
    "completed_at": "2025-12-09T10:05:30Z"
  }
}

Polling for Completion

Since bulk operations are asynchronous, you should poll the job status endpoint until the job is complete:

async function waitForCompletion(jobId) {
  while (true) {
    const response = await fetch(
      `https://your-instance.supabase.co/functions/v1/api-v1-bulk/jobs/${jobId}`,
      {
        headers: {
          'Authorization': 'Bearer YOUR_API_KEY'
        }
      }
    );
    const { data } = await response.json();
    if (data.status === 'completed' || data.status === 'failed') {
      return data;
    }
    // Wait 2 seconds before checking again
    await new Promise(resolve => setTimeout(resolve, 2000));
  }
}

// Usage
const importResponse = await fetch(/* ... bulk import ... */);
const { data: { job_id } } = await importResponse.json();
const finalResult = await waitForCompletion(job_id);
console.log(`Imported ${finalResult.successful_records}/${finalResult.total_records} records`);

Error Handling

Partial Failures

Bulk operations can succeed partially: some records may import successfully while others fail. The response includes:

  • successful_records: Count of successfully imported records
  • failed_records: Count of failed records
  • errors: Array of error details (up to 100 errors returned)

{
  "data": {
    "job_id": "456e7890-e89b-12d3-a456-426614174001",
    "status": "completed",
    "total_records": 100,
    "successful_records": 95,
    "failed_records": 5,
    "errors": [
      {
        "row": 12,
        "field": "first_name",
        "error": "First name is required"
      },
      {
        "row": 34,
        "field": "organization_id",
        "error": "Organization not found"
      },
      {
        "row": 56,
        "error": "Internal processing error"
      }
    ]
  }
}

Error Object Structure

Each error in the errors array contains:

| Field | Type | Description |
| --- | --- | --- |
| row | integer | 1-based row number of the failed record |
| field | string | Field that caused the error (if applicable) |
| error | string | Error message |
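
In TypeScript terms, each error entry can be modeled roughly like this (the interface name is illustrative):

// Illustrative shape for one entry in the errors array.
interface BulkImportError {
  row: number;      // 1-based row number of the failed record
  field?: string;   // Present when a specific field caused the failure
  error: string;    // Human-readable error message
}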

Validation Rules

Contacts

  • first_name is required and cannot be empty
  • organization_id must reference an existing organization in your company
  • Email addresses in emails array must be valid
  • If is_primary is not specified, the first email/phone is marked as primary

Organizations

  • name is required and cannot be empty
  • status must be one of: lead, active, inactive, lost
  • All fields are trimmed of whitespace
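
These rules can be checked client-side before submitting. Below is a minimal pre-validation sketch in TypeScript, assuming the record shapes sketched earlier; the function names are illustrative and server-side validation remains authoritative.

// Minimal client-side pre-checks before a bulk import.
const ORG_STATUSES = ['lead', 'active', 'inactive', 'lost'];

function validateContact(record: { first_name?: string; emails?: { email?: string }[] }): string[] {
  const errors: string[] = [];
  if (!record.first_name || record.first_name.trim() === '') {
    errors.push('first_name is required');
  }
  for (const entry of record.emails ?? []) {
    // Very loose email check; the API applies its own validation.
    if (!entry.email || !entry.email.includes('@')) {
      errors.push(`invalid email: ${entry.email ?? '(missing)'}`);
    }
  }
  return errors;
}

function validateOrganization(record: { name?: string; status?: string }): string[] {
  const errors: string[] = [];
  if (!record.name || record.name.trim() === '') {
    errors.push('name is required');
  }
  if (record.status && !ORG_STATUSES.includes(record.status)) {
    errors.push(`invalid status: ${record.status}`);
  }
  return errors;
}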

Limits and Batching

  • Maximum records per request: 1,000
  • Batch processing size: 100 records
  • Error reporting limit: First 100 errors are returned
  • Records are processed sequentially within each batch
  • Progress is updated after each batch completes
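
Because of the 1,000-record cap, larger datasets need to be split across several requests, each tracked by its own job_id. The chunking helper below is one simple way to do that; the function name is illustrative and the default size reflects the documented limit.

// Split a large dataset into request-sized chunks (max 1,000 records each).
function chunkRecords<T>(records: T[], size = 1000): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < records.length; i += size) {
    chunks.push(records.slice(i, i + size));
  }
  return chunks;
}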

Best Practices

  1. Validate data before import: Check that organization IDs exist before importing contacts
  2. Handle partial failures: Don’t assume all records succeeded; check the errors array
  3. Use reasonable polling intervals: Poll every 2-5 seconds to avoid rate limits
  4. Monitor progress: Use progress_percent to show user feedback during long imports
  5. Limit request size: For very large datasets, split into multiple requests of 500-1000 records
  6. Preserve row numbers: Keep track of original row numbers in your source data for error mapping
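
To make practice 6 concrete, the sketch below maps returned error rows back to the original source data; the function name is illustrative and assumes the error shape documented above. If you split the dataset into chunks, apply it per chunk, since each chunk is its own job.

// `sourceRows` is your original dataset, in the order it was submitted for one job.
// `errors` is the errors array from that job's result (row numbers are 1-based).
function mapErrorsToSource<T>(
  sourceRows: T[],
  errors: { row: number; field?: string; error: string }[]
): { source: T; field?: string; error: string }[] {
  return errors.map(e => ({
    source: sourceRows[e.row - 1], // convert 1-based row to 0-based index
    field: e.field,
    error: e.error
  }));
}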

Common Error Responses

400 Bad Request - No Records

{
  "error": {
    "code": "INVALID_REQUEST",
    "message": "No records provided"
  }
}

400 Bad Request - Too Many Records

{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Too many records. Maximum 1000 records per request."
  }
}

400 Bad Request - Invalid Format

{
  "error": {
    "code": "INVALID_REQUEST",
    "message": "Invalid request. Expected { records: [...] }"
  }
}

404 Not Found - Job Not Found

{
  "error": {
    "code": "NOT_FOUND",
    "message": "Job not found"
  }
}

Example: Complete Import Workflow

# 1. Start bulk import
IMPORT_RESPONSE=$(curl -X POST "https://your-instance.supabase.co/functions/v1/api-v1-bulk/contacts" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {"first_name": "John", "last_name": "Doe"},
      {"first_name": "Jane", "last_name": "Smith"}
    ]
  }')

# Extract job ID
JOB_ID=$(echo "$IMPORT_RESPONSE" | jq -r '.data.job_id')

# 2. Poll for completion
while true; do
  STATUS_RESPONSE=$(curl "https://your-instance.supabase.co/functions/v1/api-v1-bulk/jobs/$JOB_ID" \
    -H "Authorization: Bearer YOUR_API_KEY")
  STATUS=$(echo "$STATUS_RESPONSE" | jq -r '.data.status')
  PROGRESS=$(echo "$STATUS_RESPONSE" | jq -r '.data.progress_percent')
  echo "Progress: $PROGRESS%"
  if [ "$STATUS" = "completed" ] || [ "$STATUS" = "failed" ]; then
    break
  fi
  sleep 2
done

# 3. Check results
echo "Final result:"
echo "$STATUS_RESPONSE" | jq '.data'