# Bulk Operations API
The Bulk Operations API enables asynchronous import of contacts and organizations in batches, with job tracking and detailed error reporting.
## How Bulk Import Works
- Submit an array of records to import
- Receive a `job_id` immediately
- Poll the job status endpoint to track progress
- Get detailed results, including successes and failures
Bulk operations are processed asynchronously with a maximum of 1,000 records per request. Records are processed in batches of 100.
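As a rough illustration of these limits, a client can compute how many server-side batches a request will produce. This is a sketch, not server code: the constants mirror the documented limits, and `batchCount` is a hypothetical helper, not part of the API.

```javascript
// Hypothetical helper mirroring the documented limits: at most 1,000
// records per request, processed server-side in batches of 100.
const MAX_RECORDS_PER_REQUEST = 1000;
const BATCH_SIZE = 100;

function batchCount(totalRecords) {
  if (totalRecords > MAX_RECORDS_PER_REQUEST) {
    throw new Error('Too many records. Maximum 1000 records per request.');
  }
  return Math.ceil(totalRecords / BATCH_SIZE);
}

console.log(batchCount(250)); // 3 batches: 100 + 100 + 50
```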
## Job Status Values
| Status | Description |
|---|---|
| `queued` | Job is waiting to be processed |
| `processing` | Job is currently being processed |
| `completed` | Job finished; individual records may still have failed (check `failed_records`) |
| `failed` | Job failed completely (all records failed) |
## Endpoints

### Bulk Import Contacts
Import multiple contacts in a single request.
```
POST /api-v1-bulk/contacts
```

#### Request Body
```json
{
  "records": [
    {
      "first_name": "John",
      "last_name": "Doe",
      "job_title": "CEO",
      "organization_id": "123e4567-e89b-12d3-a456-426614174000",
      "emails": [
        { "email": "john@example.com", "type": "work", "is_primary": true }
      ],
      "phones": [
        { "phone_number": "+1234567890", "type": "mobile", "is_primary": true }
      ],
      "metadata": { "source": "trade_show", "campaign": "Q4_2025" }
    }
  ]
}
```

#### Contact Record Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `first_name` | string | Yes | Contact’s first name |
| `last_name` | string | No | Contact’s last name |
| `job_title` | string | No | Job title or position |
| `organization_id` | UUID | No | ID of a related organization |
| `emails` | array | No | Array of email objects |
| `emails[].email` | string | Yes (if `emails` present) | Email address |
| `emails[].type` | string | No | Email type (default: `work`) |
| `emails[].is_primary` | boolean | No | Primary email flag (defaults to the first entry) |
| `phones` | array | No | Array of phone objects |
| `phones[].phone_number` | string | Yes (if `phones` present) | Phone number |
| `phones[].type` | string | No | Phone type (default: `work`) |
| `phones[].is_primary` | boolean | No | Primary phone flag (defaults to the first entry) |
| `metadata` | object | No | Custom metadata |
#### Example Request
```shell
curl -X POST "https://your-instance.supabase.co/functions/v1/api-v1-bulk/contacts" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {
        "first_name": "John",
        "last_name": "Doe",
        "job_title": "CEO",
        "emails": [{"email": "john@example.com", "type": "work", "is_primary": true}]
      },
      {
        "first_name": "Jane",
        "last_name": "Smith",
        "job_title": "CTO",
        "emails": [{"email": "jane@example.com"}]
      }
    ]
  }'
```

#### Example Response
```json
{
  "data": {
    "job_id": "456e7890-e89b-12d3-a456-426614174001",
    "status": "completed",
    "total_records": 2,
    "successful_records": 2,
    "failed_records": 0,
    "errors": []
  }
}
```

### Bulk Import Organizations
Import multiple organizations in a single request.
```
POST /api-v1-bulk/organizations
```

#### Request Body
```json
{
  "records": [
    {
      "name": "Acme Corp",
      "status": "active",
      "organization_type": "customer",
      "address": "123 Main St",
      "city": "San Francisco",
      "state": "CA",
      "postal_code": "94102",
      "country": "US",
      "domain": "acme.com",
      "metadata": { "industry": "technology", "employee_count": 500 }
    }
  ]
}
```

#### Organization Record Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Organization name |
| `status` | string | No | Status: `lead`, `active`, `inactive`, `lost` (default: `lead`) |
| `organization_type` | string | No | Type of organization |
| `address` | string | No | Street address |
| `city` | string | No | City |
| `state` | string | No | State or province |
| `postal_code` | string | No | Postal/ZIP code |
| `country` | string | No | Country code |
| `domain` | string | No | Company website domain |
| `metadata` | object | No | Custom metadata |
#### Example Request
```shell
curl -X POST "https://your-instance.supabase.co/functions/v1/api-v1-bulk/organizations" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {
        "name": "Acme Corp",
        "status": "active",
        "city": "San Francisco",
        "state": "CA"
      },
      {
        "name": "TechStart Inc",
        "status": "lead",
        "domain": "techstart.com"
      }
    ]
  }'
```

#### Example Response
```json
{
  "data": {
    "job_id": "567e8901-e89b-12d3-a456-426614174002",
    "status": "completed",
    "total_records": 2,
    "successful_records": 2,
    "failed_records": 0,
    "errors": []
  }
}
```

### Get Job Status
Check the status and progress of a bulk import job.
```
GET /api-v1-bulk/jobs/:id
```

#### Path Parameters
| Parameter | Type | Description |
|---|---|---|
| `id` | UUID | Job ID returned from the import request |
#### Example Request
```shell
curl "https://your-instance.supabase.co/functions/v1/api-v1-bulk/jobs/456e7890-e89b-12d3-a456-426614174001" \
  -H "Authorization: Bearer YOUR_API_KEY"
```

#### Example Response (In Progress)
```json
{
  "data": {
    "id": "456e7890-e89b-12d3-a456-426614174001",
    "type": "import_contacts",
    "status": "processing",
    "total_records": 1000,
    "processed_records": 450,
    "successful_records": 430,
    "failed_records": 20,
    "progress_percent": 45,
    "errors": [
      { "row": 5, "field": "first_name", "error": "First name is required" },
      { "row": 23, "field": "organization_id", "error": "Organization not found" }
    ],
    "created_at": "2025-12-09T10:00:00Z",
    "started_at": "2025-12-09T10:00:01Z",
    "completed_at": null
  }
}
```

#### Example Response (Completed)
```json
{
  "data": {
    "id": "456e7890-e89b-12d3-a456-426614174001",
    "type": "import_contacts",
    "status": "completed",
    "total_records": 1000,
    "processed_records": 1000,
    "successful_records": 985,
    "failed_records": 15,
    "progress_percent": 100,
    "errors": [
      { "row": 5, "field": "first_name", "error": "First name is required" },
      { "row": 23, "field": "organization_id", "error": "Organization not found" }
    ],
    "created_at": "2025-12-09T10:00:00Z",
    "started_at": "2025-12-09T10:00:01Z",
    "completed_at": "2025-12-09T10:05:30Z"
  }
}
```

## Polling for Completion
Since bulk operations are asynchronous, you should poll the job status endpoint until the job is complete:
```javascript
async function waitForCompletion(jobId) {
  while (true) {
    const response = await fetch(
      `https://your-instance.supabase.co/functions/v1/api-v1-bulk/jobs/${jobId}`,
      { headers: { 'Authorization': 'Bearer YOUR_API_KEY' } }
    );

    const { data } = await response.json();

    if (data.status === 'completed' || data.status === 'failed') {
      return data;
    }

    // Wait 2 seconds before checking again
    await new Promise(resolve => setTimeout(resolve, 2000));
  }
}

// Usage
const importResponse = await fetch(/* ... bulk import ... */);
const { data: { job_id } } = await importResponse.json();
const finalResult = await waitForCompletion(job_id);
console.log(`Imported ${finalResult.successful_records}/${finalResult.total_records} records`);
```

## Error Handling
### Partial Failures
Bulk operations can succeed partially: some records may import successfully while others fail. The response includes:

- `successful_records`: count of successfully imported records
- `failed_records`: count of failed records
- `errors`: array of error details (up to 100 errors returned)
```json
{
  "data": {
    "job_id": "456e7890-e89b-12d3-a456-426614174001",
    "status": "completed",
    "total_records": 100,
    "successful_records": 95,
    "failed_records": 5,
    "errors": [
      { "row": 12, "field": "first_name", "error": "First name is required" },
      { "row": 34, "field": "organization_id", "error": "Organization not found" },
      { "row": 56, "error": "Internal processing error" }
    ]
  }
}
```

### Error Object Structure
Each error in the `errors` array contains:
| Field | Type | Description |
|---|---|---|
| `row` | integer | 1-based row number of the failed record |
| `field` | string | Field that caused the error (if applicable) |
| `error` | string | Error message |
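Because `row` is a 1-based index into the submitted `records` array, failed rows can be mapped back to their source records for correction and retry. A minimal sketch (`collectFailedRecords` is a hypothetical client-side helper, not part of the API):

```javascript
// Map returned errors back to the original records so the failed rows
// can be fixed and re-submitted. Assumes `row` is the 1-based index
// into the `records` array that was sent, as documented above.
function collectFailedRecords(records, errors) {
  const failed = new Map(); // row -> { record, errors: [...] }
  for (const err of errors) {
    if (!failed.has(err.row)) {
      failed.set(err.row, { record: records[err.row - 1], errors: [] });
    }
    failed.get(err.row).errors.push(err.error);
  }
  return [...failed.values()];
}

const records = [{ first_name: 'John' }, { first_name: '' }];
const errors = [{ row: 2, field: 'first_name', error: 'First name is required' }];
console.log(collectFailedRecords(records, errors));
// -> one entry: the record at row 2, paired with its error message
```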
## Validation Rules

### Contacts
- `first_name` is required and cannot be empty
- `organization_id` must reference an existing organization in your company
- Email addresses in the `emails` array must be valid
- If `is_primary` is not specified, the first email/phone is marked as primary
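Catching these contact-level problems before submitting avoids burning a bulk request on predictable failures. A sketch of client-side pre-validation, assuming a simple email pattern (the server's actual validator is not documented, and `validateContact` is a hypothetical helper):

```javascript
// Client-side checks mirroring the documented contact rules.
// The email regex is a deliberate simplification, not the server's rule.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function validateContact(record) {
  const problems = [];
  if (!record.first_name || record.first_name.trim() === '') {
    problems.push('first_name is required');
  }
  for (const e of record.emails ?? []) {
    if (!EMAIL_RE.test(e.email)) {
      problems.push(`invalid email: ${e.email}`);
    }
  }
  return problems;
}

console.log(validateContact({ first_name: 'John', emails: [{ email: 'john@example.com' }] })); // []
console.log(validateContact({ first_name: '' })); // ['first_name is required']
```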
### Organizations

- `name` is required and cannot be empty
- `status` must be one of: `lead`, `active`, `inactive`, `lost`
- All fields are trimmed of whitespace
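The organization rules can be pre-checked the same way. A sketch (`validateOrganization` is a hypothetical client-side helper; the trim mirrors the documented whitespace handling):

```javascript
// Client-side checks mirroring the documented organization rules.
const VALID_STATUSES = new Set(['lead', 'active', 'inactive', 'lost']);

function validateOrganization(record) {
  const problems = [];
  // Server trims whitespace, so a whitespace-only name is effectively empty.
  if ((record.name ?? '').trim() === '') {
    problems.push('name is required');
  }
  if (record.status !== undefined && !VALID_STATUSES.has(record.status)) {
    problems.push(`invalid status: ${record.status}`);
  }
  return problems;
}

console.log(validateOrganization({ name: 'Acme Corp', status: 'active' })); // []
console.log(validateOrganization({ name: '  ', status: 'archived' })); // two problems
```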
## Limits and Batching
- Maximum records per request: 1,000
- Batch processing size: 100 records
- Error reporting limit: First 100 errors are returned
- Records are processed sequentially within each batch
- Progress is updated after each batch completes
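Datasets larger than the 1,000-record cap have to be split into multiple requests. A minimal chunking sketch (`chunkRecords` is a hypothetical helper; only the 1,000-record limit comes from the documentation):

```javascript
// Split a large dataset into requests of at most `maxPerRequest`
// records, matching the documented 1,000-record per-request limit.
function chunkRecords(records, maxPerRequest = 1000) {
  const chunks = [];
  for (let i = 0; i < records.length; i += maxPerRequest) {
    chunks.push(records.slice(i, i + maxPerRequest));
  }
  return chunks;
}

// 2,500 records -> three requests of 1,000, 1,000, and 500 records
const chunks = chunkRecords(Array.from({ length: 2500 }, (_, i) => ({ first_name: `Contact ${i}` })));
console.log(chunks.map(c => c.length)); // -> lengths 1000, 1000, 500
```

Each chunk is then submitted as its own request, yielding one `job_id` per chunk to poll.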
## Best Practices
- Validate data before import: Check that organization IDs exist before importing contacts
- Handle partial failures: Don’t assume all records succeeded; check the `errors` array
- Use reasonable polling intervals: Poll every 2-5 seconds to avoid rate limits
- Monitor progress: Use `progress_percent` to show user feedback during long imports
- Limit request size: For very large datasets, split into multiple requests of 500-1,000 records
- Preserve row numbers: Keep track of original row numbers in your source data for error mapping
## Common Error Responses

### 400 Bad Request - No Records
```json
{
  "error": {
    "code": "INVALID_REQUEST",
    "message": "No records provided"
  }
}
```

### 400 Bad Request - Too Many Records
```json
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Too many records. Maximum 1000 records per request."
  }
}
```

### 400 Bad Request - Invalid Format
```json
{
  "error": {
    "code": "INVALID_REQUEST",
    "message": "Invalid request. Expected { records: [...] }"
  }
}
```

### 404 Not Found - Job Not Found
```json
{
  "error": {
    "code": "NOT_FOUND",
    "message": "Job not found"
  }
}
```

## Example: Complete Import Workflow
```shell
# 1. Start bulk import
IMPORT_RESPONSE=$(curl -X POST "https://your-instance.supabase.co/functions/v1/api-v1-bulk/contacts" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {"first_name": "John", "last_name": "Doe"},
      {"first_name": "Jane", "last_name": "Smith"}
    ]
  }')

# Extract job ID
JOB_ID=$(echo "$IMPORT_RESPONSE" | jq -r '.data.job_id')

# 2. Poll for completion
while true; do
  STATUS_RESPONSE=$(curl "https://your-instance.supabase.co/functions/v1/api-v1-bulk/jobs/$JOB_ID" \
    -H "Authorization: Bearer YOUR_API_KEY")

  STATUS=$(echo "$STATUS_RESPONSE" | jq -r '.data.status')
  PROGRESS=$(echo "$STATUS_RESPONSE" | jq -r '.data.progress_percent')

  echo "Progress: $PROGRESS%"

  if [ "$STATUS" = "completed" ] || [ "$STATUS" = "failed" ]; then
    break
  fi

  sleep 2
done

# 3. Check results
echo "Final result:"
echo "$STATUS_RESPONSE" | jq '.data'
```