No, I wouldn’t recommend using the Sync API for that, because the Sync API has a ton of limitations. For example, you can’t manually edit or delete any of the records after they are imported into Airtable.
For your needs, I would highly recommend using Make’s advanced CSV automations alongside Make’s Airtable automations for that.
I demonstrate how easy & simple this is to set up on this Airtable podcast episode.
If you’ve never used Make before, I’ve assembled a bunch of Make training resources in this thread.
However, Make has a 40-minute limit on each of its automations, and I don’t think you’ll be able to get through all 25,000 records in 40 minutes.
The quickest & easiest way of solving this would simply be to break up your CSV file into multiple smaller files.
The more complex way of solving this would be to create 3 or more automations that all do the same exact thing, but in each automation, you would use Make’s “filtering” function to only process certain row numbers.
So let’s say you had 4 automations. The 1st automation would only process rows 1 through 6,250. The 2nd automation would only process rows 6,251 through 12,500. And so on.
Make takes a little bit of time to learn, so you might not figure it all out on the first setup, but check out my training resources above, and you’ll be able to figure it out in no time.
Hope this helps!
If you have a budget and you’d like to hire the best Airtable consultant to help you with this or anything else that is Airtable-related, please feel free to contact me through my website: Airtable consultant — ScottWorld
Hm I don’t think that’s actually possible with the Sync API because of the 10k row limit you mentioned; you either end up being unable to delete the old records or being unable to upload new records
How automated does this need to be? If you’re okay with running a bash script once a day, you could hire someone to write that for you; it probably wouldn’t take more than half an hour, and it could just run in the background. ChatGPT can sort you out real quick too if you’re comfortable with that. The end result would be a script that you run in a folder with the 25k-row CSV file, and it would:
- Delete all the records in the table
- Chunk the 25k-row CSV file into API calls of 10 records each. The rate limit is 5 requests per second, so assume you run 4: that’s 40 records per second, which would take about 11 minutes to run
Could make it more automated but then you’re looking at a lot more complexity / cost
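If it helps, here’s roughly what that daily script could look like. This is just a sketch in TypeScript rather than bash: the base ID, the table name “Imports”, the token variable, and the CSV-to-rows parsing are all placeholders you’d swap in yourself.

```typescript
// Sketch only: BASE_ID, "Imports", and AIRTABLE_TOKEN are placeholders,
// and parsing the CSV into `rows` (one fields object per row) is left out.
const API = `https://api.airtable.com/v0/${process.env.BASE_ID}/Imports`;
const headers = {
  Authorization: `Bearer ${process.env.AIRTABLE_TOKEN}`,
  "Content-Type": "application/json",
};
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// 1. Page through the table and collect every existing record ID.
async function listAllRecordIds(): Promise<string[]> {
  const ids: string[] = [];
  let offset: string | undefined;
  do {
    const url = new URL(API);
    if (offset) url.searchParams.set("offset", offset);
    const page = await (await fetch(url, { headers })).json();
    ids.push(...page.records.map((r: { id: string }) => r.id));
    offset = page.offset;
    await sleep(250); // ~4 requests/second, under the 5/sec limit
  } while (offset);
  return ids;
}

// 2. Delete everything, 10 record IDs per DELETE request.
async function deleteAll(ids: string[]): Promise<void> {
  for (let i = 0; i < ids.length; i += 10) {
    const qs = ids.slice(i, i + 10).map((id) => `records[]=${id}`).join("&");
    await fetch(`${API}?${qs}`, { method: "DELETE", headers });
    await sleep(250);
  }
}

// 3. Recreate the table from the CSV, 10 records per POST request.
async function createAll(rows: Record<string, unknown>[]): Promise<void> {
  for (let i = 0; i < rows.length; i += 10) {
    const records = rows.slice(i, i + 10).map((fields) => ({ fields }));
    await fetch(API, { method: "POST", headers, body: JSON.stringify({ records }) });
    await sleep(250);
  }
}

// Tie it together: wipe the table, then reload from the parsed CSV rows.
async function main(rows: Record<string, unknown>[]) {
  await deleteAll(await listAllRecordIds());
  await createAll(rows);
}
```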
Thank you both.
Adam, I think you’re correct. The Sync API docs are ambiguous to me, but they read like you can’t sync more than 10k records unless you use the setting to retain previous records. The issue, of course, being that there’s no way to delete the existing records via the API.
To confirm, the only approach you think will work is:
- Use `Create records` endpoint to write records (10 recs at a time)
- Use `Delete multiple records` endpoint to delete (10 recs at a time)
Yeah, so with the deletes counting against the same rate limit, that’d actually take more like 22 minutes to run, sorry
Could also try a Run a Script action that runs via an Airtable automation to delete the records for you, actually. Trigger it via a webhook that the bash script hits, and then:
- Run a Script action to grab all the records in the table
- Chunk them into arrays of 1k records each
- Use a repeating group on that output with a Run a Script step to delete them in batches of 1k (we have to do this to get around the 30-second time limit for the Run a Script step)
The downside to this is that you’d need to experiment with how long it takes. I reckon if you just give it a 5-minute window before it starts creating records, you should be good?
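For what it’s worth, those two script steps could look something like this. It’s an untested sketch: “Imports” stands in for your table name, and the repeating group’s current item is assumed to be wired into the second step as an input variable called `chunk`.

```typescript
// Step 1 -- Run a Script (after the webhook trigger): collect every record ID
// and chunk the list into groups of 1,000 for the repeating group.
let table = base.getTable("Imports");
let query = await table.selectRecordsAsync({ fields: [] }); // IDs only, no cell data
let chunks = [];
for (let i = 0; i < query.records.length; i += 1000) {
  // Join each chunk into a single string so it can be passed as a list item.
  chunks.push(query.records.slice(i, i + 1000).map((r) => r.id).join(","));
}
output.set("chunks", chunks); // the repeating group iterates over this list

// Step 2 -- Run a Script (inside the repeating group): delete one chunk.
// `chunk` is the current list item, wired in as an input variable.
let { chunk } = input.config();
let ids = chunk.split(",");
let target = base.getTable("Imports");
for (let i = 0; i < ids.length; i += 50) {
  // deleteRecordsAsync accepts at most 50 record IDs per call
  await target.deleteRecordsAsync(ids.slice(i, i + 50));
}
```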
You’re right to move away from the Sync API here.
For a nightly 25k refresh, I’d treat the CSV as the source of truth and run an idempotent upsert+prune with the Web API. Add a stable external key in your table (e.g., a hash or source ID), then use updateRecords with performUpsert on that key so unchanged rows aren’t rewritten and new/changed rows are created/updated in place (https://airtable.com/developers/web/api/update-record)
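As a rough sketch of what one upsert batch might look like (the field names and the `csvBatch` data here are just placeholders):

```typescript
// Sketch of one upsert batch; "External ID", "Name", and "Amount" are placeholder field names.
const csvBatch = [{ id: "a1", name: "Widget", amount: 3 }]; // up to 10 parsed CSV rows

const resp = await fetch(`https://api.airtable.com/v0/${process.env.BASE_ID}/Imports`, {
  method: "PATCH",
  headers: {
    Authorization: `Bearer ${process.env.AIRTABLE_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    performUpsert: { fieldsToMergeOn: ["External ID"] }, // your stable key field
    records: csvBatch.map((row) => ({
      // no record ID needed; Airtable matches rows on the merge field
      fields: { "External ID": row.id, Name: row.name, Amount: row.amount },
    })),
  }),
});
// the response reports which records were created vs. updated in place
const { createdRecords, updatedRecords } = await resp.json();
console.log(createdRecords?.length ?? 0, "created,", updatedRecords?.length ?? 0, "updated");
```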
After the upsert pass, compare the CSV keys to what’s in Airtable and delete only the stragglers that no longer exist in the file; do deletes in batches of 10 via the REST endpoint you linked.
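The prune pass is then just a set difference followed by batched deletes; a sketch under the same assumptions (the key field and helper names are hypothetical):

```typescript
// Prune pass (sketch): delete records whose "External ID" no longer appears in the CSV.
// `csvKeys` is the set of keys in tonight's file; `existing` is every Airtable record's
// { id, key } pair, fetched by paging the list endpoint with only the key field.
async function prune(
  csvKeys: Set<string>,
  existing: { id: string; key: string }[],
  api: string,
  headers: Record<string, string>,
): Promise<void> {
  const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));
  const stale = existing.filter((rec) => !csvKeys.has(rec.key)).map((rec) => rec.id);
  // Delete the stragglers 10 at a time via the delete-multiple-records endpoint.
  for (let i = 0; i < stale.length; i += 10) {
    const qs = stale.slice(i, i + 10).map((id) => `records[]=${id}`).join("&");
    await fetch(`${api}?${qs}`, { method: "DELETE", headers });
    await sleep(250); // stay under the 5 req/sec limit
  }
}
```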
A few practical notes from doing this at similar scale: paginate your upserts at 10 records per request and keep concurrency conservative to respect the 5 req/sec rate limit; at 25k rows this comfortably fits a nightly window, and you’ll avoid automation timeouts by running it from an external script that hits an Airtable automation webhook to toggle “maintenance mode” if needed (https://airtable.com/developers/web/api/rate-limits)
Using upsert instead of “wipe-and-reload” preserves record IDs, comments, and revision history, and it dramatically reduces the amount of data you need to send if only a fraction of rows changed.
If you ever outgrow single-table batching or need two-way reliability between CSV, Airtable, and other systems, event-based platforms like Stacksync handle that type of incremental sync automatically — no manual chunking or deletion logic required.