Can only send 15 mutations every 1000ms

We use Airtable as a back-end for many of our major systems, and a big part of these systems is the use of scripting apps to streamline our processes. One of these systems tracks shipments: we get data in at the end of our day, and someone uploads it into one of our bases. From there, we run a series of scripts that scrub and process the shipping data, since much of it comes in very raw.

I was working on one of these scripts today when, all of a sudden, I was thrown the error “You can only send 15 mutations every 1000ms”. I haven’t made any changes to these scripts today and had never seen this error prior to today.

The issue is that we process large amounts of data. Putting an ‘await’ (which we don’t normally do) in front of our ‘updateRecordsAsync’ calls allows the script to work, but it turns a process that used to take 5 minutes into one that takes ~30, which potentially ruins our entire decision to move this process into Airtable.

I thought I would reach out here to see if anyone could enlighten me as to why this might happen. Is this an update from Airtable? Is our account being throttled for some reason due to the amount of data we are pushing through our scripting apps? If it is an update to the scripting app itself, why was there no warning or mention of this?

Thanks for all the help in advance!

I suspect that you are simply seeing better error reporting on an existing limit that Airtable has not clearly published or enforced.

Although I haven’t seen a published rate limit for Scripting App, both automation script actions and custom apps have a rate limit of 15 mutations per second. It is reasonable to assume that Scripting App has the same rate limit.

My experience, and the experience of the user in this thread, also supports the idea of a 15 mutations/second rate limit in Scripting App.

As for why you didn’t see the error before, it could be that Airtable isn’t always consistent in enforcing rate limits, per this post about rate limits and the REST API.

With the increased number of mutations due to the release of automations, Airtable may have decided that they need to enforce rate limits more strictly.

As for workarounds, you could write your own code to throttle your calls to 15/second using the setTimeout function, as described in this thread. It might be difficult to hit exactly 15 calls/second, but you should still get far more throughput than you would by awaiting every call.


Thanks for the help and linking in the other threads!

Even though using “await” makes the script wait for records to update, records can end up updated inaccurately if you don’t use “await” before calling updateRecordsAsync. I’ve seen this firsthand in my early Airtable scripts, where I forgot to use “await” and things got messed up as a result.
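To illustrate the pitfall: without “await”, the code after the call runs before the update has actually completed. This is just a sketch with a stub table standing in for the Scripting App’s real table object (the 10ms delay simulates network latency; it is not an Airtable detail):

```javascript
// Stub table: updateRecordsAsync resolves after a simulated delay.
function makeTable() {
  return {
    updated: 0,
    async updateRecordsAsync(batch) {
      await new Promise((resolve) => setTimeout(resolve, 10)); // fake latency
      this.updated += batch.length;
    },
  };
}

async function withoutAwait(table) {
  table.updateRecordsAsync([{ id: "rec1", fields: {} }]); // no await
  return table.updated; // reads 0: the update hasn't landed yet
}

async function withAwait(table) {
  await table.updateRecordsAsync([{ id: "rec1", fields: {} }]);
  return table.updated; // reads 1: the update is guaranteed complete
}
```

Any logic that depends on the new field values (or on error handling from the call) can misbehave in the `withoutAwait` case.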

When you say that you process large amounts of data, how many records are you updating at a time? Using updateRecordsAsync, Airtable can process updates in groups of up to 50, which actually is much faster than updating records one at a time with updateRecordAsync. If you build an array containing all of the necessary updates in advance, a simple loop can pass those to updateRecordsAsync in groups of 50 (using “await”) for very efficient processing. Several hundred records can process in a matter of seconds.
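The loop described above can be sketched as follows. The `table` here is a stub (in the Scripting App it would come from `base.getTable(...)`), and the `updates` array of `{id, fields}` objects is hypothetical:

```javascript
// Stub table that records each batch size it receives.
const table = {
  calls: [],
  async updateRecordsAsync(batch) {
    this.calls.push(batch.length);
  },
};

// Build all updates in advance, then pass them along in groups of 50.
async function updateInBatches(updates, batchSize = 50) {
  for (let i = 0; i < updates.length; i += batchSize) {
    // await each batch so the script stays within the mutation limit
    await table.updateRecordsAsync(updates.slice(i, i + batchSize));
  }
}

// Hypothetical pre-built updates array, as the API expects.
const updates = Array.from({ length: 120 }, (_, i) => ({
  id: `rec${i}`,
  fields: { Status: "Processed" },
}));
```

With 120 updates, this makes three calls (50, 50, and 20 records) instead of 120 single-record calls.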


Hi Justin,

Yeah, this is my next step and probably makes the most sense for our use case. When I initially began with the Scripting App, updateRecordAsync was the first thing that popped up that seemed applicable for us, so I ran with it, but I’m learning more and more that there are alternatives out there.

Thanks for the additional info!

I had assumed Sam was using this approach given the reference to large amounts of data. This should remedy the issue.

Another optimization that may be applicable (if there are no dependencies between field transformations) is processing multiple steps in parallel as you build the arrayed updates. This is possible using Promise.all(), which spins up independent but simultaneous transformation workers, so that the update is performed only once all branches are complete for all fifty records.
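A minimal sketch of that Promise.all() pattern, with two hypothetical async transformation workers (`transformA`, `transformB`) standing in for real lookups or parsing steps that have no dependencies on each other:

```javascript
// Hypothetical independent transformations for one record.
const transformA = async (rec) => rec.raw.trim(); // e.g. scrub a text field
const transformB = async (rec) => rec.qty * 2;    // e.g. derive a number

async function buildUpdate(record) {
  // Both branches run simultaneously; the update object is assembled
  // only once every branch has settled.
  const [cleaned, doubled] = await Promise.all([
    transformA(record),
    transformB(record),
  ]);
  return { id: record.id, fields: { Name: cleaned, Qty: doubled } };
}

async function buildBatch(records) {
  // All records in the batch are also transformed in parallel.
  return Promise.all(records.map(buildUpdate));
}
```

The resulting array of `{id, fields}` objects can then be handed to updateRecordsAsync in groups of 50 as discussed above.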

I’ve used this approach to take a 10,000-record 10-minute process down to just 11 seconds.


@Sam_Cederwall Aren’t you already updating your records in batches of 50? You stated that you are using updateRecordsAsync (with an s) in your initial post.

You could further break up your batches into sets of 15, and use setTimeout to spread out those batches. This is what I was referring to in my prior post that suggested throttling your calls to 15/second.

You can update 50 records in one mutation. You can submit up to 15 mutations in one second. Thus, you can update 750 records in one second.

For example, say you have 50,000 records to update:

Break the records down into 66 sets of 750 (and one final set of 500).
Break each set of 750 into 15 batches of 50.
Submit the first set of 750 records, using 15 calls to updateRecordsAsync with 50 records each, without awaiting the individual calls.
Wait one second.
Submit the second set of 750 records.
Wait one second.
Continue on until all 67 sets are submitted.
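The steps above can be sketched like this. The `table` is a stub so the loop can run anywhere, the `sleep` helper wraps setTimeout as discussed earlier, and the 15-calls/50-records numbers match the limits described in this thread:

```javascript
// Stub table that counts how many mutation calls it receives.
const table = {
  calls: 0,
  async updateRecordsAsync(batch) {
    this.calls += 1;
    return batch.length;
  },
};

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function rateLimitedUpdate(updates, batchSize = 50, callsPerSecond = 15) {
  const perSecond = batchSize * callsPerSecond; // 750 records per second
  for (let i = 0; i < updates.length; i += perSecond) {
    const set = updates.slice(i, i + perSecond);
    const calls = [];
    for (let j = 0; j < set.length; j += batchSize) {
      // Fire each call without awaiting it individually...
      calls.push(table.updateRecordsAsync(set.slice(j, j + batchSize)));
    }
    await Promise.all(calls); // ...then wait for the whole set to land
    if (i + perSecond < updates.length) await sleep(1000); // next window
  }
}
```

In practice you may want a little headroom (say, 14 calls per second) in case the limit counts other mutations happening in the base at the same time.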


I apologize for the confusion. I meant ‘updateRecordAsync’ (just a typo on my part), and no, I wasn’t batching by 50.

I also wasn’t previously running into issues doing things this way, no scripting errors, no issues with the script missing records, nothing really that stood out.

I have been in contact with support and have figured this out but am still not sure why I wasn’t getting errors thrown on this originally.

Thank you all for the help.
