Error 429 deploying to Vercel

5 - Automation Enthusiast

When deploying my website on Vercel, I always receive a 429 error from Airtable's rate limit, which I've never seen on the local server.
I guess this happens because the server executes the page build much faster.
In that case, is there some way to handle this limitation?

6 Replies

I had to look up Vercel, as I had never heard of it before. (Tip: when writing about another site/service, don’t assume that your audience knows what it is unless it’s been discussed a TON. I can only find four references in this forum to Vercel, and your post is one of those four.)

When making calls to Airtable’s REST API from your setup on Vercel, are you waiting for the response using the await keyword? If not, Airtable is likely balking at the requests because they’re coming in too quickly. Naturally such delays aren’t always ideal when building a site, but that’s the only way to ensure that your API requests stay under the threshold.
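Something along these lines might work — an untested sketch of mine, not part of the Airtable client. It runs the calls strictly one at a time with a pause between them; `runInSeries`, `tasks`, and the delay value are all my own names and assumptions:

```javascript
// Sketch: run async tasks one at a time, pausing between calls so a
// rate-limited API (Airtable allows 5 requests/second per base) never
// sees a burst. `tasks` is an array of zero-argument async functions.
const runInSeries = async (tasks, delayMs = 250) => {
  const results = []
  for (const task of tasks) {
    results.push(await task()) // wait for each response before the next call
    await new Promise((resolve) => setTimeout(resolve, delayMs))
  }
  return results
}
```

With 250 ms between calls you stay under 5 requests/second; the trade-off is a slower build, as mentioned above.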

I’m not familiar enough with Vercel to know if this will help, but is there any way that you can cache some of the data in another datastore so that you don’t have to retrieve everything from Airtable each time your site/page/service is being called.
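As a rough illustration of that idea (my own sketch, nothing Vercel-specific): keep a small in-memory store keyed by query, and only call Airtable when the entry is missing or stale. `fetchFromAirtable` here is a hypothetical stand-in for the real call:

```javascript
// Sketch of a tiny TTL cache in front of an API call. Not production-grade:
// the cache lives in process memory, so it only helps within one build.
const cache = new Map()

const cachedFetch = async (key, fetchFromAirtable, ttlMs = 60_000) => {
  const hit = cache.get(key)
  if (hit && Date.now() - hit.at < ttlMs) return hit.value // fresh: skip the API
  const value = await fetchFromAirtable() // miss or stale: hit Airtable once
  cache.set(key, { at: Date.now(), value })
  return value
}
```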

Yeah, you're right — I shouldn't have assumed people know Vercel; it just confuses things. Do other hosts experience the same issue? (I believe they build pages as quickly as Vercel does.)

Yeah, I use async and await for all data fetching.
Vercel actually does have caching, but the first build always fails with a 429; a second deployment based on that cache succeeds.

Hard to say. I have very little experience calling Airtable from other sites/services. However, I’ve done a fair bit of calling one base from another using the REST API, and have never had a threshold problem. My guess is that separate parts of your code may be calling Airtable in quick succession because processes are running in parallel. Not sure if there’s a way to enforce making the calls in series to avoid that, but it’s worth looking into.

The largest part of my code that needs the database is probably this one:

export const fetchImportantVariantRecords = async () => {
  const ids = []
  const allRecords = await nanoHomeBase("variants")
    .select({
      // view: "Grid view",
      fields: ["id", "name", "slug"],
      filterByFormula: `AND({validated} = 1, {priority} < 2)`,
    })
    .all()

  // return a resolved promise if the records array is empty
  if (!allRecords.length) return Promise.resolve(allRecords)

  // handle a non-empty records array
  return Promise.all(
    allRecords.map((record) => {
      // queue each record with its resolver; resolve(error) is called later
      return new Promise((resolve) => ids.push(record, resolve))
    })
  ).then((pushRecords) => {
    // all records had errors
    if (pushRecords.every(Boolean)) return Promise.reject(pushRecords)

    // one or multiple records had no errors
    return pushRecords
      .map((error, index) => (error ? null : allRecords[index]))
  })
}
Could the .all() method be the culprit?

As for the caching solution that you have mentioned, is it like this one: Bypass Airtable API limit and make unlimited requests - DEV Community

Hey @Ian_Tran – we have lots of companies and teams using Vercel with our product, Sequin. We’ll provide you with a Postgres database that contains all your Airtable data (synced in real-time) so you can make these read requests using SQL (and avoid the rate limits).

Could be a great fit for what you are building.

Very possible. By resolving all promises simultaneously, it’s possible that multiple Airtable calls are being made, thereby exceeding the threshold.
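If strict series would make the build too slow, one middle ground (again a sketch of mine, not a library API) is to cap how many promises run at once and pause between batches, sized to Airtable's 5-requests-per-second limit:

```javascript
// Sketch: run at most `batchSize` tasks in parallel, then pause before the
// next batch, instead of handing everything to one big Promise.all.
const runInBatches = async (tasks, batchSize = 5, pauseMs = 1000) => {
  const results = []
  for (let i = 0; i < tasks.length; i += batchSize) {
    const batch = tasks.slice(i, i + batchSize).map((task) => task())
    results.push(...(await Promise.all(batch))) // wait for the whole batch
    if (i + batchSize < tasks.length) {
      await new Promise((resolve) => setTimeout(resolve, pauseMs))
    }
  }
  return results
}
```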

Looking at your sample code, it appears that you’re only using async/await for the functions that you’re exporting. The method calls themselves are still resolving via Promise.all. I also realized thanks to your code sample that you’re not using the REST API directly, which is what I was referring to. You’re using the API client for Node.js. When using the REST API—making direct HTTP calls to the API endpoints—I’ve never had any problems with exceeding the request rate. I have no experience with the client that you’re using, so I’m not going to be much help in optimizing its setup.
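For comparison, this is roughly what a direct REST call looks like — a sketch assuming Node 18+ with global fetch, where baseId, table, and apiKey are placeholders. The endpoint shape follows Airtable's documented https://api.airtable.com/v0/{baseId}/{tableName} pattern:

```javascript
// Sketch: list records via Airtable's REST API over plain HTTP.
// On a 429, Airtable expects clients to wait ~30 seconds before retrying.
const listRecords = async (baseId, table, apiKey) => {
  const url = `https://api.airtable.com/v0/${baseId}/${encodeURIComponent(table)}`
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${apiKey}` },
  })
  if (res.status === 429) {
    throw new Error("Rate limited: back off before retrying")
  }
  return (await res.json()).records
}
```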