
Can Airtable handle a table with 50,000 rows and 25 columns?

Richard_Scholey
4 - Data Explorer

Assuming that I pay for the Pro plan, will it even be possible for me to bring this large amount of data up on the web? It’s an extremely large number of rows, and I wonder how Airtable will handle scrolling.

Would appreciate advice from someone who has done something similar.

Thanks

8 Replies

Yes, you can load a 50,000-record table — but under the Pro plan, that would have to be a one-table base, which might not be all that useful. If you want to test friskiness with a large base, version 3.0 of my data deduplication routines includes a 10,000-record demo base; you can copy that to your workspace and test it. (Don’t let Airtable hear me say this 😉, but if your free account has aged beyond the two(?)-week Pro trial, create another account just to test.) That base includes some relatively involved calculations, and the initial load takes a little while, but once it’s loaded I think you’ll find it scrolls extremely rapidly.

The base also supports some dynamic updates that recalculate across records, so you can get an idea of how fast it responds. Try these tests:

  1. In the [Main] table, select the <Deduplication demo> view. Sort the table by {MatchKey}; this will group all potential duplicates together. For any record flagged with a Heart as a possible duplicate, check the corresponding {Dupe OK} checkbox. IIRC, this causes all 10,000 records to be compared via a rollup field that collates a value drawn from all 10,000 records. On my PC, this takes less than a second.
  2. In the [Main] table, select the <Deduplication demo> view. Sort the table by {MatchKey}; this will group all potential duplicates together. For any record flagged with a Heart as a possible duplicate, select its {ID} field, press Ctrl-C to copy it, and paste it (Ctrl-V) into the {Master ID} field of any other record with the same {MatchKey}. On my PC this recalculation takes about 40 seconds for a 10,000-record base; however, as I recall, it requires a much more massive recalculation for each record, including a cascading chain of nested lookups, so the total number of calculations is several orders of magnitude higher than for the first example.

You can also mark-and-copy the 10,000 records a couple of times to create an even larger base. I’m not sure if I ever got to 50,000 records — since the [DeDupe] table contains only a single record, the [Main] table can officially contain 49,999 records under the Pro plan — but I know I played around with tables with 25,000 or 35,000 records. The performance penalty gets higher; perhaps at some point it becomes unacceptable.

Note that with the second test, above, it takes essentially the same amount of time to recalculate after a change to a single {Master ID} value as it does after a change to all 10,000 {Master ID} values. Accordingly, I provide an alternative method for updating multiple {Master ID}s where the target {ID} values are first copy-and-pasted into {Hold Master ID}, and then the entire {Hold Master ID} column is copy-and-pasted into the {Master ID} column. As you can see, this still takes about 40 to 45 seconds to perform, but it’s possible to update hundreds or thousands of records in that period of time.
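(As an aside, and not part of the demo base itself: if you work through the REST API rather than the grid, the same kind of bulk {Master ID} update can be sent in batches of 10 records per PATCH request. The sketch below is purely illustrative, assuming {Master ID} is a plain text field; the token, base ID, and table name are placeholders, not the demo base’s real identifiers.)

```python
# Illustrative only: batch-update {Master ID} through the Airtable REST API.
# Token, base ID, and table name are placeholders.
import time
import requests

AIRTABLE_TOKEN = "YOUR_TOKEN"
BASE_ID = "appXXXXXXXXXXXXXX"
TABLE_NAME = "Main"
URL = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}"
HEADERS = {"Authorization": f"Bearer {AIRTABLE_TOKEN}"}

def set_master_ids(updates):
    """updates: list of (record_id, master_id) pairs to write into {Master ID}."""
    for i in range(0, len(updates), 10):        # the API accepts at most 10 records per request
        chunk = updates[i:i + 10]
        payload = {"records": [{"id": rec_id, "fields": {"Master ID": master_id}}
                               for rec_id, master_id in chunk]}
        requests.patch(URL, headers=HEADERS, json=payload).raise_for_status()
        time.sleep(0.25)                        # stay under the 5-requests-per-second limit

set_master_ids([("recAAAAAAAAAAAAAA", "1234")])
```

Airtable still has to recalculate the dependent rollups and lookups after each batch lands, so presumably the 40-second recalculation described above doesn’t go away; the API merely replaces the copy-and-paste step.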

As always, Your Mileage May Vary — but the demo base linked above should give you a feel for how large bases perform.


Note: You don’t need to know anything about the routines to play with the demo base; the two tests above should be enough to step through the trial. However, if you need to see how closely your use case matches the one in the demo, the first link above goes to a Community post that in turn links to a ridiculous amount of written and video documentation…

I’ve noticed the same in ~25,000-record sets. Once loaded, it’s very responsive. It all depends on the fields, the formulas, and the nature of the data in addition to the volume, as @W_Vann_Hall said.

I have not noticed any significant issues accessing and updating large tables with the API, with one exception: where the volume necessitates lots of pagination. You have to factor the volume of the data into any architecture that involves the API, especially given rate limiting and no offer (yet) of a higher paid tier to process API calls faster.

But this is not a ding on Airtable’s API per se; all APIs are challenged by volume and rate limits.
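To make the pagination point concrete, here is a rough sketch (Python with the requests library, not an official client) of reading a large table page by page while staying under the documented 5-requests-per-second limit. The token, base ID, and table name are placeholders.

```python
# Rough sketch: paginate through a large Airtable table via the REST API.
# Token, base ID, and table name are placeholders.
import time
import requests

AIRTABLE_TOKEN = "YOUR_TOKEN"
BASE_ID = "appXXXXXXXXXXXXXX"
TABLE_NAME = "Main"
URL = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}"
HEADERS = {"Authorization": f"Bearer {AIRTABLE_TOKEN}"}

def fetch_all_records():
    """Page through the table 100 records at a time, backing off when rate-limited."""
    records, offset = [], None
    while True:
        params = {"pageSize": 100}
        if offset:
            params["offset"] = offset
        resp = requests.get(URL, headers=HEADERS, params=params)
        if resp.status_code == 429:     # rate-limited: wait the documented 30 seconds, then retry
            time.sleep(30)
            continue
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload["records"])
        offset = payload.get("offset")  # present only while more pages remain
        if not offset:
            return records
        time.sleep(0.25)                # ~4 requests/second keeps us under the 5/s limit

print(f"Fetched {len(fetch_all_records())} records")
```

At 100 records per page, a 50,000-record table is roughly 500 requests, so even a clean read runs for a minute or two at 5 requests per second; that overhead is what you end up designing around.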

I have one client that is managing 20,000 new records per month, with more than 10 million records at rest. We achieved rapid access to all 10+ million records via Airtable by using a real-time database and by crafting a GraphQL query middleware between Firebase and Airtable.

Firebase is an ideal pairing with Airtable for many reasons:

  1. It gives information workers a great UI/UX that can dynamically ebb and flow to meet specific user requirements and real-time data needs at the operational level.

  2. Firebase can support very large data sets with arbitrary field indexing.

  3. Firebase’s NoSQL design makes it possible for Airtable’s JSON data objects to live without modification as Firebase objects (see the sketch after this list).
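Purely as an illustration of point 3, and not the actual middleware described above, a minimal sync could simply write each Airtable record’s fields to a Firestore document keyed by the Airtable record ID. The collection name, credentials path, and sample record below are assumptions.

```python
# Minimal sketch: mirror Airtable records into Firestore via the firebase-admin SDK.
import firebase_admin
from firebase_admin import credentials, firestore

cred = credentials.Certificate("service-account.json")   # placeholder key file
firebase_admin.initialize_app(cred)
db = firestore.client()

def mirror_to_firestore(records, collection="airtable_main"):
    """Store each record's fields, unmodified, in a document keyed by record ID."""
    for rec in records:
        db.collection(collection).document(rec["id"]).set(rec["fields"])

# Example record in the shape the Airtable REST API returns
mirror_to_firestore([
    {"id": "recXXXXXXXXXXXXXX", "fields": {"Name": "Sample", "MatchKey": "sample"}},
])
```

For real volumes you would batch the writes (Firestore caps a batched write at 500 operations) and drive the loop from a scheduled sync rather than a one-off script, but the shape of the data does not need to change, which is the point of item 3.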

It would be nice if Airtable could easily handle 50,000 (or 10 million) records, but we cannot forget why Airtable exists or where it shines.

If putting a smile on end users’ faces is a good measure of IT success, then Airtable has a lot to be proud of. Sustaining those smiles at scale typically involves the API economy.

What is Firebase? Never heard of it…

Firebase is Google’s mobile and web application development platform, built around a real-time NoSQL database. Despite having been created for mobile apps, it has proven that its architecture is well-suited to all web apps, especially those that require high performance and, in some cases, true real-time updates.

Abraham_Bochner
8 - Airtable Astronomer

My base is starting to get bigger and bigger, and I’m afraid of how big it can get.

Well, this is probably a good problem to have, but you need to either establish an archive process or find a way to simulate large databases within the confines of Airtable.

Any guide on how to “establish an archive process” or “simulate large databases”?

We are right now at 32,000 records and growing on a daily basis.

I’m not aware of anyone who has advised what should happen when you hit the record ceiling. As for helping Airtable mimic the ability to handle more than 50,000 rows, and upwards of 10 million, it’s not easy, and it’s definitely costly (see this).
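For what it’s worth, one rough shape an archive process could take is to copy records older than a cutoff into a second base via the REST API and then delete them from the live table. The sketch below is only illustrative: the token, base URLs, table name, and cutoff formula are all placeholders, and computed fields would need to be stripped before the copy.

```python
# One possible archive process, sketched with the Airtable REST API:
# copy old records into an "archive" base, then delete them from the live table.
# All IDs, table names, and the cutoff date are placeholders.
import time
import requests

TOKEN = "YOUR_TOKEN"
LIVE = "https://api.airtable.com/v0/appLIVExxxxxxxxxx/Main"       # placeholder live base
ARCHIVE = "https://api.airtable.com/v0/appARCHIVExxxxxxx/Main"    # placeholder archive base
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def fetch_old_records(cutoff="2022-01-01"):
    """List records created before the cutoff date."""
    records, offset = [], None
    formula = f"IS_BEFORE(CREATED_TIME(), '{cutoff}')"
    while True:
        params = {"pageSize": 100, "filterByFormula": formula}
        if offset:
            params["offset"] = offset
        resp = requests.get(LIVE, headers=HEADERS, params=params)
        resp.raise_for_status()
        data = resp.json()
        records.extend(data["records"])
        offset = data.get("offset")
        if not offset:
            return records
        time.sleep(0.25)

def archive(records):
    """Create copies in the archive base, then delete the originals, 10 at a time."""
    for i in range(0, len(records), 10):
        chunk = records[i:i + 10]
        # NOTE: computed fields (formulas, rollups, lookups) must be excluded from
        # "fields" before creating, or the POST will be rejected.
        create = {"records": [{"fields": r["fields"]} for r in chunk]}
        requests.post(ARCHIVE, headers=HEADERS, json=create).raise_for_status()
        delete_params = [("records[]", r["id"]) for r in chunk]
        requests.delete(LIVE, headers=HEADERS, params=delete_params).raise_for_status()
        time.sleep(0.5)   # two requests per loop; stay under the rate limit

archive(fetch_old_records())
```

The archived records stay queryable in the second base while the live base remains under the plan’s per-base record limit; the main things to watch are the 10-records-per-request cap and the overall rate limit.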