Base limit of 50,000 records: any work-arounds?

One real downside for me is Airtable limiting bases to just 50,000 records. While that would let me enter just over a year of our current data, I'd really like all of our data to stay in the same base for many years to come and scale as we do. That way we can filter and report on many years at a time from one base. I would only need 1-3 user licenses, so Enterprise is not an option (plus it starts at $3,000 per month!), which is way out of our budget. It would be nice if they offered larger plans with more records (say 100,000 records for $35/mo, 150,000 records for $50/mo, 200,000 records for $65/mo, etc.)

Have any users who've run into the 50,000-record limit found a work-around?




All the time.

Airtable has many strengths when it comes to putting data in front of business users. However, it doesn't scale well, at least not in terms of record counts that are likely to expand well beyond the base limit, and it is certainly not the best place to store data-at-rest, or even data per se.

My clients recognize this limitation (mostly because I've educated them), and they also recognize the advantages of Airtable. It is ideal for operational processes, workflows, and day-to-day data-intensive activities, especially when it comes to collecting data from other workers. These are all things that ElasticSearch (for example) does poorly.

I tend to combine ElasticSearch with Airtable when scale is critical and especially when pervasive search and business analytics are needed.

I have a number of tools that make it easy for Airtable users to “request” data from ElasticSearch by simply completing an Airtable form which kicks off a process that instantiates the data they need for a specific process or analytical task. I even make it possible to push the data back into ElasticSearch as updates to the master data set - a basic “commit” of sorts.
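A minimal sketch of that request/commit round-trip, using plain Python dicts as stand-ins for the ElasticSearch master data set and the Airtable working table (all names here are hypothetical; a real integration would go through the Airtable REST API and the ElasticSearch client):

```python
# Hypothetical sketch: an external index acts as the master store, and a
# "request" copies a relevant slice into an Airtable-like working table.

master = {  # stand-in for the ElasticSearch master data set
    "r1": {"region": "west", "sales": 100},
    "r2": {"region": "east", "sales": 250},
    "r3": {"region": "west", "sales": 75},
}

def request_slice(predicate):
    """Simulate the Airtable form kicking off a data request:
    copy matching master records into a working table."""
    return {rid: dict(rec) for rid, rec in master.items() if predicate(rec)}

def commit(working):
    """Push edits made in the working table back to the master set."""
    for rid, rec in working.items():
        master[rid].update(rec)

# A user "requests" western-region records, edits one, and commits.
working = request_slice(lambda r: r["region"] == "west")
working["r1"]["sales"] = 120
commit(working)
```

The working copy is deliberately a deep-enough copy that edits stay local until the commit step, which is the "basic commit of sorts" described above.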

While Airtable’s block charting features are useful in narrow use cases at the edge, analytics that blend data from many activities including Airtable and other data systems require Kibana (the ElasticSearch analytics platform) which is able to deliver broad business analytics at scale.

All of this, of course, requires comprehensive middleware that tightly integrates ElasticSearch and Airtable using their respective APIs. It's not trivial to do this, but it does make for a very useful solution combining the best of three very nice environments.

I’d love to know if there’s any chance this limit could be lifted to 100,000 by the end of 2020.


Yes, they really need to consider this. I am a new user and I can already see the record limit eventually becoming an issue. It would be nice to see it at 1 million.

I'm sure it's not so simple, but they could charge users a premium for this.

I recognize the ease of Airtable and its intended audience… but products do evolve. I think they really have to look at how many of their customers need to scale the product.


Indeed, products evolve in many ways, but typically only in ways that are actually practical. Given the current Airtable experience and features, they're sort of in a box, not unlike the box Google Sheets finds itself in.

As I said here, recently…

A Google sheet with 5 million cells (populated or not) looks and feels a lot like Airtable with 50,000 records. What does that indicate? It tells me that the tipping point is not likely the underlying architecture; rather, it's the limitation of the underlying compute stack typically employed. My mostly uninformed technical assessment is that – all things constant – neither 5 million cells nor 50,000 records will ever be performant given today's commonly available consumer devices.

And, so what do I mean by “all things constant” in this assessment?

The user experience and features represent what we know Airtable to be for use cases under 50,000 records. And depending on the number of fields, especially formula fields, the practical ceiling is dynamically lower than 50,000 records.

If you want to push the record limit to 100,000, or perhaps a million records, something – perhaps many things – about the product experience needs to change. As such, users must be willing to give up some features and work differently; ergo, there's a good probability that it won't be Airtable as you know it.

As such, feel free to indicate here (to help the Airtable design team understand) the central and bare-minimum UI and UX features that you cannot live without in a product that has vastly more scale. If you think the technical complexities are challenging, try getting every user to agree on this new [scalable] product definition. :wink:

Have you noticed that most databases that scale do not look or feel like Airtable?

There’s a pretty good reason. They can’t because they support millions of records.

What’s the Remedy?

I hate to crap all over customer requirements that are generally reasonable and helpful to a growing product, so I’ll toss this idea out because I’ve actually mimicked this with a one-million record data set.

  • Imagine a plugin architecture that allows you to select arbitrary database back ends like Firebase, or ElasticSearch.
  • The plugin would automatically cache the immediately relevant records into and out of Airtable’s current datastore.
  • As the user moves through the dataset, records would paginate to provide viewing access, shifting the relevant “window” as needed.
  • The plugin would be a premium service extension and users would be responsible for their own data storage and bandwidth costs with their chosen datastore provider.
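The caching/windowing behavior in the list above can be sketched in a few lines of Python. This is purely illustrative (the class name, the list-backed "datastore", and the window size are all my own stand-ins, not anything Airtable or a real plugin exposes):

```python
# Hypothetical sketch of the cache "window": only a bounded slice of a
# large external dataset is materialized at a time, shifting as the
# user pages through it.

class WindowedCache:
    def __init__(self, backend, window_size):
        self.backend = backend          # e.g. a million-record datastore
        self.window_size = window_size  # e.g. Airtable's 50,000-record cap
        self.start = 0

    def page(self, start):
        """Shift the window so `start` is the first visible record,
        clamped so the window never runs past either end."""
        self.start = max(0, min(start, len(self.backend) - self.window_size))
        return self.backend[self.start:self.start + self.window_size]

# One million synthetic records, viewed through a 50,000-record window.
backend = list(range(1_000_000))
cache = WindowedCache(backend, 50_000)

first = cache.page(0)        # window covers records 0..49,999
later = cache.page(600_000)  # window shifts to 600,000..649,999
```

In a real plugin, `backend` would be Firebase or ElasticSearch queried over the network, and the window contents would be written into (and evicted from) the Airtable base via its API.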

Indeed, take just one feature: search. Imagine how it would need to change to deliver instant findability across a million records of which fewer than 50,000 are actually in view or in memory.
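One way to picture it: the full-text index lives outside the client, so a search over a million records returns only the matching IDs, and only those get hydrated into the in-view window. A toy sketch (synthetic data and names; not how Airtable or ElasticSearch search is actually implemented):

```python
# Hypothetical sketch: a search over 1,000,000 records consults an
# external inverted index and returns only hit IDs to the client.

N = 1_000_000

def status(i):
    # Synthetic data: every 1,000th record is "overdue".
    return "overdue" if i % 1000 == 0 else "paid"

# External inverted index: term -> sorted list of record IDs.
index = {"overdue": [], "paid": []}
for rid in range(N):
    index[status(rid)].append(rid)

def search(term, limit=50_000):
    """Return at most `limit` hit IDs; only these would be hydrated
    into the client's in-view window."""
    return index.get(term, [])[:limit]

overdue_ids = search("overdue")  # 1,000 hits out of 1,000,000 records
paid_ids = search("paid")        # capped at the 50,000-record window
```

The client never touches the full million records; it only ever sees the (capped) hit list, which is what makes "instant findability" plausible at that scale.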

Bill, have you considered packaging your tools into a plugin and releasing it? I’m sure many people would find it useful.

It would be interesting to imagine if Airtable had been built with this approach from the ground up. People could point to their own databases (ES, Postgres, etc.), and caching, views, and pagination would have been built into the front end so millions of records could be supported.