Question

Base limits - number of records vs thin fact rows

  • February 5, 2026
  • 3 replies
  • 48 views


Hi everyone,

Looking for real-world experience / benchmarks from folks running large Airtable bases.

I’m currently redesigning our team’s base. We’re at ~330k records today (hard limit is 500k) and we’re already seeing performance issues (slow loads, sluggish automations/scripts at peak times, recalculation lag, etc.). Our first version was built to meet immediate needs, and we’re now rebuilding with a more best-practice / scalable approach.

Airtable Support mentioned (and we’re also observing) that even with ~170k record “headroom,” the base can still struggle well before 500k depending on schema + computation complexity. We suspect a big contributor is heavy rollups / formulas / long calculation chains across linked records.

What we’re considering / already doing:

  • Reducing rollups + nested formulas, especially ones that fan out across many linked records

  • Breaking long “calculation chains” into fewer dependency layers

  • Keeping “fact” tables as thin as possible (minimizing computed fields / high-cardinality links)

  • Moving some computations out to scripts/automations or external processing where appropriate (a rough sketch of what we mean follows this list)

  • More normalized design (but that can increase link traversals + rollups, so it’s a tradeoff)
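
To make the scripts/automations idea concrete, here's the rough shape of what we have in mind: a scheduled automation script that computes a total once and writes it into a plain number field, so the base isn't recalculating a live rollup on every change. This is only a sketch, not our real schema; the table and field names (Projects, Line items, Project, Amount, Cached total) are placeholders.

```javascript
// Sketch for an Airtable "Run a script" automation (or Scripting extension).
// Replaces a live rollup with a static "Cached total" number field.
// All table/field names here are placeholders.
let projects = base.getTable("Projects");
let lineItems = base.getTable("Line items");

// Load only the fields we actually need to keep the query light.
let itemQuery = await lineItems.selectRecordsAsync({ fields: ["Project", "Amount"] });

// Sum amounts per linked parent record in plain JS instead of a rollup field.
let totals = {};
for (let item of itemQuery.records) {
    let links = item.getCellValue("Project") || []; // linked-record cells are arrays of {id, name}
    for (let link of links) {
        totals[link.id] = (totals[link.id] || 0) + (item.getCellValue("Amount") || 0);
    }
}

// Write the results into a static number field on the parent table.
let projectQuery = await projects.selectRecordsAsync({ fields: [] });
let updates = projectQuery.records.map(rec => ({
    id: rec.id,
    fields: { "Cached total": totals[rec.id] || 0 },
}));

// updateRecordsAsync accepts at most 50 records per call, so write in batches.
while (updates.length > 0) {
    await projects.updateRecordsAsync(updates.slice(0, 50));
    updates = updates.slice(50);
}
```

The tradeoff we expect is freshness: the cached number is only as current as the last automation run, but a value that's stale by a few minutes seems much cheaper than a rollup the base recalculates live.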

What I’d love to learn from you:

  1. Do you have a base with 350k+ / 400k+ / near 500k records? How is it performing day-to-day?

  2. What were the biggest drivers of performance improvements for you? (schema patterns, field types to avoid, interface design, views, automations, etc.)

  3. Any “rules of thumb” you follow around:

    • max links per record / max rollup breadth

    • avoiding circular or deep dependencies

    • when to split into multiple bases vs stay in one base

  4. If you reached strong performance at scale, what did you stop doing that made the biggest difference?

Context: We need fairly granular data (multiple granularities + aggregations), which makes “thin fact rows” harder than expected. I’m trying to find the best balance between normalization, computation load, and maintainability.

Any real examples, tips, or even “here’s what broke us” stories would be super helpful. Thanks!

3 replies

Tyler_Thorson

Hello mromao,

This is a great question; I'm interested to hear what others have to say.

I think answering it, though, requires breaking "performance" down into something less holistic. In my experience, search/query performance, configuration-change performance, and update performance are each affected by different things, though there is certainly some overlap.

I manage many Airtable bases of different sizes, the largest on the order of 300k records, so I don't have much experience at the truly massive scale (for Airtable). Honestly, though, record count tends not to be the biggest driver of load times, especially if you routinely partition the data into smaller sets or have a reliable method for archiving records to a different base.

The single largest drag on performance I've seen is syncing from another Airtable base. It seems to have implications across all of these areas, and not just in the standard "it takes longer to update" way. The biggest issue is with configuration changes: after changing any relationship to a synced table, I often have to move the base into and back out of a sandbox to get it to stop hanging on every configuration change. This has even taken a couple of my bases completely out of commission for several hours until support could fix the issue.

Another one is one-to-many relationships where the ratio is extremely lopsided, e.g. a table with one record that is linked to every record in another table with tens of thousands of records.

This one is very frustrating, as I like to set up a "Settings" table of static values that I can change across an entire base, which means every single record across all tables ends up linked to one single record in that table. This seems to cause some strange behavior, and I have had support blame configuration-change errors on it in the past.

Really curious to hear what others have to say on this subject!


mromao
  • Author
  • New Participant
  • February 5, 2026

Hi Tyler, thanks for the help and insightful breakdown. 

For our case, the pain is mostly search/query performance (and secondarily update/recalc). The user experience is getting rough: the base and its tables take a while to open, and there's noticeable lag when scrolling. That's the main thing I'm trying to solve, because it makes day-to-day work frustrating even before we hit record limits.

On synced tables: we don’t have any today. I mentioned sync primarily as a possible archiving strategy (moving older/historical records out of the main base), but your warning is super helpful because that can become its own bottleneck instead of a solution.

On highly lopsided links: agreed, we've already flagged those as something to eliminate or redesign in this rebuild.

The reason I posted is that we don’t have time to exhaustively test every architecture variant. My hope (maybe wishful thinking) is that with a cleaner design (fewer heavy rollups, shorter dependency chains) we can comfortably accommodate more records while keeping the UX acceptable.

When you saw synced tables or lopsided links hurt performance, did it show up mainly in schema/config changes, or did it also affect view load + scrolling?


Tyler_Thorson

For synced tables, I think you may come out ahead if your use case is archival, with one caveat: it's critical to have an aggregation process that creates static values to pass back to the original table if needed, rather than a two-way relationship that essentially just extends your records across multiple bases.

E.g. if you had a table of transaction records that was getting large enough to affect performance, but you didn't want to lose the ability to see an overall total: you could sync the transaction records to another base, have an automation in the archival base aggregate the totals for a given period (by day, week, or month, depending on how aggressive the reduction needs to be), and store those values in a different table that you then sync back to the original transaction base. This is a simplified example and there are tons of ways to do it, but I've deployed a solution like this successfully more than once.
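
Very roughly, the aggregation step in the archival base could look something like the script below. This is just a sketch to show the shape of it; the table and field names (Archived transactions, Monthly totals, Date, Amount, Month, Total) are made up, and you'd adapt the bucketing to whatever period you choose.

```javascript
// Sketch for a scheduled "Run a script" automation in the archival base.
// Rolls raw archived transactions up into one static record per month;
// only the small "Monthly totals" table gets synced back to the main base.
// All table/field names are placeholders.
let transactions = base.getTable("Archived transactions");
let summary = base.getTable("Monthly totals");

let txQuery = await transactions.selectRecordsAsync({ fields: ["Date", "Amount"] });

// Bucket totals by month ("YYYY-MM") in plain JS.
let totals = {};
for (let tx of txQuery.records) {
    let date = tx.getCellValue("Date"); // date fields come back as ISO strings
    if (!date) continue;
    let month = date.slice(0, 7);
    totals[month] = (totals[month] || 0) + (tx.getCellValue("Amount") || 0);
}

// Upsert one summary record per month with static values.
let summaryQuery = await summary.selectRecordsAsync({ fields: ["Month"] });
let existing = {};
for (let rec of summaryQuery.records) {
    existing[rec.getCellValue("Month")] = rec.id;
}

for (let [month, total] of Object.entries(totals)) {
    if (existing[month]) {
        await summary.updateRecordAsync(existing[month], { "Total": total });
    } else {
        await summary.createRecordAsync({ "Month": month, "Total": total });
    }
}
```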


For the lopsided relationships, the primary impact we felt was on configuration-change performance.

I have not done any testing to isolate its impact on record-loading performance, but my gut tells me it is unlikely to be a major factor. It might slow loading on the table on the smaller side of the ratio, but that shouldn't matter much, since that table will obviously have orders of magnitude fewer records. I believe the penalty lies in having a single field that has to make thousands of references, rather than in thousands of records each having a field that happens to reference the same record.