Question

How are you structuring Airtable for media operations at scale?

  • February 13, 2026


Hi everyone,

I’m starting this topic to learn from others in the media & entertainment space who are using Airtable at scale.

I’m part of a podcast company, and we don’t just use Airtable as a database — we’ve built a significant portion of our operational workflows inside it. Our team works primarily through Interfaces, and all inputs are stored as structured data in the backend. In many ways, Airtable functions as our operational backbone.

Recently, we’ve noticed that we’re approaching record limits faster than expected. This seems to be driven by the volume of workflows, automations, and syncs to other systems. It’s prompted a bigger question for us: are we structuring our architecture in a suboptimal way, or are we pushing Airtable beyond what it’s realistically designed to support at scale?

I’d really appreciate hearing how other larger teams in this group are approaching this. Specifically:

  • How are you structuring your bases to manage operational workflows and long-term data growth?

  • Do you separate operational data across multiple bases?

  • Do you archive by year or move historical data elsewhere?

  • How do you prevent hitting record limits while keeping workflows intact?

I’m especially interested in practical, real-world examples of what’s working well (and what hasn’t).

Thanks in advance — and if it’s easier to discuss live, I’d be happy to connect for a quick call.

Best

Sandra

3 replies

  • New Participant
  • February 13, 2026

As someone who is just starting to build out operational systems for live events on Airtable, I am definitely interested in this conversation! I can already see us heading toward the exact situation you describe, where we hit record limits and the associated workflows break down. We aren’t large scale, but we support 50-80 events per year and the details are stacking up quickly (we haven’t even touched our education or graphics groups’ needs).


Blessing_Nuga
  • New Participant
  • February 13, 2026

 

  • How are you structuring your bases to manage operational workflows and long-term data growth?

  • Do you separate operational data across multiple bases?

  • Do you archive by year or move historical data elsewhere?

  • How do you prevent hitting record limits while keeping workflows intact?

We’re working through these exact same questions in higher ed for our degree program catalogs. While we haven’t found permanent fixes, here’s what we’ve implemented so far, which seems to be working for us:

  • Yes, our data is separated across multiple synced bases, a split originally prompted by hitting the per-base automation limit. We’re also moving toward a more “unit-focused” Airtable system in which each base is dedicated to a single portfolio item (e.g. degree programs, space and capital projects, institutional data) rather than forcing a single base to maintain all three.
  • We’re moving toward a “template model” in which our tables are fixed but our records are not. At the end of an academic year we archive records by exporting the entire table as a .csv file (which unfortunately wipes any attachments and images), wipe the table except for the most recent record of each degree program, and then continue using the base. Our records are all linked in a versioned auditing system, so we can trace the most current record back to its archived versions in the .csv files. A rough sketch of the export step follows this list.
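
For the export step, a minimal sketch against the Airtable REST API in Python could look like the following. The base ID, token, and table/file names are placeholders rather than our actual setup, and note that attachment fields serialize only as metadata with short-lived URLs, which matches the caveat above about attachments being wiped:

```python
# Sketch: dump an entire table to CSV before wiping it for the new year.
# BASE_ID, TOKEN, and the table/file names below are hypothetical.
import csv
import json
import requests

API = "https://api.airtable.com/v0"
BASE_ID = "appXXXXXXXXXXXXXX"
TOKEN = "pat_your_token_here"   # Airtable personal access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def export_table_to_csv(table: str, out_path: str) -> None:
    records, params = [], {}
    while True:  # follow pagination until Airtable stops returning an offset
        resp = requests.get(f"{API}/{BASE_ID}/{table}", headers=HEADERS, params=params)
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload["records"])
        if "offset" not in payload:
            break
        params["offset"] = payload["offset"]

    # Airtable omits empty fields per record, so take the union of all keys.
    fieldnames = sorted({key for r in records for key in r["fields"]})
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["record_id"] + fieldnames)
        writer.writeheader()
        for r in records:
            # Non-string values (linked records, attachments, numbers) are
            # serialized as JSON so nothing is silently flattened.
            row = {k: v if isinstance(v, str) else json.dumps(v)
                   for k, v in r["fields"].items()}
            writer.writerow({"record_id": r["id"], **row})

export_table_to_csv("Degree Programs", "degree_programs_2025-26.csv")
```

Keeping the Airtable record ID in the first CSV column is what lets a versioned auditing link survive the wipe.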

MatteoCrOps
  • Participating Frequently
  • February 13, 2026

Hey @sandra_k_o, @Blessing_Nuga – this resonates. I had to tackle record limits and general base sprawl while managing the DET (Disney Entertainment Television) Airtable ecosystem and, most recently, at a big NPR station.

 

In both cases the right approach depends on how the data is used. My three main approaches were (in order of complexity):

  1. Summary fields (basically extracting just the data needed for reporting out of archived records)
  2. Archiving and de-archiving automations (via syncs, or by storing records as JSON in Long Text fields)
  3. HyperDB

Which approach I used depended on how the ‘overage’ data was supposed to be used: whether it was only needed for reporting (e.g. you want to know how many hours your team worked last year on a per-episode basis, but don’t need any other details), or whether you actually needed the entire record with all its fields. A rough sketch of the JSON approach is below.
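
To make approach 2 concrete, here is a minimal sketch of the JSON-in-a-Long-Text-field pattern, written against the Airtable REST API in Python rather than as an in-product automation script. The base ID, table names, field names, and cutoff formula are all placeholder assumptions, not what DET or the NPR station actually ran:

```python
# Sketch: move "overage" records into a compact archive table, storing the
# full field payload as JSON in a Long Text field, then delete the originals.
# BASE_ID, TOKEN, and all table/field names are hypothetical.
import json
import requests

API = "https://api.airtable.com/v0"
BASE_ID = "appXXXXXXXXXXXXXX"
TOKEN = "pat_your_token_here"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

def fetch_records(table: str, formula: str) -> list[dict]:
    """List records matching an Airtable formula, following pagination."""
    records, params = [], {"filterByFormula": formula}
    while True:
        resp = requests.get(f"{API}/{BASE_ID}/{table}", headers=HEADERS, params=params)
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload["records"])
        if "offset" not in payload:
            return records
        params["offset"] = payload["offset"]

def archive_old_records(source: str = "Episodes", archive: str = "Episode Archive") -> None:
    # Example cutoff: anything not modified since the start of the year.
    old = fetch_records(source, "IS_BEFORE(LAST_MODIFIED_TIME(), '2026-01-01')")
    for i in range(0, len(old), 10):  # write endpoints accept at most 10 records per call
        batch = old[i:i + 10]
        requests.post(
            f"{API}/{BASE_ID}/{archive}",
            headers=HEADERS,
            json={"records": [{"fields": {
                "Original ID": r["id"],
                # The whole record serialized into one Long Text field.
                "Snapshot JSON": json.dumps(r["fields"]),
            }} for r in batch]},
        ).raise_for_status()
        requests.delete(
            f"{API}/{BASE_ID}/{source}",
            headers=HEADERS,
            params={"records[]": [r["id"] for r in batch]},
        ).raise_for_status()
```

De-archiving is the mirror image: json.loads the Snapshot JSON field and POST the fields back into the live table. Since record limits apply per base, a 1:1 archive like this only buys headroom when the archive table lives in a separate, dedicated archive base.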

Happy to chat more; feel free to connect on LinkedIn (@mattcossu) or via email at matteo@crops-ag.com.