Question

How are you structuring Airtable for media operations at scale?

  • February 13, 2026
  • 5 replies
  • 87 views


Hi everyone,

I’m starting this topic to learn from others in the media & entertainment space who are using Airtable at scale.

I’m part of a podcast company, and we don’t just use Airtable as a database — we’ve built a significant portion of our operational workflows inside it. Our team works primarily through Interfaces, and all inputs are stored as structured data in the backend. In many ways, Airtable functions as our operational backbone.

Recently, we’ve noticed that we’re approaching record limits faster than expected. This seems to be driven by the volume of workflows, automations, and syncs to other systems. It’s prompted a bigger question for us: are we structuring our architecture in a suboptimal way, or are we pushing Airtable beyond what it’s realistically designed to support at scale?

I’d really appreciate hearing how other larger teams in this group are approaching this. Specifically:

  • How are you structuring your bases to manage operational workflows and long-term data growth?

  • Do you separate operational data across multiple bases?

  • Do you archive by year or move historical data elsewhere?

  • How do you prevent hitting record limits while keeping workflows intact?

I’m especially interested in practical, real-world examples of what’s working well (and what hasn’t).

Thanks in advance — and if it’s easier to discuss live, I’d be happy to connect for a quick call.

Best

Sandra

5 replies

  • New Participant
  • February 13, 2026

As someone who is just starting to build out operational systems for live events on Airtable, I'm definitely interested in this conversation! I'm already forecasting the exact situation you described: hitting record limits and breaking the associated workflows. We aren't large scale, but we support 50-80 events per year and the details stack up quickly (and we haven't even touched our education or graphics groups' needs).


Blessing_Nuga
  • New Participant
  • February 13, 2026

 

  • How are you structuring your bases to manage operational workflows and long-term data growth?

  • Do you separate operational data across multiple bases?

  • Do you archive by year or move historical data elsewhere?

  • How do you prevent hitting record limits while keeping workflows intact?

We're working through these same exact questions in higher ed for our degree program catalogs. While we haven't found permanent fixes, here's what we've implemented so far that seems to be working for us:

  • Yes, our data is separated across multiple synced bases, which was prompted by hitting the limit on automations. We're in the process of moving towards a more "unit-focused" Airtable system where a base is dedicated to a single portfolio item (e.g. degree programs, space and capital projects, institutional data) rather than forcing a single base to maintain all three.
  • We're moving towards a "template model" where our tables are fixed, but records are not. At the end of an academic year, we archive records by exporting the entire table as a .csv file (which unfortunately wipes any attachments and images), wipe the table except for the most recent record for each degree program, and then continue using the base. Our records are all linked in a versioned auditing system, so we're able to trace the most current record back to its archived versions in the .csv files.
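For anyone curious, the archive-then-prune step can be sketched in plain Python. This is a minimal illustration with made-up records and field names ("Program", "Version"); it is not Airtable's actual API - in practice the rows would come from, and the deletes go through, Airtable's REST API:

```python
import csv
import io

# Hypothetical records, shaped loosely like rows returned from Airtable's
# REST API. Field names ("Program", "Version") are made up for illustration.
records = [
    {"id": "recA1", "Program": "BSc Biology", "Version": 1},
    {"id": "recA2", "Program": "BSc Biology", "Version": 2},
    {"id": "recB1", "Program": "BA History", "Version": 1},
]

def archive_and_prune(records, out):
    """Write every record to the CSV archive, then return only the most
    recent record per program (everything else would be deleted via the API)."""
    writer = csv.DictWriter(out, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)

    latest = {}
    for rec in records:
        prog = rec["Program"]
        if prog not in latest or rec["Version"] > latest[prog]["Version"]:
            latest[prog] = rec
    return list(latest.values())

archive = io.StringIO()  # stands in for the exported .csv file
kept = archive_and_prune(records, archive)
```

The same idea scales to any "keep the newest version per entity" archive: the full history lives in the CSV, and the base keeps one live record per program.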

MatteoCrOps
  • Participating Frequently
  • February 13, 2026

Hey @sandra_k_o, @Blessing_Nuga - this resonates. I had to tackle record limitations and general base sprawl both while managing the DET (Disney Entertainment Television) Airtable ecosystem and, most recently, at a big NPR station.

 

In both cases there are various approaches depending on how the data is used. My three main approaches were (in order of complexity):

 

  1. Summary fields (basically extracting just the data needed for reporting out of archived records)
  2. Archiving and de-archiving automations (via syncs, or by storing records as JSON in long-text fields)
  3. HyperDB

Which approach we used depended on how the 'overage' data was supposed to be used: whether it was only needed for reporting purposes (i.e. you want to know how many hours your team worked last year on a per-episode basis, but don't need any other details), or whether you actually needed the entire record with all its fields.
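Roughly, approach 2 (storing records as JSON in long-text fields) looks like this in code. The record shape and field names here are assumptions for illustration; in a real automation you'd read and re-create records through Airtable's API:

```python
import json

# Before deleting an old record, serialize its fields to JSON and store the
# string in a long-text field on a compact archive row. The record shape and
# field names below are hypothetical, not Airtable's actual schema.
def archive_record(record):
    """Collapse a full record into a single JSON string for a long-text field."""
    return json.dumps({"id": record["id"], "fields": record["fields"]}, sort_keys=True)

def dearchive_record(blob):
    """Restore the record dict from the stored JSON, e.g. to re-create it via the API."""
    return json.loads(blob)

old = {"id": "rec123", "fields": {"Episode": "S01E04", "Hours": 12.5, "Status": "Aired"}}
blob = archive_record(old)       # one archived record costs one long-text cell
restored = dearchive_record(blob)
```

The trade-off: hundreds of archived records collapse into a handful of rows, but attachments don't survive and you lose the ability to filter/sort on the archived fields until you de-archive.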

Happy to chat more. Feel free to connect on LinkedIn @mattcossu or via email: matteo@crops-ag.com


Cherry_Yang2
  • Known Participant
  • February 15, 2026

Hi @sandra_k_o

I'm curious which tables/automations are the major culprits. I use Airtable to produce podcasts for my Airtable consulting business too. Likely at a smaller scale, but we've worked on lots of production/content systems.

We run into this sometimes, and it's often for a few reasons:

1. The base architecture wasn't set up properly, which creates extra records and redundant information. The solution is to clean up the architecture.
2. There's really old data. For example, if you have tasks/details from a couple of years ago, you can use an automation to capture the historical information into a text field and delete the actual records. E.g. I could capture just the 17 tasks from a project/episode from two years ago, because the likelihood that I'll actually need that info is low.
3. Syncs aren't built effectively, or people are bringing in too much extraneous information.

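For what it's worth, point 2 can be sketched as a small roll-up step. The task fields here ("Name", "Status", "Done") are hypothetical; in a real automation the summary would be written back to the parent project record before the task records are deleted:

```python
# Roll a project's old task records up into one text blob so the records
# themselves can be deleted. Field names ("Name", "Status", "Done") are
# illustrative, not a real schema.
def summarize_tasks(tasks):
    """One pipe-separated line per task, suitable for a long-text field."""
    return "\n".join(f"{t['Name']} | {t['Status']} | {t['Done']}" for t in tasks)

old_tasks = [
    {"Name": "Record intro", "Status": "Done", "Done": "2024-01-10"},
    {"Name": "Edit audio", "Status": "Done", "Done": "2024-01-12"},
]
# Store this on the parent project, then delete the individual task records.
summary = summarize_tasks(old_tasks)
```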
 

Happy to connect if you want me to have a look at your base. Cherry@claribase.com

 

Cherry

 


  • Author
  • New Participant
  • February 16, 2026

Thank you so much, everyone, for your very helpful and fast replies! The tool is still rather new to us, so getting these tangible, hands-on insights and suggestions is so valuable.

@MatteoCrOps I've connected with you on LinkedIn and would be happy to get more insight into how you're working with Airtable and how the summary fields work. @Emily.Ann I can loop you in if you're also interested in learning more about this?

@Blessing_Nuga thank you so much for this. I'm curious to understand your template model better. Does this approach limit the ability to compare data between years? I also thought about archiving data into CSV files, but that defeats our overall aim of having a single source of truth where we can make decisions based on historical data and quickly sort and compare across years. So storing data in separate CSV files doesn't work for us, as we need the 'live' data at all times.

 

@Cherry_Yang2 would love to continue the discussion and see what solutions you've built, given that you also work with content production. I'll follow up with you over email!