Yes, you can load a 50,000-record table — but under the Pro plan, that would have to be a one-table base, which might not be all that useful. If you want to test friskiness with a large base, version 3.0 of my data deduplication routines includes a 10,000-record demo base; you can copy that to your workspace and test it. (Don’t let Airtable hear me say this, but if your free account has aged beyond the two(?)-week Pro trial, create another account just to test.) That base includes some relatively involved calculations, and the initial load takes a little while, but once it’s loaded I think you’ll find it scrolls extremely rapidly.
The base also supports some dynamic updates that recalculate across records, so you can get an idea of how fast it responds. Try these tests:
- In the [Main] table, select the <Deduplication demo> view. Sort the table by {MatchKey}; this will group all potential duplicates together. For any record flagged as a possible duplicate, check the corresponding {Dupe OK} checkbox. IIRC, this causes all 10,000 records to be compared against a rollup field that collates a value from all 10,000 records. On my PC, this takes less than a second.
- In the [Main] table, select the <Deduplication demo> view. Sort the table by {MatchKey}; this will group all potential duplicates together. For any record flagged as a possible duplicate, select its {ID} field, press Ctrl-C to copy it, and paste it (Ctrl-V) into the {Master ID} field of any other record with the same {MatchKey}. On my PC this recalculation takes about 40 seconds for a 10,000-record base; however, as I recall, it triggers a much more massive recalculation for each record, including a cascading chain of nested lookups, so the total number of calculations is several orders of magnitude higher than for the first test. (If you’d rather drive these changes programmatically, see the sketch after this list.)
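If you’d like to trigger those same two changes from outside the Airtable UI (say, to repeat the timing tests), a sketch along these lines should work. This isn’t part of the demo base; it assumes the field names {Dupe OK}, {ID}, and {Master ID} as described above, that {Master ID} is a plain text field, and that the token, base ID, and record IDs are placeholders you’d replace with your own.

```python
# Sketch only: driving the two tests above through the Airtable REST API via pyairtable.
# Token, base ID, and record IDs are placeholders; field names are those of the demo base.
from pyairtable import Api

api = Api("patXXXXXXXXXXXXXX")                   # personal access token (placeholder)
main = api.table("appXXXXXXXXXXXXXX", "Main")    # the [Main] table in your copy of the base

# Test 1: tick the {Dupe OK} checkbox on a record flagged as a possible duplicate.
main.update("recAAAAAAAAAAAAAA", {"Dupe OK": True})

# Test 2: copy one record's {ID} value into another record's {Master ID} field
# (assumes {Master ID} is a plain text field, not a linked-record field).
dupe = main.get("recAAAAAAAAAAAAAA")
main.update("recBBBBBBBBBBBBBB", {"Master ID": dupe["fields"]["ID"]})
```

Keep in mind the API call returns as soon as the write is accepted; it doesn’t tell you when the dependent rollups and lookups have finished recalculating, so it’s most useful for firing the change while you watch the base in a browser tab.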
You can also mark-and-copy the 10,000 records a couple of times to create an even larger base. I’m not sure if I ever got to 50,000 records — since the [DeDupe] table contains only a single record, the [Main] table can officially contain 49,999 records under the Pro plan — but I know I played around with tables with 25,000 or 35,000 records. The performance penalty gets higher; perhaps at some point it becomes unacceptable.
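If mark-and-copy in the grid gets tedious, you could also grow the table with a quick script. The field names in SOURCE_FIELDS below are hypothetical placeholders; the demo’s computed fields like {MatchKey} can’t be written to, so you’d list only the plain, editable source fields of your copy.

```python
# Sketch only: duplicate the existing records to grow the [Main] table toward the plan limit.
# SOURCE_FIELDS is a hypothetical list -- put your copy's non-computed fields here;
# formula, rollup, and lookup fields can't be written and must be left out.
from pyairtable import Api

api = Api("patXXXXXXXXXXXXXX")                   # personal access token (placeholder)
main = api.table("appXXXXXXXXXXXXXX", "Main")

SOURCE_FIELDS = ["Name", "Address"]              # placeholders for the editable fields

existing = main.all(fields=SOURCE_FIELDS)        # fetch only the fields we'll re-create
copies = [rec["fields"] for rec in existing]
main.batch_create(copies)                        # pyairtable sends these in batches of 10 records
```

Each run roughly doubles the table, so one or two passes from 10,000 records takes you to 20,000 or 40,000, in the neighborhood of the table sizes mentioned above.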
Note that with the second test, above, it takes essentially the same amount of time to recalculate after a change to a single {Master ID} value as it does after a change to all 10,000 {Master ID} values. Accordingly, I provide an alternative method for updating multiple {Master ID}s where the target {ID} values are first copy-and-pasted into {Hold Master ID}, and then the entire {Hold Master ID} column is copy-and-pasted into the {Master ID} column. As you can see, this still takes about 40 to 45 seconds to perform, but it’s possible to update hundreds or thousands of records in that period of time.
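The same column-at-once idea carries over to the API: rather than updating {Master ID} one record at a time, you can submit all of the changes as a single batch. Again a sketch, with placeholder token, base ID, record IDs, and {ID} values, and again assuming {Master ID} is a plain text field:

```python
# Sketch only: update many {Master ID} values in one pass instead of one record at a time.
# The pairing of record IDs to {ID} values is up to you -- for example, grouped by {MatchKey}.
from pyairtable import Api

api = Api("patXXXXXXXXXXXXXX")                   # personal access token (placeholder)
main = api.table("appXXXXXXXXXXXXXX", "Main")

# (record_id, master_id_value) pairs -- placeholders for the assignments you've decided on.
updates = [
    ("recBBBBBBBBBBBBBB", "A-00017"),
    ("recCCCCCCCCCCCCCC", "A-00017"),
]

main.batch_update(
    [{"id": rec_id, "fields": {"Master ID": value}} for rec_id, value in updates]
)
```

Airtable still has to run the same cascade of recalculations afterward, so I wouldn’t expect the 40-to-45-second figure to shrink much; as with the {Hold Master ID} trick, the win is simply that hundreds or thousands of records get updated in that one pass.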
As always, Your Mileage May Vary — but the demo base linked above should give you a feel for how large bases perform.
Note: You don’t need to know anything about the routines to play with the demo base; the two tests above should be enough to step through the trial. However, if you need to see how closely your use case matches the one in the demo, the first link above goes to a Community post that in turn links to a ridiculous amount of written and video documentation…