I wish I had numbers for exactly how much longer these scripts are taking, but processes that used to take less than a minute now take 5-10 minutes. That's far past an acceptable slowdown for our use cases.
I’m wondering if anyone knows why this would be happening to us, across multiple bases and scripts. Thanks in advance!
Part of me thinks this might have to do with lookups, rollups, and counts. When I run one of these scripts, it looks like it won't advance to the next loop iteration until those fields have been populated, even though the link already exists. Not entirely sure if this is it.
First step - isolate the update process from the lookups, rollups, and counts. Perform tests with and without them.
Second step - create a metrics harness to gather actual performance data. I typically create a JSON object and update it as the process runs. At the end of the test run, I write the data out to a table so it's easier to review and analyze.
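A minimal sketch of that kind of harness, in plain JavaScript (all names here are illustrative, not from your script): timings accumulate in an object while the process runs, and at the end you'd write the rows out with something like `table.createRecordsAsync` (omitted here so the sketch stays self-contained).

```javascript
// Illustrative metrics harness: collects per-step timings in a plain
// object so slow spots show up in the data.
function createMetrics() {
    const metrics = { startedAt: Date.now(), steps: [] };
    return {
        // Wrap any async step to record how long it took.
        async time(label, fn) {
            const t0 = Date.now();
            const result = await fn();
            metrics.steps.push({ label, ms: Date.now() - t0 });
            return result;
        },
        // Summarize at the end of the run; in an Airtable script you
        // might write these rows to a metrics table instead.
        report() {
            return { totalMs: Date.now() - metrics.startedAt, steps: metrics.steps };
        },
    };
}

// Usage: time each phase separately (e.g. the update batches vs. the
// waits on lookup/rollup fields) so you can compare them.
async function demo() {
    const m = createMetrics();
    await m.time('fake batch', () => new Promise(r => setTimeout(r, 10)));
    return m.report();
}
```

Timing each phase separately is the point: if the waits on computed fields dominate, that tells you the slowdown isn't in your update logic.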
I assume the reference to "50" is your attempt to ensure that no more than 50 records are updated at a time.
Without knowing the scope and what updArr looks like, we can only speculate, but it's entirely possible the slowdown is completely out of your hands and squarely on Airtable's infrastructure. Connectivity can also make performance vary when nothing in your scripts has changed.
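For reference, the usual reason for that "50" is that Airtable's scripting `updateRecordsAsync` accepts at most 50 records per call, so updates get chunked. A common pattern looks like this (the chunking helper is plain JavaScript; `updArr` and `table` are assumed from your script):

```javascript
// Split an array of record updates into batches of at most `size`
// (50 is Airtable's per-call limit for updateRecordsAsync).
function chunk(records, size = 50) {
    const batches = [];
    for (let i = 0; i < records.length; i += size) {
        batches.push(records.slice(i, i + size));
    }
    return batches;
}

// In an Airtable script you would then await each batch in sequence:
// for (const batch of chunk(updArr)) {
//     await table.updateRecordsAsync(batch);
// }
```

Each `await` is a round trip to Airtable, so with sequential batches the total time scales with the number of batches - which is why infrastructure-side latency changes can blow up a script's runtime without any change on your end.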