Re: New Scripting API calls: Selecting records by recordId

Philip_Langdale
5 - Automation Enthusiast

Hi everyone!

I’m happy to announce that we’ve released a new scripting API capability today. It’s available in both the Scripting App and the ‘Run a script’ Automation action.

You can now select records by recordId, either by setting a new recordIds option when calling selectRecordsAsync or by using the new convenience method selectRecordAsync when you only need one record.

This will be especially useful in Automations, where you can now easily select a record based on a trigger or Find Records output.

Examples:

// Select multiple records by ID, e.g. from a trigger or Find Records step
const recordIds = input.config().recordIds;
const table = base.getTable('Team');
const recordsResult = await table.selectRecordsAsync({fields: table.fields, recordIds});
const records = recordsResult.records;
for (const record of records) {
    console.log(`Record name: ${record.name}`);
}

and

// Select a single record by ID with the new convenience method
const recordId = input.config().recordId;
const table = base.getTable('Team');
const record = await table.selectRecordAsync(recordId);
if (record) {
    console.log(`Record name: ${record.name}`);
}

We hope this new capability will help you with your scripts!

Thanks!

13 Replies

Thank you for joining the Airtable Community Forums to share this wonderful new feature.

Does this have a phased rollout? If so, do you have a timeline for when the rollout will be complete? These new features are currently not working for me in the Scripting App, only in an automation script.

And because I’m a documentation nut, here are links to the documentation: selectRecordsAsync and selectRecordAsync

I think you have an extra period between await and table.

Ashley_Brichter
4 - Data Explorer

This is fantastic. Thank you!

Given this new construct …

  • Is it the case that, under the covers, this record-selection variant actually iterates over all records to find the one specified?
  • Or is the underlying enhancement truly that: a lookup that follows an existing object pointer based on the record ID?

Be careful how you answer this because I know at least one person who will validate the claim by testing the performance under each approach. :winking_face:

Sorry, I sent the announcement too early; the Scripting App had not been updated at that point. That has since happened, so it now works in both environments.

The answer depends on which environment you’re in.

  • In Automations, it will only load the specific records that are specified.
  • In the Scripting App, it currently filters the full result set, so it’s no more performant than doing the filtering yourself (illustrated below), but there are ways we can optimise it in the future. For now it’s primarily there to maintain API consistency, given that the primary use cases are in Automations.
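
To make that concrete, here’s a rough sketch of the two currently-equivalent approaches in the Scripting App (the record ID is a placeholder):

// Both of these currently cost about the same in the Scripting App
const table = base.getTable('Team');
// New option: pass recordIds and let the API filter for you
const subset = await table.selectRecordsAsync({recordIds: ['recXXXXXXXXXXXXXX']});
// Manual equivalent: load everything, then filter yourself
const all = await table.selectRecordsAsync();
const filtered = all.records.filter(record => record.id === 'recXXXXXXXXXXXXXX');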

Oh wow, this post now seems near-prophetic. Either way, a quintessential OOP method like this one is bound to be a super useful addition to both environments.

If anyone’s looking for ideas on how to leverage this while minimizing async function calls, remember that table descriptions are already loaded when your script or automation starts. Given their generous 50k-character cap, you could easily use them to store maps of, say, module record IDs that you only load in specific scenarios.
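
A minimal sketch of that idea, assuming a hypothetical ‘Modules’ table whose description holds a JSON map from module names to record IDs:

// Hypothetical: the 'Modules' table description contains JSON such as
// {"moduleA": "recXXXXXXXXXXXXXX", "moduleB": "recYYYYYYYYYYYYYY"}
const table = base.getTable('Modules');
const moduleMap = JSON.parse(table.description);
// Load only the module record this particular run needs
const record = await table.selectRecordAsync(moduleMap['moduleA']);
if (record) {
    console.log(`Loaded module record: ${record.name}`);
}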

It’s not that circumventing the 50k-character limit inside the Scripting (and, I think, automation) block was difficult to do beforehand, but this simplifies things by making most memoization use cases unnecessary.

I’ve been accused of doing this from time to time, although it’s not reliable foresight that I possess per se. :winking_face:

Um, brilliant?

Indeed, this is where someone at Airtable needs to recognize that operationalizing the ability to bind cached, preprocessed indices of many types should be fundamental to all tables. Views are fundamentally similar in concept but offer no capacity for improved performance. Ideally, we should be able to create arbitrary JSON objects that are bound to a table. Such objects could be used for many things and updated as needed through distributed automations or as a result of other existing processes.

While 50k is not a lot, it’s also possible to dodge memoization entirely in some cases. Inverted indexes get fairly large when designed for multi-field full-text search, but for record IDs alone, that budget is probably enough for many cases.
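
For instance, a small hypothetical inverted index mapping tags to record IDs stays well under that cap and pairs naturally with the new recordIds option:

// Hypothetical: a 'Tasks' table description holding JSON such as
// {"urgent": ["recAAAAAAAAAAAAAA", "recBBBBBBBBBBBBBB"]}
const tasks = base.getTable('Tasks');
const index = JSON.parse(tasks.description);
const result = await tasks.selectRecordsAsync({recordIds: index['urgent']});
console.log(`Found ${result.records.length} urgent records`);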

In many languages we have script properties: key-value pairs designed to sustain persistence across operations. Why not dynamic table properties, right? Why limit table properties to fields and rows? Why be forced to create a table of table properties?

Descriptions for all fields and views are also available. However, none of them can be set with code; they can only be set manually, which makes maintaining any information in them difficult.

Both of you raise valid points. I should have kept my mouth shut for a few more days, but now I have a prototype to finish over the weekend. :joy: So stay tuned: it’s possible, probable even, that memoization is overkill for exactly what I had in mind here, but I’ve had reasonable, measurable success with it in the era before individual records could be polled by ID.

And the “what I’ve had in mind” part was largely inspired by posts like BlockChaining and, coincidentally, some side-loading exchanges between precisely you two (the first one I found). All of which made me start consciously viewing Airtable as the already-Turing-complete platform that it is: one whose complexity limitations are as arbitrary as they are arbitrarily circumventable, maybe even without things getting too hacky outside the comfort of its box.

Bottom line: after a year and about half a dozen prototypes, I think I’m close to a fairly end-to-end system for extending Airtable scripting logic. Or rather, for easily writing logic that’s far more complex than what the Scripting environment normally makes convenient, without actually leaving the scripting block or using any “outside” help other than maybe an extra table in a given base, whose records are treated as modules of sorts.

In limited testing so far, a code base north of 40k characters (so, even some way before surpassing the vanilla app limit) already benefits substantially from memoization, especially memoization of the staged variety. I’m not thrilled with that part of the code base yet; it remains a far bigger mess than it should be. Regardless, the basic approach I’ve had some success with so far is charting out the mini dependencies between code modules split across Airtable records at runtime, memoizing what’s needed, and proceeding to run from there.
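
As a rough sketch of that per-run memoization (table and field names here are hypothetical, not the actual prototype’s):

// Hypothetical 'Modules' table with a 'Source' long-text field holding code
const modules = base.getTable('Modules');
const moduleCache = new Map();
async function loadModule(recordId) {
    // Memoize: fetch each module record at most once per run
    if (moduleCache.has(recordId)) return moduleCache.get(recordId);
    const moduleRecord = await modules.selectRecordAsync(recordId);
    const source = moduleRecord ? moduleRecord.getCellValueAsString('Source') : '';
    moduleCache.set(recordId, source);
    return source;
}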

Reading the objects (sometimes even more elaborate code involving a few classes and the like) used to instantiate individual app prototypes has so far been straightforward as well, regardless of whether the JSON or full-fledged code is partially stored in the description of one or more tables, fields, or a combination thereof.

One cool pattern I want to say I planned, but have mostly just stumbled into, has to do with prototypes of somewhat user-facing apps: the kind expecting at least a minimal degree of UI interaction during runtime. So far, those have been trivial to manage, since awaiting user input gives me breathing room to bundle blocking memoization calls into Promise.all and Promise.race alongside the input prompt. Modern garbage collection has been taking care of the rest so far; how well, I can’t tell until I try scaling this to a level where it might actually matter one iota.
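
A rough sketch of that bundling pattern, reusing the hypothetical loadModule helper from the earlier sketch:

// Overlap a blocking memoization call with the wait for user input
const [choice, source] = await Promise.all([
    input.buttonsAsync('Pick a mode', ['Fast', 'Thorough']),
    loadModule('recXXXXXXXXXXXXXX'), // placeholder record ID
]);
console.log(`Chose ${choice}; module source is ${source.length} characters`);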

Not sure if the end result will be anywhere near as capable as even the most basic use case for something like on2Air, but the intent is to open-source it all anyway as a curiosity. Because at the end of the day, this was always just a “yes, but can it run Doom?” project, born of the suspicion that I might be dead before I learn enough React to churn out custom apps with grown-up code-splitting in production. Mostly a testament to how much I enjoy messing with the Scripting App + Airtable combo, even with all its limitations.

So, again, all valid points. I’ll do my best to follow up with some numbers after I write a few more test cases quantifying the large-scale benefits of memoization, or lack thereof, for this and similar use cases. Oh, and maybe an extra test case on how little any of this even matters now that we have granular control over which individual records are polled (not that I’m complaining).

Tl;dr: I won’t be applying for any cybersecurity roles in the near future and I take my procrastinating very seriously. :joy: