
Re: Filtering with Loops in Javascript - How to Avoid Insanity


As developers, we get roped into codifying a vast and seemingly endless escalation of conditional processing to find and filter records and related tables. Where does it end? Apparently, it doesn’t, because as soon as you create the perfect script that loops across many filtering constraints, the client (or management) is going to ask for a “slight” modification, or you will invariably discover an unanticipated case that leaves you painted into a corner with no egress.

This, like vastly many of our script projects, is fundamentally based on two essential tasks:

  1. Filter out the data that we need
  2. Perform computations on the filtered data

Typically, we use loops to achieve #1. #2 is easy if we get #1 right. But #1 is never simple and rarely performant given enough conditional complexity and records.

What’s better than this approach? Fewer loops for sure because they are slow, and they make our lives shi**y.

The Remedy…

The term “filter”, especially in a JavaScript sense, is subject to abuse. Why not exchange this term for “search”? Indeed, a JSON hash index or an actual inverted index is exactly the right chainsaw to mitigate the looping and provide direct access to lists of record IDs that match what the loop is intended to find.

I use hash indexes all the time, but increasingly, I use Lunr to establish inverted indexes that are very agile and extremely fast. Imagine finding 200 record IDs from a collection of 30,000 rows where three fields match specific values and the fourth is “fuzzy”. Looping requires lots of code and plenty of time. An inverted index can do this in 20 milliseconds.

And when you factor in the advantages of term presence, wildcard searches, and field-specific boosting, the loop that builds the index is time extremely well spent. Plus, you can create and cache-forward index documents independent of your business logic.
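To make that concrete, here’s a minimal sketch of the idea in Node.js using Lunr (lunrjs.com); the record fields, values, and query terms are hypothetical, not taken from the examples above.

const lunr = require("lunr");

const records = [
    { record_id: "rec001", customer_name: "ABC Transportation", city: "Denver", status: "active", notes: "fleet transport contract" },
    // ...the rest of the 30,000 rows
];

// Build the inverted index once - this is the single loop you pay for up front
const idx = lunr(function () {
    this.ref("record_id");        // each search hit resolves back to a record ID
    this.field("customer_name");
    this.field("city");
    this.field("status");
    this.field("notes");
    records.forEach(doc => this.add(doc));
});

// Three required field matches plus a fuzzy term (~1 allows one edit of slop)
const hits = idx.search("+customer_name:abc +city:denver +status:active transprt~1");
const matchingIds = hits.map(hit => hit.ref);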

One last comment concerning performance - there are many computations and metrics that we perform repeatedly for business analytics. Why not compute these metrics in advance and store them in the search index so that when the CEO wants to see how her business is performing, you’ve reduced the overhead to a simple search query rendered at the edge in 500ms?

While an inverted full-text index offers many advantages, there’s a simpler approach you can use to improve script performance and drastically simplify your code when nested loops seem the only viable route.

Hash Index

This is simply a JSON document intended to make lookups fast and simple; it’s literally a copy of a collection of items where each object key provides direct access to the data related to that key.

Let’s say you have a thousand records in a table and you need to loop through them to find the record whose name matches a specific customer name - and you need to perform this looping lookup repeatedly, nested inside another enumeration. You have three options:

  1. Perform a FilterByFormula approach for each lookup; ugly, slow, convoluted.
  2. Loop through every record seeking a match; very slow and CPU-intensive, added code complexity.
  3. Perform a “seek” into a pre-built hash index; almost instant, elegant.

Approach 3 requires a single pass across all the records containing the customer names (that’s the downside). That pass creates a collection of objects in a single JSON document (e.g., oCustomerNameHash, global to your app) that looks something like this:

{ "ABC Transportation" : {
   "customer_id" : "1001",
   "customer_contact" : "Jimmy Johns",
   other attributes needed for the index...
   }
}
... objects representing rest of the customers ...

When you want to know the contact name for “ABC Transportation”, the code is simple, effortless, and requires only 6 milliseconds.

let customerContact = oCustomerNameHash["ABC Transportation"].customer_contact;

Even if the hash index has 50,000 items in it, the response time will be sub-20 milliseconds. This single line of code eliminates a nested loop and the complexity of doing so. It is possible because you performed the loop in advance of needing to perform the lookups. Essentially, you are caching forward an intelligent and performant class of data objects that can be reused throughout your script.

This makes it possible to vastly compress the code and complexity while also reducing the loop iterations to a single pass across that data instead of the vastly many passes required by nested filtering loops.

With a hash index like this, you will often find multiple places where it can be utilized, so it has the advantage of being ready to perform lookups for the duration of your script process. Furthermore, you may discover you need to perform lookups using additional keys, which suggests multiple hash indexes for the same data.

Imagine you also wanted to look up customer name by customer ID, or customer contact by customer ID. That would seem to require a new hash index such as oCustomerIDHash. But instead of building a second hash, this added seek capability can be built into the same constructor loop, so there’s no need to loop once for every index dimension you might need.

Example… you can mix both indexed dimensions into one hash index document - this time we’ll call it oCustomerHash. By simply creating two keys for each customer - one for the customer name, and one for the customer ID - we have begun to create a complex indexing model. The lookup process remains constant-time, and the agility is magnified.

The objects in the index would include two entries for each customer.

{ "ABC Transportation" : {
   "customer_id" : "1001",
   "customer_contact" : "Jimmy Johns",
   other attributes needed for the index...
   }
}

{ "1001" : {
   "customer_name" : "ABC Transportation",
   "customer_contact" : "Jimmy Johns",
   other attributes needed for the index...
   }
}

Now you can query by customer name to get customer ID or by customer ID to get customer name instantly - no nested looping required.

let customerName = oCustomerHash["1001"].customer_name;
let customerID   = oCustomerHash["ABC Transportation"].customer_id;

Building the Index

This is one approach that can be used to construct the index from an existing collection of records.

let oCustomerHash = {};
// oRecords is the collection of table records; with for...of, each
// iteration yields the record itself, so we call getCellValue() on it directly
for (let record of oRecords)
{
    // add the object keyed by customer name
    oCustomerHash[record.getCellValue("Customer Name")] = {
        "customer_id"      : record.getCellValue("Customer ID"),
        "customer_contact" : record.getCellValue("Customer Contact"),
    };
    // add the object keyed by customer id
    oCustomerHash[record.getCellValue("Customer ID")] = {
        "customer_name"    : record.getCellValue("Customer Name"),
        "customer_contact" : record.getCellValue("Customer Contact"),
    };
}
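
For context, here is one way oRecords might be obtained in an Airtable script; the table and field names are placeholders:

let table = base.getTable("Customers");
let queryResult = await table.selectRecordsAsync({
    fields: ["Customer Name", "Customer ID", "Customer Contact"],
});
let oRecords = queryResult.records;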

It is a collection of table records returned from Airtable’s SDK. You can glean what this is by looking at this reference to one of the Airtable fields:

record.getCellValue("Customer ID")

Rose_Haft1

Hi @Bill.French - I managed to get the hash to work, but now I am getting a timeout failure in Airtable. Is it possible to create the hash in another way?

Is it possible the timeout is unrelated to the manner in which the hash index is created?

Have you benchmarked the time it takes the script to create the hash index? In my tests and the examples I published, even a 50,000 item hash index takes just a few seconds to build. As such, I’m sceptical that the hash indexing process is the cause of the timeouts.

And there are other mitigating factors when tackling a timeout issue in an Airtable script. Is this code running in an automation step? Is it an inbound webhook? A script block?

There are a couple of if statements in there. I will see what I can move into another process. Thanks!

“IF” statements aren’t going to cause any kind of delay. The most likely delays are from reading/writing data to/from Airtable. Such operations require using the “await” keyword, and depending on the complexity, frequency, and placement of such operations, those could be the cause of the delay leading to the timeout. For example, if you’re running a loop and are querying a table’s records inside the loop, that will eat up a ton of time because you’re waiting for the query results once for each loop iteration. A more efficient way is to run the query once before the loop begins, and then use the result inside the loop.
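
Here’s a rough sketch of the difference, with placeholder table and field names:

// Slow: awaiting a query inside the loop, once per iteration
// for (let item of items) {
//     let query = await base.getTable("Tasks").selectRecordsAsync();
//     // ...
// }

// Faster: await the query once, then loop over the in-memory results
let table = base.getTable("Tasks");
let query = await table.selectRecordsAsync({ fields: ["Status"] });
for (let record of query.records) {
    let status = record.getCellValue("Status"); // no awaits inside the loop
    // ...
}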

These are just general tips, of course. If you could share your actual code, we could probably see directly what’s leading to the timeout.

Unlikely to help but no one can be certain without seeing the actual process - even a process narrative would help.

Ideally, you need to benchmark the segments of the process to see where the delays and slowness really exist. Just log the current time/seconds to the console throughout the script to see where resources take the longest.
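
For example, something as simple as this will expose the slow segments (the labels are arbitrary):

let t0 = Date.now();
// ... build the hash index ...
console.log(`Hash index build: ${Date.now() - t0} ms`);

t0 = Date.now();
// ... the rest of the process ...
console.log(`Main processing: ${Date.now() - t0} ms`);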

I’ll bet a fine latte the hash index is not what’s slowing this down to the point of time-out. 😉

I agree that the creation of the hash index should be very quick. If the building of the hash is taking more than a couple of seconds at most, it probably isn’t being built efficiently.

Does the script run in scripting app? How long does it take in scripting app?

Nick_Richards

Thanks for the info @Bill.French. If you had to update a bunch of records, would you use the hash index in some way? Once a quarter, I need to update a single field for all records but I hit the timeout before it updates them all. How can I speed this up? My script is 8 lines long, one for loop across all records and updating a single field. Doesn’t seem like it should take this long.

Oh, and it’s just 150 records, so not a big dataset either.

You are probably updating the records one at a time instead of in batches. If you want to improve your scripting skills, you should learn to update records in batches.
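
For example, here is one common batching pattern; the table name, field name, and new value below are placeholders:

let table = base.getTable("Customers");
let query = await table.selectRecordsAsync({ fields: ["Status"] });

// prepare every update first...
let updates = query.records.map(record => ({
    id: record.id,
    fields: { "Status": "Reviewed" },
}));

// ...then write them 50 at a time (the per-call limit for updateRecordsAsync)
while (updates.length > 0) {
    await table.updateRecordsAsync(updates.slice(0, 50));
    updates = updates.slice(50);
}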

But the easiest way to deal with it is probably to convert the script to run in the Scripting Extension instead of an automation. The Scripting Extension doesn’t have the same time limits. If you are worried that you will forget to run the script, have the automation send you a nagging email reminder to get it done.

Hashes are most useful when you need to find records based on a field value. They can speed things up if a script takes a lot of time searching for records. But if your script is taking most of its time writing updates, a hash won’t help that part.