Script Exceeded Execution Time Limit

Topic Labels: Automations
Solved
Francoloco23
5 - Automation Enthusiast

Good morning team,

I'm running an automation script on a base with 82,000 records. It used to work, but since May 3, 2023 it has been failing with this error. This is the first time it has happened on my base.

Just a bit of background: my Airtable base is also connected to Softr.

Here is the script:


// Converts settings selections to variables
let sourceTable = base.getTable("Bulk Upload");
let selectedField = sourceTable.getField("Attachments");
let destinationTable = base.getTable("Bulk Upload Extract");
let nameField = destinationTable.getField("File name");
let attachmentField = destinationTable.getField("Attachments");


let recordinput = input.config();
let bulkrecords = await sourceTable.selectRecordsAsync();
let chosenrecord = bulkrecords.records.filter(test => test.id === recordinput.record_id);

let attachments = chosenrecord[0].getCellValue(selectedField); // Attachments in the above record


if (attachments) {

    // Creates an array based on the attachments to then create the individual records
    let attachmentRecords = attachments.map(a => ({
        fields: {
            [nameField.id]: a.filename,
            [attachmentField.id]: [{ url: a.url }]
        }
    }));

    // Batches the record creation
    while (attachmentRecords.length > 0) {
        await destinationTable.createRecordsAsync(attachmentRecords.slice(0, 50));
        attachmentRecords = attachmentRecords.slice(50);
    }
} 

Can someone help me fix this error?

Thank you in advance!

Accepted Solution
Ben_Young1
11 - Venus

Hey @Francoloco23

Try this out:

const { record_id } = input.config();
const sourceTable = base.getTable("Bulk Upload");
const destinationTable = base.getTable("Bulk Upload Extract");

const selectedField = sourceTable.getField("Attachments");
const nameField = destinationTable.getField("File name");
const attachmentField = destinationTable.getField("Attachments");

// Fetch only the record that triggered the automation, instead of the whole table
let record = await sourceTable.selectRecordAsync(record_id);
let recordAttachments = record.getCellValue(selectedField);

if (recordAttachments) {

    let newRecords = recordAttachments.map(attachment => ({
        fields: {
            [nameField.id]: attachment.filename,
            [attachmentField.id]: [{ url: attachment.url }]
        }
    }));

    // createRecordsAsync accepts at most 50 records per call, so write in batches
    while (newRecords.length > 0) {
        await destinationTable.createRecordsAsync(newRecords.slice(0, 50));
        newRecords = newRecords.slice(50);
    }
}
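
Note: this assumes your automation's script action defines an input variable named record_id mapped from the triggering record, which your original script already relies on via input.config().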

The reason your script was exceeding the maximum execution time is that you were doing an indiscriminate query for all 82,000 records in your table. The number of records isn't the problem by itself. The real issue is that you weren't just querying every record; you were also pulling every field's cell values, which is a massive amount of data to fetch just to throw away 99.9% of it in your filter.

You can optimize the script by using the selectRecordAsync method instead of selectRecordsAsync.
Since you already have the ID of the record you're looking for, there's no need to fetch every record when you can query that one record directly.
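
If you ever do need to query more than one record, you can still keep the payload small by passing the fields option to selectRecordsAsync so it only loads the cell values you actually read. A minimal sketch, reusing the table and field names from this thread:

let table = base.getTable("Bulk Upload");

// Load only the "Attachments" cell values; every other field is skipped
let query = await table.selectRecordsAsync({
    fields: ["Attachments"],
});

// Keep just the records that actually have attachments
let withAttachments = query.records.filter(r => r.getCellValue("Attachments"));
console.log(`${withAttachments.length} records have attachments`);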

 
