
Re: Script exceeding the 30-second runtime

Solved
Yannick_Flink
5 - Automation Enthusiast

Hello everyone, 

We created the following script yesterday. It runs and works, but it hits the 30-second timeout every time. Is there a way to make this script run more smoothly or efficiently, or could we loop the automation as a workaround?

// Fetch records from TABLE1
let table1 = base.getTable('Supplier summary');
let result1 = await table1.selectRecordsAsync();

// Fetch records from TABLE2
let table2 = base.getTable('ERP_suppliers_hub');
let result2 = await table2.selectRecordsAsync();

// Loop through TABLE2 records
for (let record2 of result2.records) {
  let SupplierId = record2.getCellValueAsString('supplier_id');
  let key = record2.getCellValueAsString('supplier_id_key_formula');
  
  // Check if the key is available in TABLE1
  let keyRecord = result1.records.find(record1 => record1.getCellValueAsString('Supplier_id') === key);
  
  // Check if the vendor ID is available in TABLE1
  let vendorRecord = result1.records.find(record1 => record1.getCellValueAsString('Supplier_id') === SupplierId);
  
  // If keyRecord is available, link the records
  if (keyRecord) {
    await table2.updateRecordAsync(record2, {
      'supplier_id_key1_linked': [{ id: keyRecord.id }]
    });
  }
  // If vendorRecord is available, link the records
  else if (vendorRecord) {
    await table2.updateRecordAsync(record2, {
      'supplier_id_key1_linked': [{ id: vendorRecord.id }]
    });
  }
  // If none is available, remove the link
  else {
    await table2.updateRecordAsync(record2, {
      'supplier_id_key1_linked': []
    });
  }
}



Thank you for any advice.

1 Solution

Accepted Solutions

Hi,
Indeed, the problem was calling find inside the loop.
find also has a loop 'under the cover'. If you have, for example, 10k records in both tables, each find performs the same 10k iterations with a comparison, inside an outer loop that itself runs 10k times, so you end up doing 10k × 10k = 100 million compare operations.
When you created the Map object outside the loop, that work executed only one time, and each lookup afterwards is constant time.
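The difference can be sketched outside Airtable with plain arrays (the rows and field names here are made up for illustration, not the Airtable API):

```javascript
// Toy data standing in for TABLE1 records
const table1Rows = [
  { id: 'recA', supplierId: 'S-001' },
  { id: 'recB', supplierId: 'S-002' },
];

// Build the lookup ONCE: supplierId -> record id.
// After this, each .get() is a constant-time hash lookup instead of
// a full .find() scan over all of TABLE1 on every loop iteration.
const byId = new Map(table1Rows.map(r => [r.supplierId, r.id]));

console.log(byId.get('S-002')); // 'recB'
console.log(byId.get('S-999')); // undefined -> no match, clear the link
```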

The second issue was updating one record at a time. Doing it in batches of 25 is of course much faster, as is using the maximum allowed batch size of 50 records. When the examples were written, slice was used. Classic example:

while (createArray.length > 0) {
    await peopleTable.createRecordsAsync(createArray.slice(0, 50));
    createArray = createArray.slice(50);
}

But now the splice method is supported, which does both jobs at once: it removes X records from the existing array and returns them to the caller, so no new variable assignment is needed. It's like a process with 100% recycling, environment-friendlier because every fraction is used in the process 😀
If the array holds fewer than 50 records, splice just removes whatever remains and leaves the array empty.
Also, if (x > 0) is the same here as if (x), because 0 is a 'falsy' value, while 1, 2, 3, … are treated as true.
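The splice behavior is easy to see on a plain array of dummy items:

```javascript
// 120 dummy items to batch; splice(0, 50) removes up to 50 items from the
// front of the array AND returns them, so the queue shrinks each pass and
// the while condition (queue.length) eventually becomes falsy (0).
let queue = Array.from({ length: 120 }, (_, i) => i);
const batchSizes = [];
while (queue.length) {
  batchSizes.push(queue.splice(0, 50).length);
}
console.log(batchSizes);   // [50, 50, 20] -- last batch holds the remainder
console.log(queue.length); // 0 -- the array is empty afterwards
```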

In the end, your script might look something like this (I can't test it, but it should work and do the same):

 

// Fetch records from TABLE1
let table1 = base.getTable('Supplier summary');
let result1 = await table1.selectRecordsAsync();
// Fetch records from TABLE2
let table2 = base.getTable('ERP_suppliers_hub');
let result2 = await table2.selectRecordsAsync();
// Build a map for quick lookups in TABLE1
let keyMap = new Map(result1.records.map(rec=>[rec.getCellValueAsString('Supplier_id'), rec.id]));
// Build the update objects, then write them in batches of 50
let keyRec = rec => keyMap.get(rec.getCellValueAsString('supplier_id_key_formula'));
let upd = r => ({ id: r.id, fields: { 'supplier_id_key1_linked': keyRec(r) ? [{ id: keyRec(r) }] : [] } });
let updates = result2.records.map(upd);
while (updates.length) await table2.updateRecordsAsync(updates.splice(0, 50));

 




3 Replies
Yannick_Flink
5 - Automation Enthusiast

We solved it by adding batch sizes to the script, which reduced the traffic drastically.

// Fetch records from TABLE1
let table1 = base.getTable('Supplier summary');
let result1 = await table1.selectRecordsAsync();

// Fetch records from TABLE2
let table2 = base.getTable('ERP_suppliers_hub');
let result2 = await table2.selectRecordsAsync();

// Build a map for quick lookups in TABLE1
let keyMap = new Map(result1.records.map(record1 => [record1.getCellValueAsString('Supplier_id'), record1]));

// Split updates into smaller batches
const batchSize = 25; // Adjust the batch size as needed
const batches = [];
let currentBatch = [];

// Loop through TABLE2 records
for (let record2 of result2.records) {
  let SupplierId = record2.getCellValueAsString('supplier_id');
  let key = record2.getCellValueAsString('supplier_id_key_formula');
  
  // Check if the key is available in TABLE1
  let keyRecord = keyMap.get(key);
  
  // Build an update object for batch update
  currentBatch.push({
    id: record2.id,
    fields: {
      'supplier_id_key1_linked': keyRecord ? [{ id: keyRecord.id }] : []
    }
  });

  // Check if the batch size is reached, then start a new batch
  if (currentBatch.length === batchSize) {
    batches.push(currentBatch);
    currentBatch = [];
  }
}

// Add the last batch (which may be less than the batchSize)
if (currentBatch.length > 0) {
  batches.push(currentBatch);
}

// Perform batch updates using Promise.all for each batch
for (let batch of batches) {
  await table2.updateRecordsAsync(batch);
}

Hey @Yannick_Flink,

Here's another way to address batch updating records:

const tables = {
  summary: base.getTable('Supplier summary'),
  suppliers: base.getTable('ERP_suppliers_hub')
};

const fields = {
  summary: {
    supplierId: 'Supplier_id'
  },
  suppliers: {
    supplierId: 'supplier_id',
    keyFormula: 'supplier_id_key_formula',
    summaryLink: 'supplier_id_key1_linked'
  }
}


const { summaryRecords, supplierRecords } = await Promise.all([
  tables.summary.selectRecordsAsync({ fields: Object.values(fields.summary) }),
  tables.suppliers.selectRecordsAsync({ fields: Object.values(fields.suppliers) })
])
  .then((results) => ({ summaryRecords: results[0].records, supplierRecords: results[1].records }));


// .find() is synchronous, so no async callbacks or Promise.all are needed
// here; a plain .map() builds the update objects directly.
let updates = supplierRecords.map((r) => {

    let supplierId = r.getCellValueAsString(fields.suppliers.supplierId);
    let key = r.getCellValueAsString(fields.suppliers.keyFormula);

    let keyMatch = summaryRecords.find((s) => s.getCellValueAsString(fields.summary.supplierId) === key);
    let idMatch = summaryRecords.find((s) => s.getCellValueAsString(fields.summary.supplierId) === supplierId);

    if (keyMatch) {
      return { id: r.id, fields: { [fields.suppliers.summaryLink]: [{ id: keyMatch.id }] }};
    } else if (idMatch) {
      return { id: r.id, fields: { [fields.suppliers.summaryLink]: [{ id: idMatch.id }] }};
    } else {
      return { id: r.id, fields: { [fields.suppliers.summaryLink]: [] } };
    }
});

while (updates.length) {
  await tables.suppliers.updateRecordsAsync(updates.slice(0, 50))
  updates = updates.slice(50);
}
