Jan 12, 2024 12:53 AM - edited Jan 12, 2024 01:17 AM
Hello everyone,
We created the following script yesterday, and it runs and works, but we hit the 30-second timeout issue every time. Is there a way to make this script run more smoothly or efficiently, or what could be a solution to loop the automation?
Thank you for any advice
Jan 12, 2024 02:04 AM
We solved it by adding batch sizes to the script, which reduced the traffic drastically.
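A minimal sketch of that batching pattern, assuming an `updates` array of `{id, fields}` objects and a `table` handle from `base.getTable(...)` (placeholder names, not the actual script):

// updateRecordsAsync accepts at most 50 records per call, so write in chunks
while (updates.length > 0) {
    await table.updateRecordsAsync(updates.slice(0, 50)); // send the next batch of up to 50
    updates = updates.slice(50); // drop the records that were just written
}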
Jan 12, 2024 06:46 PM
Hey @Yannick_Flink,
Here's another way to address batch updating records:
const tables = {
    summary: base.getTable('Supplier summary'),
    suppliers: base.getTable('ERP_suppliers_hub')
};
const fields = {
    summary: {
        supplierId: 'Supplier_id'
    },
    suppliers: {
        supplierId: 'supplier_id',
        keyFormula: 'supplier_id_key_formula',
        summaryLink: 'supplier_id_key1_linked'
    }
};
// Fetch both tables in parallel, requesting only the fields the script uses
const { summaryRecords, supplierRecords } = await Promise.all([
    tables.summary.selectRecordsAsync({ fields: Object.values(fields.summary) }),
    tables.suppliers.selectRecordsAsync({ fields: Object.values(fields.suppliers) })
]).then((results) => ({ summaryRecords: results[0].records, supplierRecords: results[1].records }));
// Build one update per supplier record; Promise.all resolves the async callbacks
let updates = await Promise.all(supplierRecords.map(async (r) => {
    let supplierId = r.getCellValueAsString(fields.suppliers.supplierId);
    let key = r.getCellValueAsString(fields.suppliers.keyFormula);
    // Try to match on the key formula first, then fall back to the supplier id
    let results = await Promise.all([
        summaryRecords.find((s) => s.getCellValueAsString(fields.summary.supplierId) === key),
        summaryRecords.find((s) => s.getCellValueAsString(fields.summary.supplierId) === supplierId)
    ]);
    if (results[0]) {
        return { id: r.id, fields: { [fields.suppliers.summaryLink]: [{ id: results[0].id }] } };
    } else if (results[1]) {
        return { id: r.id, fields: { [fields.suppliers.summaryLink]: [{ id: results[1].id }] } };
    } else {
        return { id: r.id, fields: { [fields.suppliers.summaryLink]: [] } };
    }
}));
// Write the updates in batches of 50, the maximum updateRecordsAsync accepts
while (updates.length) {
    await tables.suppliers.updateRecordsAsync(updates.slice(0, 50));
    updates = updates.slice(50);
}
Jan 12, 2024 09:24 PM
Hi,
indeed, the problem was putting the `find` inside a loop.
`find` also has a loop 'under the cover'. So if you have, for example, 10k records in both tables, each `find` does the same 10k iterations with a compare, inside your own 10k-iteration loop,
and you end up doing 10k * 10k = 100 million compare operations in total.
When you create the Map object outside the loop, it is built only once, and each lookup after that is a single operation.
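A minimal sketch of the difference, using hypothetical arrays `tableA` and `tableB` whose elements share a `key` property (placeholder names, not from the scripts in this thread):

// O(n * m): for every element of tableB, find() rescans tableA from the start
let slow = tableB.map(b => tableA.find(a => a.key === b.key));

// O(n + m): build the Map once, then every lookup is effectively constant time
let byKey = new Map(tableA.map(a => [a.key, a]));
let fast = tableB.map(b => byKey.get(b.key));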
the second issue was updating one record at a time. Of course, doing this in batches of 25 is faster, and so is using the maximum allowed size, 50 records. When the examples were written, `slice` was used. Classic example:
while (createArray.length > 0) {
    await peopleTable.createRecordsAsync(createArray.slice(0, 50));
    createArray = createArray.slice(50);
}
But now the `splice` method is supported, which does both jobs at once: it cuts X records out of the existing array and returns them to the caller. No new variable assignment is needed; it's like a process with 100% recycling, environmentally friendlier because every fraction is used in the process 😀
If the array has fewer than 50 elements left, `splice` just cuts whatever remains and empties the array.
Also, `if (x > 0)` is the same here as `if (x)`, because 0 is a 'falsy' value, while 1, 2, 3, ... count as true.
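A quick illustration of both points on a throwaway array (plain JavaScript, unrelated to the tables above):

let queue = [1, 2, 3];            // fewer than 50 elements left
let batch = queue.splice(0, 50);  // batch is [1, 2, 3]; queue is now []
while (queue.length) {            // 0 is falsy, so the loop exits immediately
    queue.splice(0, 50);          // never runs: queue is already empty
}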
In the end, your script might be something like this (I can't test it, but it should work and do the same):
// Fetch records from TABLE1
let table1 = base.getTable('Supplier summary');
let result1 = await table1.selectRecordsAsync();

// Fetch records from TABLE2
let table2 = base.getTable('ERP_suppliers_hub');
let result2 = await table2.selectRecordsAsync();

// Build a map for quick lookups in TABLE1: Supplier_id -> record id
let keyMap = new Map(result1.records.map(rec => [rec.getCellValueAsString('Supplier_id'), rec.id]));

// Look up each TABLE2 record's key in the map and build the update objects
let keyRec = rec => keyMap.get(rec.getCellValueAsString('supplier_id_key_formula'));
let upd = r => ({ id: r.id, fields: { 'supplier_id_key1_linked': keyRec(r) ? [{ id: keyRec(r) }] : [] } });
let update = result2.records.map(upd);

// Write in batches of 50; splice drains the array as it goes
while (update.length) await table2.updateRecordsAsync(update.splice(0, 50));