Nov 12, 2020 07:22 AM
I’m currently trying to clone one Airtable base into a different base. When I run my code, it eventually gets timed out after copying anywhere from 100 to 700 rows. Is there anything I can do to ensure I don’t get disconnected from Airtable?
```javascript
await airTS('Applicants').select({
    // Select up to maxRecords from the "All applicants" view
    maxRecords: maxRecords,
    view: 'All applicants'
}).eachPage(async function page(records, fetchNextPage) {
    // Wait for every record on this page before moving on; a bare
    // records.map() would kick off all the work without awaiting it
    await Promise.all(records.map(async function (record, count) {
        // count resets to 0 on each page, so offset it by the page number
        // (pageCount is assumed to be tracked outside this snippet)
        var keyValue = count.toString();
        if (pageCount > 0) {
            keyValue = (pageCount * 100 + count).toString();
        }

        // GCP Storage: copy the resume attachment into our own bucket
        const bucket = storage.bucket(bucketname);
        var theURL = record.get('Resume');
        var bigtableResumeURL;
        if (!theURL) {
            return;
        }
        theURL = theURL[0]['url'];
        // Build a unique file name for GCP Storage and the destination record
        var lastPartOfFileName = theURL.substring(theURL.lastIndexOf('/') + 1);
        var airtableId = theURL.split('/')[4];
        var gcpFilename = airtableId + lastPartOfFileName;
        bigtableResumeURL = 'https://storage.googleapis.com/pave-resumes/' + gcpFilename;

        // Wrap the download in a promise so this record's work is actually awaited
        await new Promise(function (resolve, reject) {
            https.get(theURL, function (response) {
                if (response.statusCode !== 200) {
                    return resolve();
                }
                var file = fs.createWriteStream(gcpFilename);
                response.pipe(file);
                file.on('finish', async function () {
                    bucket.upload(gcpFilename, {
                        destination: gcpFilename
                    }, (err, file) => {
                        if (err) console.error(err);
                    });
                    var email = record.get('Email');
                    // Map each linked role to its record ID in the Positions table
                    var role = record.get('Role');
                    var roleArray = [];
                    if (role !== null && role !== undefined) {
                        console.log(keyValue + ': ' + role + ' ' + role.length);
                        for (var indexRole = 0; indexRole < role.length; indexRole++) {
                            roleArray.push(positionsDictionary[role[indexRole]]);
                        }
                    } else {
                        roleArray = null;
                    }
                    await talentBase('Complete').create([
                        {
                            "fields": {
                                'First Name': record.get('First Name'),
                                'Last Name': record.get('Last Name'),
                                'Email': email,
                                'Role': roleArray,
                                'Eligible': record.get('Eligible'),
                                'Resume': [{
                                    'url': bigtableResumeURL
                                }],
                                'LinkedIn': record.get('LinkedIn') || '',
                                'Type': record.get('Type') || null,
                                'Start Date': record.get('Start Date') || '',
                                'Salary': record.get('Salary') || null,
                                'Phone': record.get('Phone') || '',
                                //'Date Applied': record.get('Date Applied'),
                                'Source': record.get('Source') || null,
                                'Priority': record.get('Priority') || null,
                                'Stage': record.get('Stage') || null,
                            },
                        },
                    ], { "typecast": true }, function (err, records) {
                        if (err) {
                            console.error(err);
                        }
                    });
                    file.close();
                    resolve();
                });
                filesToDeleteLocally.push('./' + gcpFilename);
            }).on('error', reject);
        });
    }));
    // Ask for the next page only after this page is done
    fetchNextPage();
});
```
Nov 17, 2020 12:35 PM
Hey @Draden_Gaffney,
It’s possible you’re hitting request/volume limits. I believe Airtable throttles the API at 5 requests per second per base, and your code spins out a ton of requests at once, so it’s probably getting throttled.
Perhaps the cheapest way to obey the throttle is to just add a sleep before you fetch each subsequent page.
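Something along these lines, just as a rough sketch — the sleep helper and the 250 ms delay are illustrative, not from your code:

```javascript
// Hypothetical helper: pause between requests to stay under the ~5 req/s limit
function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

await airTS('Applicants').select({
    view: 'All applicants'
}).eachPage(async function page(records, fetchNextPage) {
    for (const record of records) {
        // ...download the resume and create the destination record here...
        await sleep(250); // ~4 requests/second leaves some headroom
    }
    fetchNextPage();
});
```

Processing records one at a time with a delay is slower than firing them all off concurrently, but it keeps you safely under the throttle for a one-off migration like this.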
Best,
Anthony