
I am sure this has been asked before; I have searched high and low and have found no solution for the following, even with Zapier.



I have a .csv file that I want to import directly into Airtable.



The .csv was created programmatically (in Node.js) and exists on a Google Cloud VM (Red Hat Linux).



There is nothing in the Airtable API that allows direct insertion of a full CSV as a new table in an existing base.



I want to name the new table in the existing base the same as the CSV file, without the .csv extension.



If anyone has actually done this successfully and smoothly, please respond. The only articles I can find on importing a CSV into Airtable involve a human clicking in the browser.



To my knowledge, this part must be done manually. I’m not aware of any API-available option—including using tools like Zapier or Integromat—for creating tables. You can only add or modify data in existing tables. That aside, both of those listed services can parse CSV data, so once the table has been manually made, either service can be set up to import the data.






Thank you for the info.



Even if I have a pre-existing table created manually:



I’m not seeing that Zapier can do this (get all rows in my CSV into Airtable). Zapier looks to be a tool connecting two web-app endpoints.



A CSV on my local file system is not presented as an integration endpoint to choose when I go through the Zapier wizard.



After I generate a CSV on the cloud VM (from a database SELECT), I’m not sure how Zapier can help or what role it could play. The data source is the CSV on the VM’s file system, and Zapier has no access to it.



I’m not sure about Integromat but will investigate.



I’m not seeing any solution that Zapier would provide in this case, but I see it mentioned quite often.




Admittedly I don’t have experience with cloud VM systems, but if it can be accessed online—and the “cloud” part of “cloud VM” leads me to believe that it can—my gut says that either Zapier or Integromat could retrieve the data and insert it into Airtable. I just don’t know the specifics, so I’ll have to defer to others to chime in on that front.






It can’t be accessed by Zapier until it is exposed through a web service. Nothing but roadblocks between your data and Airtable.






If the table already exists (with fields that match the file or, at least, fields that can be mapped to it) and you are comfortable using the Airtable API, you could easily create a process/script on your VM that iterates through the CSV file and loads it into Airtable. This could be written in JS, Python, just about anything. The API has two record-create methods: one for single records, the other for a group of up to 10 records.
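For illustration, here is a minimal Python sketch of the batch variant (the base ID, table name, field names, and CSV file below are all placeholders, not anything from this thread):

import csv
import requests

post_url = 'https://api.airtable.com/v0/YOUR_APP_ID/YOUR_TABLE'  # placeholder IDs
post_headers = {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
}

# Build one record dict per CSV row (field names are hypothetical)
with open('data.csv') as f:
    records = [{'fields': {'Field1': row[0], 'Field2': row[1]}} for row in csv.reader(f)]

# The batch create endpoint accepts up to 10 records per request
for i in range(0, len(records), 10):
    resp = requests.post(post_url, headers=post_headers, json={'records': records[i:i + 10]})
    print(resp.status_code)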



As you already have a process that produces the CSV file using Node, could you or your developer extend this and push the records into AT? Arguably, there is no need for the CSV file (unless it serves some other purpose) as the records could be POSTed straight from the Node app to AT.



JB


For those who are interested, here’s an example Python script that imports records one by one (it takes only a few seconds for 100 records):



import requests
import csv

post_url = 'https://api.airtable.com/v0/YOUR_APP_ID/CSVImport'
post_headers = {
    'Authorization': 'Bearer YOUR API KEY',
    'Content-Type': 'application/json'
}

f = open('MOCK_DATA.csv')
csv_f = csv.reader(f)

for row in csv_f:
    name = row[0]
    number = int(row[1])  # cast so the number field receives an integer, not a string
    role = row[2]

    data = {
        "fields": {
            "Field1": name,
            "Field2": number,
            "Field3": role
        }
    }

    print(post_url)
    print(data)

    # One POST per record; the API also supports batches of up to 10
    post_airtable_request = requests.post(post_url, headers=post_headers, json=data)
    print(post_airtable_request.status_code)



Here are the first few lines of the CSV file referenced:



Ross,92,Marketing Manager

Delmor,10,Help Desk Operator

Jazmin,79,Budget/Accounting Analyst III

Moore,50,Senior Developer

Winny,95,Quality Engineer

Goran,34,Environmental Tech

Linnea,94,Professor

Smitty,60,Tax Accountant

...



The end result is the populated table in Airtable.

Enjoy!



JB


I want to thank everyone for the responses and ideas, and I look forward to an expanded API with additional features:



1. bases and tables within bases created programmatically through the API

2. direct insertion of a CSV



I was able to work through all CRUD operations with the Airtable API using Node/axios (against an existing base and table created by clicking and typing).



I still have a few things to work out (like how to insert once a column changes type from text to checkbox); I posted a new question on this, but otherwise I have a good handle on the API at this point.
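For reference, checkbox fields take a JSON boolean over the API, so once a column switches from text to checkbox the payload changes from a string to true/false. A minimal sketch (the base ID, table name, and "Done" field below are hypothetical):

import requests

post_url = 'https://api.airtable.com/v0/YOUR_APP_ID/YOUR_TABLE'  # hypothetical IDs
post_headers = {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
}

# "Done" is a hypothetical checkbox field; it takes a boolean, not the string "true"
data = {'fields': {'Name': 'Example row', 'Done': True}}
resp = requests.post(post_url, headers=post_headers, json=data)
print(resp.status_code)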


@JonathanBowen



Thanks for your suggested solution for this programmatic import of a CSV to Airtable. I used the code below, which reads the CSV fine; however, I am getting a 422 response from Airtable. I have troubleshot to the best of my skills but can’t fix the error. What am I missing?



Please note that the ingredient name is a lookup ID; that’s why it’s in brackets.



import requests
import csv

post_url = 'https://api.airtable.com/v0/appjntXiOLPBmOpKs/WEM%20Data’'
post_headers = {
    'Authorization': 'Bearer YOUR API KEY',
    'Content-Type': 'application/json'
}

f = open('import_data.csv')
csv_f = csv.reader(f)
for row in csv_f:
    inventory = row[0]
    usage = row[1]
    received = row[2]
    report_date = row[3]
    batch_id = row[4]
    lookup_id = row[6]
    name = row[7]

    data = {
        "fields": {
            "Name": name,
            "Ingredient": [lookup_id],  # linked record fields take an array of record IDs
            "Inventory": inventory,
            "Usage": usage,
            "Receipts": received,
            "Report Date": report_date,
            "Batch Number": batch_id
        }
    }

print(post_url)
print(data)

post_airtable_request = requests.post(post_url, headers=post_headers, json=data)
print(post_airtable_request.status_code)



I have been able to validate that a POST request with these credentials works via Postman, but I’m still having issues. Here are the first few lines of the CSV file I used:


949.7318,180.3,0,05/21/20,4421b1fa-9bb6-11ea-8f85-8c85904e2678,Agrado Plus,recg1ZpkYgUMXRgZf,Agrado Plus - 2020-05-21


4408,0,0,05/21/20,4421b1fa-9bb6-11ea-8f85-8c85904e2678,EGold Calcium Salt,recvCnq1AXSX7Y6cq,EGold Calcium Salt - 2020-05-21


14354,5441,0,05/21/20,4421b1fa-9bb6-11ea-8f85-8c85904e2678,Smartamine ML,rec36WGlYOrUdEDIu,Smartamine ML - 2020-05-21



Here’s what my local terminal shows when I run the code with this CSV:








I think a 422 error is an obscure indication that you are trying to write data of one type into a field that cannot handle that type.



I recommend you downsize the import to one or two fields and slowly reintroduce the rest to figure out which one is triggering the errors.
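Airtable also returns a JSON error body with the 422, so printing the response body alongside the status code usually names the offending value or field. A small, hedged addition to the script above:

# Print the error body as well as the status code; Airtable's 422 responses
# usually include a JSON message identifying the problem value or field
post_airtable_request = requests.post(post_url, headers=post_headers, json=data)
print(post_airtable_request.status_code)
print(post_airtable_request.text)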


I’m no API wizard (I haven’t even used it yet), but just looking at the code, the styled apostrophe (’) before the closing quote in your post_url string sticks out to me. Could that be part of the problem?










Good eye! That would cause a 422, I think.






That definitely would be the issue; unfortunately, that was just a typo on my part as I wrote the post 🙂



Will keep troubleshooting, thanks for the responses all!




Hi @Ryan_Wigley - I think the answer lies with the comment made by @Bill.French. The Python csv parser treats everything as a string; hence, you are trying to put this:

'949.7318' into a number field and it won’t go. However, if you do this:

inventory = float(row[0])

and convert it to a float before sending to AT, it should work. You can convert floats and integers - I think the rest will be fine as strings. (I have an integer conversion in my script above.)
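Applied to the loop in your script, the casts would look something like this (treating Inventory, Usage, and Receipts as number fields, which is an assumption based on the sample data):

import csv

f = open('import_data.csv')
csv_f = csv.reader(f)
for row in csv_f:
    inventory = float(row[0])   # a number field rejects the string '949.7318'
    usage = float(row[1])
    received = float(row[2])
    report_date = row[3]        # the remaining columns can stay as strings
    batch_id = row[4]
    lookup_id = row[6]
    name = row[7]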



JB


@Ryan_Wigley - side point, and this might just be the formatting of the post rather than your script, but the last 4 lines of your script are outdented, so they will only run once after the loop has finished; you need to indent them into the loop to run on each iteration. Probably not an issue, as your console is showing multiple requests, but I thought I’d mention it.




@JonathanBowen @Bill.French



Thanks all, that did the trick. It was a matter of casting the corresponding columns as floats/integers. I did not realize I could send a partial POST request (i.e. not include all fields at once), so it was very helpful that you pointed it out, Bill. The code works like a charm now!


I’m getting an error using the script above:


Traceback (most recent call last):
  File "update_airtable.py", line 42, in <module>
    post_airtable_request = requests.post(post_url, headers = post_headers, json = data)
AttributeError: 'str' object has no attribute 'post'



import requests, json
import csv
import os, sys

post_url = 'https://api.airtable.com/v0/xxxxxx/LT'
post_headers = {
    'Authorization': 'Bearer keyxxxxxxxxxxx',
    'Content-Type': 'application/json'
}

branch = sys.argv[1]
filename = branch + '_stats.csv'
f = open(filename)
csv_f = csv.reader(f)
for row in csv_f:
    method = row[0]
    name = row[1]
    requests = row[2]
    failures = row[3]
    median_rt = row[4]
    avg_rt = row[5]
    min_rt = row[6]
    max_rt = row[7]
    avg_content_size = row[8]
    request_ps = row[9]

    data = {
        "fields": {
            "Name": name,
            "Requests": requests,
            "Failures": failures,
            "Median Response Time": median_rt,
            "Average Response Time": avg_rt,
            "Minimum Response Time": min_rt,
            "Maximum Response Time": max_rt,
            "Average Content Size": avg_content_size,
            "Requests Per Second": request_ps
        }
    }
    print(post_url)
    print(data)
    post_airtable_request = requests.post(post_url, headers = post_headers, json = data)
    print(post_airtable_request.fields)
    print(post_airtable_request.status_code)




I’m somewhat fluent in Python, but I suspect the issue is that data needs to be serialized (which I believe is json.dumps()). Since dumps() performs some pretty-printing stuff, I think you also have to change the separators to make it behave more like json.stringify().



Try this…



post_airtable_request = requests.post(post_url, headers = post_headers, json = json.dumps(data, separators=(',', ':')))




I tried the snippets you shared, and this one:

post_airtable_request = requests.post(post_url, headers = post_headers, json = json.stringify(data))

I’m still having the same error.


If you have a URL for a CSV file that you’d like to import into Airtable continuously, we’ve built an extension that handles this.





Here’s the problem. At the top of your code, you import the “requests” module:

import requests, json

Later on, you redefine “requests” in your loop to be one of the extracted items from a row:

requests = row[2]

Now “requests” is a string, and no longer points to the requests module, hence the error when you later try to use “requests” as though it were the module.
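A minimal sketch of the fix, with an illustrative variable name:

import csv
import requests  # keep this name reserved for the HTTP module

with open('example_stats.csv') as f:  # hypothetical file name
    for row in csv.reader(f):
        request_count = row[2]  # renamed from "requests" so the module isn't shadowed
        data = {'fields': {'Requests': request_count}}
        # requests.post(...) still resolves to the module here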






Paul, what is the line requests = row[2] supposed to be doing? It’s retrieving a value from a list, right? What value is it, and why is it named “requests”? This is probably stepping all over the imported requests library (import requests, json), which would eliminate the post method for this import, thus explaining the runtime error message.






That’s a great option, and increasingly we see CSV data hosted for open access. The entire miniExtensions suite is ideal for proving out process models and even sustaining them in production. I particularly like the linked-records-from-form extension.



People tend to build integration processes for many reasons, including needing to own the IP or having other adjacent processes that depend on the integration. Increasingly, IT groups are pressuring “near-code-free” integrations to be fully transparent so they can pass security audits.



When I see people using Python with Airtable’s API, the first thing that comes to mind is a probable tilt toward data science. Projects of this nature tend to represent a deeper transformation process that performs computations in Pandas or NumPy and then pushes the results into Airtable. In this particular example it is very simple: read CSV --> write Airtable. But we cannot rule out the possibility that this is a trial integration to see how well Python plays with Airtable.



This is why I never ask users why they want to do something in Python; they largely have good reasons, and I give them a wide berth because no one would pick Python over JavaScript unless they had good reason. 😉



Lots of reasons to actually write code; the requirements pretty much dictate the most effective approach.


This is really easy to do using the Data Fetcher extension. It’ll also suggest and create new fields in your Airtable base based on the CSV data, e.g. fields containing text will be suggested as single line text fields.



