Hello. I have been using an API key (the original method) to authenticate and pull records from a particular table, using the 'airtable' Python library. We are now switching to personal access tokens (PATs) because the API key authentication method is going away, and the 'airtable' library doesn't support PATs, so this time around I am simply calling the REST API with the 'requests' library.

For whatever reason, a handful of columns will not download when I use 'requests' with my PAT, even though I can download those same columns with the 'airtable' library and my API key. They just don't appear in my final downloaded file. Below is my code; in the field list, I added a comment to mark each field that won't download. Is there something I'm missing? Or is there a better library that I should be using? Thanks!
import requests
import pandas as pd
base_id = 'appdoyHZ7qdkwyaka'
table_id = 'tblgHQ36AT2JqRItF'
personal_access_token = 'patxxxxx.xxxxx'
# Standard Airtable REST endpoint for listing records in a table
airtable_url = f'https://api.airtable.com/v0/{base_id}/{table_id}'
headers = {
    'Authorization': f'Bearer {personal_access_token}',
    'Content-Type': 'application/json',
}
# Create params
field_list=["YouTube Title" ,
"Series" ,
"Show Code" ,
"YouTube Channel" ,
"YouTube Video ID" ,
"YouTube Publish Date" ,
"Branded" ,
"Campaign" ,
"Content Originator",
"🔒 YouTube Video URL" ,
"🔒Genre" ,
"🔒Sub-Genre" ,
"🔒Language" ,
"🔒 YouTube Analytics URL" ,
"Cross-Platform Runtime (Timecode H:MM:SS)",
"🔒Cross-Platform Runtime (Seconds)" ,
"Grid Only: Genre" ,
"🔒Lead Contact" ,
"Addt'l Lead Contact" , ### WILL NOT DOWNLOAD
"Campaign Roll-Up (Brand-Wide)" ,
"In-Series Categorization",
"Translated Title (English)",
"Campaign Roll-Up",
"YouTube Edit Page URL",
"Paid: Category",
"Paid: Target Views",
"Paid: Territory",
"Paid: Flight Length (# of Days)",
"Paid: Notes",
"Paid: Flight End Date",
"YouTube Migration: Date Migrated",
"Brand Roll-Up",
"Series Roll-Up",
"Content Type",
"Angie Monetization Status",
"YouTube Migration: Original Channel",
"In-Series Categorization #2", ### WILL NOT DOWNLOAD
"In-Series Categorization #3",
"Airtable: Record ID",
"YouTube Asset ID",
"YouTube Kids App Status",
"Comp - Marked for Autoplay?",
"Kids Demo",
"Kids Gender Appeal",
"Branded Disclosure Required?", ### WILL NOT DOWNLOAD
"In-Series Categorization #4", ### WILL NOT DOWNLOAD
"In-Series Categorization #5", ### WILL NOT DOWNLOAD
"In-Series Categorization #6",
"In-Series Categorization #7",
"YouTube Migration: Addt'l Notes", ### WILL NOT DOWNLOAD
"Original Video Show Code (For Localizations)",
"Season Grouping", #
"YouTube Kids App Change Notes",
"YouTube Kids App Previous Status",
"YouTube Channel ID"]
params = {
    "filterByFormula": "NOT({YouTube Video ID} = '')",
    "pageSize": 100,
    # 'offset' is only added on follow-up requests (see the pagination loop below)
    "fields": field_list,
}
# Get initial response
response = requests.get(airtable_url, headers=headers, params=params)
# Process records in initial response
all_records = []
# Flatten each record by merging the nested 'fields' dict into the top level
for x in response.json().get('records'):
    x.update(x.get('fields'))
    x.pop('fields')
    all_records.append(x)
final_df = pd.DataFrame(all_records)
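# Quick check: which requested fields are missing from every record on this
# first page? (Airtable omits empty cells from each record's JSON, so a field
# can be absent from individual records, but the marked fields above never
# appear at all.)
missing = [f for f in field_list if not any(f in rec for rec in all_records)]
print('Never returned on first page:', missing)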
# Get offset from initial response
offset = response.json().get('offset')
# While offset exists
while offset:
    # Pass the offset from the previous response to fetch the next page
    params['offset'] = offset
    response = requests.get(airtable_url, headers=headers, params=params)
    # Flatten and store this page's records
    records = []
    for x in response.json().get('records'):
        x.update(x.get('fields'))
        x.pop('fields')
        records.append(x)
    df = pd.DataFrame(records)
    final_df = pd.concat([final_df, df], axis=0, ignore_index=True)
    # Airtable omits 'offset' on the last page, which ends the loop
    offset = response.json().get('offset')
print(final_df)
final_df.to_csv('airtable_peacock_kids.csv', header=True, index=False, sep=',', encoding='utf-8', quotechar='"', line_terminator='\n', escapechar='\\')
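For comparison, the old version that does return all of these columns is essentially the following (paraphrased from my old script using the 'airtable' library, i.e. airtable-python-wrapper, so the exact call details may be slightly off):

from airtable import Airtable

old_table = Airtable(base_id, table_id, api_key='keyxxxxx')
# get_all() handles pagination itself and passes the formula/fields
# through to the same List Records endpoint
records = old_table.get_all(
    formula="NOT({YouTube Video ID} = '')",
    fields=field_list,  # the same field list as above
)

One other data point: Airtable omits empty cells from each record's JSON, but these columns clearly contain values, since the old library returns them.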