PAT versus API Token Authentication

5 - Automation Enthusiast

Hello. I have been using an API token (the original method) for authentication to get records from a particular table, via the 'airtable' Python library. We are now switching to personal access tokens (PATs) because the API token authentication method is going away. Since the 'airtable' library doesn't seem to support PATs, I am simply using the 'requests' Python library this time around. For whatever reason, a handful of columns will not download when using 'requests' with my PAT, even though they download fine with the 'airtable' library and my API token. Below is my code. In the field list, I added comments marking the fields that won't download when I run it; those columns just don't appear in my final downloaded file. Is there something I'm missing? Or is there a better library that I should be using? Thanks!

import os
import json
import requests
import pandas as pd

base_id = 'appdoyHZ7qdkwyaka'
table_id = 'tblgHQ36AT2JqRItF'
personal_access_token = 'patxxxxx.xxxxx'

airtable_url = f"https://api.airtable.com/v0/{base_id}/{table_id}"

headers = {
    'Authorization': 'Bearer ' + str(personal_access_token),
    'Content-Type': 'application/json',
}
# Create params
field_list=["YouTube Title" ,
              "Series" ,
              "Show Code" ,
              "YouTube Channel" ,
              "YouTube Video ID" ,
              "YouTube Publish Date" ,
              "Branded" ,
              "Campaign" ,
              "Content Originator",
              "🔒 YouTube Video URL" ,
              "🔒Genre" ,
              "🔒Sub-Genre" ,
              "🔒Language" ,
              "🔒 YouTube Analytics URL" ,
              "Cross-Platform Runtime (Timecode H:MM:SS)",
              "🔒Cross-Platform Runtime (Seconds)" ,
              "Grid Only: Genre" ,
              "🔒Lead Contact" ,
              "Addt'l Lead Contact" , ### WILL NOT DOWNLOAD
              "Campaign Roll-Up (Brand-Wide)" ,
              "In-Series Categorization",
              "Translated Title (English)",
              "Campaign Roll-Up",
              "YouTube Edit Page URL",
              "Paid: Category",
              "Paid: Target Views",
              "Paid: Territory",
              "Paid: Flight Length (# of Days)",
              "Paid: Notes",
              "Paid: Flight End Date",
              "YouTube Migration: Date Migrated",
              "Brand Roll-Up",
              "Series Roll-Up",
              "Content Type",
              "Angie Monetization Status",
              "YouTube Migration: Original Channel",
              "In-Series Categorization #2",  ### WILL NOT DOWNLOAD
              "In-Series Categorization #3",
              "Airtable: Record ID",
              "YouTube Asset ID",
              "YouTube Kids App Status",
              "Comp - Marked for Autoplay?",
              "Kids Demo",
              "Kids Gender Appeal",
              "Branded Disclosure Required?",  ### WILL NOT DOWNLOAD
              "In-Series Categorization #4",  ### WILL NOT DOWNLOAD
              "In-Series Categorization #5",  ### WILL NOT DOWNLOAD
              "In-Series Categorization #6",
              "In-Series Categorization #7",
              "YouTube Migration: Addt'l Notes",  ### WILL NOT DOWNLOAD
              "Original Video Show Code (For Localizations)",
              "Season Grouping",
              "YouTube Kids App Change Notes",
              "YouTube Kids App Previous Status",
              "YouTube Channel ID"]
params = {
    "filterByFormula": "NOT({YouTube Video ID} = '')",
    "pageSize": 100,
    # Airtable expects list parameters under "fields[]"; requests encodes a
    # list value as repeated fields[] entries automatically.
    "fields[]": field_list,
    # Note: no "offset" on the first request. Airtable's offset is an opaque
    # string token returned with each page, not a numeric index.
}
# Get initial response
response = requests.get(airtable_url, headers=headers, params=params)

# Process records in initial response
all_records = []
for x in response.json().get('records', []):
    all_records.append(x.get('fields', {}))
final_df = pd.DataFrame(all_records)

# Get offset from initial response
offset = response.json().get('offset')

# While offset exists
while offset:
    # Include offset value in the params during next api call
    params['offset'] = offset
    response = requests.get(airtable_url, headers=headers, params=params)

    # Store new records from this page
    records = []
    for x in response.json().get('records', []):
        records.append(x.get('fields', {}))
    df = pd.DataFrame(records)

    final_df = pd.concat([final_df, df], axis=0, ignore_index=True)

    # Get next offset value
    offset = response.json().get('offset')


final_df.to_csv('airtable_peacock_kids.csv', header=True, index=False, sep=',', encoding='utf-8', quotechar='"', lineterminator='\n', escapechar='\\')
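One hedged guess about the vanishing columns: Airtable's REST API omits "empty" fields (e.g. `''`, `[]`, or `false`) from each record's `fields` object, so a column that happens to be blank in every fetched record never shows up in a DataFrame built from those dicts. A minimal sketch of a guard against that, using toy records and a few field names from the list above:

```python
import pandas as pd

# Toy records shaped like Airtable's response; "Series" is omitted from the
# second record, as Airtable does for empty values.
records = [
    {"id": "rec1", "fields": {"YouTube Title": "A", "Series": "S1"}},
    {"id": "rec2", "fields": {"YouTube Title": "B"}},
]
field_list = ["YouTube Title", "Series", "Branded Disclosure Required?"]

df = pd.DataFrame([r["fields"] for r in records])
# Reindex against the requested field list so every column exists,
# filling columns that were empty in every record with NA.
df = df.reindex(columns=field_list)
```

This guarantees the CSV always has the full set of columns, even when a field is empty across the whole result set.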
4 Replies
6 - Interface Innovator

Are you sure you can't use the PAT? In all of my integrations, I simply swapped the API key for the token.
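To illustrate why the swap works (a sketch, with placeholder token strings): both credential types travel in the same `Authorization: Bearer` header, so any client that lets you supply the secret string, including the old library's `api_key` parameter, works unchanged with a PAT.

```python
def auth_header(secret):
    # Identical header shape for an old API key ("key...") and a PAT ("pat...");
    # only the secret itself changes.
    return {"Authorization": f"Bearer {secret}"}

old_style = auth_header("keyXXXXXXXXXXXXXX")   # placeholder API key
new_style = auth_header("patXXXXX.XXXXX")      # placeholder PAT
```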

5 - Automation Enthusiast

So I messed up! You are correct, I can use the PAT with the Airtable Python library. I wish I had figured that out sooner! Haha!! Thanks for responding though, I appreciate it!

However, I do find it strange that, using the requests library, I was unable to get the same results as with the Airtable library. The requests results had slightly more records (around a hundred or so), and a couple of columns would not come through. ??

Yes, that is strange, although it's hard for me to offer an explanation without working directly with your code.