Re: Connect Airtable to OpenAI's text-davinci-003 with our new extension!

Lom_Labs
7 - App Architect

Want to use ChatGPT-style AI in your Airtable base? Integrate your Airtable data with OpenAI's powerful text-davinci-003 model using our new Airtable extension!

[Animation: GPT Script Extension.gif]

Please note that you need to use your own OpenAI API key.

Our new Airtable extension connects your Airtable data to OpenAI's text-davinci-003, a powerful language model that can generate human-like text, answer questions, and even write code with incredible accuracy and fluency. 

In future versions, we're planning to add even more features, including the ability to choose which OpenAI model to use and the ability to select data from multiple records at once.

Imagine being able to select a column containing multiple submissions of customer feedback and instantly gaining insights and sentiment analysis. Or selecting a view that contains your previous content ideas and asking OpenAI to generate new, relevant ideas for you. Maybe even submitting an image and asking for alt text!
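
To make that concrete, here is a rough, untested sketch of what multi-record processing could look like with the same scripting API. It is not part of the current extension, and the table, view, and field names below are placeholders:

// Hypothetical multi-record sketch - not part of the current extension.
const feedbackTable = base.getTable("Customer Feedback");      // placeholder table name
const unprocessedView = feedbackTable.getView("Unprocessed");  // placeholder view name
const query = await unprocessedView.selectRecordsAsync({ fields: ["Feedback"] });

// Combine every feedback entry into a single prompt for a sentiment/summary pass.
const allFeedback = query.records
    .map(record => record.getCellValueAsString("Feedback"))
    .filter(text => text.length > 0)
    .join("\n- ");

const summaryPrompt = `Summarize the overall sentiment of this customer feedback:\n- ${allFeedback}`;
// summaryPrompt could then be sent through the same getGPTResponse function used in the script below.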

Please also contact us directly with this form if you have any ideas you would like us to build!

Installation instructions:
1. Click "Extension" at the top right
2. Click "Add Extension"
3. Click the "Scripting" extension
4. Click "Add Extension"
5. Paste the code below in!

[Animation: Installation.gif]


const {
    table,
    promptField,
    outputField,
    openaiApiKey,
    maxTokens,
} = input.config({
    title: "Connector to OpenAI API - Using text-davinci-003",
    description: "",
    items: [
        input.config.table("table", {
            label: "Table",
            description: "Where your fields are"
        }),
        input.config.field("promptField", {
            parentTable: "table",
            label: "Prompt Field",
            description: "Text you want a response to"
        }),
        input.config.field("outputField", {
            parentTable: "table",
            label: "Output Field",
            description: "Where the AI response will be written"
        }),
        input.config.text("openaiApiKey", {
            label: "OpenAI API Key",
            description: "Get it from https://platform.openai.com/account/api-keys"
        }),
        input.config.number("maxTokens", {
            label: "Max Tokens",
            description: "Max: 4,097 tokens. A helpful rule of thumb is that one token generally corresponds to ~4 characters of text for common English text. This translates to roughly ¾ of a word (so 100 tokens ~= 75 words)."
        })
    ]
});

// The response is written back as text, so the output field must be a text-type field.
if(outputField.type != "singleLineText" && outputField.type != "multilineText"){
    throw "Output field must be a single line text or long text field"
}

const record = await input.recordAsync("Pick a record", table);

const userInput = record?.getCellValueAsString(promptField);

if (!userInput) {
    throw "Error: Prompt is empty"
}

let response;
try {
    response = await getGPTResponse(userInput);
} catch (error) {
    console.error(error);
    throw "Error: Failed to get GPT response"
}

output.markdown(`Received Prompt: **${userInput}**`);
output.markdown(`Response: **${response}**`);

const updates = [{
    id: record?.id,
    fields: {
        [outputField.name]: response
    }
}];

// updateRecordsAsync accepts at most 50 records per call, so write updates in batches of 50.
while (updates.length > 0) {
    await table.updateRecordsAsync(updates.slice(0, 50));
    updates.splice(0, 50);
}

// Sends the user's prompt to the OpenAI Completions API and returns the generated text.
async function getGPTResponse(userInput) {
  const prompt = `The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nUser: ${userInput}\nAI:`;
  const response = await fetch('https://api.openai.com/v1/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${openaiApiKey}`,
    },
    body: JSON.stringify(
      {
          "model": "text-davinci-003",
          "prompt": prompt,
          "max_tokens": maxTokens,
          "temperature": 0
      }
    ),
  });

  if (!response.ok) {
    console.error(getHTTPStatusHelpText(response.status))
    throw new Error(`HTTP error! status: ${response.status} - ${response.statusText}`);
  }

  const responseData = await response.json();
  return responseData.choices[0].text.trim();
}

// Maps common HTTP status codes to a human-readable explanation for the console.
function getHTTPStatusHelpText(statusCode) {
  switch (statusCode) {
    case 400:
      return "Bad Request: The server cannot understand the request due to invalid syntax.";
    case 401:
      return "Unauthorized: Authentication is required and has failed or has not yet been provided.\n\nPlease double check your API key";
    case 402:
      return "Payment Required: The request cannot be processed until a payment is made.";
    case 403:
      return "Forbidden: The server understood the request, but is refusing to fulfill it.";
    case 404:
      return "Not Found: The requested resource could not be found but may be available in the future.";
    case 429:
      return "Too Many Requests: The user has sent too many requests in a given amount of time.\n\nDo you still have credits in your account?";
    case 500:
      return "Internal Server Error: The server has encountered a situation it doesn't know how to handle.";
    case 502:
      return "Bad Gateway: The server was acting as a gateway or proxy and received an invalid response from the upstream server.";
    case 503:
      return "Service Unavailable: The server is currently unable to handle the request due to a temporary overload or maintenance.";
    default:
      return "Unknown HTTP status code.";
  }
}


11 Replies

Thank you for sharing this script. Integrating AI with systems is becoming very popular.

Are you interested in getting feedback on how this script is written?

Yes, I would love feedback on the script!  Any suggestions would be greatly appreciated, and thank you for taking the time 😊

Here are a few quick observations about your script:

- In your screen capture, it looks like there are red blocks on the right side of the script editor. This means that there are parts of your script that the editor thinks have problems. Sometimes the script editor flags things that are not errors, but you should aim to write your script to eliminate those red flags where possible (there is a small sketch after this list showing one common fix).

- You input the max tokens as a text string and then convert it to a number. You can input the number as a number instead of a text string so that you don't have to do the conversions. You might still want to do some validation to make sure the number is a reasonable value, but you won't have to check if it is a number.

- You require that the prompt field be a text field. This seems overly restrictive to me. What if someone wanted to generate a prompt using a formula field or with a rollup from a different linked record?

- A lot of your code deals with error conditions. In the vast majority of cases, the end user won't know what to do even with the error code messages.

- Although you use some functions, I suggest using a few more functions and having fewer top level statements to make reading the code easier.
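
For example, one likely source of those warnings is that record can be null if the picker is cancelled, so record?.id ends up typed as string | undefined where updateRecordsAsync expects a plain string. A small, optional sketch of one way to tighten that part (throwing Error objects rather than plain strings is also generally easier to debug):

// Guard against a cancelled record picker so the editor knows record is not null.
const record = await input.recordAsync("Pick a record", table);
if (!record) {
    throw new Error("No record was selected");
}

// With the guard in place, record.id and record.getCellValueAsString() can be
// used directly, without the optional chaining that the editor was flagging.
const userInput = record.getCellValueAsString(promptField);
if (!userInput) {
    throw new Error("Prompt is empty");
}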

Thank you for taking the time to review the script and provide feedback!

I have updated the input field to use numbers like you suggested, and I also removed the restriction that the prompt field must be a text field. I had not thought about people wanting to use formula fields or rollups with this, and you are right that it should not be that restrictive.

I have tried to fix some of the red blocks but am unable to fix them all 😅. When I have time, I will keep working on those and on the error handling, so that the messages make it clearer to the user what to do.

May I know if you have any suggestions for replacing the top-level statements with functions? I was thinking I could put the following:

if (!userInput) {
    throw "Error: Prompt is empty"
}

Inside a function called checkUserPrompts?

function checkUserPrompts(userInput){
  if (!userInput) {
    throw "Error: Prompt is empty"
  }
}

But I am worried I might be misunderstanding something.

Would it be possible to add a VIEW restriction? I would like to restrict the available records to a view, so that records which have already been processed (filled in by GPT) are filtered out and do not show up in the record picker. I have no clue about code. Thank you.

ARC_Gonza
5 - Automation Enthusiast

It's great!!

Is it possible to activate it from Interfaces?
Would it be possible to use the script within an automation and activate it when a field changes its value, for example?
Thank you so much.

@itoldusoandso Greetings! This extension is run manually for each individual record you pick, which removes the need for a view restriction.
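
That said, if you did want to limit the record picker to a specific view, a small untested tweak would be to pass a view instead of the whole table to the picker (the view name below is a placeholder):

// Only offer records from a specific view in the record picker.
const unprocessedView = table.getView("Unprocessed");  // replace with your view name
const record = await input.recordAsync("Pick a record", unprocessedView);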

@ARC_Gonza Hello there! Sadly, extensions cannot be activated through Interfaces or automations :(. However, the code can be modified to run inside an automation instead.
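
As a rough, untested sketch of what that could look like: create an automation with a trigger such as "When a record matches conditions", add a "Run a script" action, configure recordId and prompt as input variables in the script editor, and adapt something like the following (the table and field names are placeholders):

// Automation "Run a script" sketch - untested, adjust the names to your base.
const config = input.config();                       // expects input variables: recordId, prompt
const OPENAI_API_KEY = "YOUR_OPENAI_API_KEY";        // use your own key
const targetTable = base.getTable("YourTableName");  // placeholder table name

const apiResponse = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
        model: "text-davinci-003",
        prompt: config.prompt,
        max_tokens: 256,                             // adjust as needed
        temperature: 0,
    }),
});

if (!apiResponse.ok) {
    throw new Error(`HTTP error! status: ${apiResponse.status}`);
}

const data = await apiResponse.json();
await targetTable.updateRecordAsync(config.recordId, {
    "Output": data.choices[0].text.trim(),           // replace "Output" with your output field name
});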

Thank you so much.
I've been trying to adapt the code to work as an automation, but I can't manage it. I only have the help of ChatGPT 😉

I'm sure someone has done it.

Greetings

I have the same question @Lom_Labs