
OpenAI API scripts - Exceeding the max script execution time limit of 30 s

morganngraom1
4 - Data Explorer

OpenAI's API for GPT-4 regularly takes more than 30 seconds to respond to larger-token prompts.

This makes automation in Airtable rather useless unless you go to the trouble of building cloud functions.

I have multiple working automation steps to categorize data, create short email replies, etc. But if you want to do anything substantial with GPT-4, you're limited by the response time.

Question - how can I get around this? Any thoughts? 

12 Replies

You are going to have to use a different service that isn't limited by the 30-second cap on Airtable scripting automations.

One option is to use the Scripting Extension in a "data" view, which is not limited to 30 seconds. However, you cannot access the Scripting Extension from an interface. To do that, you would need to use a third-party integration service that isn't subject to the same time limits, such as Make.com.
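For reference, here is a rough sketch of that Scripting Extension approach. The table name, field names, prompt, and API key are placeholders for illustration, not anything from your base:

```javascript
// Scripting Extension sketch - extension scripts are not subject to the
// 30-second automation limit. Table/field names and the key are placeholders.
const table = base.getTable('Emails');
const record = await input.recordAsync('Pick a record', table);

if (record) {
    const prompt = `Draft a short reply to:\n${record.getCellValueAsString('Body')}`;

    // remoteFetchAsync routes the request through Airtable's proxy,
    // which avoids browser CORS issues inside the extension.
    const response = await remoteFetchAsync('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
            'Authorization': 'Bearer YOUR_OPENAI_API_KEY', // placeholder
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({
            model: 'gpt-4',
            messages: [{role: 'user', content: prompt}],
        }),
    });

    const data = await response.json();
    await table.updateRecordAsync(record.id, {
        'Draft Reply': data.choices[0].message.content, // placeholder field
    });
}
```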

That isn't a solution; it's a workaround that degrades the usefulness of Airtable in an age when OpenAI and other LLMs are becoming more dominant.

Why would you want your data stored somewhere you cannot run AI queries, enrichment, etc. on it?

 

I hate Khoros, but feel free to explain more about your project here. Happy to try to help.

Mario_Granero
6 - Interface Innovator

For making OpenAI API calls, I usually use another service:

n8n (ℹ️ affiliate link → https://n8n.io/?ref=yeswelab&utm_source=affiliate), which works very well for this purpose.

In the automation, I add a script block that makes a fetch request to an n8n webhook, which triggers the flow (passing the recordID as a parameter).
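Roughly, the script block can look like this. The webhook URL is a placeholder, and `recordId` is assumed to be configured as an input variable in the script step's left panel:

```javascript
// Airtable automation "Run a script" step (sketch).
// recordId comes from an input variable configured for this step.
const {recordId} = input.config();

// Hand the slow OpenAI work off to n8n, which has no 30-second limit.
const webhookUrl = 'https://YOUR-N8N-HOST/webhook/openai-enrich'; // placeholder

const response = await fetch(webhookUrl, {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({recordId}),
});

console.log(`Webhook responded with status ${response.status}`);
```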

Then in the n8n flow, I retrieve everything I need from Airtable, make the OpenAI calls, and write the results back to Airtable.

I hope it helps.

Roy_Golan
5 - Automation Enthusiast

I'm running into this issue too, and I believe anyone using Airtable and OpenAI together with a somewhat complex prompt will experience it. Airtable should increase the limit.

>>> Airtable should increase the limit.

The worst thing Airtable could do is increase the limit. You should either proxy (buffer) these calls as @Mario_Granero suggests, or use a streaming response (which may not be possible in an Airtable script; I've never tested this).

If your use case requires user interactivity, streaming results is preferred so that users get the sense that the system is not hung up or frozen.
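For context, this is roughly what a streamed completion looks like outside Airtable (e.g. Node 18+); I have not verified whether Airtable's automation fetch exposes the response body for streaming. The API key is a placeholder:

```javascript
// Streaming sketch for Node 18+ (not Airtable-specific).
const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
        'Authorization': 'Bearer YOUR_OPENAI_API_KEY', // placeholder
        'Content-Type': 'application/json',
    },
    body: JSON.stringify({
        model: 'gpt-4',
        stream: true, // tokens arrive incrementally as server-sent events
        messages: [{role: 'user', content: 'Summarize this thread.'}],
    }),
});

// Print raw SSE chunks as they arrive; each contains "data: {...}" lines
// with token deltas. Parsing them out is omitted here for brevity.
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
    const {done, value} = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value));
}
```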

Roy_Golan
5 - Automation Enthusiast

This means using another automation tool just to complete an Airtable automation. Using GPT in an automation is likely to become a basic need and should be possible within Airtable on its own. For my team, having to integrate another tool for each AI-driven automation adds more weight toward moving the product out of Airtable completely.

... using another automation tool just to complete an Airtable automation

The insanity, right? You realize there was a time when Airtable had no automation and no scripts. 😉

In any case, why aren't you using the integrated AI feature (AI field type)?

Mario_Granero
6 - Interface Innovator

In my use cases, I always consume the OpenAI API with external automation tools and use the Airtable apps (data, automations, interfaces, ...) as a framework (source of truth, QA, and orchestrator) 🤷‍♂️

For me, it's a perfect tool.