May 30, 2023 07:04 PM
OpenAI's GPT-4 API regularly takes more than 30 seconds to respond to larger prompts.
This makes automation in Airtable rather useless unless you go to the trouble of building cloud functions.
I have multiple working automation steps to categorize data, create short email replies, etc. But if you want to do anything substantial with GPT-4, you're limited by the response time.
Question: how can I get around this? Any thoughts?
May 30, 2023 07:11 PM
You are going to have to use a different service that isn't subject to the 30-second timeout of Airtable scripting automations.
One option is to use Scripting Extension in a “data” view, which is not limited to 30 seconds. However, you cannot access Scripting Extension from an interface. To do that, you would need to use a third party integration service that isn’t subject to the same time limits, such as Make.com.
May 30, 2023 07:30 PM
That isn't a solution; it's a workaround that degrades the usefulness of Airtable in an age when OpenAI and other LLMs are becoming more dominant.
Why would you want your data stored somewhere you cannot run AI queries, enrichment, etc. on it?
Jun 12, 2023 07:36 AM
I hate Khoros, but feel free to explain more about your project here. Happy to try to help.
Oct 02, 2023 07:17 AM - edited Feb 05, 2024 03:01 PM
To make OpenAI API calls, I usually use another service:
n8n ( ℹ️ affiliate link--> https://n8n.io/?ref=yeswelab&utm_source=affiliate), which works very well for this purpose.
In the automation, I add a script block with a fetch request to an n8n webhook that triggers a flow (passing the recordID as a parameter).
Then, in the n8n flow, I retrieve everything I need from Airtable, make the OpenAI calls, and update the results back to Airtable.
I hope it helps.
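The script-block handoff described above could be sketched roughly like this. The webhook URL and parameter name are hypothetical, so substitute your own n8n webhook endpoint:

```javascript
// Sketch of the Airtable automation "Run a script" step.
// WEBHOOK_URL is a hypothetical n8n webhook endpoint -- replace with your own.
const WEBHOOK_URL = 'https://example.app.n8n.cloud/webhook/enrich-record';

// Build the webhook URL carrying the record ID as a query parameter.
function buildWebhookUrl(baseUrl, recordId) {
  const url = new URL(baseUrl);
  url.searchParams.set('recordId', recordId);
  return url.toString();
}

// Fire-and-forget: trigger the n8n flow and return without waiting for
// OpenAI, keeping the script well inside Airtable's 30-second limit.
// n8n does the slow work and writes results back via the Airtable API.
async function triggerEnrichment(recordId) {
  await fetch(buildWebhookUrl(WEBHOOK_URL, recordId), { method: 'POST' });
}

// In a real automation the record ID comes from input.config(), e.g.:
//   const { recordId } = input.config();
//   await triggerEnrichment(recordId);
console.log(buildWebhookUrl(WEBHOOK_URL, 'rec123'));
// -> https://example.app.n8n.cloud/webhook/enrich-record?recordId=rec123
```

The key design point is that the script only triggers the flow and returns; the long-running OpenAI call happens entirely in n8n.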
Feb 04, 2024 03:57 AM
I'm running into this issue, and I believe anyone using Airtable and OpenAI together with a somewhat complex prompt will experience it. Airtable should increase the limit.
Feb 04, 2024 05:43 AM
>>> Airtable should increase the limit.
The worst thing Airtable could do is increase the limit. You should either proxy (buffer) these calls as @Mario_Granero suggests or you should use the streaming response (which may not be possible in Airtable script - I never tested this).
If your use case requires user interactivity, streaming results is preferred so that users get the sense that the system is not hung up or frozen.
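For context, OpenAI's streaming responses (with `stream: true`) arrive as server-sent events, one JSON chunk per `data:` line. A minimal sketch of reassembling the text from those lines, to run in Node.js or a proxy service rather than Airtable script (the sample lines below are illustrative, but the payload shape follows OpenAI's documented streaming format):

```javascript
// Each SSE line looks like: data: {"choices":[{"delta":{"content":"Hi"}}]}
// The stream ends with:     data: [DONE]
function extractDelta(sseLine) {
  if (!sseLine.startsWith('data: ')) return null;
  const payload = sseLine.slice('data: '.length).trim();
  if (payload === '[DONE]') return null;
  const chunk = JSON.parse(payload);
  return chunk.choices?.[0]?.delta?.content ?? null;
}

// Reassemble the completion incrementally, so a UI can render tokens
// as they arrive instead of waiting for the full response.
const lines = [
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  'data: [DONE]',
];
let text = '';
for (const line of lines) {
  const delta = extractDelta(line);
  if (delta !== null) text += delta;
}
console.log(text); // -> Hello
```

In a real client, the lines would come from reading the fetch response body as a stream rather than from a hardcoded array.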
Feb 04, 2024 08:54 AM
This means using another automation tool just to complete an Airtable automation. Using GPT in an automation is likely to become a basic need and should be possible with Airtable alone. For my team, having to integrate another tool for every AI-driven automation adds more weight to moving the product off Airtable completely.
Feb 04, 2024 10:20 AM
... using another automation tool just to complete an Airtable automation
The insanity, right? You realize there was a time when Airtable had no automation and no scripts. 😉
In any case, why aren't you using the integrated AI feature (AI field type)?
Feb 05, 2024 03:06 PM
In my use cases, I always consume the OpenAI API through external automation tools and use the Airtable apps (data, automations, interfaces, ...) as a framework (source of truth, QA, and orchestrator) 🤷♂️
For me it's a perfect tool.