
Hi everyone, Stephen here from the Airtable Product team.

Today we’re rolling out AI-Generated Interface Elements to AI Labs. This new capability lets you build bespoke interfaces to visualize and act on your data in completely new ways. Just describe what you want, and Omni creates an interface within minutes.

 

Why we built it

Our community of builders is constantly inspiring us with how they push the boundaries of Airtable. Although there are so many possibilities with Airtable’s out-of-the-box components, we’ll never be able to build every possible way to interface with your data as a first-party feature. Now Omni can spin up completely unique elements in minutes from a single prompt.

 

What you can create

Here are just a few examples of the types of interfaces that builders have already created: 

  • A 3D viewer to explore product models and cost breakdowns
  • A heatmap of revenue by region and segment
  • A network diagram showing relationships between members of a community
  • A map of potential store openings
  • An infinite canvas for visualizing a workflow

 

Tips for best results

  • Be descriptive in your prompts: spell out exactly what the interface should do
  • Iterate in small steps: adjust one element at a time
  • Copy errors back into your prompt: tell Omni what it needs to fix if something doesn’t work as expected
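
For example, a hypothetical prompt with that level of detail might read: “Build a heatmap of monthly revenue by region using the Region, Month, and Revenue fields from this table; shade higher values darker, and let me click a cell to see the underlying records.” If the element then throws an error, paste the error message verbatim into your next prompt along with a description of what you expected to happen.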

 

How to try it today

AI-Generated Interface Elements are now live in AI Labs.

To enable on Business/Enterprise Scale plans:

  1. Enterprise admins go to Admin Panel > Settings > AI settings
  2. Toggle on AI Labs
  3. Start building by selecting the tool in Omni and describing the interface you want to create

To enable on Team/Free plans:

  1. Workspace admins go to Account > Workspace settings
  2. Toggle on AI Labs
  3. Start building by selecting the tool in Omni and describing the interface you want to create

 

Learn more

Read our FAQ documentation to get up to speed on how to create AI-generated elements.

This documentation covers: 

  • Different types of AI-generated elements
  • Prompt writing tips
  • How to iterate on your elements
  • Permissions
  • Pricing
  • Known gaps & limitations
  • And more

 

What’s next

We’d love to see what you create. Share screenshots, demos, and stories of what you’re building here in the thread and on social.

We can’t wait to see how you use AI-Generated Interface Elements to bring your ideas to life.

— Stephen

Interesting, and I know everyone has been dying for this as it’s been teased for the past few months…

That said, I wish this wasn’t just a black box that relies on an LLM to generate code tweaks. I understand having the LLM as a primary UX component, but the option should exist to examine and edit the generated code.

Also, I just refuse to use AI and have it disabled across the board. If you could use custom code, these interface components could be shared and reused beyond one-off builds. One very useful function of an active community like this one is sharing things like formulas and scripts that other users can repurpose. The closest you could get here is sharing an exact prompt and hoping the LLM interprets it the same way?


I wish this wasn’t just a black box that relies on an LLM to generate code tweaks. I understand having the LLM as a primary UX component, but the option should exist to examine and edit the generated code.

Totally fair perspective. For those of you who are looking to have finer-grained control by writing your own code, stay tuned — we’ll have an exciting announcement for you in the coming weeks.


I’ve been waiting to try this out! Essentially, I’m trying to build an interface similar to the 3D annotation tool, but for video. However, it seems there are issues linking two tables together.

For example, I have a “videos” table containing public video URLs and a “comments” table, which are connected via linked records. When I try to use the AI tool to create a comment on a video, it isn’t able to create the comment and link it to the video.

Any insight into how you built the 3D model viewer, so I can learn best practices for structuring my data to fit my use case?


I’ve been waiting to try this out! Essentially, I’m trying to build an interface similar to the 3D annotation tool, but for video. However, it seems there are issues linking two tables together.

Currently, AI-generated interface elements don’t work with multiple tables; you can only interact with data from a single source table.

It is possible to read data from a linked table via linked records/lookups/rollups, but in your case, users wouldn't be able to create records in the comments table. An alternate approach you could explore is letting users open a record detail page for the video from your interface element and leave comments there.
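
If you're comfortable working outside the generated element in the meantime, here is a minimal sketch of creating a comment record linked to a video via the Airtable Web API. The Comments table name, the Text and Video field names, and the base ID/token values are placeholder assumptions, not anything confirmed in this thread:

```ts
// Sketch only: create a record in a "Comments" table linked to an existing
// video record via the Airtable Web API. Names below are assumptions.
const BASE_ID = "appXXXXXXXXXXXXXX"; // placeholder base ID
const TOKEN = "patXXXXXXXXXXXXXX"; // placeholder personal access token

async function createLinkedComment(videoRecordId: string, text: string) {
  const res = await fetch(`https://api.airtable.com/v0/${BASE_ID}/Comments`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      fields: {
        Text: text,
        // Linked record fields take an array of record IDs.
        Video: [videoRecordId],
      },
    }),
  });
  if (!res.ok) throw new Error(`Airtable API error: ${res.status}`);
  return res.json();
}
```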

With that said, first-class support for multiple tables is on our roadmap and we’re hoping to enable more complex use cases soon.


Yuck, currently unworkable for an organisation running its day-to-day on Airtable, unfortunately.

We've tried the functionality today, but while it was generating the element (it took approx. 5 minutes and then ended with ‘I hit an unexpected error. Please try again in a bit.’), the rest of Airtable was completely inaccessible.

The entire team couldn't load any interface pages, and all Make scenarios interacting with Airtable crashed with timeout errors.


Hey @Stephen_Suen,

Congrats!!! Super excited about it.

My first attempt was to build a form that could create records in multiple tables from a single submission/page (e.g., create Contact AND Company records). This obviously did not work given the one-table limit (I was aware of the limit, but hey… a man can still dream, right? lol). I’ve been pushing for this form feature for a long time!

I do believe that as AI gets better and new features get enabled (especially multiple tables), this will become really valuable. Thanks for sharing!

Mike, Consultant @ Automatic Nation 
YouTube Channel


Yuck, currently unworkable for an organisation running its day-to-day on Airtable, unfortunately.

We've tried the functionality today, but while it was generating the element (it took approx. 5 minutes and then ended with ‘I hit an unexpected error. Please try again in a bit.’), the rest of Airtable was completely inaccessible.

The entire team couldn't load any interface pages, and all Make scenarios interacting with Airtable crashed with timeout errors.

Hey @Jeroen_Sarink, that doesn’t sound right. Do you mind filing a support ticket so our engineering team can look into this issue more closely?


I’ve been waiting to try this out! Essentially, I’m trying to build an interface similar to the 3D annotation tool, but for video. However, it seems there are issues linking two tables together.

Currently, AI-generated interface elements don’t work with multiple tables; you can only interact with data from a single source table.

I’ve been putting the new AI interface element builder through its paces, and I’ve found a way to interact with multiple tables by having the element make GET and POST web requests from the interface.

I had the AI adjust my interface so that whenever it’s about to load my video (hosted via Mux), it sends a GET request to an n8n webhook to load existing comments based on the video’s record ID. Then, once it receives the comments, it displays them. To add new comments, it sends POST requests that write the comment to my comments table with the associated video record ID. n8n handles all of the backend web requests and sends webhook responses when necessary.
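
Roughly, the pattern looks like this; a sketch with a hypothetical n8n webhook URL and payload shape, not the actual code Omni generated:

```ts
// Sketch of the GET/POST webhook pattern described above. The URL, endpoints,
// and field names are hypothetical; n8n does the Airtable reads/writes.
const N8N_WEBHOOK = "https://your-n8n-instance.example.com/webhook/comments";

interface VideoComment {
  id: string;
  text: string;
  timestamp: number; // seconds into the video
}

// Load existing comments for a video record (a GET webhook in n8n).
async function loadComments(videoRecordId: string): Promise<VideoComment[]> {
  const res = await fetch(
    `${N8N_WEBHOOK}?videoRecordId=${encodeURIComponent(videoRecordId)}`
  );
  if (!res.ok) throw new Error(`Failed to load comments: ${res.status}`);
  return res.json();
}

// Add a comment (a POST webhook in n8n); the n8n workflow writes it to the
// comments table with the associated video record ID.
async function addComment(videoRecordId: string, text: string, timestamp: number) {
  const res = await fetch(N8N_WEBHOOK, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ videoRecordId, text, timestamp }),
  });
  if (!res.ok) throw new Error(`Failed to add comment: ${res.status}`);
}
```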

I can definitely see ways for the AI builder to continually improve, but so far, I’m completely blown away. I love this feature, and it opens up so many possibilities! Let me know if you need any product feedback or input from a user.

Can’t wait to keep building!


@kyle.chefnick So glad to hear you’re having some fun testing it out! Keep the feedback coming — more features are rolling out soon 👀


I wish this wasn’t just a black box that relies on an LLM to generate code tweaks. I understand having the LLM as a primary UX component, but the option should exist to examine and edit the generated code.

Totally fair perspective. For those of you who are looking to have finer-grained control by writing your own code, stay tuned — we’ll have an exciting announcement for you in the coming weeks.

Given that I have received an error with every iterative update to the advanced pivot table interface I’ve been having it design for me, I will likely stop using the feature until this is available. I’m unable to present the error back to Omni and have it perform any AI-based error handling and correction; once that error is present, it feels like my custom interface is DOA and no further iteration will resolve it.

I like how the source code is available; however, I’d like to see more robust interaction with Omni around that source code. It would also be nice to have versioning: is there anything in place that would allow me to revert an iterative change?

Ultimately, I feel like the Omni interactive option will be great for basic use cases, but I’ll be holding off until I can go further with more granular manipulation via my own code. Based on my experience with Omni so far, I’d feel much better leveraging Cursor for AI-based iterative coding.
 


I wish this wasn’t just a black box that relies on an LLM to generate code tweaks. I understand having the LLM as a primary UX component, but the option should exist to examine and edit the generated code.

Totally fair perspective. For those of you who are looking to have finer-grained control by writing your own code, stay tuned — we’ll have an exciting announcement for you in the coming weeks.

Given that I have received an error with every iterative update to the advanced pivot table interface I’ve been having it design for me, I will likely stop using the feature until this is available. I’m unable to present the error back to Omni and have it perform any AI-based error handling and correction; once that error is present, it feels like my custom interface is DOA and no further iteration will resolve it.

I like how the source code is available; however, I’d like to see more robust interaction with Omni around that source code. It would also be nice to have versioning: is there anything in place that would allow me to revert an iterative change?

Ultimately, I feel like the Omni interactive option will be great for basic use cases, but I’ll be holding off until I can go further with more granular manipulation via my own code. Based on my experience with Omni so far, I’d feel much better leveraging Cursor for AI-based iterative coding.
 

I want to piggyback off my previous comment: I do know there is an undo function, however it only covers the most recent change. If you receive an error, it’s simple enough to just say undo. But if you get into an iteration loop with the AI and want to undo multiple revisions, or say you try to interpret the error, explain it, and ask the AI to handle it, and it doesn’t, you no longer have a way to revert the change that caused the error, because now the only ‘undo’ option covers whatever it did while trying to handle that error.


I wish this wasn’t just a black box that relies on an LLM to generate code tweaks. I understand having the LLM as a primary UX component, but the option should exist to examine and edit the generated code.

Totally fair perspective. For those of you who are looking to have finer-grained control by writing your own code, stay tuned — we’ll have an exciting announcement for you in the coming weeks.

Could you offer some more concrete info on this, please? I have a number of custom extensions that are preventing me from moving all users to interface-only. It sounds like you’re hinting at custom extensions in interfaces, but it’s hard to plan my work based on hints. Even just a “no, that is not what we’re planning” would be really helpful, thanks.


Love this direction: AI-generated elements can finally cover the long tail of bespoke views. Two asks: versioned rollback/sandboxing for prompts, and clear permission inheritance/audit trails, especially when an element embeds a website or other external source. Also curious how teams “promote” a scrappy prototype into a blessed, reusable template across workspaces.

