“Can’t even wrap my head around it” is exactly where I am. I’m in shock they would do this.
Hi @ScottWorld, the issue with that option is there are image limits.
I am trying to use Google Drive to attach one image to a record, but the image limit is low.
I have about 1,000 images, and this news from Airtable will have a far-reaching effect on attaching images, which is common functionality in Airtable.
Especially for me!
Airtable has image limits, but Google Drive has no real practical limit; its capacity is bounded only by the storage space you pay for.
This change, regardless of why AT is doing it, can probably be addressed by automations, Zapier or Make, and a cloud storage provider like Google Drive, Dropbox, or even AWS S3.
- AT: User uploads file to [attachment field].
- AT: Automation triggers on [attachment field] modification, sets [image changed field]=TRUE, putting the record in a [needs image uploaded view].
- Zap: On record entering [needs image uploaded view], take the newly uploaded image, save it to Dropbox, return the image URL, save it to a [dropbox attachment URL field] in the original AT record, and unset the [image changed field].
Yes, there are more complicated versions of this – multiple attachments, creating folders for records, etc. And I will admit that maybe I’m missing something fundamental. But the core problem seems solvable with no-code, off-the-shelf tools.
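The pipeline above can be sketched as a small script to make the moving parts concrete. Everything here is hypothetical – the field names, the Dropbox path scheme, and the injected `download`/`upload` functions stand in for real Airtable and Dropbox API calls:

```python
def build_dropbox_path(record_id, filename):
    """One folder per record keeps re-uploads from colliding."""
    return f"/airtable-mirror/{record_id}/{filename}"

def build_airtable_patch(stable_url):
    """PATCH payload that writes the stable URL back and clears the dirty flag.
    Field names are hypothetical placeholders."""
    return {
        "fields": {
            "dropbox attachment URL": stable_url,
            "image changed": False,
        }
    }

def mirror_attachment(record, download, upload):
    """Orchestrate one record: fetch the bytes from the (temporary) Airtable
    URL, re-host them, and return the patch for the original record.
    `download` and `upload` are injected so the flow can be tested offline."""
    att = record["fields"]["attachment"][0]  # hypothetical attachment field
    data = download(att["url"])
    path = build_dropbox_path(record["id"], att["filename"])
    stable_url = upload(path, data)
    return build_airtable_patch(stable_url)
```

A real Zap or Make scenario does the same three steps with HTTP modules; the point is that the durable record never stores the expiring Airtable URL, only the URL of storage you control.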
Indeed, but ya should have known. I predicted this change in 2019 and even encouraged Airtable to weigh the risks of (a) treating image URLs as immutable and sustained, which makes no sense, and (b) assuming that Airtable (a database for small systems) would also provide you with a globally sustained CDN for free.
If you’re a total geek and recognize the importance of data architectures that include binary artifacts by reference, not by value - you might enjoy this thread from about three years ago where I predicted Airtable would eventually realize their shortfalls in the attachment design.
Evan Hahn (Airtable Engineer with Deep Insight)
… can’t guarantee fully static URLs
Bill French (Mr Nobody)
Nor should you. Related to this topic are the attachment URLs themselves (which are publicly accessible). I (and many of my clients) have trepidation about this, and it is a factor that often rules out Airtable as a choice. Unbeknownst to most users, all attached documents in a base are openly exposed in a CDN-like environment (i.e., dl.airtable.com). I get it - the hash keys for any given document are unpredictable, and this is the basis for claiming they are secure. “Security by obscurity” is often the last phrase a CEO remembers before seeing the “On-Air” light flash from a chair at CNBC as they cue up Kate Fazzini to drill them about a security breach. I have to believe you and the team are pondering how and when this design must change. Have you considered signed URLs and a new API method that would give us the ability to create signed URLs for attachment documents?
The party ended in 2019; we just didn’t know it.
Airtable’s support pages and Airtable’s REST API documentation have been warning about URLs not being permanent for at least a year, and quite possibly several years before that.
In fact, the REST API still has the warning that it has always had:
Note: These URLs do not currently expire, but this will change in the future. If you want to persist the attachments, we recommend downloading them instead of saving the URL. Before this change is rolled out, we will post a more detailed deprecation timeline.
Yep - this is a safe assumption, and Airtable could have managed this better. But there have been warning signals here in the community going back as far as 2018. Inexperienced database makers don’t typically think through the architectural consequences of turning a feature designed for internal use into an external dependency.
Airtable promised that you could upload attachments as “copies” of documents and images, and they have upheld that promise. They never promised immutable, sustained addressability to those attachments outside of the Airtable UI. This is the gotcha moment everyone is surprised by, and some of us saw it and warned of it for many years.
I’m surprised you’re all surprised.
I did a quick search, and I alone have warned of this delicate and likely-to-vanish pattern 27 times in this community. Most of my threads relate to API uploads that are unreliable. And in those specific threads, I advised clients and all users to consider that in mission-critical applications with a distinct dependency on binary artefacts, you’d best address them by REFERENCE, not by VALUE.
The key takeaway is that it was never a good idea to lazily make copies of artefacts. Imagine a 10k image, a 100k image, and a 100GB video. At what point do you realize that making copies is impractical? Sadly, we use size as a justification for convenience and tolerance of the laziness Airtable has afforded us historically. It’s just 100k - no big deal, right?
As it turns out - it’s a big deal irrespective of size.
Binary assets are best integrated into data systems by reference, not by value. The requirement to manage digital assets separate and apart from the data model always existed; we just chose to sidestep those requirements for simplicity, faster time-to-production, and for our own profit of course.
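A tiny illustration of “by reference, not by value”: the record owns a stable key into storage you control, and any vendor URL is derived only at render time. The key scheme and resolver here are hypothetical:

```python
# By reference: the durable record stores only a stable key it owns.
record = {"name": "Product photo", "image_key": "products/apron-001.png"}

def render_url(record, resolve):
    """Derive a fresh URL at display time; nothing transient is persisted.
    `resolve` maps a stable key to whatever URL the current host offers."""
    return resolve(record["image_key"])

# By value (the fragile pattern): freezing a copied vendor URL in the record,
# e.g. "https://dl.airtable.com/abc123", which the vendor may expire anytime.
```

If the host, CDN, or signing scheme changes, only the resolver changes; the data model is untouched.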
The party’s over.
Bill, with all due respect, your reply is nonsense.
The overwhelming majority of people who depend on Airtable do not understand or care about “data architectures that include binary artifacts by reference, not by value.”
Airtable is a no-code tool that works magically in its current state for us plebes who care more about the sausage than how the sausage is made.
If you’re concerned about the recent change in Attachment URLs that Airtable just announced, we want to hear from you.
At On2Air, we’re exploring ways to overcome this. (We build apps for Airtable - on2air.com)
If you’re interested, let us know your use case by filling out this form.
If you decide to use Airtable’s internal links in ways that its no-code product did not intend and openly advised against, you have a responsibility to understand and care deeply about data architectures that include binary artefacts by reference, not by value.
Indeed, it works as expected with attachments. It never failed before this change and it will not fail after this change. Anyone affected by this change chose to design critical systems with an unorthodox approach. Leaning on CDN URLs as if they were part of the “product” is a mistake not just with Airtable - with every vendor except, of course, vendors who provide CDN services such as Cloudinary (for example).
With all due respect to your respect, if you believe this is nonsense, read the ToS for any number of no-code platforms, many of which do not even risk exposing attachment URLs, for exactly this reason.
No debate - Airtable has handled this exposure poorly, but there’s plenty of evidence to suggest your contract with Airtable covers all things inside the Airtable app. They will capture your images and host them for the purpose of recalling and displaying them in the UI. However, they will not host them externally at whatever scale you see fit to subject them to. This doesn’t seem an irrational position. Would you accept the responsibility of serving an image requested 300 million times a month for $24?
If my reply is nonsense, your assertion is irrational.
I feel that Airtable did not adequately advise against this usage. There is a post in this community forum that advises against this. Finding this post involves knowing to look for it. There are the legal terms and conditions that can be interpreted to mean this, but do not explicitly state this.
On the other hand, there has been a proliferation of third party products and services that take advantage of using Airtable as a CDN without any apparent backlash.
Indeed, they mismanaged the apparent ability to use the product in unexpected ways. They should have anticipated this and counselled “developers” to be more aware of the risks. But who are the “developers” who typically extend the use of CDN URLs beyond the scope of Airtable? I think it’s those who actively use the API or the various SDKs because these attachment URLs are not easily discoverable unless you are writing code, right?
Well, perhaps. We don’t know how calm or chaotic it is over at Stacker concerning this change. My hunch is that third-party developers were, for the most part, aware there could be risks and took adequate steps to insulate their services from changes concerning attachments. If they were not aware and blindly led their users into this abyss, they are now in a position to lead them out.
These attachment URLs are very easily discoverable without code.
- If you do a CSV export of a view, attachments are exported as their URLs.
- If you create a formula field that includes an attachment, the result of the formula is the URL.
- If you copy/paste an attachment to somewhere other than another attachment field, the result includes the URL.
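The CSV case in the list above takes only a few lines to exploit. Airtable commonly exports attachment cells in the form `name.png (https://dl.airtable.com/...)`; note that this cell format is observed behavior, not a documented contract, so treat it as an assumption:

```python
import csv
import io
import re

# Matches a URL wrapped in parentheses, as seen in exported attachment cells.
URL_RE = re.compile(r"\((https?://[^)\s]+)\)")

def attachment_urls(csv_text, column):
    """Collect every parenthesized URL found in the given column."""
    urls = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        urls.extend(URL_RE.findall(row.get(column, "")))
    return urls
```

Anyone who can export a view can harvest these URLs with no API access at all, which is exactly why “only API developers saw the warnings” doesn’t hold up.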
People could have been depending on CSV backups, trusting that the actual attachment could be retrieved from the URL in the CSV backup.
People could have been sharing CSV exports, trusting that the recipient could click the URL in the file to get the actual attachment.
People could have used formula fields to extract the URL, and then used the attachment’s URL elsewhere.
No, we don’t know what is going on at Stacker or any of the other portal services right now. We don’t know if they cache attachment URLs or the actual attachment files.
But we do know that portal services have been displaying Airtable attachments for as long as there have been portal services. And in order to display these attachments, they must at some point use the URLs provided by Airtable.
And above all, these people knew - or should have known - the risks, because the warnings in the API are clear and obvious. Even without the warnings, any business built on another vendor’s technology rests on the assumptions that (a) its builders are experts in crafting portals, and (b) they are tightly connected with the vendor, with at least a modicum of confidence that their architecture is sustainable.
Without question. And it’s likely these URLs work and will continue to work, but not indefinitely. In my view, a CSV is external to the UI and Airtable has no obligation to ensure the CSV will be accurate at some point in the distant future. They are certainly obligated to sustain the accuracy of a CSV for a reasonable time and that seems to coincide with the new announcement.
Yeah, this is tricky territory. CSV exports are data snapshots in time. At that time, all the data in the system matched the export. Should field values be treated any different than fields that contain URLs? They each have the capacity to change at the source while we’re holding the CSV snapshot. Should one class of field be guaranteed to be persistent? I don’t think so. That’s a big ask for a company whose prime directive is to manage your lists of data.
CSVs are code, and they are external to the app. Formulas are code, and they produce a URL of an underlying feature. My sense is that these formulas will still work when the signed URLs are changed, so this is probably fine for these use cases. If they break, I would count that as a failure on Airtable’s part.
Once again, you are suggesting this is an issue Airtable should be concerned about. If you take ANY data and copy it and paste somewhere and then the original data changes, this problem will bite you. So why should attachment URLs be sustained and preserved any more than any other data values? Isn’t it possible that someone deletes and re-uploads an attachment thus altering the image address?
Given the ease of getting the URLs without using the API, it isn’t reasonable to expect people to look at the API documentation for warnings. Plus, those warnings were not in the API documentation three years ago.
How are CSVs code? They are data. CSVs have no instructions. They take no input. They produce no output.
Yes, formulas are code. But many users don’t think of formulas as code, and it isn’t fair to expect formula writers to have the same level of diligence as people who write code for the REST API.
However, in the past, formulas have always produced the same output when given the same input. It sounds like that might change. It isn’t clear what is going to happen with formula fields. I’m looking forward to hearing more from Airtable about the impact of the changes on formula fields and scripting.
Overall, I think that this change is an important security enhancement. I also am glad that Airtable is announcing this well in advance to allow people time to make any necessary adjustments. However, I feel that we need a lot more information on how things will work in the future.
Splitting hairs now; this is codified data, not easily read or utilized by humans. It is external to Airtable and subject to the erosion of time. Are you suggesting Airtable should somehow guarantee the data in a CSV beyond a reasonable point in time? And if so, what is your expectation of a reasonable time?
Which “people” are you referring to? Those who simply use the Airtable product to manage data? Those who attempt to integrate Airtable with other websites? Describe the personas who are exempt from the responsibilities that come with extending their Airtable solutions.
And they don’t have to, right? Aren’t formulas likely to keep working because they update in near-real-time against the latest signed URLs?
Can you be more specific about your trepidation concerning formulas and scripting? Internally, I assume (and Airtable has all but stated it) that formulas and scripts that access attachment URLs will continue to function, right? They’re just reading the latest instance of the URL, and that will work for a few hours (apparently). If you then ship that URL off to another machine or human who needs to consume that content at a much later date, you have a problem. Aside from that use case, it should all be fine.
I get the sense there is a bit of conflation ongoing in this latest panic session. An email automation is a good example - you can create an email that exposes an attachment URL but that URL may expire before the recipient has a chance to read the message. This is unfortunate if you built a business process that depends on this functionality. But let’s be clear - this could fail even if Airtable never institutes this change. The record containing the attachment may be changed or deleted entirely. As such, when designing systems like this, even the no-coders must consider these likely scenarios.
@Bill.French - thank you for sharing this; very interesting that you predicted this years ago! I do agree with @Portfolio_Pet that most people don’t care. But I’ve not lost hope that this WON’T happen if we complain enough… I heard back from the support team, who said they would share my concerns with the product team… I do believe in miracles…
Never lose hope. There may be some clever approaches that will emerge as a result of your comments. And who knows, this could be the ideal tipping point for new aftermarket solutions to come forth mitigating the impact of these coming changes. I am very thankful Airtable has published a deprecation roadmap - it gives everyone time including the new aftermarket products to come to fruition. Just a wild guess - @openside is probably hard at work at this very moment.
When you say “this WON’T happen”, I suspect you have in your mind the perfect remedy. Please share if so. I’d like to know to what lengths you would like to see Airtable go to sacrifice security in the interest of flexibility.
Indeed, no one wants to be concerned with such details. That’s why we’re all huddled around the magnificent Airtable interface, right? However, when a portion of the user base decides to use Airtable as a back-office hosting server, should you be expected to subsidize the rise in prices when a small percentage of users force Airtable to serve millions of requests per hour for product catalogs?
I’m sure we can all agree no one wants to pay more and especially not for Jimbo’s Jumbo Shrimp aprons that sell like - well - jumbo shrimp on special at 89 cents a pound.
Airtable has a duty to walk a very tight line between being a database management app and an accidental back-office web server. They have chosen – as I predicted they would – to be guarded against possible use cases that would risk everyone’s performance, security, and prices.
Considering all the constraints and customer interests, please tell me: exactly what would you do?