
Incoming webhooks: Missing properties in JSON response

Topic Labels: Automations
Martin_Malinda
5 - Automation Enthusiast

Hello!

I’m trying to set up an automation with Incoming Webhooks. The source of the data is an ActiveCampaign automation that sends contact information to my webhook endpoint.

The issue is that if some information is missing from the contact record, ActiveCampaign does not send that property at all in the JSON payload:

{
  body: {
   linkedin: undefined
  }
}

vs.

{
  body: {}
}

In ActiveCampaign’s case, the second payload is sent instead of the first, which then makes Airtable tell me there is a “configuration error”:

[Screenshot: the “configuration error” warning]

Should I just ignore this error? Or will a missing property in the JSON input make the automation error out?

Is there any workaround here?

8 Replies
ScottWorld
18 - Pluto

Welcome to the community, @Martin_Malinda!

Airtable’s automation webhooks are pretty basic and lack a lot of sophistication. Several of the limitations are outlined in this support article.

The only option you would have (outside of writing your own custom JavaScript code to handle the incoming JSON) would be to turn to a professional no-code webhook automation tool like Make’s custom webhooks & custom webhook responses, which is a much better automation & webhook tool than what Airtable provides.

Make can handle all variations of your incoming JSON data without errors, it can handle both GET and POST requests, it can provide custom webhook responses, and much more.

This is the option that I use for most of my Airtable consulting clients.

Thank you @ScottWorld. I can write a custom API endpoint, and I can also solve this via Zapier. But having it right inside Airtable seems like the most convenient and easily maintainable solution.

I’ll see what (if any) errors this causes and write it up here.

Can AC send Null if there is nothing there? You can probably just ignore it if nothing else down the chain is looking for that data.

To be clear, this is not an “issue”. Most modern APIs are intentionally designed to avoid sending data that doesn’t exist; this is how we avoid really poorly optimized network traffic. Airtable’s API functions exactly like this.

Do this with likely peril over the horizon. :winking_face:

In most cases, automations that fail in some way consume more resources than those that succeed. Airtable already doesn’t give your instance enough resources, so I think it’s a mistake to run processes that fail ungracefully. Furthermore, how do we know Airtable won’t quietly disable high failure rate automations at some point in the not-too-distant future? To conserve compute power they are already disabling syncs that are dormant for a certain period of time.

Given the way this webhook sends data, it probably makes sense to tease out the incoming elements in a script, where you can take full advantage of error handling. This allows you to read the body and then decide how to use whatever values were actually sent. And besides, there are likely cases where you want to create error logs when data is missing, or at least supplement the data with indications that certain values are missing.
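Something along these lines would do it (a rough sketch only, assuming the individual webhook properties are mapped as input variables on a “Run a script” action; the table and field names are hypothetical):

// Rough sketch: read each mapped input defensively and decide what to do
// when a value is missing. "Contacts", "Email", "LinkedIn" and "Notes" are
// hypothetical names.
const config = input.config();
const email = config.email;
const linkedin = config.linkedin; // undefined when ActiveCampaign omits it

if (!email) {
    // Email is required in this hypothetical workflow, so fail loudly.
    throw new Error("Webhook payload did not include an email address");
}

const contacts = base.getTable("Contacts");
await contacts.createRecordAsync({
    "Email": email,
    // Write an explicit empty string rather than undefined.
    "LinkedIn": linkedin ?? "",
    // Leave a trace so missing data can be reviewed later.
    "Notes": linkedin ? "" : "linkedin was missing from the webhook payload",
});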

UPDATE: Another idea: force ActiveCampaign to always send data values. This is typically possible in most systems by making certain fields mandatory. Even a value such as zero or “TBD” would pave the way for an uneventful webhook process.

Martin_Malinda
5 - Automation Enthusiast

In my web development practice it’s better to always send all the keys, for consistency. A consumer of the API can review the JSON and see all potential values. The size of the JSON payload isn’t that different. If there’s an issue with large JSON payloads, it’s usually a lack of pagination or, in general, requesting too many resources. If you’re sending one resource, then the difference between 500 B and 1 KB is quite negligible, at least in my experience. And if the resource is too big (too many properties), there can be specific API params to filter which fields you need (Airtable allows that, AFAIK).

:thumbs_up:

Yes, I’m working with these inputs in a script, exactly to do some error handling and data validation before creating a row. But I’m not able to select the whole “body” object as an input to the script. Clicking it forces me to drill into it and pick one specific property of it.

I think this happens a lot in web app circles, but in practice, it is a design dependency that should be avoided.

Almost the entire database world (including Airtable) has adopted the idea that producing and consuming data be as optimized as possible. The very essence of a NoSQL platform is to avoid storing, sharing, and indexing data that is simply not there. For small data sets in web projects, this is typically not an issue. However, at scale, it is a huge benefit to not move empty containers all over the world. :winking_face:

The better design choice (and especially in web apps) is to embrace data elements in JSON payloads as they are, not as you wish them to be. This requires a little more work, but the upshot is code that keeps running despite changes in schemas, for example, or despite more difficult issues like an API that is unable to aggregate information from multiple sources.

As you may have determined, Airtable’s [data] API doesn’t provide responses that include all potential values; it simply includes values. A 250-field record with three populated values will rightly return three elements. By “review”, I assume you mean that your integration code can review the JSON results, not the API. All potential values are a matter of schema inquiry, and Airtable provides a special API for this as well.
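Reading a single record defensively looks something like this (a sketch only; the base id, record id, token, and field names are placeholders):

// Airtable's REST API returns only populated fields under record.fields,
// so every field read should assume the key may be absent.
const res = await fetch(
    "https://api.airtable.com/v0/YOUR_BASE_ID/Contacts/recXXXXXXXXXXXXXX",
    { headers: { Authorization: "Bearer YOUR_API_TOKEN" } }
);
const record = await res.json();

// e.g. record.fields might be { "Email": "a@b.com" } and nothing else.
const linkedin = record.fields["LinkedIn"] ?? null;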

Indeed. But what if you’re making requests that are collectively greater than 20 GB over an entire month, and all on a mobile data plan? Someone, somewhere, always cares about a doubling of size without any benefit for doing so. :winking_face:

Indeed, this is a known design deficiency and there’s a deep conversation about this challenge in this forum. I believe the only workarounds include these options:

  1. Proxy all webhook calls through a service that can include the body as a unified stringified payload.
  2. Force the webhook caller to include the body as a unified stringified payload.
  3. Force the webhook caller to include all fields in the schema.

Since #2 and #3 are extremely unlikely, that leaves only #1. In some cases, you are in control of #2 and/or #3, but this is not common.
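For what it’s worth, #1 can be a very small relay. A rough sketch (assuming Node 18+ for the global fetch; the Airtable incoming webhook URL and port are placeholders):

// Receive the original webhook, then forward the whole body to Airtable as a
// single, always-present "payload" string that a script step can JSON.parse.
import express from "express";

const app = express();
app.use(express.json());

const AIRTABLE_WEBHOOK_URL =
    "https://hooks.airtable.com/workflows/v1/genericWebhook/..."; // placeholder

app.post("/activecampaign", async (req, res) => {
    await fetch(AIRTABLE_WEBHOOK_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ payload: JSON.stringify(req.body) }),
    });
    res.sendStatus(200);
});

app.listen(3000);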

UPDATE: There is another workaround: use the inbound webhook event ONLY to inform your automation of changed data. Then call back into the originating API to retrieve the full payload, ready for dynamic parsing.
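Sketched out, and assuming the webhook can at least deliver a contact id and that the script step is allowed to call ActiveCampaign’s v3 contacts endpoint (account URL and token are placeholders):

// Treat the webhook as a notification only; fetch the full contact afterwards
// so every expected property can be read (and defaulted) in one place.
const { contactId } = input.config(); // mapped from the webhook payload

const res = await fetch(
    `https://YOUR_ACCOUNT.api-us1.com/api/3/contacts/${contactId}`,
    { headers: { "Api-Token": "YOUR_AC_API_TOKEN" } }
);
const { contact } = await res.json();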

A 250-field record would certainly create problems. I would argue that having such a large record is maybe a design problem and you should split it into smaller ones anyway. And if you do have to work with it, always explicitly ask for the specific fields you need. But yes, I get your point now; I completely agree the rules are different in this dynamic land, where people can have all kinds of data in their DBs. It’s a different scenario from opinionated, specifically structured APIs.
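For reference, the field filtering I mean looks like this against Airtable’s list-records endpoint (base id, token, and field names are placeholders):

// Repeated fields[] query parameters limit the response to just those fields.
const url =
    "https://api.airtable.com/v0/YOUR_BASE_ID/Contacts" +
    "?fields%5B%5D=Email&fields%5B%5D=LinkedIn";
const res = await fetch(url, {
    headers: { Authorization: "Bearer YOUR_API_TOKEN" },
});
const { records } = await res.json();
// Each record.fields now contains at most Email and LinkedIn (when populated).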

I can totally create my own middleman endpoint for this. I’ll see how things go :thumbs_up:

Thanks for all the thoughts, I’ll write here later once I gather some more experience with this.

Indeed. But is a large number of empty fields for any given record a reason to rearchitect your data model? What if that record is used in a process that captures increasingly more data over time, and it needs to be de-normalized (flattened) for other unseen integration requirements? Maybe it’s a design issue, but maybe not. It may be fully intentional and a business requirement - we don’t know. But what we do know (as integrators) is that we won’t always know. LOL Therefore, it’s always wise to assume that results generated by API calls will, in almost every case, conceal empty fields.