Nor would it guarantee that the email was actually received. 😉 Email, after all, is where knowledge goes to die.
These are two very different requirements. And we can add at least two more dimensions to your list:
Which students did we attempt to email, only to find the address was invalid or compromised?
Which students did we believe we emailed, but the data the automation needed was incomplete and prevented the process from completing?
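One way to surface those two failure dimensions is to classify each record before attempting a send. Here is a minimal sketch in Python (a real version would run as a script step against your actual table; the student records, field names, and the loose email check below are all illustrative assumptions, not your schema):

```python
import re

# Hypothetical student records; in a real base these would come from your table.
students = [
    {"name": "Ada", "email": "ada@example.edu"},
    {"name": "Ben", "email": ""},                 # incomplete data
    {"name": "Cy",  "email": "not-an-address"},   # malformed address
]

# Loose sanity check only -- not full RFC 5322 validation.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def classify(student):
    """Bucket each record into an outcome class before attempting to send."""
    if not student["email"]:
        return "incomplete-data"
    if not EMAIL_RE.match(student["email"]):
        return "bad-address"
    return "ok-to-send"

buckets = {}
for s in students:
    buckets.setdefault(classify(s), []).append(s["name"])
```

The point is not the regex; it's that every record lands in exactly one bucket, so the two "silent failure" cases become visible before the automation ever fires.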
The only way to know whether a student has actually received an email is to track it, either by requesting a read receipt (which the recipient's client may simply ignore) or by placing a unique tracking image in the body of the email, served from a site where you have instrumented tracking analytics.
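The tracking-image approach boils down to issuing a unique token per message and logging when your server serves that image. A rough sketch of the idea, with an in-memory dict standing in for your analytics store and a placeholder domain (not a real endpoint):

```python
import uuid

opens = {}  # token -> open status; stand-in for your analytics store

def make_pixel(student_email):
    """Issue a unique token and return the <img> tag to embed in the email body."""
    token = uuid.uuid4().hex
    opens[token] = {"email": student_email, "opened": False}
    # example.com is a placeholder for a server you control and instrument.
    tag = f'<img src="https://example.com/px/{token}.gif" width="1" height="1">'
    return token, tag

def record_hit(token):
    """Called when your web server serves the pixel: the email was rendered."""
    if token in opens:
        opens[token]["opened"] = True

token, tag = make_pixel("ada@example.edu")
record_hit(token)  # simulate the recipient's client loading the image
```

Keep in mind this only proves the image loaded; many clients block remote images by default, so a missing hit does not prove the email went unread.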
There are many things about this that are less than ideal. Let’s discuss the big issues.
It’s possible to build no-code automations that “successfully” execute, but fail. This is true with script automations as well.
The automation runs feature is best used as a diagnostic tool, not as an operational management feature. It is most useful to automation developers, and depending on the nature of the automation, sussing out failures can be very tedious work.
I have a hunch that what @Chloe_Walsh needs is a registry of this automation's outcomes. That's a reporting function, and it tends to intimate a compliance requirement.
In my view, a simple registry of every automation execution is possible by creating a registry table and adding a record to it each time the automation runs. The final step of the automation would receive the outcome data and capture it in that table. This provides an operations-friendly view of what's been happening inside the automated process, and conditional branches could be added to create outcome classes.
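The shape of that final logging step is simple. A sketch in Python, with a plain list standing in for the registry table and invented record IDs and outcome labels (in a real build, this would be a create-record action or script step writing to your base):

```python
from datetime import datetime, timezone

registry = []  # stand-in for a "Run Log" table in your base

def log_run(student_id, outcome, detail=""):
    """Final automation step: append one record per execution to the registry."""
    registry.append({
        "student_id": student_id,
        "outcome": outcome,          # e.g. "sent", "bad-address", "incomplete-data"
        "detail": detail,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    })

# Each conditional branch of the automation ends by logging its outcome class.
log_run("rec001", "sent")
log_run("rec002", "incomplete-data", "missing email field")
```

Because every branch terminates in the same logging step, the registry table becomes a complete, queryable record of outcomes, which is exactly what a compliance-style report needs.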