AI Zombies - The Torment of the AI Engineer

AI engineers - have you heard this today?

The chat did this weird thing … Do you know why it did that?

Some version of this question comes back over and over again to torment you like an AI zombie that just won't stay dead. You just added some context to handle that edge case and boom! The zombie breaks free again: that same context causes some other odd behavior or edge case.

And just like real zombies 🤷‍♂️ our systems are hybrids: half dead, half alive's more sinister cousin - half deterministic, half AI.

We have bolted AI onto our existing software and in the process created a hybrid of deterministic and non-deterministic code. What this means in practice is that when something odd happens we don't know if it's the code or the AI. All our users know is that the software we've built is acting weird.

So we rifle through our code, scouring for the culprit. Phew, looks like it's all good. It's zombie hunting time. Time to mow down this edge case with some well-placed context for the AI…

Boom! Crash! The walls cave in, the ground heaves and hundreds of zombies break free!

The small context update woke the dead. We're using structured outputs from the AI to feed back into our code (an attempt to squeeze some consistency out of the non-deterministic AI). Except we overlooked one important detail: we removed the word JSON from the context.

OpenAI's JSON mode "important notes"

Don't worry! They will throw an error to help you remember 🤦‍♂️.

(Actually, OpenAI's error handling is pretty good most of the time. When it throws this error, it tells you to add the word JSON somewhere in the context.)
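If you're feeding structured outputs back into deterministic code, it helps to put a guard between the two halves of the hybrid. Here's a minimal sketch of that idea: the function name `parse_model_json` and the example key names are my own inventions, not anything from OpenAI's SDK, and the comment about JSON mode reflects OpenAI's documented requirement that the word "JSON" appear somewhere in your messages when using `response_format={"type": "json_object"}`.

```python
import json


def parse_model_json(raw: str, required_keys: set[str]) -> dict:
    """Validate a model response before deterministic code touches it.

    Note: with OpenAI's JSON mode (response_format={"type": "json_object"}),
    the word "JSON" must appear somewhere in your messages, or the API
    itself errors out - the gotcha described above.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        # The model can still emit malformed text; fail loudly, not weirdly.
        raise ValueError(f"model did not return valid JSON: {exc}") from exc
    if not isinstance(data, dict):
        raise ValueError("model returned JSON, but not an object")
    missing = required_keys - data.keys()
    if missing:
        # A valid object can still lack the fields downstream code relies on.
        raise ValueError(f"model JSON missing keys: {sorted(missing)}")
    return data
```

The point of the guard is that when a zombie does break free, the error names the AI half of the system instead of letting bad data wander into the deterministic half.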

When I take a step back and think about these systems I realize how complex and brittle they are. We really are at the whim of the model. One little change could render all our wrappers/parsers/helpers useless. So why do we get so worked up when the zombies creep in?

Maybe there is some underlying anxiety that the inner workings of the AI too closely mirror our own psyche. We have all had a sudden wave of emotion and reacted to the same situation differently - and poorly. Now, to be fair, human brains have to deal with the enormous fluctuations of hormones, glucose and our bosses' daily goals. I reflect on how AI hallucinations resemble the way humans respond differently to the same situation. Maybe that realization is the cause of the zombie anxiety?

Nah. I don't buy it. It's simpler than all that.

As engineers, our self-worth is tied up in being right about the deterministic systems we've built. Except now they are not so deterministic. Often far from it. So when someone questions why the program we built does something unexpected, we react as if it's an assault on our ability, our mastery, or even our self-worth.

So when the AI zombies are breaking down the door, just remember… aim for the head.