Observing the Emergence of Agents
Emptying the trash all at once. Reconciling records overnight. Cleaning up inconsistencies in bulk.
It’s a way of handling work—of running processes—that exists in every system, large or small. And inevitably, some tasks are simply more efficient when handled this way.
In most organizations, however, these batch jobs live in the shadows. They sit behind the service layer, rarely visible, rarely discussed. People avoid owning them. They get wrapped up in the convenient label of “technical debt” and quietly ignored.
They embody years of accumulated operational knowledge—edge cases, exceptions, scars from past incidents. Touch them without context and things break. As a result, only long-tenured “keepers of the lore” dare to open them, usually just long enough to close them again. They are latent explosives: stable, until they aren’t.
If you’re a developer who can freely use modern AI infrastructure, you’ve probably already noticed this: today, you can implement much of this logic with prompts.
An “AI agent” is not some mystical new entity. It can be as simple as saying:
“You are the steward responsible for X.
You monitor Y, enforce Z, and handle exceptions according to these principles.
Here are good examples. Here are bad ones.”
Give the model access to the right data, define its role, grant it a mission—and that’s it.
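As a sketch, such an agent can be little more than a prompt template plus glue code. Every name below (`STEWARD_PROMPT`, `build_prompt`, the "playbook" dict) is illustrative, not from any particular framework; the point is only that the role, mission, and examples are stated in plain language:

```python
# A minimal sketch of an "agent" as a prompt plus glue code.
# All names here are hypothetical; swap in your own model client.

STEWARD_PROMPT = """\
You are the steward responsible for {domain}.
You monitor {signal}, enforce {rule}, and handle exceptions
according to these principles: {principles}

Good examples:
{good_examples}

Bad examples:
{bad_examples}

Record to review:
{record}

Reply with APPROVE, FLAG, or ESCALATE, followed by a one-line reason.
"""

def build_prompt(record: dict, playbook: dict) -> str:
    """Fill the steward template from an operational 'playbook' dict."""
    return STEWARD_PROMPT.format(record=record, **playbook)

# The playbook is where the accumulated operational judgment lives,
# written down instead of carried in someone's head.
playbook = {
    "domain": "refund reconciliation",
    "signal": "the nightly refunds ledger",
    "rule": "every refund must match an original charge",
    "principles": "prefer flagging over silent fixes",
    "good_examples": "- refund matches a charge 1:1 -> APPROVE",
    "bad_examples": "- refund with no matching charge -> ESCALATE",
}

prompt = build_prompt({"refund_id": "R-1", "amount": 40}, playbook)
```

The resulting string is the whole "program": readable top to bottom, with the intent stated rather than encoded.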
What’s striking is how readable this is.
This is no longer code that must be reverse-engineered to understand intent.
The intent is the program.
A single batch program—50 to 200 lines’ worth of prompts and glue code—can perform the combined work of one or two operations staff plus a back-office maintenance developer. And it does so continuously, 24 hours a day.
Even tasks we once associated with “real programming”—logic, heuristics, probabilistic judgment—can now be expressed this way. You write the logic in natural language, and it runs. It’s less about knowing algorithms and more about knowing how to delegate.
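One way to picture such a batch program: a loop that feeds each record to the model and acts on its verdict. In this sketch the model call is a deterministic stub (`ask_model`), since the real call depends on whichever provider you use; the structure of the loop is the part that carries over:

```python
# Sketch of an agent-driven batch job. `ask_model` is a hypothetical
# stand-in for a real LLM call; the rule it applies here is exactly the
# kind of judgment that once lived only in an operator's head.

def ask_model(prompt: str, record: dict) -> str:
    """Placeholder model call: approve matched refunds, escalate the rest."""
    return "APPROVE" if record.get("matched") else "ESCALATE"

def run_batch(records: list[dict]) -> dict:
    """Classify each record and collect decisions, as a nightly cron job might."""
    decisions = {"APPROVE": [], "ESCALATE": []}
    for record in records:
        verdict = ask_model("You are the steward of refund reconciliation...", record)
        decisions[verdict].append(record["id"])
    return decisions

result = run_batch([
    {"id": "R-1", "matched": True},
    {"id": "R-2", "matched": False},
])
```

Swapping the stub for a real model call turns the natural-language rule in the prompt into the executable logic; the surrounding loop stays ordinary, boring glue code.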
In a sense, it feels like learning how to manage people—except the “person” never sleeps, never forgets, and never loses context.
It’s an entirely new experience.
The novelty isn’t automation itself; it’s that we are finally externalizing and executing what used to live only in people’s heads:
operational judgment,
accumulated exceptions,
“we do it this way because last time it went wrong.”
AI agents simply thaw that knowledge out and make it explicit.
This shift is profound, but also dangerous if misunderstood.
If these agents run quietly for months, their prompts unexamined, they risk becoming the next generation of inscrutable batch jobs—the same hidden bombs, just written in better prose.
And responsibility doesn’t disappear just because judgment is automated.
Someone must still own the rules, the outcomes, and the failures.
This is a new world.
Not because AI is thinking for us,
but because we can finally write down our intent—clearly, legibly—and have it act.
And once you’ve felt that, it’s hard to go back.