
“Who's accountable when the AI writes the code?”

Someone raised that question during a strategy session last week, and it's one I think every organization using AI should be asking right now.

"When AI is generating more and more of the code, how do we make sure the humans reviewing it actually understand what's in it, and can clearly communicate that to customers?"

It's a great question because it cuts to something most companies haven't thought through yet. We've been working through our own framework for this, and the patterns I'm seeing across client engagements tell me most organizations are way behind on it.

1. The question nobody's asking

We're in the middle of a massive shift in how software gets built, and almost nobody is talking about the accountability side of it.

I was on a call recently with a director at a financial services firm. He's vibe-coding an internal tool with Claude Code. His words: "I don't know what's possible, but I might as well try it, fall on my face, and then bring in the experts." He's also using a second AI tool to check the first AI tool's work because, as he put it, "I don't know."

He was just promoted to an AI leadership role. He's sharp, motivated, and doing exactly what you'd want someone in that role to do. But he's the first to admit he can't fully explain what the code is doing under the hood.

Now multiply that across your entire organization.

2. The macro/micro split

What I've seen across a dozen client engagements is a consistent pattern. Engineering teams generally know what the AI is building at a macro level. They understand the big rocks, the architecture, the intent. But at the micro level, in the specific logic of any given function or the precise behavior of a workflow, there's a gap.

That gap creates a communication problem. I've watched it play out at multiple organizations. Engineering says "we're moving these big rocks" and the people talking to customers are asking "but what exactly is in this release, and what isn't?" Both sides are right, but they're operating at different altitudes. The distance between those altitudes used to be small enough to bridge with a standup or a spec doc. It's widening across the industry, because AI generates code faster than teams can fully review and document it.

The companies that recognize this gap and build processes around it are in a much stronger position than the ones pretending it doesn't exist.

3. Code is cheap. Understanding is expensive.

Code is cheap now. You can produce it faster than you can meaningfully review it. But teaching, enabling, and equipping people is the hardest part, and it'll probably stay the hardest part for a long time.

That statement used to apply to customers adopting new tools. Now it applies just as much to the teams building them.

Here's what I mean. When a senior engineer writes code by hand, they carry the context of every decision in their head. They can explain why they chose this approach over that one, what edge cases they considered, and what trade-offs they made. When AI writes the code and the engineer reviews it, that context gets thinner if the team isn't intentional about preserving it.

This is where a lot of organizations are getting caught. If your team can't explain the "why" behind what was built, they can't set expectations with customers. They can't scope what's in and what's out with confidence. And they can't answer the question every customer will eventually ask: "What exactly did you build, and who's responsible if it breaks?"

4. This isn't a technical problem; it's an organizational one.

The natural reaction is to say "we need better code review processes" or "we need more documentation." That helps (kind of), but the real issue is structural.

Most organizations haven't updated their accountability models for AI-generated work. The person who prompted the AI isn't the same as the person who wrote the code, because nobody “wrote” the code in the traditional sense. The reviewer may catch obvious issues but miss the subtle ones if they're not asking the right questions. The project manager communicates what they can, based on what engineering tells them, which is based on what they understand at a macro level.

So when something goes sideways, who owns it?

This matters way beyond engineering. Every team using AI to generate content, analysis, recommendations, or decisions faces the same question. If AI writes your marketing copy and it misrepresents your product, who's accountable? If AI generates a financial model and the assumptions are wrong, who catches it?

The organizations that figure this out early will have a real advantage. Not because they'll avoid every mistake, but because they'll have clear ownership, clear communication, and clear expectations about what AI-generated work actually means.

Putting this into action

Here's where I'd start if I were staring at this problem inside my own organization:

  1. Define the review standards for AI-generated work. Not just "someone looked at it," but what does a meaningful review actually look like for your team? What's the threshold for understanding before something ships?

  2. Build a communication bridge between the people building and the people talking to customers. If your technical team can explain the macro but not the micro, your customer-facing team needs to know that so they can set expectations honestly.

  3. Treat AI-generated output like work from a new hire. You wouldn't ship a junior developer's code without review and context-setting. Apply the same standard to AI output, especially in the early days of adoption.

  4. Build "explain it back" into your workflow. If someone can't explain what the AI built in plain language, it's not ready to ship. This isn't about slowing things down. It's about making sure you can stand behind what you deliver.

  5. Assign ownership explicitly. Every AI-generated deliverable should have a human name next to it. Not the AI tool's name. A person who's accountable for the quality and accuracy of what went out the door. For code, there's a lightweight way to make this concrete, sketched right after this list.
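One concrete example for teams shipping code through GitHub: a CODEOWNERS file attaches a named human to every part of the codebase, and GitHub automatically requests that person's review on any pull request touching their area. This is a minimal sketch, not a prescription, and the paths and usernames are hypothetical placeholders for your own repo and people:

    # .github/CODEOWNERS
    # Each path maps to a named person, not an AI tool.
    # GitHub auto-requests these reviewers on PRs that touch the path.
    # (Paths and usernames below are illustrative only.)

    /src/billing/     @jane-doe
    /src/auth/        @john-smith

    # AI-assisted or vibe-coded areas get a named owner too.
    /tools/internal/  @ai-team-lead

The tooling matters less than the principle: a specific person's name is attached to the work before it ships.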

Wrapping up

The companies that will win in the age of AI aren't necessarily the ones that generate the most code or the most content. They're the ones that can explain what they built, why they built it, and who stands behind it. Accountability hasn't changed; the way work gets done has. Most organizations haven't caught up yet.

Onward & upward,
Drew

P.S. If we haven’t met yet, hello! I’m Drew Burdick, Founder and Managing Partner at StealthX. We work with brands to design & build great customer experiences that win. I share ideas weekly through this newsletter & over on the Building Great Experiences podcast. Have a question? Feel free to contact us; I’d love to hear from you.
