Why you shouldn't let AI write your documentation

AI can generate documentation that looks complete. That doesn't mean it is. A case for why clarifying intent is the real work of documentation, and why it still requires a human in the loop.

2 min read

There's a quiet assumption spreading through product and engineering teams right now: that documentation is a generation problem. Point an LLM at your Figma files and your codebase, and let it produce the docs.

It sounds efficient. It might even look right at first glance. But it misses something fundamental about what documentation is actually for.

Documentation is an act of clarification, not transcription.

When a human writes documentation, they're not just describing what exists. They're making a series of decisions about what matters, what's confusing, and what a reader needs to understand before they can move forward. That judgment is hard-won. It comes from someone who sat in the room when the trade-offs were made, or from someone who has tried to explain this thing to someone else and watched them stumble.

An LLM doesn't have that context. It has the artifacts. And artifacts are not the intent.

A Figma file tells you what something looks like. Code tells you what something does. Neither tells you why a decision was made, what alternatives were considered, or what the edge cases are. That knowledge lives in people's heads. Getting it into documentation requires someone to deliberately extract and translate it.

This is especially true for anything consequential.

If your documentation explains a simple UI pattern, maybe generated text gets you most of the way there. But if it explains an API contract that downstream teams will build on, or an architectural decision that will constrain you for years, the cost of vague or misleading documentation is real. Teams will misunderstand. Systems will be built on false assumptions. And tracing that back to a docs gap is painful.

AI can help with documentation, just not in the way most teams are using it.

It's genuinely useful for reformatting, restructuring, or making existing content more readable. It can catch gaps if you describe your intent and ask it to identify what's missing. It can help a subject-matter expert write faster once they know what they're trying to say.

What it can't do is replace the human judgment required to decide what needs to be said in the first place.

The machines are good at many things. Clarifying intent isn't one of them. That part is still yours.


Have a project in mind?

Whether you're looking for a consulting partner or building a team, I'd love to talk.