Making design system intent legible to AI

Overview

AI doesn't need more content. It needs content in the right place, structured the right way.

Figma components don't fail because they're poorly designed. They fail because the thinking behind them doesn't travel.

A component can be built correctly, documented thoroughly, and still be opaque to the AI workflows that are increasingly being asked to work with it. The design intent lives in the documentation. The component lives in Figma. Those two things are not in the same place, and for AI, proximity matters.

That gap was the problem. The annotation work was an attempt to close it.


Epic Games

2026

Process

The descriptions were there. They didn't say anything useful.

When the team started seriously examining what AI-readiness meant at the component level, one thing became clear quickly: Figma's component description fields were largely blank, or carried placeholder text that told a human just enough while telling a model nothing useful.

The intent behind every component existed. It was in the documentation: when to use it, when to reach for something else, accessibility requirements, default variants, and size guidance. The work wasn't to create that content. It was to move it to where AI could actually find it.

The template defined what the annotation needed to carry.

Before anything could be generated, the team had to answer a harder question: what does a component annotation actually need to contain to be useful to an AI workflow?

That answer became the AI readiness template, a framework defining the required elements at two levels. At the component set level: use case, guidance on when to use something else, accessibility labels, and default variant. At the individual variant level: size guidance, specific behavioral context, and relevant constraints.
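A rough sketch of how a template like this could be encoded as a completeness check. The field names here paraphrase the elements described above; the team's actual schema is not public, so treat this as illustrative:

```python
# Hypothetical encoding of the AI readiness template.
# Field names paraphrase the elements described above, not an actual schema.
REQUIRED_SET_FIELDS = [
    "use_case",
    "when_to_use_something_else",
    "accessibility_labels",
    "default_variant",
]
REQUIRED_VARIANT_FIELDS = ["size_guidance", "behavioral_context", "constraints"]


def missing_fields(annotation: dict) -> list[str]:
    """Return the template fields an annotation fails to carry."""
    missing = [f for f in REQUIRED_SET_FIELDS if not annotation.get(f)]
    for name, variant in annotation.get("variants", {}).items():
        missing += [
            f"{name}.{f}" for f in REQUIRED_VARIANT_FIELDS if not variant.get(f)
        ]
    return missing
```

A check like this is what makes the heuristic enforceable: an annotation isn't done because text exists in the field, but because every required element is present.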

The template wasn't a writing guide. It was a heuristic: a definition of the signal a machine reader needed in order to use a component correctly. Getting that definition right was the work. The generation came after.

The generator automated the extraction. The instructions shaped the output.

An engineer building a separate Claude-powered tool for querying the design system had created a skill that could be adapted: a /describe command that pulled from the packaged documentation and output a formatted annotation. The instruction file was refined to meet the writing standards, punctuation guidelines, and structural requirements the template defined.

Testing quickly surfaced that formatting choices that worked for human readers didn't work here. Dashes read as noise. Over-description risked working against retrieval rather than supporting it. The calibration wasn't about style. It was about signal density: enough context for the model to use the component correctly, not so much that the useful parts got buried.
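Calibration rules like these can be made partly mechanical. A minimal sketch, assuming illustrative rules and an illustrative length budget (the team's actual instruction file and thresholds are not public):

```python
import re

MAX_CHARS = 400  # illustrative budget; a real threshold would come from testing


def calibrate(text: str, max_chars: int = MAX_CHARS) -> str:
    """Rewrite a draft annotation toward higher signal density:
    drop dash bullets (they read as noise to a model) and, if the
    draft runs long, cut at a sentence boundary inside the budget."""
    # Turn "- item" bullet lines into plain sentences.
    lines = [re.sub(r"^\s*[-\u2013\u2014]\s*", "", ln) for ln in text.splitlines()]
    flat = " ".join(ln.strip() for ln in lines if ln.strip())
    if len(flat) <= max_chars:
        return flat
    # Keep everything up to the last sentence end that fits the budget.
    cut = flat.rfind(". ", 0, max_chars)
    return flat[: cut + 1] if cut != -1 else flat[:max_chars].rstrip()
```

The point of the sketch is the shape of the problem: the transformations are simple, but choosing which ones to apply, and where the budget sits, is exactly the evaluation work described below with designers.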

Output was tested with designers until the level was right. That wasn't a polish pass. It was an evaluation that checked the generated annotations against a standard and iterated until they met it.

It shipped incrementally and was tracked like any other system update.

For every component set updated with annotations, a Slack message went to the help channel: which component, what was added, and a screenshot of the descriptions. The changelog reflected the work. This wasn't an experiment running in parallel to the system. It was part of it.

The automation layer was in progress when the layoff came. The generator was working. The remaining problem was closing the last manual step: getting descriptions to populate Figma directly without the copy-paste.

Outcome

This wasn't a documentation project that touched AI.

It was the problem of making design intent legible to a different kind of reader, and working backward from what that reader needed to how content had to be structured, placed, and calibrated to serve it.

The template is the part that generalizes. A component annotation is only useful if the heuristic behind it is sound, and if the people defining it have thought carefully about what a model needs to understand about a component's intent, not just its appearance.

Have a project in mind?

Whether you're looking for a consulting partner or building a team, I'd love to talk.
