Five working interface patterns for AI products: word-by-word streaming, model state indicators, low-context warnings, confidence flags, and error recovery. The loading states, empty states, and error states the UX Engineer JD asks for — built as live React/TypeScript, not Figma mockups.
Most AI interfaces design around the output. This one designs around the states between outputs.
The problem: five distinct model behaviors — streaming, thinking, done, uncertain, failed — and most products render all of them the same way. The decision was to treat each state as a separate design problem. Different visual contract, different affordance, different meaning for the user.
These are working patterns, not screenshots. The state machine runs. The streaming render is real. The confidence and error treatments are demonstrated, not connected — the patterns exist, not the production wiring. The next version of this page integrates them; this version documents them.
The primary output pattern. Text arrives character by character — each arrival is a render event. The cursor shows the model is present. Connected to the prompt input below.
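A minimal sketch of how the streaming render could work, assuming the text is revealed client-side one character per tick; the names `useStreamingText` and `StreamingOutput` are illustrative, not the page's actual components.

```tsx
// Minimal streaming-render sketch: each revealed character is one render event.
import { useEffect, useState } from "react";

function useStreamingText(fullText: string, intervalMs = 30) {
  const [visibleCount, setVisibleCount] = useState(0);

  useEffect(() => {
    setVisibleCount(0);
    const id = setInterval(() => {
      setVisibleCount((n) => {
        if (n >= fullText.length) {
          clearInterval(id); // stream finished
          return n;
        }
        return n + 1; // reveal one more character
      });
    }, intervalMs);
    return () => clearInterval(id);
  }, [fullText, intervalMs]);

  return {
    text: fullText.slice(0, visibleCount),
    done: visibleCount >= fullText.length,
  };
}

export function StreamingOutput({ fullText }: { fullText: string }) {
  const { text, done } = useStreamingText(fullText);
  return (
    <p aria-live="polite" aria-busy={!done}>
      {text}
      {/* Cursor blinks while streaming, goes solid once the model is done. */}
      <span className={done ? "cursor-solid" : "cursor-blink"}>▋</span>
    </p>
  );
}
```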
Five states. Each a design decision. Idle, thinking, generating, done, error — none of these should look the same. Click advance to step through the machine.
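An illustrative sketch of the five-state machine; the state names come from the page, while the transition order, the `advance()` helper, and the treatment table are assumptions.

```tsx
// The five model states, each with its own visual contract.
export type ModelState = "idle" | "thinking" | "generating" | "done" | "error";

// Happy-path order used by a click-to-advance demo control;
// error would be entered out of band when a request fails.
const HAPPY_PATH: ModelState[] = ["idle", "thinking", "generating", "done"];

export function advance(current: ModelState): ModelState {
  if (current === "error") return "idle"; // recovering resets the machine
  const i = HAPPY_PATH.indexOf(current);
  return HAPPY_PATH[(i + 1) % HAPPY_PATH.length]; // done cycles back to idle
}

// Each state is a separate design problem, not a shared spinner.
export const STATE_TREATMENT: Record<ModelState, string> = {
  idle: "empty state with a prompt affordance",
  thinking: "indeterminate indicator, no text yet",
  generating: "streaming text with a live cursor",
  done: "solid cursor, actions enabled",
  error: "designed failure state with a recovery path",
};
```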
Token-level confidence mapped to visual weight. High renders full. Medium dims. Low underlines. Uncertain goes red. Hover any word for its score.
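A sketch of mapping per-token confidence to visual weight; the thresholds and class names here are assumptions, not the page's actual values.

```tsx
// Map a token's confidence score to one of four visual bands.
type ConfidenceBand = "high" | "medium" | "low" | "uncertain";

function bandFor(score: number): ConfidenceBand {
  if (score >= 0.9) return "high";
  if (score >= 0.7) return "medium";
  if (score >= 0.5) return "low";
  return "uncertain";
}

const BAND_CLASS: Record<ConfidenceBand, string> = {
  high: "",                            // renders at full weight
  medium: "opacity-60",                // dims
  low: "underline decoration-dotted",  // underlines
  uncertain: "text-red-600",           // goes red
};

export function ConfidenceToken({ word, score }: { word: string; score: number }) {
  return (
    // title exposes the raw score on hover
    <span className={BAND_CLASS[bandFor(score)]} title={score.toFixed(2)}>
      {word}{" "}
    </span>
  );
}
```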
Context window usage as a first-class UI element. Drag the slider to simulate different usage levels and watch the component respond.
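A minimal sketch of a context-usage indicator driven by a slider; the threshold percentages, the copy, and the `ContextMeter` name are illustrative assumptions.

```tsx
import { useState } from "react";

// Classify context-window usage into levels the UI can respond to.
function usageLevel(pct: number): "ok" | "warning" | "critical" {
  if (pct >= 90) return "critical";
  if (pct >= 75) return "warning";
  return "ok";
}

export function ContextMeter() {
  const [pct, setPct] = useState(40);
  const level = usageLevel(pct);
  return (
    <div>
      {/* Slider simulates different usage levels. */}
      <input
        type="range"
        min={0}
        max={100}
        value={pct}
        onChange={(e) => setPct(Number(e.target.value))}
      />
      <p data-level={level}>
        {pct}% of context used.
        {level === "warning" && " Older turns will be summarized soon."}
        {level === "critical" && " Start a new conversation to keep full context."}
      </p>
    </div>
  );
}
```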
Three classes of AI failure, each with different meaning and different design response. Not generic browser errors — designed AI error states.
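One way to express failure classes in code, as a sketch: the idea of three classes with distinct design responses is from the page, but the specific class names and copy below are assumptions.

```tsx
// Three illustrative AI failure classes, each with its own design response.
type AIFailure = "rate_limited" | "context_exceeded" | "generation_failed";

interface ErrorTreatment {
  headline: string; // lead with what the user should do
  detail: string;   // technical detail, kept out of the headline
  action: string;
}

export const ERROR_TREATMENTS: Record<AIFailure, ErrorTreatment> = {
  rate_limited: {
    headline: "Too many requests right now",
    detail: "HTTP 429 from the model API",
    action: "Wait a moment and retry",
  },
  context_exceeded: {
    headline: "This conversation is too long to continue",
    detail: "Prompt exceeds the model's context window",
    action: "Start a new conversation or summarize",
  },
  generation_failed: {
    headline: "The model couldn't finish this response",
    detail: "Stream ended unexpectedly mid-generation",
    action: "Regenerate the response",
  },
};
```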
Implementation notes. Punctuation (periods, commas, dashes, question marks, exclamation points) gets special handling in the stream: the cursor blinks while writing and goes solid on done, and the model finishes a sentence, not a string. The output region uses aria-live="polite" with aria-busy toggling, so screen readers get the full sentence once on done, with no per-character stutter. For failures like a 429, move the status code into a collapse and lead with what to do.
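A sketch of the 429 treatment described above, leading with the action and moving the status code into a collapse; the component name and copy are assumptions.

```tsx
export function RateLimitError({ onRetry }: { onRetry: () => void }) {
  return (
    <div role="alert">
      {/* The user's next step comes first; the technical detail is opt-in. */}
      <p>The model is getting too many requests. Wait a moment, then retry.</p>
      <button onClick={onRetry}>Retry</button>
      <details>
        <summary>Technical detail</summary>
        <code>HTTP 429 Too Many Requests</code>
      </details>
    </div>
  );
}
```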