Static user interfaces have long been the ceiling on agentic AI’s practical reach — agents can reason dynamically, but the screens they operate within cannot.
A design pattern called A2UI (agent-to-user interface) is now being positioned as the architectural answer to that gap. According to the analysis, the core idea is straightforward: rather than pre-designing fixed screens, developers define a UX schema — a loosely coupled specification — that tells a compliant renderer how interface components should be built. Agents then produce JSON output at runtime, and the renderer constructs the screen dynamically from that content. Every screen is generated fresh, shaped by the data the agent is working with at that moment.
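The runtime flow described above can be sketched in miniature. The component vocabulary here ("panel", "text", "button") and the "type"/"children" fields are hypothetical stand-ins, not the actual A2UI schema: the point is only that the agent emits JSON and a generic renderer builds the screen from it.

```python
# Illustrative sketch only: component names and field layout are assumptions,
# not the real A2UI specification.
import json

def render(component: dict) -> str:
    """Walk an agent-emitted component tree and emit a plain-text screen."""
    kind = component["type"]
    if kind == "text":
        return component["value"]
    if kind == "button":
        return f"[ {component['label']} ]"
    if kind == "panel":
        body = "\n".join(render(child) for child in component.get("children", []))
        return f"== {component['title']} ==\n{body}"
    raise ValueError(f"unknown component type: {kind}")

# JSON an agent might emit at runtime, shaped by the data it is working with.
agent_output = json.loads("""
{
  "type": "panel",
  "title": "Loan LN-1042",
  "children": [
    {"type": "text", "value": "Interest rate: 6.25%"},
    {"type": "button", "label": "Approve"}
  ]
}
""")

print(render(agent_output))
```

Because the renderer only understands the schema, not any particular screen, a different agent payload yields a different screen with no design-time work.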
The pattern builds on an existing standard. AG-UI (the agent-user interaction protocol) already handles communication between UX layers and agents, but it still requires screens to be defined at design time. A2UI sits above that layer, using AG-UI underneath to preserve bidirectional interactivity — button clicks, form submissions, and other user events route back to the originating agent. The result is a fully interactive experience generated on the fly, contained within a single interface such as a conventional chatbot window.
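The bidirectional half of that loop can be sketched as a simple event dispatcher. The event shape and the `AgentStub` class are assumptions for illustration, not the real AG-UI wire format:

```python
# Sketch of AG-UI-style routing: user events flow back to the agent that
# generated the screen. The event fields here are illustrative assumptions.

class AgentStub:
    """Stands in for the agent that originally generated the screen."""
    def __init__(self):
        self.received = []

    def handle_event(self, event: dict) -> None:
        self.received.append(event)

def dispatch_user_event(agent: AgentStub, component_id: str, kind: str, payload=None):
    """Route a button click or form submission back to the originating agent."""
    agent.handle_event({"component": component_id, "event": kind, "payload": payload or {}})

agent = AgentStub()
dispatch_user_event(agent, "approve-btn", "click")
dispatch_user_event(agent, "loan-form", "submit", {"amount": 250_000})
print(agent.received)
```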
Ontology as the Governing Layer
The architecture pairs A2UI with a business domain ontology. Using a standard like FIBO (the Financial Industry Business Ontology), the ontology defines the business concepts — in a loan approval scenario, for instance, that means loans, parties, interest terms, covenants, and conditions — and serves as a shared language across source systems. A2UI then handles only the rendering logic: how those concepts are surfaced to users as interface components. The two layers are complementary and deliberately separate.
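A minimal sketch of that separation, with FIBO-style concept names and a widget vocabulary that are purely illustrative assumptions: the ontology layer says what a loan *is*, the A2UI layer says only how each field is *shown*.

```python
# Hypothetical two-layer split: concept names, field names, and widget types
# are illustrative, not drawn from FIBO or any published A2UI spec.

ONTOLOGY = {  # shared business vocabulary across source systems
    "Loan":  {"fields": ["principal", "interestRate", "covenants"]},
    "Party": {"fields": ["legalName", "role"]},
}

A2UI_SPEC = {  # rendering logic only: how each concept surfaces as a component
    "principal":    {"widget": "currency_input"},
    "interestRate": {"widget": "percent_input"},
    "covenants":    {"widget": "checklist"},
    "legalName":    {"widget": "text_input"},
    "role":         {"widget": "dropdown"},
}

def components_for(concept: str) -> list[dict]:
    """Join the layers: ontology says what exists, A2UI says how it renders."""
    return [{"field": f, **A2UI_SPEC[f]} for f in ONTOLOGY[concept]["fields"]]

print(components_for("Loan"))
```

Either layer can change without touching the other, which is the point of keeping them deliberately separate.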
The practical consequence is that neither the UX designer nor the UI developer needs to rebuild individual screens when business rules change. Only the A2UI specification updates, and the change propagates the next time a user accesses any related form. The analysis offers a concrete illustration: if a company undergoes an acquisition and needs to add new branding to thousands of forms, the logo logic is configured once in the specification rather than applied screen by screen.
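The branding example can be sketched directly. The spec fields below are hypothetical; the mechanism they illustrate is that branding lives once in the shared specification, so every form picks up a change on its next render rather than being edited screen by screen.

```python
# Sketch of spec-level propagation; "branding"/"logo" field names and the
# acme.svg assets are illustrative assumptions.

SPEC = {"branding": {"logo": "acme.svg"}}

def render_form(name: str) -> str:
    # Every form reads branding from the shared spec at render time.
    return f"[{SPEC['branding']['logo']}] {name}"

forms = ["Loan Application", "Covenant Review"]
before = [render_form(f) for f in forms]

SPEC["branding"]["logo"] = "acme-newco.svg"  # configured once after the acquisition
after = [render_form(f) for f in forms]      # propagates to every form automatically
print(after)
```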
Standardization and Compression
Reusable components are defined once and applied consistently. The analysis notes that an organization could specify, for example, that all user-facing communications — errors, warnings, informational messages — render inside a branded panel compliant with ISO 9241-110, with a dedicated agent validating and constructing each message to that standard automatically.
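A toy version of such a message component might look like the following. The validation rules and panel format are assumptions for illustration, not the actual requirements of ISO 9241-110:

```python
# Illustrative reusable component: every user-facing message is validated and
# wrapped in one branded panel style, defined once and applied consistently.

LEVELS = {"error", "warning", "info"}

def message_panel(level: str, text: str) -> str:
    """Validate a message and render it inside the shared branded panel."""
    if level not in LEVELS:
        raise ValueError(f"unknown message level: {level}")
    if not text.strip():
        raise ValueError("message text must not be empty")
    return f"+-- ACME [{level.upper()}] --+\n| {text} |"

print(message_panel("warning", "Covenant check pending"))
```

In the architecture described above, a dedicated agent would perform this validation and construction step rather than application code.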
Efficiency at the model level is also addressed. Newer compression standards like TOON (token-oriented object notation) can embed schema definitions — including ontology and A2UI specifications — directly into context prompts with lower token overhead. As models develop further, the analysis says, pre-training could enable them to auto-generate screens already compliant with both A2UI and AG-UI without additional instruction.
Companies including CopilotKit are, according to the report, actively building A2UI renderers capable of constructing UI from JSON specifications and wiring interactions back to agents via AG-UI. The broader claim from the analysis is that tying ontology, agents, dynamic JSON screens, and AG-UI message exchanges into one coherent system reduces interpretive burden on UX and development teams while making the overall product more resilient to regulatory and business change.
This article is a curated summary based on third-party sources.