
Flutter GenUI

Status: Alpha (v0.7.0) — API is likely to change.

GenUI is a Flutter SDK that transforms text-based AI agent conversations into rich, interactive graphical UIs. Instead of rendering walls of text, the AI generates dynamic widget trees at runtime, described in a JSON-based format and composed from a catalog of Flutter widgets you define.

We use genui: ^0.7.0 in the Arcnem Vision client.


GenUI works by defining a widget catalog — the vocabulary of Flutter widgets the AI is allowed to use — and connecting it to a content generator (an LLM backend). When the user sends a message, the AI responds with structured JSON that the SDK renders into live Flutter widgets.

Key ideas:

  • Catalog: The set of widgets the AI can generate (each with a JSON schema and a builder).
  • Surface: A rendered UI region. The AI can create, update, and delete surfaces.
  • DataModel: A reactive, centralized store. Widgets bind to data paths and rebuild automatically when values change.
  • A2UI Protocol: The JSON message protocol between AI and client (v0.8).

GenUI has five core components:

| Component | Purpose |
| --- | --- |
| GenUiConversation | Primary facade; manages the conversation and orchestrates components |
| Catalog / CatalogItem | Defines the widget vocabulary available to the AI |
| DataModel | Reactive centralized state store |
| ContentGenerator | Abstract interface for communicating with the LLM |
| A2uiMessageProcessor | Processes AI messages, manages surfaces and the DataModel |
The data flow for one conversational turn:

User Input → GenUiConversation.sendRequest()
→ ContentGenerator calls LLM
→ LLM streams A2uiMessages
→ A2uiMessageProcessor updates state + DataModel
→ GenUiSurface widgets rebuild
→ User interacts with generated UI
→ Interaction updates DataModel, dispatches UiEvent
→ Event sent back to LLM as UserUiInteractionMessage
→ Cycle repeats
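
To make the cycle concrete, here is a minimal sketch of kicking off one turn. It assumes the conversation wiring shown later on this page; the exact `sendRequest` parameter shape and the `UserMessage.text` factory name are assumptions based on the class summary at the bottom, not confirmed 0.7.0 API:

// Assumption: sendRequest accepts a UserMessage; the factory name
// (UserMessage.text) is illustrative. Check the genui API docs.
conversation.sendRequest(
  UserMessage.text('Summarize the documents uploaded today'),
);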

Prerequisites:

  • Flutter >= 3.41
  • An LLM provider (Google Generative AI, Firebase AI Logic, or a custom A2UI server)

Install the core package:

cd client
flutter pub add genui

For provider-specific packages:

# Google Generative AI (fastest for prototyping)
flutter pub add genui_google_generative_ai

# Firebase AI Logic (production)
flutter pub add genui_firebase_ai

# A2UI server (custom backend)
flutter pub add genui_a2ui a2a

Construct a content generator for whichever provider you chose:

// Option 1: Google Generative AI
final contentGenerator = GoogleGenerativeAiContentGenerator(
  catalog: catalog,
  systemInstruction: 'You are a helpful assistant.',
  modelName: 'models/gemini-2.5-flash',
  apiKey: 'YOUR_API_KEY',
);

// Option 2: Firebase AI Logic
final contentGenerator = FirebaseAiContentGenerator(
  systemInstruction: 'You are a helpful assistant.',
  additionalTools: a2uiMessageProcessor.getTools(),
);

// Option 3: A2UI server
final contentGenerator = A2uiContentGenerator(
  serverUrl: Uri.parse('http://localhost:8080'),
);
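
Whichever provider you choose, the generator plugs into the same facade. A minimal wiring sketch follows; the exact `GenUiConversation` constructor parameters in 0.7.0 are assumptions, named here only for illustration:

final processor = A2uiMessageProcessor(
  catalogs: [CoreCatalogItems.asCatalog()],
);

// Assumption: the parameter names (contentGenerator, messageProcessor)
// are illustrative; check the genui 0.7.0 API docs for the real ones.
final conversation = GenUiConversation(
  contentGenerator: contentGenerator,
  messageProcessor: processor,
);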

Catalogs define what widgets the AI can generate.

1. Define the JSON schema:

final documentCardSchema = S.object(
  properties: {
    'title': S.string(description: 'Document title'),
    'imageUrl': S.string(description: 'Thumbnail URL'),
    'status': S.string(description: 'Processing status'),
    'confidence': S.number(description: 'Classification confidence 0-1'),
  },
  required: ['title', 'status'],
);
2. Create the CatalogItem with a builder:

final documentCard = CatalogItem(
  name: 'DocumentCard',
  dataSchema: documentCardSchema,
  widgetBuilder: ({
    required data,
    required id,
    required buildChild,
    required dispatchEvent,
    required context,
    required dataContext,
  }) {
    final json = data as Map<String, Object?>;
    final title = json['title'] as String;
    final status = json['status'] as String;
    final confidence = (json['confidence'] as num?)?.toDouble();
    return Card(
      child: ListTile(
        title: Text(title),
        subtitle: Text(status),
        trailing: confidence != null
            ? Text('${(confidence * 100).toStringAsFixed(0)}%')
            : null,
      ),
    );
  },
);
3. Add it to the catalog and reference it in your system instructions:

final processor = A2uiMessageProcessor(
  catalogs: [
    CoreCatalogItems.asCatalog().copyWith([documentCard]),
  ],
);
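
Once the catalog is registered, the model can emit DocumentCard nodes in its responses. An illustrative example of the JSON it might generate; the `literalNumber` wrapper is an assumption extrapolated from the `literalString` pattern shown in the next section, and the processor resolves these wrappers before your widgetBuilder receives plain values:

// Illustrative model output, not verbatim protocol
{
  "DocumentCard": {
    "title": { "literalString": "invoice_2024-03.pdf" },
    "status": { "literalString": "Classified" },
    "confidence": { "literalNumber": 0.93 }
  }
}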

The DataModel is a reactive store. The AI can set values at paths, and widgets bound to those paths rebuild automatically.

// Literal value
{
  "Text": {
    "text": { "literalString": "Welcome to Arcnem Vision" },
    "hint": "h1"
  }
}

// Bound to a DataModel path
{
  "Text": {
    "text": { "path": "/user/name" }
  }
}

User interactions in generated widgets are captured and sent back to the AI through a structured event system.

final actionButton = CatalogItem(
  name: 'ActionButton',
  dataSchema: S.object(
    properties: {
      'label': S.string(description: 'Button label'),
      'actionName': S.string(description: 'Action identifier'),
    },
    required: ['label', 'actionName'],
  ),
  widgetBuilder: ({
    required data,
    required id,
    required buildChild,
    required dispatchEvent,
    required context,
    required dataContext,
  }) {
    final json = data as Map<String, Object?>;
    return ElevatedButton(
      onPressed: () {
        dispatchEvent(
          UserActionEvent(
            name: json['actionName'] as String,
            sourceComponentId: id,
          ),
        );
      },
      child: Text(json['label'] as String),
    );
  },
);
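
When tapped, the button dispatches a UserActionEvent, which the SDK forwards to the model as a UserUiInteractionMessage, closing the loop described at the top of this page. An illustrative ActionButton node the model might generate:

// Illustrative model output, not verbatim protocol
{
  "ActionButton": {
    "label": { "literalString": "Retry processing" },
    "actionName": { "literalString": "retry_processing" }
  }
}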

GenUI can extend the Arcnem Vision client with:

  • Dynamic result displays: Rich DocumentCard surfaces showing processing status, classification results, and confidence scores.
  • Interactive search: Search result UIs from pgvector similarity queries — clickable cards, sortable lists, or comparison views.
  • Conversational document analysis: Chat interface where users ask questions and the AI responds with generated UIs (charts, annotated images, data tables).
  • Form generation: AI dynamically builds project/device configuration forms based on context.

Since we already use Go-based agents with Inngest, the A2UI server approach is the best fit:

final contentGenerator = A2uiContentGenerator(
  serverUrl: Uri.parse(dotenv.env['AGENT_URL'] ?? 'http://localhost:8080'),
);
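
Rendering is backend-agnostic: a GenUiSurface widget displays whatever surface the agent creates. A hedged sketch of embedding one in the client, assuming GenUiSurface is constructed from a host plus a surface id (both parameter names are illustrative, not confirmed 0.7.0 API):

import 'package:flutter/material.dart';
import 'package:genui/genui.dart';

class AgentPanel extends StatelessWidget {
  const AgentPanel({
    super.key,
    required this.conversation,
    required this.surfaceId,
  });

  final GenUiConversation conversation;
  final String surfaceId;

  @override
  Widget build(BuildContext context) {
    // Assumption: the conversation (or an object it exposes) acts as
    // the surface host; the host/surfaceId names are illustrative.
    return GenUiSurface(
      host: conversation,
      surfaceId: surfaceId,
    );
  }
}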

A quick reference to the main classes:

| Class | Purpose |
| --- | --- |
| GenUiConversation | Primary facade; manages the conversation and orchestrates components |
| A2uiMessageProcessor | Processes AI messages, manages surfaces and the DataModel |
| ContentGenerator | Abstract interface to the LLM |
| Catalog / CatalogItem | Widget vocabulary definition |
| DataModel | Reactive centralized state store |
| GenUiSurface | Widget that renders a generated UI surface |
| A2uiMessage | Sealed class for AI→UI messages |
| UserMessage | The user's text message to the AI |
| UiEvent / UserActionEvent | User interaction events |