Patterns are based on our 200+ real-world examples,
organized across five phases of the product journey.
Onboarding
Entry touchpoints
Entryways that show users how to begin: search, suggested prompts, open input, icons, proactive nudges, or autocomplete.
Open input
Offer a flexible chat-style input so users can express needs naturally.
Icons
Use recognizable AI icons (e.g., sparkles) to indicate AI-powered actions and areas.
Suggested prompts
Provide starter prompts that showcase range and help users get unstuck.
Searching & filtering
Help users explore what AI can do directly from search and filtering experiences.
Proactive suggestions
Surface context-aware suggestions that guide users to the most useful action or question to ask next.
Autocomplete
Predict and complete inputs to reduce effort and nudge users toward effective prompts.
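As a minimal sketch of the autocomplete pattern, a client could rank stored prompts by prefix match so a few typed characters surface effective, complete prompts. The prompt library and scoring rule here are illustrative assumptions, not a specific product's API.

```typescript
// Autocomplete sketch: rank stored prompts by prefix match so typing a
// few characters nudges users toward effective prompts.
function suggestCompletions(input: string, prompts: string[], limit = 3): string[] {
  const needle = input.trim().toLowerCase();
  if (needle.length === 0) return []; // nothing typed yet, nothing to suggest

  return prompts
    .filter((p) => p.toLowerCase().startsWith(needle))
    .sort((a, b) => a.length - b.length) // shorter completions first
    .slice(0, limit);
}

// Example: completing a partially typed request.
const library = [
  "summarize this page",
  "summarize my unread email",
  "translate to Spanish",
];
suggestCompletions("summ", library);
// → ["summarize this page", "summarize my unread email"]
```

A production version would typically blend prefix matching with usage frequency and fuzzy matching, but the shape of the interaction is the same: predict, rank, offer.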
Setting expectations
How the system frames its role, boundaries, and personality: concise notes, branding, and tone.
Disclaimer
Set expectations with concise notes about capabilities, limits, and appropriate use.
Branding & tone
Use branding and tone to set the AI’s personality and expectations.
Input
Expressive input
Ways for users to express intent: voice, images, handwriting, gestures, or structured prompts.
Voice input
Support speech-to-text for hands-free conversational interactions.
Visual input
Let users attach images, screenshots, or other visuals as part of the request.
Handwriting input
Accept handwritten notes or sketches for pen-first scenarios.
Gesture input
Allow gesture-based input where it makes interactions more natural.
Prompt builder
Offer lightweight structure (slots, hints) to compose clearer prompts faster.
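The "slots, hints" idea behind a prompt builder can be sketched as a template whose named slots are filled before sending. The template string and slot names below are illustrative assumptions.

```typescript
// Prompt-builder sketch: a template with named {slots} composes a clearer
// prompt than free text alone, and catches missing pieces before sending.
type Slots = Record<string, string>;

function buildPrompt(template: string, slots: Slots): string {
  // Replace each {slot} placeholder; fail loudly if one was left unfilled.
  return template.replace(/\{(\w+)\}/g, (_, name: string) => {
    const value = slots[name];
    if (value === undefined) throw new Error(`Missing slot: ${name}`);
    return value;
  });
}

buildPrompt("Write a {tone} {format} about {topic}", {
  tone: "friendly",
  format: "summary",
  topic: "our Q3 roadmap",
});
// → "Write a friendly summary about our Q3 roadmap"
```

In a UI, each slot maps to a dropdown or chip, which is where the "hints" come from: the structure itself teaches users what a good prompt contains.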
Expanding context
Additional signals that enrich the request: prompt help, model choice, connectors, or knowledge sources.
Prompt assistance
Suggest improvements and show a quick preview of likely output to confirm direction.
Model selection
Let users pick the right model or mode for the task when it matters.
MCP connectors
Connect to tools and data sources so AI can act with relevant information.
Knowledge base
Use organizational knowledge (MCPs, docs, wikis) to specialize responses.
Output
Types of outputs
Different ways responses are delivered: previews, video, images, audio, summaries, or structured formats.
Preview output
Show a preview of the output before it’s generated to help users understand what to expect.
Video output
Generate or assemble video for dynamic explanations and demos.
Image output
Return images or generated visuals as the primary output form.
Audio output
Speak results aloud for accessibility and hands-free use.
Variations
Provide multiple alternatives so users can choose the best fit.
Multi-modal output
Combine text, images, audio, and video where appropriate for richer responses.
Summarize
Condense long content into concise takeaways or executive summaries.
Structured output
Return structured data (tables, JSON, steps) to make results actionable and machine-readable.
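Structured output only stays machine-readable if the response is validated before downstream code uses it. The sketch below assumes an illustrative "action items" schema; the field names are not from any particular product.

```typescript
// Structured-output sketch: parse a JSON response and validate it against
// an expected shape so downstream code can rely on fields being present.
interface ActionItem {
  task: string;
  owner: string;
  done: boolean;
}

function parseActionItems(raw: string): ActionItem[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data)) throw new Error("Expected a JSON array");

  return data.map((item, i) => {
    if (
      typeof item.task !== "string" ||
      typeof item.owner !== "string" ||
      typeof item.done !== "boolean"
    ) {
      throw new Error(`Malformed item at index ${i}`);
    }
    // Copy only the known fields, dropping anything unexpected.
    return { task: item.task, owner: item.owner, done: item.done };
  });
}

parseActionItems('[{"task":"Draft spec","owner":"Ana","done":false}]');
```

Validation like this is what turns "the model returned some JSON" into "results are actionable": a malformed response fails fast instead of silently corrupting a table or workflow.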
Processing
How the system handles generation: real-time, streaming, listening, parallel work, or stepwise updates.
Real-time
Return immediate answers when latency matters.
Streaming
Send partial or streaming results as they become available for faster perceived responsiveness.
Listening
Keep an open channel for incoming context (e.g., continuous audio or live inputs) as needed by the flow.
Parallel processing
Run multiple candidates or tasks concurrently to speed up complex jobs.
Processing steps
Show progress through distinct steps so users understand what’s happening.
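The streaming pattern above can be sketched with an async generator: the UI renders each partial chunk as it arrives instead of waiting for the full answer. The token source here is simulated with a timer; in production that await would be a network read from a model API.

```typescript
// Streaming sketch: deliver partial results as they become available for
// faster perceived responsiveness.
async function* streamTokens(answer: string): AsyncGenerator<string> {
  for (const token of answer.split(" ")) {
    // Simulated latency; a real implementation awaits the next model chunk.
    await new Promise((resolve) => setTimeout(resolve, 10));
    yield token + " ";
  }
}

async function renderStreaming(answer: string): Promise<string> {
  let shown = "";
  for await (const token of streamTokens(answer)) {
    shown += token; // a UI would repaint this partial state on each chunk
  }
  return shown.trimEnd();
}
```

The key design point is that the consumer loop is identical whether chunks arrive in 10 ms or 2 s, so the same rendering code serves both real-time and slow generations.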
Explainability
Help users understand answers and recover gracefully when things go wrong.
Confidence indicators
Communicate certainty to help users judge reliability.
Citations
Show sources and references to build trust.
Error recovery
Offer clear remediation when outputs fail. Explain issues and suggest fixes or retries.
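A confidence indicator usually reduces to mapping a numeric score onto a small set of user-facing labels. The thresholds below are illustrative assumptions that a team would tune per product, not established cutoffs.

```typescript
// Confidence-indicator sketch: translate a model's score in [0, 1] into a
// label users can act on at a glance.
type ConfidenceLabel =
  | "High confidence"
  | "Medium confidence"
  | "Low confidence (verify before use)";

function confidenceLabel(score: number): ConfidenceLabel {
  if (score < 0 || score > 1) throw new Error("Score must be in [0, 1]");
  if (score >= 0.8) return "High confidence";
  if (score >= 0.5) return "Medium confidence";
  return "Low confidence (verify before use)";
}
```

Pairing the low band with explicit remediation advice ("verify before use") connects this pattern to error recovery: the label itself suggests the next step.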
Refinement
Correction & iteration
Tools for revising outputs: continue, retry, act inline, edit visually, or review results.
Reply
Continue the conversation to refine results with follow-up instructions.
Regenerate
Request a fresh attempt with the same or tweaked prompt.
Inline actions
Expose quick actions directly on content (rewrite, summarize, translate, etc.).
Visual editing
Enable direct manipulation of generated visuals and layouts.
Review
Let users review outputs and accept them as-is or with revisions.
Learning
Managing memory
The system recalls, persists, or forgets context to support continuity and control.
Managing memory
Anticipate needs by recalling preferences and past interactions at the right time; store durable context while respecting user control.
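The two halves of this pattern, durable context and user control, can be sketched as a small store that remembers, recalls, forgets, and lets users audit what is kept. The class name, key strings, and Map backing store are illustrative assumptions.

```typescript
// Memory sketch: persist durable context, recall it on demand, and keep
// users in control by letting them inspect and delete entries.
class UserMemory {
  private entries = new Map<string, string>();

  remember(key: string, value: string): void {
    this.entries.set(key, value);
  }

  recall(key: string): string | undefined {
    return this.entries.get(key);
  }

  // User-initiated deletion: "forget" is a first-class operation.
  forget(key: string): boolean {
    return this.entries.delete(key);
  }

  // Letting users audit stored context supports trust and control.
  list(): string[] {
    return [...this.entries.keys()];
  }
}
```

A real system would persist entries and decide *when* to recall them (the "right time" in the pattern), but exposing `list` and `forget` alongside `remember` is what distinguishes memory a user controls from memory that merely accumulates.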
Collecting feedback
Collect feedback to improve outputs and personalize future interactions.
Ratings / thumbs up & down
Capture quick sentiment on results to steer quality.
User rating
Allow detailed ratings or comments where nuance helps training.
Choose a response
Offer multiple answers and let users select the best to reinforce preferences.
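Quick sentiment signals only steer quality if they are tallied somewhere. As a sketch under assumed names, a per-response tally could look like this; the `Vote` type and Map storage are illustrative, not any product's feedback API.

```typescript
// Feedback sketch: tally thumbs up/down per response so ratings can rank
// which answers users preferred and feed later personalization.
type Vote = "up" | "down";

class FeedbackTally {
  private votes = new Map<string, { up: number; down: number }>();

  record(responseId: string, vote: Vote): void {
    const tally = this.votes.get(responseId) ?? { up: 0, down: 0 };
    tally[vote] += 1;
    this.votes.set(responseId, tally);
  }

  // Net score (ups minus downs) gives a simple preference signal.
  score(responseId: string): number {
    const tally = this.votes.get(responseId);
    return tally ? tally.up - tally.down : 0;
  }
}
```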
Personalization
Adjust behavior, tone, and defaults to fit the individual user.
Personalization
Tailor tone, suggestions, and defaults based on user history and choices.
Stay ahead of the curve: patterns for designers and product teams in the new AI-UX paradigm.
