Patterns

43 patterns

The emerging standards of AI UX.

Phase 1

Onboarding

How do users discover what AI can do in the first interaction?

2 principles
8 patterns
Phase 2

Input

How does the user input context into the AI?

2 principles
9 patterns
Phase 3

Output

How does the AI respond and in what formats?

3 principles
16 patterns
Phase 4

Refinement

How does the user edit or improve results?

1 principle
5 patterns
Phase 5

Learning

How does the system adapt, remember, and improve over time?

3 principles
5 patterns

Patterns are based on our 200+ real-world examples,
organized across five phases of the product journey.

Onboarding

Entry touchpoints

Entryways that show users how to begin: search, suggested prompts, open input, icons, proactive nudges, or autocomplete.

Entry touchpoints

Open input

Offer a flexible chat-style input so users can express needs naturally.

Entry touchpoints

Icons

Use recognizable AI icons (e.g., sparkles) to indicate AI-powered actions and areas.

Entry touchpoints

Suggested prompts

Provide starter prompts that showcase range and help users get unstuck.

Entry touchpoints

Searching & filtering

Help users explore what AI can do directly from search and filtering experiences.

Entry touchpoints

Proactive suggestions

Surface context-aware suggestions that guide users to the most useful action or question to ask next.

Entry touchpoints

Autocomplete

Predict and complete inputs to reduce effort and nudge users toward effective prompts.
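
A minimal client-side sketch of this pattern, assuming a hypothetical /api/suggest endpoint that returns completion candidates:

```typescript
// Debounced autocomplete: wait for a pause in typing, then fetch suggestions.
// The /api/suggest endpoint and its response shape are assumptions for illustration.
let timer: ReturnType<typeof setTimeout> | undefined;

function onInputChanged(text: string, render: (suggestions: string[]) => void): void {
  if (timer !== undefined) clearTimeout(timer);
  timer = setTimeout(async () => {
    const res = await fetch(`/api/suggest?q=${encodeURIComponent(text)}`);
    const { suggestions } = (await res.json()) as { suggestions: string[] };
    render(suggestions); // e.g., show ghost text or a dropdown under the input
  }, 200); // short debounce keeps requests low while the user is still typing
}
```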

Setting expectations

The system frames its role, boundaries, and personality through concise notes, branding, and tone.

Setting expectations

Disclaimer

Set expectations with concise notes about capabilities, limits, and appropriate use.

Setting expectations

Branding & tone

Use branding and tone to set the AI’s personality and expectations.

Input

Expressive Input

Ways for users to express intent: voice, images, handwriting, gestures, or structured prompts.

Expressive Input

Voice input

Support speech-to-text for hands-free conversational interactions.
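
One way this can work on the web is the browser's Web Speech API; a rough sketch (support varies by browser, and Chrome exposes the constructor as webkitSpeechRecognition):

```typescript
// Speech-to-text via the Web Speech API. Types are kept loose because the API
// is not in the default TypeScript DOM typings.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

function startDictation(onTranscript: (text: string) => void): void {
  if (!SpeechRecognitionImpl) throw new Error('Speech recognition not supported');
  const recognition = new SpeechRecognitionImpl();
  recognition.interimResults = true; // emit partial transcripts while the user speaks
  recognition.onresult = (event: any) => {
    const transcript = Array.from(event.results)
      .map((r: any) => r[0].transcript)
      .join('');
    onTranscript(transcript); // feed the transcript into the prompt input
  };
  recognition.start();
}
```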

Expressive Input

Visual input

Let users attach images, screenshots, or other visuals as part of the request.

Expressive Input

Handwriting input

Accept handwritten notes or sketches for pen-first scenarios.

Expressive Input

Gesture input

Allow gesture-based input, such as touch gestures, where it makes interactions feel natural.

Expressive Input

Prompt builder

Offer lightweight structure (slots, hints) to compose clearer prompts faster.
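
A rough sketch of slot-based prompt assembly; the field names are illustrative, not a fixed schema:

```typescript
// A prompt "slot" template: the UI collects a few structured fields and the
// builder assembles a clearer prompt than free text alone.
interface PromptSlots {
  task: string;      // e.g., "Write a product update email"
  audience: string;  // e.g., "existing customers"
  tone: string;      // e.g., "friendly, concise"
  constraints?: string;
}

function buildPrompt(slots: PromptSlots): string {
  const lines = [
    `Task: ${slots.task}`,
    `Audience: ${slots.audience}`,
    `Tone: ${slots.tone}`,
  ];
  if (slots.constraints) lines.push(`Constraints: ${slots.constraints}`);
  return lines.join('\n');
}
```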

Expanding context

Additional signals that enrich the request: prompt help, model choice, connectors, or knowledge sources.

Expanding context

Prompt assistance

Suggest improvements and show a quick preview of likely output to confirm direction.

Expanding context

Model selection

Let users pick the right model or mode for the task when it matters.

Expanding context

MCP connectors

Connect to tools and data sources so AI can act with relevant information.

Expanding context

Knowledge base

Use organizational knowledge (MCPs, docs, wikis) to specialize responses.

Output

Types of outputs

Different ways responses are delivered: previews, video, images, audio, summaries, or structured formats.

Types of outputs

Preview output

Show a preview before the full output is generated to help users understand what to expect.

Types of outputs

Video output

Generate or assemble video for dynamic explanations and demos.

Types of outputs

Image output

Return images or generated visuals as the primary output form.

Types of outputs

Audio output

Speak results aloud for accessibility and hands-free use.

Types of outputs

Variations

Provide multiple alternatives so users can choose the best fit.

Types of outputs

Multi-modal output

Combine text, images, audio, and video where appropriate for richer responses.

Types of outputs

Summarize

Condense long content into concise takeaways or executive summaries.

Types of outputs

Structured output

Return structured data (tables, JSON, steps) to make results actionable and machine-readable.
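
A small sketch of validating a structured response before rendering it; the ActionItem shape and the model call that produced the raw JSON are illustrative assumptions:

```typescript
// Parse and check a JSON response so downstream UI and automation can rely on its shape.
interface ActionItem {
  title: string;
  owner: string;
  dueDate: string; // ISO 8601 date string
}

function parseActionItems(raw: string): ActionItem[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data)) throw new Error('Expected a JSON array of action items');
  return data.map((item) => {
    if (
      typeof item.title !== 'string' ||
      typeof item.owner !== 'string' ||
      typeof item.dueDate !== 'string'
    ) {
      throw new Error('Action item missing required fields');
    }
    return item as ActionItem;
  });
}
```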

Processing

How the system handles generation: real-time, streaming, listening, parallel work, or stepwise updates.

Processing

Real-time

Return immediate answers when latency matters.

Processing

Streaming

Send partial or streaming results as they become available for faster perceived responsiveness.
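
A minimal sketch of consuming a streamed response with the Fetch API so text can render as it arrives; the /api/complete endpoint is a placeholder:

```typescript
// Read the response body incrementally and hand each decoded chunk to the UI.
async function streamCompletion(prompt: string, onChunk: (text: string) => void): Promise<void> {
  const response = await fetch('/api/complete', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });
  if (!response.body) throw new Error('Streaming not supported by this response');

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true })); // append to the UI as it arrives
  }
}
```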

Processing

Listening

Keep an open channel for incoming context (e.g., continuous audio or live inputs) as needed by the flow.

Processing

Parallel processing

Run multiple candidates or tasks concurrently to speed up complex jobs.

Processing

Processing steps

Show progress through distinct steps so users understand what’s happening.

Explainability

Help users understand answers and recover gracefully when things go wrong.

Explainability

Confidence indicators

Communicate certainty to help users judge reliability.
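
One possible mapping from a raw score to a user-facing label; the thresholds are illustrative and would need tuning per product:

```typescript
// Translate a 0-1 confidence score into a label the UI can display next to a result.
type ConfidenceLabel = 'High confidence' | 'Moderate confidence' | 'Low confidence (verify)';

function confidenceLabel(score: number): ConfidenceLabel {
  if (score >= 0.85) return 'High confidence';
  if (score >= 0.6) return 'Moderate confidence';
  return 'Low confidence (verify)';
}
```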

Explainability

Citations

Show sources and references to build trust.

Explainability

Error Recovery

Offer clear remediation when outputs fail. Explain issues and suggest fixes or retries.

Refinement

Correction & Iteration

Tools for revising outputs: continue, retry, act inline, edit visually, or review results.

Correction & Iteration

Reply

Continue the conversation to refine results with follow-up instructions.

Correction & Iteration

Regenerate

Request a fresh attempt with the same or tweaked prompt.

Correction & Iteration

Inline actions

Expose quick actions directly on content (rewrite, summarize, translate, etc.).

Correction & Iteration

Visual editing

Enable direct manipulation of generated visuals and layouts.

Correction & Iteration

Review

Let users review outputs and accept them as-is or request revisions.

Learning

Managing memory

The system recalls, persists, or forgets context to support continuity and control.

Managing memory

Managing memory

Anticipate needs by recalling preferences and past interactions at the right time; store durable context while respecting user control.

Collecting feedback

Collect feedback to improve outputs and personalize future interactions.

Collecting feedback

Ratings / Thumbs up & down

Capture quick sentiment on results to steer quality.
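
A minimal sketch of the event a thumbs control might send; the endpoint and payload shape are assumptions for illustration:

```typescript
// A feedback event tied to a specific response, posted to a hypothetical /api/feedback endpoint.
interface FeedbackEvent {
  responseId: string;
  rating: 'up' | 'down';
  comment?: string;
}

async function sendFeedback(event: FeedbackEvent): Promise<void> {
  await fetch('/api/feedback', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ...event, timestamp: new Date().toISOString() }),
  });
}
```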

Collecting feedback

User rating

Allow detailed ratings or comments where nuance helps training.

Collecting feedback

Choose a response

Offer multiple answers and let users select the best to reinforce preferences.

Personalization

Adjust behavior, tone, and defaults to fit the individual user.

Personalization

Personalization

Tailor tone, suggestions, and defaults based on user history and choices.
