Summary: We need clearer, human-centred approaches to how we hand over control to generative AI agents. "Agentic AI", automation built on AI-assisted workflows including generative AI, is emerging quickly. It rides on the excitement around generative AI, which is not yet fully baked in terms of safety, bias, ethics, and harm. Without clearly defining agentic AI, we risk falling prey to worst-case scenarios before we enjoy the benefits of AI automation. We need a clearly defined agentic user experience.
Want a fuller picture? Attend either or both of these FREE Lightning Talks on Redefining Agentic AI with UX & Service Design and Orchestration: The Missing Layer in AI UX & Service Design
Defining Agentic UX
Agentic AI means an AI agent (automation) does something for you, often without your involvement. However, without UX, this is risky in certain contexts. Agentic UX and Service Design can help us redefine how we navigate the risks inherent in agentic AI.
Agentic UX can be defined as:
The design of interactions where humans set intentions, boundaries, and oversight mechanisms, and AI agents carry out tasks on their behalf… while maintaining transparency, accountability, and user control.
With AI agents, the UX mandate isn’t about screen behaviors (buttons and screen flows). Instead, it’s about:
- Designing mandates (users specify goals, rules, constraints, escalation paths, and risk appetite).
- Balancing delegation vs. control (users choose how much autonomy to grant).
- Building trust through visibility (users see rationale, confidence levels, reversal options, logs, and approvals).
- Creating standards for interoperability (users find agents working across systems: don't lock users in).
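The mandate-style setup described above can be sketched as a simple data structure. This is an illustrative sketch, not any real agent framework's API; every name here (`Mandate`, `requires_approval`, the autonomy levels) is a hypothetical stand-in:

```python
from dataclasses import dataclass, field

@dataclass
class Mandate:
    """Illustrative user mandate: goals, boundaries, and oversight rules."""
    goal: str                                             # what the user wants done
    constraints: list[str] = field(default_factory=list)  # hard rules the agent must not break
    autonomy: str = "supervised"                          # "supervised" or "delegated"
    escalate_above: float = 100.0                         # spend threshold requiring human approval
    audit_log: list[str] = field(default_factory=list)    # visibility: every proposal is logged

    def requires_approval(self, action: str, cost: float) -> bool:
        """Escalate when autonomy is limited or cost exceeds the user's risk appetite."""
        self.audit_log.append(f"proposed: {action} (${cost:.2f})")
        return self.autonomy != "delegated" or cost > self.escalate_above

m = Mandate(goal="book a flight to Lisbon",
            constraints=["no red-eye flights"],
            autonomy="delegated", escalate_above=500)
print(m.requires_approval("book TAP 1029", 420.0))  # within budget, delegated -> False
print(m.requires_approval("book TAP 1029", 780.0))  # above threshold -> True, escalate
```

The point of the sketch: delegation is a parameter the user sets, not a property the agent assumes, and every proposed action leaves a visible trace.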
Why This Matters
Without guardrails, agents become black boxes that erode trust. Designing clear human-agent interaction patterns enables safe delegation. We already see this with developers using AI to generate code; the new developer workflow is generate → review → edit or approve → commit. Reviewing agent output becomes an operational necessity for automation.
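That generate → review → approve → commit loop can be sketched as a minimal human-in-the-loop gate. All functions here (`generate`, `commit`, `approve`) are hypothetical stand-ins, not a real tool's API:

```python
def review_gate(generate, commit, approve):
    """Run an agent step, but only commit output that a reviewer approves."""
    draft = generate()
    if approve(draft):   # a human (or a policy check standing in for one) reviews the output
        commit(draft)
        return "committed"
    return "rejected"

log = []
result = review_gate(
    generate=lambda: "def add(a, b): return a + b",  # stand-in for an AI code generator
    commit=log.append,                               # stand-in for committing to a repo
    approve=lambda code: "return" in code,           # trivial stand-in for human review
)
print(result)  # "committed"
```

The commit step only runs after approval, which is the structural difference between supervised autonomy and full automation.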
Imagine tedious, confusing or complex experiences improved by agentic UX:
- Agents that book travel within your budget and preferences, but that also shape their actions based on your values (what the AI field calls "alignment"). In other words: "no 5 am flights", "aisle seat", "back of the plane", "always add my frequent flyer number", and so on. Think of these as the filters of yesteryear's interfaces, but agentic behavior adds flexibility in the grey areas of your needs, where rigid filters fall short in a conversational AI context.
- Agents that track and report your health and wellness to your care provider, then interpret that data and your lab or specialist results safely, calmly, and sensibly, with human expert oversight. Today, Epic's MyChart system, used by over 50% of US patients, is a disgrace: designed like it's 2010. Yes, I'm speaking from direct experience. Epic provides a mobile app and site that don't make it easy for a patient's family to get access (poor sociability). Worse, they give you a running commentary of raw data that provides no information, no meaning, and no understanding. This adds immeasurable real-time distress and confusion, and piles interpretation demand onto medical staff. With agentic AI, Epic patient portals could provide real-time insight and understanding instead of patient-data transparency theatre.
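The travel example above distinguishes hard rules (never violated) from soft preferences (traded off in the grey areas). A minimal sketch of that split, with an entirely hypothetical `pick_flight` helper:

```python
def pick_flight(options, hard_rules, preferences):
    """Drop options that break any hard rule; rank the rest by soft preferences matched."""
    allowed = [o for o in options if all(rule(o) for rule in hard_rules)]
    return max(allowed, key=lambda o: sum(pref(o) for pref in preferences), default=None)

flights = [
    {"depart": 5,  "seat": "middle", "price": 180},
    {"depart": 9,  "seat": "aisle",  "price": 220},
    {"depart": 14, "seat": "aisle",  "price": 410},
]
best = pick_flight(
    flights,
    hard_rules=[lambda f: f["depart"] >= 7],       # "no 5 am flights" is non-negotiable
    preferences=[lambda f: f["seat"] == "aisle",   # soft: prefer an aisle seat
                 lambda f: f["price"] < 300],      # soft: prefer cheaper fares
)
print(best)  # the 9 am aisle flight
```

Hard rules behave like yesteryear's rigid filters; soft preferences are where the agent gets the flexibility the article describes.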
With Agentic AI UX, think supervised autonomy, not full automation.
A Case Study Worthy of Analysis
In late 2025, Google launched the Agent Payments Protocol (AP2), an open framework that enables AI agents to securely authenticate, authorize, and complete payments across platforms. Backed by over 60 financial and technology organizations, AP2 extends the Agent2Agent (A2A) protocol and the Model Context Protocol (MCP).
Note where users are involved and where they use ‘supervised autonomy’.
Key Features
- Mandates: Signed digital contracts prove what the user authorized.
- Dual approval: Users approve both intent (search) and cart (final items).
- Delegated tasks: Pre-signed Intent Mandates allow agents to execute purchases automatically under user-defined conditions.
- Universal payments: Works with cards, bank transfers, and crypto (stablecoins, web3).
- Audit trail: Every step is logged for security and accountability.
How It Works
- Intent Mandate: User tells an agent what to buy (e.g., shoes, tickets). This request is recorded as a verifiable contract.
- Cart Mandate: Agent presents options. User signs off on the final selection, price, and items.
- Delegated Mode: For tasks without the user present, the Intent Mandate already defines rules. The agent executes a Cart Mandate automatically when conditions are met.
- Payment Link: The chosen method (card, bank, or crypto) connects securely to the Cart Mandate.
- Audit Trail: The full chain (intent, cart, and payment) is cryptographically secured and reviewable.
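The steps above can be sketched as a toy mandate chain. To be clear, this is an illustration of the idea only, not Google's AP2 implementation; the real protocol uses signed verifiable credentials, whereas this sketch stands in with a simple hash chain:

```python
import hashlib
import json

def signed(record, prev_hash=""):
    """Stand-in for cryptographic signing: hash the record together with its predecessor."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return {**record, "hash": hashlib.sha256(payload.encode()).hexdigest()}

# 1. Intent Mandate: the user's recorded request and conditions.
intent = signed({"type": "intent", "want": "running shoes", "max_price": 120})

# 2. Cart Mandate: the agent's final selection, chained to the intent.
cart = signed({"type": "cart", "item": "Pegasus 41", "price": 110}, intent["hash"])

# Delegated mode: the agent may execute only if the cart satisfies the intent's conditions.
assert cart["price"] <= intent["max_price"]

# 3. Payment: the chosen method, chained to the cart.
payment = signed({"type": "payment", "method": "card"}, cart["hash"])

# Audit trail: each record's hash depends on the one before it,
# so tampering with an earlier step breaks the whole chain.
chain = [intent, cart, payment]
```

The design point mirrors AP2's: each step is only valid in the context of the step the user authorized before it, which is what makes the chain reviewable end to end.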
Throughout the process, users are not just waiting for outputs; they orchestrate agent actions and outcomes.
The Future of UX is Agentic
Google’s AP2 shows us where design is heading. Instead of asking only “How do users interact with the interface?”, we now ask:
- What should users delegate?
- How do they express intent clearly?
- How do we make agency reviewable and reversible?
Agentic UX is about creating the languages, contracts, and rituals of trust between humans and AI agents. Done well, it can free people from repetitive tasks. Done poorly, it can strip away control.
The design community has a choice: de-risk agentic AI with UX & Service Design or fasten your seatbelts for automation with poorly defined guardrails.




