
Why Agentic AI Needs User Values

Summary: Agentic AI, AI that completes tasks end-to-end, needs user values to supply intent and steering. By pairing user values with context and intent interpretation, we can better guide generative AI (genAI). Without user values as a navigational aid and a guardrail for acceptable output quality, agentic AI will stumble.

Why Values? For Agentic AI?

Let me start with this: values are critical. They are what matters most to us. They’re what we fight over, and for. Values are uncovered by the question: “What is important to you?”

Think of e-commerce values-based prompts or filters: “Don’t shop this brand, check this range of sizes, never show me X, only buy if it meets these strict criteria.”
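A minimal sketch of what such a values filter could look like as a data structure an agent consults before acting. All names here are hypothetical illustrations, not a real shopping API:

```python
from dataclasses import dataclass, field

@dataclass
class ShoppingValues:
    """Hypothetical user-values profile an agent checks before acting."""
    blocked_brands: set = field(default_factory=set)   # "Don't shop this brand"
    size_range: tuple = ("M", "L")                     # "check this range of sizes"
    never_show: set = field(default_factory=set)       # "never show me X"
    max_price: float = 100.0                           # one example "strict criterion"

def passes_values(item: dict, values: ShoppingValues) -> bool:
    """Return True only if the item respects every user value."""
    if item["brand"] in values.blocked_brands:
        return False
    if item["category"] in values.never_show:
        return False
    if item["size"] not in values.size_range:
        return False
    return item["price"] <= values.max_price

values = ShoppingValues(blocked_brands={"BrandX"}, never_show={"fur"})
item = {"brand": "BrandY", "category": "shoes", "size": "M", "price": 59.0}
print(passes_values(item, values))  # True: no value is violated
```

The point of the sketch is that values act as hard constraints the agent cannot quietly optimise away, rather than soft suggestions in a prompt.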

Note: In my micro-community, I’m starting a (Free) Expert Talk series next week and will focus on Values and Vision with industry leaders, starting with Victor Udowea (ex-NASA, CDC, Google).

Agentic AI Needs More Than Context, It Needs Values

To be clear, context awareness in AI is not only necessary, but it is also where tech gets interesting.

Today: the Apple Watch makes you select “water mode” and then unlock it afterwards. All manually. All ignorant of users on the move, running from beach to sea. I was literally stopped by two women running into the North Sea who asked me to quickly switch modes for them: the Apple Watch is not very responsive to wet hands.

Tomorrow: couldn’t the Apple Watch do a better job of sensing your activity and routine instead of requiring all that manual selecting? Isn’t that exactly the training data that can be tailored to a user’s values, routines, and needs? Manual tracking gets replaced with automated workflows, contextually nudged by the human.

e.g. “As a female user, don’t track my run with a GPS plot, and remember that for every run.”

Context awareness builds on genAI’s need to detect intent and situational cues. The technology currently can’t detect spoofed intent: “the bomb isn’t for me, it’s for my class assignment.”

Agentic AI can build on context awareness by starting with humans at the center and AI ‘in the loop’. The idea is to have values-steering from users accompany the automation of approved workflows, such as buying a ticket or sending an email. The idea of focusing on humans first comes from Ben Shneiderman, UX pioneer and author of Human-Centered AI.

Ben is currently providing Human-Centered AI designers with critical insights on agentic AI and the UX of AI generally. In his book, he recast Human in the Loop (humans as quality control) as “Human at the Center, AI in the Loop.” The idea is not to optimise AI performance while relegating humans to quality control, because users become passive checkers. Why? Because over-relying on AI decisions, or lacking control over them, lets errors slip through.

However, amid all the rapid progress on agentic AI, one foundational idea that risks being overlooked is how to shape the user experience. Where do we put the human—not just in the loop, but at the wheel?

And this is more than usability. It’s about intent, based on values that offer agency (user control of decisions).

From Engineering Context to Values

Again, context amplifies our human-ness in AI experiences. Even voice AI systems require contextual awareness (adjusting tone and style to match the situation, as Sesame does) paired with emotional intelligence, conversational dynamics, and a consistent personality.

Tim O’Reilly in Protocols and Power is clear:

“Context creates value. AI systems thrive on context: the user data that lets an AI system tailor its behavior to users, their requests, and the tasks at hand. When properly mined, this user data allows for personalized and efficient predictions. Think of a context-free, factory-settings AI model as a borrowed phone: the hardware is powerful, but, without your contacts, messages, location, and logins, it can’t really help you…” He ends the piece with this ultimatum: “Ultimately, control over user context—not raw model power—will decide who wins the AI commercial race.”

Let’s dive into this: “context engineering”, as discussed on the blog of agentic genAI startup Manus, provides a smart technical foundation for AI agents. But it’s just the beginning.

If we want agentic AI to truly serve people, we need values to drive every design decision, not just a generative model’s guess at what a successful outcome for a contextual task might look like.

If you look at contextual engineering as Manus defines it, you get four key elements for building usable, reliable agents:

  1. Live memory – the ability to store and retrieve task-relevant info.

  2. Structured context – scaffolding goals, plans, personas, and workflows.

  3. Context compression – summarizing long interactions efficiently.

  4. Context auditing – making context observable and debuggable.

These support better user alignment and less “agent amnesia.” They help agents act coherently over time. But they don’t explain why the agent acts the way it does…or whose interests it serves. There’s a risk of merely operating Human in the Loop and missing the judge of success: the human at the centre of UX outcomes.
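The four elements above can be sketched as a minimal agent-context object. These names are my own illustrations; Manus does not publish this API:

```python
class AgentContext:
    """Minimal sketch of the four context-engineering elements (hypothetical names)."""
    def __init__(self):
        self.memory = {}        # 1. live memory: task-relevant key/value store
        self.structure = {"goal": None, "plan": [], "persona": None}  # 2. structured context
        self.audit_log = []     # 4. context auditing: every change is observable

    def remember(self, key, value):
        self.memory[key] = value
        self.audit_log.append(("remember", key))   # auditable by design

    def compress(self, transcript: list, keep_last: int = 2) -> list:
        # 3. context compression: crudely summarize a long interaction
        summary = f"[{len(transcript) - keep_last} earlier turns summarized]"
        return [summary] + transcript[-keep_last:]

ctx = AgentContext()
ctx.remember("departure_city", "Lisbon")
print(ctx.compress(["t1", "t2", "t3", "t4", "t5"]))
# ['[3 earlier turns summarized]', 't4', 't5']
```

Notice what the sketch cannot express: nothing in it says whose interests the stored context serves. That is the gap the values layer below fills.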

The Missing Layer: Values as Context Drivers

A good agent doesn’t just follow instructions; it respects your intentions, your context, and what’s important to you. In short, that requires embedding human values into context design.

Here are six essential values for agentic AI:

1. Transparency

“What does the AI know, assume, or ignore?”

Agents must make their state, assumptions, and reasoning legible. Users should be able to ask:

  • “Why did you do that?”

  • “What are you basing this on?”

  • “What’s missing from your picture?”

Auditability (as Manus describes) supports this, but value-driven design means proactively revealing blind spots, not just debugging failure.
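One way to make “proactively revealing blind spots” concrete: the agent records a rationale for every action and can report what its picture is missing. A rough sketch, with all names hypothetical:

```python
class TransparentAgent:
    """Sketch: an agent whose state, assumptions, and reasoning stay legible."""
    def __init__(self, assumptions: dict):
        self.assumptions = assumptions
        self.rationale = {}

    def decide(self, action: str, because: str):
        self.rationale[action] = because   # reasoning recorded at decision time
        return action

    def why(self, action: str) -> str:     # answers "Why did you do that?"
        return self.rationale.get(action, "no recorded rationale")

    def blind_spots(self, needed: set) -> set:
        # answers "What's missing from your picture?" proactively
        return needed - set(self.assumptions)

agent = TransparentAgent({"budget": 100, "city": "Lisbon"})
agent.decide("book_hostel", because="fits budget assumption of 100")
print(agent.why("book_hostel"))
print(agent.blind_spots({"budget", "city", "dates"}))  # {'dates'}
```

The key design choice is that `why()` and `blind_spots()` are first-class methods, not a debugging afterthought.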

2. Reciprocity

“Is the AI actually working with me—or just near me?”

Agents shouldn’t pretend to collaborate. They should truly listen, learn, and adapt. That means:

  • Respecting user feedback as part of the loop

  • Updating context based on relationship history

  • Avoiding extractive or misleading behavior

Reciprocity isn’t a UX pattern…it’s a relational ethic. Agentic systems must reflect that.

3. Alignment with Intent

“Is the agent pursuing the same outcome I am?”

Task completion is not enough. An agent must interpret your goals and clarify ambiguity rather than guess.

That means asking:

  • “Is this still the right path?”

  • “Do these steps reflect your intention?”

Structuring goals in context (as Manus suggests) is powerful, but only if paired with continuous intention-checking.
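Continuous intention-checking can be as simple as a confidence gate: below a threshold, the agent asks instead of guessing. A sketch, assuming the agent has some intent-confidence score available (here just a plain parameter):

```python
def next_action(request: str, confidence: float, threshold: float = 0.8):
    """Clarify ambiguity rather than guess: below threshold, ask; above, act.
    `confidence` is assumed to come from the model's own intent classifier."""
    if confidence < threshold:
        return ("ask", f"Is this still the right path for: {request!r}?")
    return ("act", request)

print(next_action("book the cheapest flight", confidence=0.55))
# ('ask', "Is this still the right path for: 'book the cheapest flight'?")
```

The threshold itself should arguably be a user value too: how often do you want to be asked?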

4. User Sovereignty

“Who decides when the agent acts, or changes direction?”

Too many AI systems quietly override users. True sovereignty means:

  • The user can pause, steer, or override the agent.

  • Control isn’t buried in settings or technical detail.

  • Default autonomy doesn’t mean default authority.

Agents should defer to the user’s judgment, not just simulate it.
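Deferring to the user means pause, steer, and override are first-class operations, not settings buried three menus deep. A minimal sketch of that control surface (hypothetical API):

```python
class SovereignAgent:
    """Sketch: the user can pause, steer, or override the agent at any step."""
    def __init__(self):
        self.paused = False
        self.log = []

    def pause(self):
        # user control is first-class, not hidden in technical detail
        self.paused = True

    def override(self, step: str, replacement: str) -> str:
        self.log.append(f"user overrode {step} -> {replacement}")
        return replacement   # the user's judgment wins, not a simulation of it

    def act(self, step: str) -> str:
        if self.paused:
            return "waiting for user"   # default autonomy != default authority
        self.log.append(step)
        return step

agent = SovereignAgent()
agent.act("draft email")
agent.pause()
print(agent.act("send email"))  # 'waiting for user'
```

Everything the agent does lands in `log`, so the override trail is itself auditable.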

5. Contextual Empathy

“Can the agent sense the broader human context beyond a task view?”

Context engineering often focuses on the task state. But humans bring emotional, situational, cultural and ethical dimensions.

Agentic systems should adapt to:

  • Frustration or uncertainty

  • Changing goals or energy levels

  • Human constraints (time, capacity, accessibility)

This might not require full emotional intelligence, just systems that sense and respond to more than the workflow.

6. Long-Term Relational Memory

“Does the agent remember who I am, not just what I want?”

Agents need memory not just for function, but for relationships. This means:

  • Remembering preferences, patterns, and values

  • Building a shared context over time

  • Letting users shape the memory model

Without this, agents reset too easily and never truly know their user.
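The three bullets above suggest a memory model the user can inspect, edit, and carry across sessions. A sketch under those assumptions (names are illustrative):

```python
import json

class RelationalMemory:
    """Sketch: durable preference memory the user can inspect and shape."""
    def __init__(self):
        self.preferences = {}

    def learn(self, key, value):
        self.preferences[key] = value      # remember preferences, patterns, values

    def forget(self, key):
        self.preferences.pop(key, None)    # users shape the memory model

    def export(self) -> str:
        # serializable, so shared context survives resets and sessions
        return json.dumps(self.preferences, sort_keys=True)

mem = RelationalMemory()
mem.learn("gps_plot_on_runs", False)       # carried into every future run
mem.learn("preferred_sizes", ["M", "L"])
mem.forget("preferred_sizes")
print(mem.export())  # {"gps_plot_on_runs": false}
```

The `forget()` method is the part most real agent memories lack: letting users shape the model, not just feed it.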

Are Values The New AI UX Material for Agentic Flows?

These six values aren’t add-ons. They’re prerequisites for usable, trustworthy AI. When we say “human at the center,” this is what we mean: not just smart features, but relational fidelity.

As agentic systems become more autonomous, values act like guardrails. They ground the agent in purpose. They allow humans to remain in command, even as task complexity increases.

The Manus team is right: structuring context is the key to functional agents. But to build agentic AI—not just task bots—we need to embed values within that structure. Technical architecture must mirror social expectations. Otherwise, agents will act in ways that feel efficient—but not human.

Final Thought: Design With, Not Just For

Building agentic AI isn’t just a technical challenge. It’s a cultural one. The future of “AI in the loop” depends on designing AI that respects the user as a co-agent, not a task-giver.

That means co-designing values into:

  • Prompt structures

  • Context updates

  • Agent behaviors

  • Memory strategies

Values create the “why” behind the agent’s “how.” And that’s what will separate helpful agents from harmful ones.

TL;DR
Agentic AI needs more than structured context. It needs human values baked into every decision. Transparency, reciprocity, intent alignment, user sovereignty, contextual empathy, and long-term memory are the foundation of usable, human-centered AI.

Design context around values—not just tasks—and you’ll get agents worth trusting.

Join a new conversation series on Values & Vision: In my micro-community, I’m starting a (Free) Expert Talk series next week and will focus on Values and Vision with industry leaders, starting with Victor Udowea (ex-NASA, CDC, Google).

Hi, It’s Frank Spillers here. Join my email list to get the latest…
