Summary: Call centre customers are rejecting AI-only support roll-outs. This is a major red flag for UX and service designers. It’s not enough for AI to be functional or cost-effective. If AI undermines user trust, strips out context, or removes human options, it becomes a liability.
A Warning for AI-First: Why Customers Are Rejecting AI-Only Support
In May 2025, TechSpot reported that 50% of companies (Air Canada, H&M, and Klarna among them) are rolling back their efforts to replace human customer service with AI-only solutions. Why? Because customers were unhappy. The backlash wasn’t about AI itself, but about how AI was designed into the experience.
What this teaches us
Successful AI integration requires four non-negotiables for UX—Relevance, AI in the loop, Context-awareness, and User Choice.
1. Relevance: Is the AI solving the right problem?
AI often fails because it’s answering the wrong question—or the same question for every user. The TechSpot article shows how this plays out:
- Air Canada’s chatbot invented refund policies that didn’t exist.
- H&M’s AI created a rigid, one-size-fits-all experience.
- Klarna reversed course, reinstating 700+ humans to its customer service team.
Customers want fast, helpful answers—but they also want answers that make sense to their situation. Relevance means the AI understands not just what a user asks, but why they’re asking. Relevance = empathy through design, not guessing.
UX takeaway: AI can’t replace service design. It must be embedded in journeys that adapt to users’ goals, contexts, and emotions.
2. AI in the Loop: Keep humans visible, not hidden
A core mistake in many AI-first customer service deployments? Removing humans entirely. Users were offered chatbots with no clear path to a person—unless they knew the “magic words” or got lucky.
That’s not just bad UX. It’s an erosion of trust.
“Human-in-the-loop” is an idea in computing that pre-dates Human-Centred AI approaches. UX pioneer Ben Shneiderman nudges us to instead put humans at the centre, with AI supporting their tasks. “AI in the loop” means automation handles what it sensibly can, while users can always see, reach, and request human help, especially when the AI doesn’t cut it.
Designers should ask:
- Can the user escalate without friction?
- Is human help presented as second-class, or as a partner?
- Are AI and human roles clearly defined?
UX takeaway: Hide the human and users feel abandoned. Show the human as a safety net, and users feel supported.
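To make the escalation questions concrete, here is a minimal routing sketch. Everything in it is hypothetical: the `Turn` fields, the confidence threshold, and the trigger phrases are illustrative assumptions, not any vendor’s API. The point is the design stance: an explicit request for a person wins immediately, with no “magic words” required.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    user_message: str
    ai_confidence: float   # 0.0-1.0, hypothetical self-score from the assistant
    failed_attempts: int   # how many times the bot has already missed

def route(turn: Turn) -> str:
    """Decide whether the AI answers or a human takes over."""
    asks_for_human = any(
        phrase in turn.user_message.lower()
        for phrase in ("human", "agent", "person", "representative")
    )
    if asks_for_human:
        return "human"                # frictionless escalation, always available
    if turn.failed_attempts >= 2:
        return "human"                # don't loop users through a failing bot
    if turn.ai_confidence < 0.6:
        return "human_with_ai_draft"  # human leads, AI assists as a partner
    return "ai"                       # AI handles routine, confident cases

print(route(Turn("I need to talk to a person", 0.9, 0)))  # prints "human"
```

Note the middle outcome: below the confidence threshold the AI doesn’t vanish, it drafts a reply for a human to review, which keeps both roles clearly defined.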
3. Context Matters: Design for the full situation
Most bad AI support experiences fall apart because AI lacks situational awareness. It doesn’t know the user’s journey, issue history, emotional state, or what they just tried and failed to do.
Context awareness isn’t optional—it’s essential. When AI lacks it, users are forced to repeat themselves, rephrase questions, or explain things the system should already know.
The result? Rage.
Klarna’s reversal came after customer complaints piled up about robotic answers and repeated handoffs. Not because AI didn’t work—but because it didn’t understand.
UX takeaway: AI that ignores user context breaks the experience. AI that respects it earns trust.
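One way to design for this is to assemble the user’s situation before the assistant ever answers. The sketch below is an illustrative assumption, not a real product’s schema: the `SupportContext` fields stand in for whatever journey data a service actually holds. The principle is that the prompt carries the context, so the user never has to repeat it.

```python
from dataclasses import dataclass, field

@dataclass
class SupportContext:
    """What the system should already know (hypothetical fields)."""
    issue_history: list = field(default_factory=list)
    current_page: str = ""
    failed_self_service_steps: list = field(default_factory=list)

def build_prompt(user_message: str, ctx: SupportContext) -> str:
    """Prepend situational context so the user never repeats themselves."""
    lines = [
        f"Previous issues: {'; '.join(ctx.issue_history) or 'none'}",
        f"User is currently on: {ctx.current_page or 'unknown page'}",
        f"Already tried: {'; '.join(ctx.failed_self_service_steps) or 'nothing yet'}",
        f"User says: {user_message}",
    ]
    return "\n".join(lines)

ctx = SupportContext(
    issue_history=["late delivery (ticket #?)"],
    current_page="refunds",
    failed_self_service_steps=["order tracking lookup"],
)
print(build_prompt("Where is my refund?", ctx))
```

An assistant fed this prompt can respond to the situation, not just the sentence, which is exactly the difference between Klarna’s “robotic answers” and an answer that understands.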
4. By Choice: AI should assist—not coerce
The backlash shows a bigger pattern: users don’t want AI forced on them. They want options. In fact, they reward systems that offer AI by choice.
Amazon’s “Talk to a person” option is a good example: it’s not hidden. It respects urgency and preference.
H&M’s mistake was putting the bot in front of users with no clear way around it. Users don’t want to be trapped in a script. They want to opt in to automation when it helps—and opt out when it doesn’t.
UX takeaway: Design AI as a co-pilot, not a gatekeeper. Give users control.
Notice a Pattern? Let’s untangle these repeated UX flaws
There’s a reason AI keeps disappointing users in customer service. It’s not the tech, it’s the approach. Hint: UX is usually the afterthought, because so much of AI UX is invisible.
Many deployments start from this flawed logic:
AI is cheaper and scalable → replace humans → savings and consistency.
But fundamental service interactions are complex, messy, and personal. Automating them without care leads to UX failure and brand damage. Even the recent ‘Rosenbot’ from Rosenfeld (the UX publisher) is a disaster: the experience is overshadowed by the business need to promote and sell books, not to help you “search and learn” as it purports. On an up note, it did make me want to buy a couple of new books. Ha, ha.
Better logic:
What does AI help users do better, faster, or with less effort? Where does it need backup? Where does it create risk?
That’s a design question, not a tech one.
What Good AI UX Looks Like
To fix this, we need to shift from “AI as replacement” to “AI as augmentation.” That means:
- Start with real user needs, not just business KPIs
- Prototype the worst-case scenarios, not just happy paths
- Measure emotional impact, not just response times
- Design AI as a role player in the service, not the star
Use AI where it adds value: speed, suggestions, summaries, quick retrievals. Let humans shine at what AI can’t: judgment, empathy, lived experience, and handling the unexpected.
Final Thought: You don’t get a second chance at AI leadership
Once users feel ignored, confused, or trapped by AI, their trust drops, and they may never return. The TechSpot examples are both warnings and opportunities. Done right, AI can elevate service. Done wrong, it drives users away.
Let this be your UX checklist for AI services:
✅ Is the AI relevant to the user’s real need?
✅ Can the user reach a human easily?
✅ Does the AI understand the user’s situation?
✅ Can the user choose when to use AI?
If you can’t answer yes to all four, don’t ship it yet.
Designers: This is our moment. AI isn’t going away. However, trust, empathy, and thoughtful service design must remain at the forefront.
Let’s put people, not bots, at the heart of the AI loop!
Learn more: I’m hosting monthly Redesign it with AI: UX Redesign sessions on popular products (Apple Watch, Spotify, Duolingo), starting June 20th.