Summary: Two narratives that saw heavy rotation in 2025 ought to be challenged. The first is that AI is a feature; it is not, because it changes workflows and decisions. The second is that AI replaces humans or human intelligence, a framing tied both to AI job-loss forecasts and to the worry that you can no longer remember what you have offloaded to AI (‘AI makes you stupid’). The real task for UX work is designing shared control, or better Human-Computer Collaboration.
The Death of Two Toxic Thoughts in AI Strategy
Two ideas are quietly damaging how organizations approach AI.
The first is the belief that AI is a feature. The second is the framing of AI as a replacement for human jobs and human intellect (‘making you stupid’).
Both ideas simplify complex change.
Both distort strategy.
Both lead to shallow implementation.
It is time to retire them. Here’s why:
AI Is Not a Feature
Many teams (including the 95% of demo-AI projects that fail) treat AI like a product enhancement. They add a chatbot, a summarizer, or a recommendation widget and label the release “AI-powered.” Even Microsoft has recently added Copilot as a ‘summarizer’ feature in Office… instead of natively rebuilding the tools for 21st-century workflows, with AI native from the ground up. A tangible example: Microsoft Office still struggles with its early-2000s ‘smart tags’, which burden users with configuring the tags themselves. Ugh. Or fonts that PowerPoint doesn’t recognize, which force you to find the font (not trivial in a 50-300-page deck) and replace it with a ‘safe font’. Neither of these administrative-burden tasks will be helped by Copilot hovering in the toolbar.
The AI-as-feature approach reduces AI to a surface-level capability. It positions AI as something decorative rather than structural.
Basically, if you can remove the AI component without redesigning the workflow, then AI was never strategic.
Real AI impact changes how decisions are made. It redistributes cognitive effort between user and system. It alters task flow, prioritization, and responsibility. In this way, AI feels like co-creation, or Human-Computer Collaboration.
When it goes wrong: When organizations treat AI as a feature, it often results in isolated tools that sit on top of unchanged processes. Users experiment with them briefly, but behavior does not fundamentally shift. Adoption plateaus because the surrounding system remains the same.
In contrast, companies like Amazon and Netflix do not treat AI as a visible feature. Personalization and recommendation systems are embedded in their core architecture. They shape discovery, revenue, and user behavior at scale.
And yet, even they haven’t fully cracked it. Amazon’s Rufus and its review summaries are genuinely useful, but they’re skippable. Netflix’s core algorithm predates the LLM era entirely… its latest AI innovations are in serving better ads, not in building better experiences. Both companies have AI deeply embedded in their infrastructure, but at the user experience layer, they’re still thinking in features.
Which means the ceiling is still wide open. The organizations that get there won’t be the ones who asked where to add AI. They’ll be the ones who redesigned around where intelligence should live.
Design leaders should stop asking, “Where can we add AI?” Instead, they should ask, “Where should decision-making move from human to system, or from system back to human?” That question leads to structural change rather than cosmetic innovation.
Get more insight: Attend either or both of my FREE Lightning Talks on Redefining Agentic AI with UX & Service Design and Orchestration: The Missing Layer in AI UX & Service Design
Not Replace, but Augment
The second toxic narrative frames the replacement of humans by AI as inevitable (agentic AI without UX). This includes replacing human job roles and replacing human intelligence in AI interactions. The media (and AI CEOs and veterans) love forecasting this narrative, even though it shows very little traction.
Example in call centres: King’s College found, in late 2025, a 4-6% job loss due to AI, but heavily concentrated in entry-level roles held by young people in their early 20s.
Yes, but: Artificial General Intelligence was heavily trafficked as a narrative in 2024-2025 as well. AI CEOs and figures like Bill Gates, Eric Schmidt, and others continue to forecast 80%+ job losses. See the State of AI: Actual vs Imagined…
This framing turns design complexity into an assumption (bias and belief). Replacement narratives assume that humans are inefficient cost centres, or that customers will mass-adopt AI agents and replace customer service agents. Yet…
In May 2025, TechSpot reported that 50% of companies, including the likes of Air Canada, H&M, and Klarna, are rolling back their efforts to replace human customer service with AI-only solutions. Why? Because customers were unhappy. The backlash wasn’t about AI itself, but about how AI was designed into the experience. See Red Flag: No AI without UX
Even 70% of call centre managers think more human agents will be required in the future.
The real issue is how agency is distributed.
In some contexts, automation should replace manual effort. Few people argue for humans calculating complex data when machines can do it more accurately and consistently. In other contexts, human judgment is essential. Situations involving vulnerability, ethics, or ambiguity often require oversight, escalation, and interpretation.
The more useful framing is adjustable autonomy. Different tasks require different levels of machine control and human oversight. The balance should shift depending on risk, complexity, and user capability.
For example, in contact centers, AI can attempt to authenticate users, summarize prior interactions, and suggest next best actions. However, human agents may retain authority for exceptions, emotional nuance, or policy judgment.
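To make adjustable autonomy concrete, here is a minimal Python sketch. It is illustrative only: the `route_interaction` function, the three autonomy levels, and the risk thresholds are assumptions for the example, not a prescription for any particular contact-centre stack.

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    AI_RESOLVES = "ai_resolves"    # system acts on its own; humans can audit later
    AI_SUGGESTS = "ai_suggests"    # system drafts; a human approves
    HUMAN_LEADS = "human_leads"    # human handles; system assists

@dataclass
class Interaction:
    intent: str            # e.g. "reset_password", "dispute_charge"
    risk_score: float      # 0.0 (routine) to 1.0 (vulnerable / ambiguous)
    emotional_flag: bool   # vulnerability or distress detected upstream

def route_interaction(ix: Interaction) -> Autonomy:
    """Pick an autonomy level per interaction, not per product.

    The thresholds are placeholders; in practice they would be set with
    policy, legal, and frontline teams, and revisited as risk and user
    capability change.
    """
    if ix.emotional_flag or ix.risk_score > 0.7:
        return Autonomy.HUMAN_LEADS   # ethics, vulnerability, ambiguity
    if ix.risk_score > 0.3:
        return Autonomy.AI_SUGGESTS   # AI summarises and proposes the next action
    return Autonomy.AI_RESOLVES       # routine: authenticate, summarise, act

# Example: a billing dispute from a distressed customer stays with a human agent.
print(route_interaction(Interaction("dispute_charge", 0.8, True)))  # Autonomy.HUMAN_LEADS
```

The point of the sketch is that autonomy becomes a per-task routing decision with explicit criteria, rather than a product-level on/off switch.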
This is not even a replacement-versus-augmentation decision. It is a control architecture decision.
Designing adjustable autonomy requires clear guardrails. It requires visible logs, override mechanisms, and escalation pathways. Trust does not come from declaring that AI augments humans. Trust comes from making machine behaviour observable and correctable.
Need more on guardrailing LLM experiences? Check out part 3 of my upcoming Inclusive AI Flipped masterclass
If users cannot see when AI acts, they cannot calibrate trust. If employees cannot override automated decisions, they will resist adoption.
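One way to make that visibility and correctability tangible is sketched below. The `log_action` and `override` helpers, the event fields, and the ticket example are hypothetical, not any real product’s API; the sketch simply shows an audit trail where every automated action is logged and every human override is recorded with a reason.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIActionEvent:
    """One observable, correctable AI action in the audit trail."""
    action: str       # what the system did, e.g. "closed_ticket", "offered_refund"
    rationale: str    # why, in terms a reviewer can check
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    overridden_by: Optional[str] = None
    override_reason: Optional[str] = None

audit_log: list[AIActionEvent] = []

def log_action(action: str, rationale: str) -> AIActionEvent:
    event = AIActionEvent(action, rationale)
    audit_log.append(event)           # visible log: people can see when the AI acted
    return event

def override(event: AIActionEvent, agent_id: str, reason: str) -> None:
    event.overridden_by = agent_id    # a human retains final control
    event.override_reason = reason    # and the correction itself stays observable

# Example: the AI auto-closes a ticket; a human agent reverses it with a reason on record.
evt = log_action("closed_ticket", "no customer reply in 14 days")
override(evt, agent_id="agent_042", reason="customer is mid-complaint; policy exception")
```

The design choice is that the override path is a first-class part of the data model, not an afterthought: trust can be calibrated because both the machine’s actions and the human corrections are inspectable.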
Why these narratives dominate thinking
These toxic thoughts persist because they make executive conversations easier. The media repeats the same narrative because it sells fear, and in politics fear is a lever of control and power.
- “Add AI features” sounds manageable. It fits within existing roadmaps and funding models.
- “Replace staff with AI” sounds efficient. It promises cost savings and scale.
Both narratives reduce complexity, but they also hide risk. Even the MIT “AI makes you stupid” study was deeply flawed: it was based on a limited essay-writing task that tested memory. Students who wrote without AI remembered their work better. What it really showed is how cognitive offloading works (like when you use a calculator). Even the lead researcher rejected the viral “AI makes you stupid” headline. But the “replaces your intelligence” narrative took hold anyway.
Bottom line: Treating AI as a feature leads to underinvestment in integration and governance. Treating AI as a replacement strategy increases resistance, morale issues, and operational risk.
A Better Approach
Instead of thinking in features, think in workflows.
Map the end-to-end journey. Identify decision points. Identify friction. Then determine where intelligence should sit. In some places, the system should lead. In others, humans should remain accountable.
For each task, define who initiates the action, who decides, who reviews, and who is accountable. Then assign those roles intentionally to humans and machines.
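A lightweight way to capture this is sketched below. The task names and assignments are invented for illustration; the point is a responsibility map that forces every role on every task to be explicitly given to a human or to the system.

```python
from dataclasses import dataclass
from typing import Literal

Actor = Literal["human", "machine"]

@dataclass
class DecisionRoles:
    """Who does what for one task in the workflow."""
    task: str
    initiates: Actor
    decides: Actor
    reviews: Actor
    accountable: Actor   # accountability should normally remain with a human

# Illustrative workflow map; the tasks and assignments are examples, not prescriptions.
workflow = [
    DecisionRoles("authenticate_customer", initiates="machine", decides="machine",
                  reviews="human", accountable="human"),
    DecisionRoles("summarise_prior_interactions", initiates="machine", decides="machine",
                  reviews="human", accountable="human"),
    DecisionRoles("approve_goodwill_refund", initiates="machine", decides="human",
                  reviews="human", accountable="human"),
    DecisionRoles("auto_close_stale_tickets", initiates="machine", decides="machine",
                  reviews="machine", accountable="machine"),
]

# Flag any task where the machine both decides and is accountable: a design smell.
for roles in workflow:
    if roles.decides == "machine" and roles.accountable == "machine":
        print(f"Review needed: {roles.task} has no human accountability")
```

Even a table this simple makes the distribution of agency discussable: the last entry gets flagged because no human is accountable for it, which is exactly the kind of gap that should be designed deliberately rather than discovered later.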
This approach moves the conversation from ideology to design. AI changes power structures within systems. It influences who decides, who knows, and who is responsible. If that shift is not designed deliberately, it creates confusion and risk.
The Role of Design Leadership
Design leaders should resist surface-level AI integration or automation-only assumptions. They should:
- Advocate for deeper structural AI-First UX & Service Design.
- Demand new decision flows. Distribute decisions across humans and machines.
- Calibrate autonomy carefully. Expose system behaviour through observability: logs, metrics and traces.
- Design human override paths. Humans must have the ability to retain final control.
AI is not a feature to ship. It is a shift in agency to manage. See this FREE Lightning Talk: Orchestration: The Missing Layer in AI UX & Service Design
Organizations that succeed with AI will not be those that added intelligent widgets. They will be those who redesigned responsibility and control.
The death of these two toxic thoughts is not philosophical. It is operational.
AI is not a feature.
And the future is not even a binary choice between replace and augment.
It is about deliberately designing how humans and machines share power.
Get more insight: Attend my FREE Lightning Talk on Redefining Agentic AI with UX & Service Design