AI Can’t Have Lived Experience: Why That’s a Problem

Summary: AI can’t have lived experience, and that is a critical limitation, especially when AI is used to make decisions about people. Lived experience is contextual, emotional, and shaped by power and identity: things AI can’t feel or understand. When organisations treat AI-generated outputs as equal to human insight, they risk flattening nuance, erasing marginalised voices, and reinforcing systemic bias. AI can support, but must never replace, real listening, storytelling, or decision-making led by people with lived experience.

Artificial Intelligence can do many impressive things. It can translate text in seconds, detect patterns in data, draft content, and even simulate tone. But one thing it cannot do — and never will — is have lived experience. And that’s not a minor limitation. It’s a structural problem that raises serious risks when we use AI in domains that rely on human insight, empathy, and context.

UX job hunters are learning this fast, and some employers are adding human oversight back in. Applicant tracking algorithms in HR tools like Workday score candidates on generic impressions rather than informed judgment, and as a result many good candidates are passed over.

Where This Matters Most: when AI is used to make decisions about people in healthcare, education, hiring, social services, justice systems, or experience design. The more we outsource listening, interpretation, or even judgment to machines, the more likely we are to misrepresent, distort, or erase real human lives.

Let’s break down why.

What Lived Experience Really Means

Lived experience is more than anecdote or opinion. It’s what people know from being in their bodies and in the world: physical, cultural, emotional and social. That includes being excluded, being racialised, being marginalised, or resisting those structures. It’s not abstract. It’s relational, emotional, sensory, and deeply contextual.

Lived experience examples:

  • Feeling your voice tremble in a benefit appeals meeting. Service Design leader Sarah Drummond recommends the Ken Loach film I, Daniel Blake for a reality check on the lived experience of an unemployed man applying for benefits.

  • Knowing which hair salons are experienced with diverse hair types, especially Afro and curly textures.

  • Remembering the exact moment you stopped trusting your healthcare provider. See Moments of Truth… 

  • Being too exhausted, after giving care to a disabled child at home, to fill in a 75-page online form for disability care benefits (from one of my recent projects): “I’ll need to print it out and work on it over time.”

AI doesn’t have a nervous system. It doesn’t live through power imbalances. It doesn’t have to worry about affording rent, or about how racism shapes an encounter with the police. It can’t reflect on its past or imagine its future. It has no stakes in the decisions it makes.

Yet increasingly, we are using AI to do tasks that require that kind of knowing. Even robot manufacturers are now struggling to figure out how to mimic context-awareness so that humans adopt their machines more easily. See What is Context-Awareness in AI?

The Illusion of AI Empathy: Still Not Lived Experience

Many AI tools today are designed to sound empathetic. Chatbots use phrases like “That must be difficult” or “I understand how you feel.” But this is performance, not presence.

True empathy comes from connection. From making sense of a situation within your own lived history. What we call “perspective-taking” or “mentalizing” requires not just a cognitive model of another’s feelings, but emotional resonance, which AI fundamentally lacks.

This is dangerous in systems where care is needed most: Imagine a chatbot responding to a domestic abuse survivor. Or an algorithm denying disability support because it’s been trained on biased data. These aren’t theoretical problems. They are already happening.

Why UX people and those with lived experience need to steer AI research

If your AI system is summarising user feedback, prioritising insights, or making recommendations, who is checking what’s missing? You’d better be. We explored this recently in ‘Using AI alongside User Research’; join my microcommunity to access all five recordings (6 hours). The biggest problem we uncovered?

You can’t automate the social context of discovery. Yet many organisations would love to move in that direction. Tools promise to speed up user research or replace qualitative analysis. But without human oversight — ideally by people with lived experience — your way of knowing becomes dangerously disconnected from lived reality.

Lived experience often shows up in the margins: in edge cases, outliers, or stories that don’t fit cleanly into categories. If your system has been trained to optimise for averages or efficiencies, it will flatten these nuances.
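
As a minimal sketch of that flattening effect (using entirely made-up satisfaction scores, not data from any project), the Python snippet below shows how an average-driven summary reads as “mostly positive” while the two people who hit a real barrier only reappear if someone deliberately looks for the outliers.

    # A minimal sketch with hypothetical satisfaction scores (1-5).
    # Optimising for the average flattens the edge cases where lived experience shows up.
    from statistics import mean, stdev

    scores = [4, 5, 4, 4, 5, 4, 1, 4, 5, 1]  # two respondents hit a serious barrier

    average = mean(scores)  # 3.7, which an average-driven summary reports as "mostly positive"
    spread = stdev(scores)

    print(f"Average score: {average:.1f}")

    # A lived-experience lens asks who the outliers are and what their stories hold.
    outliers = [(i, s) for i, s in enumerate(scores) if abs(s - average) > spread]
    for respondent, score in outliers:
        print(f"Respondent {respondent} scored {score}: follow up on their story")

The arithmetic isn’t the point; the point is that nothing in an average-optimised pipeline prompts anyone to go back to those two respondents unless a human decides the outliers matter.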

This Isn’t a Tech Problem — It’s a Power Problem

It’s not just that AI can’t have lived experience — it’s that the people building, training, and deploying AI often overlook it. Marginalised voices are often excluded from product design and policymaking. They are also at risk of being excluded from the training data.

This is especially urgent in public sector service design. If we’re using AI to “listen” to service users, who gets heard? If we’re automating triage or eligibility, who gets left behind?

Build an Inclusive AI approach

Here are a few principles for responsible practice:

  1. Never let AI replace listening.
    Use it to augment, not substitute for, engagement with real people, especially those with frontline or lived experience.

  2. Center lived experience in governance.
    Include people with relevant life experience in shaping how AI tools are used, not just as test subjects but as co-designers and decision-makers.

  3. Challenge data essentialism.
    Numbers don’t tell the whole story. Ask what context is missing. Who benefits from this framing? Who is harmed?

  4. Embrace storytelling as a core UX requirement, not a pet interest.
    AI loves summaries. Humans live in stories. Prioritise storytelling, sensemaking, and depth when working with insights. See Storytelling: UX soft skill you can’t ignore

  5. Treat absence as a red flag.
    If your AI model isn’t surfacing complexity or contradiction, it’s probably failing to reflect lived reality. Silence isn’t neutrality; it’s design bias. (A rough check is sketched after this list.)
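
One rough way to act on that last principle, sketched below with invented feedback, an invented keyword list and a hypothetical missing_friction helper (none of it from a real tool), is to check whether an AI-generated summary has silently dropped the friction signals that appear in the raw responses.

    # A minimal sketch with hypothetical feedback, an invented keyword list,
    # and a hypothetical helper: flag an AI summary that surfaces no friction
    # or contradiction from the raw responses.
    raw_feedback = [
        "The new form was quick and easy",
        "I gave up halfway; the questions didn't fit my situation",
        "Great experience overall",
        "My carer had to fill it in for me, I couldn't manage it alone",
    ]

    ai_summary = "Users found the new form quick, easy and a great experience."

    # Phrases that often signal exclusion, struggle or contradiction.
    FRICTION_TERMS = {"gave up", "couldn't", "didn't fit", "too exhausted", "unable"}

    def missing_friction(feedback, summary):
        """Return friction signals present in the raw feedback but absent from the summary."""
        summary_lower = summary.lower()
        return sorted({
            term
            for response in feedback
            for term in FRICTION_TERMS
            if term in response.lower() and term not in summary_lower
        })

    dropped = missing_friction(raw_feedback, ai_summary)
    if dropped:
        print("Red flag: the summary drops these friction signals:", dropped)

A check like this is deliberately crude; its job is not to interpret the feedback but to force a human conversation whenever the tidy summary and the messy reality diverge.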

Bottom line

AI is here to stay — but we have to stop pretending it’s human. It doesn’t feel. It doesn’t carry intergenerational trauma or systemic resilience.

So let’s be clear: Lived experience is not a dataset to mine. It’s not a persona to simulate. It’s not a prompt for better output. It’s a source of wisdom and legitimacy in design, governance, and care.

Respect it. Include it. And never assume AI can replace it.

Learn more: Attend this FREE Inclusive AI masterclass (July 10th, join www.uxinnercircle.com for the recording and full access to 300 recordings)
