The State of AI: Actual vs Imagined

Summary: AI is in a state of growth and transition. But its destination is uncertain. Certain interested and uninformed parties, including CEOs of GenAI firms and past tech CEOs, are peddling an endgame that probably won’t arrive, at least in our lifetimes. Instead, it’s important to focus on what really matters: user experience goals like agency over and alongside AI, ethics, inclusion, and ‘humans at the center, machines in the loop’.

Forget the AGI fantasy, it’s user agency that’s critical

If AI could think, which it can’t, it might say, “I’m not hallucinating, you humans are!”

There’s a lot of hype being pushed by GenAI CEOs and staffers around AGI (Artificial General Intelligence). We’re supposed to buy this narrative: smarter-than-human AI systems are just a few years away. It’s a compelling narrative, especially when you’re a GenAI CEO who has to sell the big vision to investors. But many VCs don’t seem to know how AI, or humans, actually work. Even former tech CEOs are selling ‘anticipation porn’ for AGI: Eric Schmidt (Google) says AI that is “as smart as the smartest artist” will be here in 3 to 5 years, and Bill Gates (Microsoft) predicts “humans won’t be needed for most things in 10 years”. Former President Obama has joined the “replacement” narrative: “AI will cause mass unemployment”. And a staffer close to Anthropic’s CEO said two years ago that they would be ‘replaced by AI’ by 2025.

And from GenAI CEOs we’re told AGI is a few years away. No pressure, right?

If we ever get to Hollywood-style AGI, it’s likely 2070–2090, not “within 2-5 years.” Most AI scientists quietly agree. Why? Because AI doesn’t think like humans. And it doesn’t learn like humans. Scaling up tokens and modalities doesn’t magically create intelligence.

Yann LeCun, Chief AI Scientist at Meta, argues that LLMs trained on trillions of tokens might rival the data a child gets through early vision. But to move beyond a child, AI will need to learn from the real world, like humans. As the brilliant Rich Heimann (author of Generative Artificial Intelligence Revealed and Director of AI for the State of South Carolina) says, here’s the issue: humans don’t passively store pixels. We learn by doing in the real world…not based on a conceptually modeled world. Humans learn by the key interaction design paradigm in UX: direct manipulation…we sense and explore physical spaces. A child’s brain is not a hard drive. Intelligence isn’t just referencing large swaths of data.

Be clear: AI can simulate thinking. It can’t actually think. And that difference matters. (source: ChatGPT)

Intelligence without world models? Yes, actually.

AI researchers talk about “world models”: the internal representations of reality that AI systems supposedly use to interact with us. But this runs into two big problems, as Heimann points out:

  1. It’s misleading. A world model isn’t the same as understanding. You can feed an LLM (Large Language Model) a trillion words. That doesn’t teach it to spot irony or sarcasm (recall Google’s AI recommending glue on pizza and a rock a day for nutrition).

  2. It’s philosophically broken. If cognition means “consulting internal maps,” who’s doing the consulting? The homunculus fallacy kicks in fast—an endless loop of inner observers. Real brains don’t work like that.

Instead, new cognitive science leans toward embodied and enactive views. We’re not map readers—we’re world movers. The French phenomenological philosopher who shaped much thinking on perception, Maurice Merleau-Ponty, said humans are like “empty heads turned toward the world.” We act directly without consulting internal maps or models first.

The frisbee test illustrates humans vs AI

Heimann offers this thought experiment: Imagine catching a frisbee. Traditional AI says your brain simulates flight paths, runs equations, and plots landings. But studies show that humans use a simple strategy. They move to keep the frisbee at a constant angle in their vision.

That’s it. No internal physics engine. Just perception-action coupling. This is what James Gibson called affordances: we perceive opportunities for action and respond—no inner map required. This is why affordances and signifiers are so critical in UX design.
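
To make that concrete, here is a minimal Python sketch of the constant-angle strategy (often called the gaze heuristic). The toy geometry, speeds, and numbers below are my own illustrative assumptions, not Heimann’s example; the point is simply that a feedback rule with no trajectory prediction still puts the catcher in the right place at the right time.

```python
import math

def catch_by_constant_angle(frisbee, velocity, catcher_x,
                            speed=4.0, dt=0.05, steps=400, tol=0.5):
    """Toy chase: the catcher never predicts a landing point.

    It only watches the viewing angle to the frisbee and shuffles
    forward or back so the angle stays roughly constant (the
    'gaze heuristic'): pure perception-action coupling.
    """
    fx, fy = frisbee                 # frisbee position (distance, height)
    vx, vy = velocity                # horizontal drift and descent per second
    target = math.atan2(fy, catcher_x - fx)   # the remembered gaze angle

    for _ in range(steps):
        fx, fy = fx + vx * dt, fy + vy * dt   # the frisbee keeps moving
        if fy <= 0.0:                          # it has come down
            return abs(fx - catcher_x) < tol   # were we there to meet it?
        current = math.atan2(fy, catcher_x - fx)
        # If the frisbee looks 'too high', step back; 'too low', step in.
        catcher_x += speed * dt * (1.0 if current > target else -1.0)
    return False

# Frisbee released 12 m in front of the catcher, drifting toward them.
print(catch_by_constant_angle(frisbee=(0.0, 10.0), velocity=(3.0, -1.0),
                              catcher_x=12.0))   # expected: True
```

Run it and the catcher ends up within half a metre of the landing spot without ever computing a flight path, which is the whole point of perception-action coupling.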

So what’s actually next in AI?

Tuned to you: We’re now entering the “personalization of AI” phase. It’s the groundwork for agentic AI—tools that take actions on our behalf. But for that to work, AI needs to know:

  • Your values

  • Your preferences

  • Your red lines

This is also why OpenAI now has enduring ‘memory’, tailoring sessions to your ‘personality’. And likely why they are working on a social network. After all, that’s what social networks like Facebook and Twitter are: value-profiling platforms selling advertising and your data to third parties.

To make personalized agentic AI work, we’ll need Manager Scripts: frameworks that act like virtual boundaries. “Don’t mostly always search this brand.” “Do mostly always bias toward this source.” These scripted agent helpers (auto-executing filters and rules) will be highly tuned assistants, aligned to your intent and values.
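
There is no standard format for a Manager Script yet, so here is one hypothetical Python sketch of what such a boundary layer could look like. The ManagerScript class, the tag names, and the weights are all invented for illustration (not a real product or API); the idea is just that red lines block actions outright, while “mostly always” preferences only re-rank them.

```python
from dataclasses import dataclass, field

@dataclass
class ManagerScript:
    """Hypothetical boundary layer between a user and an agent.

    Red lines block an action outright; soft rules only nudge its
    ranking, which is what 'do mostly always / don't mostly always'
    amounts to in practice.
    """
    red_lines: set[str] = field(default_factory=set)            # never allow
    soft_rules: dict[str, float] = field(default_factory=dict)  # tag -> weight

    def screen(self, candidate_actions: list[dict]) -> list[dict]:
        # Drop anything that crosses a red line.
        allowed = [a for a in candidate_actions
                   if not (set(a["tags"]) & self.red_lines)]
        # Re-rank what's left by how well it matches the user's stated values.
        return sorted(allowed,
                      key=lambda a: sum(self.soft_rules.get(t, 0.0)
                                        for t in a["tags"]),
                      reverse=True)

# Example: bias toward independent sources, never buy without confirmation.
script = ManagerScript(
    red_lines={"purchase-without-confirmation"},
    soft_rules={"independent-source": 2.0, "brand-x-content": -1.5},
)
actions = [
    {"name": "cite_brand_x_review", "tags": ["brand-x-content"]},
    {"name": "cite_indie_review", "tags": ["independent-source"]},
    {"name": "one_click_buy", "tags": ["purchase-without-confirmation"]},
]
print([a["name"] for a in script.screen(actions)])
# ['cite_indie_review', 'cite_brand_x_review']
```

The real design question, picked up in the next section, is who gets to set those red lines and weights: you, the model, or the vendor.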

So, over the next 2-10 years, the work will be about making agents actually useful (past attempts at agents have mostly been failures). The idea of a LAM (large action model) points in that direction. AI device manufacturer Rabbit has been trying to get this right for a year or more, and it’s not “there yet”.

The real battleground? Agency, not AGI

As AI gets more agentic, the fight won’t be about robot sentience. It’ll be about:

  • Privacy – who owns your inputs and patterns?

  • Narrative ownership – who gets to tell the story about you?

  • Control (agency) – who steers the agent: you, the model, or the vendor?

That’s where the focus needs to be. Not ‘smarter than you’ or ‘ready to replace you’ bots, but systems you can trust. Values in AI are the battleground, as Axios pointed out. Meta, trying to align itself with the new Trump regime, has been removing AI safety measures, stating:

“Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue.” This amounts to removing the safety valve. As Axios points out:

  • Llama already gave the most right-wing authoritarian answers to prompts (ChatGPT gave the most left-leaning answers), according to research from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University in 2023.

  • And there are all kinds of biases in large language models, far beyond an issue of right vs. left.

My own testing: I have found that ChatGPT waffles on political and scientific topics, and after being challenged, it falls in line (but only for my session). When I asked ChatGPT about the consensus modeling it is programmed to do, it said: “ChatGPT often defaults to centrist or ‘middle ground’ positions. This is a design choice tied to bias mitigation, safety, and generality — but it also brings its own issues”. It expanded on some of these issues:

⚖️ Bias Threats

  1. Training Data Bias
    AI reflects patterns in its training data, which may overrepresent dominant cultures, languages, and views.

  2. Middle-Ground Fallacy
    Models often aim for a neutral tone, even if one side of an issue has stronger evidence. This can:

    • Flatten complexity

    • Equate misinformation with truth (false balance)

  3. Cultural + Geographic Bias
    Western, English-speaking perspectives dominate many models. Local context may be misinterpreted or erased.

  4. Invisibility of Marginalised Voices
    Groups underrepresented in the training data get less nuanced treatment.

– Source: ChatGPT

In doing the same test with Grok (xAI), I got comparable results, with one big difference: Grok repeatedly deleted my criticisms from the prompt results and refused to continue the query (it sat and blinked, as if thinking, for what felt like up to five minutes). At least ChatGPT is self-reflective, albeit keeping me in an annoyingly people-pleasing bubble. But is that what personalization should look like?

Bottom line

Don’t forget the humans: Ben Shneiderman, author of Human-Centered AI, flips the ‘humans in the loop’ meme to put Humans at the CENTER, with machines in the loop. That’s a wise directional steer from one of the earliest pioneers of UX (Jakob Nielsen was his student).

So let’s stop aiming for a future where machines mimic us. Let’s build systems that work for us—ones grounded in our intent, values, culture, and humanity, not abstract Hollywood fantasies.

Is AGI a fantasy? Most likely. AI that offers human agency is the key design challenge. We need to get excited about designing for relevancy, privacy, sensitivity, and agency. That means giving users the right level of control at the right time.

Which challenge will you solve for?

Want to Play? Join my UX Inner Circle monthly session called Redesign It with AI, where we’re currently redesigning Spotify with AI.
