Beyond the Hype: Building AI That Solves the Problems We Need Solved (Guest blog from Braidr)

This blog was written by James Wolman, Head of Data Science, Braidr.
My hairdresser, Alan, mentioned something interesting last week. Between snips, he casually dropped that half his clients are now panicking about AI taking their jobs. "It's all anyone talks about anymore," he said, gesturing with his scissors. "Makes cutting hair quite depressing, actually."
He laughed. His clients didn't.
Here's the thing: while the tech industry celebrates another "productivity revolution," regular people are genuinely terrified. But their anxiety isn't irrational panic. It's a canary in the coal mine, warning us about a fundamental flaw in how we're building AI.
More importantly, it's showing us exactly how we could build it better.
The Productivity Paradox That's Breaking Our Brains
We're living through the most contradictory moment in tech history. Pick your favourite economic forecast, and I'll show you another that says the exact opposite.
PwC trumpets that AI is driving productivity growth four times higher than normal. AI-skilled workers are commanding 56% wage premiums. The future is bright, the future is automated.
But wait. McKinsey's data tells a darker story: job postings for AI-exposed roles have nosedived 38% since 2022. The Institute for Public Policy Research warns of an "AI apocalypse" with up to 8 million UK jobs at risk.
So which is it? Economic renaissance or employment extinction?
The answer is both. And that contradiction reveals everything wrong with our current approach.
We're Using a Sledgehammer to Crack Eggs
The panic isn't just hype – it's the predictable result of how we've chosen to deploy our most powerful technology. We've built Large Language Models that excel at precisely the tasks that define modern knowledge work: writing reports, summarising meetings, answering customer queries, basic analysis.
Then we called this "productivity."
But productivity toward what end?
Consider the absurdity: we're using some of the most sophisticated computational infrastructure ever built – neural networks with billions of parameters – to write better emails and automate chat support. It's like using the Large Hadron Collider to make toast.
Meanwhile, Yann LeCun, one of AI's godfathers, keeps dropping truth bombs that nobody wants to hear. His favourite? We can't even replicate the intelligence of a house cat.
Your cat understands physics. It knows unsupported objects fall. It plans complex action sequences – stalking, pouncing, that impossibly elegant leap from floor to counter. Our "revolutionary" AI can write a sonnet about quantum mechanics but has zero understanding of how a ball rolls down a hill.
This isn't just a quirky technical limitation. It's a massive, criminal waste of potential.
The Real Revolution: AI That Understands Reality
LeCun isn't just complaining – he's building something better. His vision? World models. Instead of feeding AI billions of words to predict the next word, we'd train it on sensory data – especially video – to understand how reality actually works.
These systems would develop internal simulations of the physical world. They'd grasp causality, not just correlation. Physics, not just prose. Here's where it gets genuinely exciting:
- Robotics that handle complex, chaotic real-world tasks – not just picking boxes, but navigating disaster zones or performing delicate surgery
- Logistics systems that understand when a lorry might jackknife in rain, not just optimal routes on a map
- Drug discovery that simulates molecular interactions at unprecedented scale, potentially cutting development time from decades to years
- Climate modelling sophisticated enough to help us solve the crisis, not just document it
These aren't incremental efficiency gains.
They're solutions to problems we've never been able to solve before.
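To make the idea a little less abstract, here's a deliberately tiny sketch, illustrative only and emphatically not LeCun's (or anyone's) production system. It assumes PyTorch, and the architecture and shapes are toy-sized. The point is the objective: instead of predicting the next word in a sentence, a world model encodes what it observes into a compact state and learns to predict how that state changes from one moment to the next.

```python
# Toy world-model sketch (illustrative assumption, not a real system):
# encode video frames into a latent state, then learn the dynamics of that state.
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    def __init__(self, frame_dim=64 * 64, latent_dim=32):
        super().__init__()
        # Encoder: compress a raw frame into a compact latent state
        self.encoder = nn.Sequential(
            nn.Linear(frame_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )
        # Dynamics: predict the next latent state from the current one
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )

    def forward(self, frame_t, frame_t1):
        z_t = self.encoder(frame_t)      # state of the world "now"
        z_t1 = self.encoder(frame_t1)    # state of the world one step later
        z_pred = self.dynamics(z_t)      # the model's guess at what happens next
        return z_pred, z_t1

model = TinyWorldModel()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Stand-in for real video: pairs of consecutive frames from a made-up world
    frame_t = torch.rand(16, 64 * 64)
    frame_t1 = frame_t + 0.01 * torch.randn_like(frame_t)  # this toy world's "physics"

    z_pred, z_t1 = model(frame_t, frame_t1)
    # The loss rewards anticipating how the scene changes, not reproducing words
    loss = nn.functional.mse_loss(z_pred, z_t1.detach())

    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

The toy isn't about the architecture; it's about what the training signal rewards. A model graded on anticipating how the world changes is being pushed toward causality and physics, which is exactly the grounding the applications above depend on.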
The Real Fear: Losing Our Agency
But Alan's clients are sensing something deeper than job displacement. The fear isn't really about unemployment – it's about agency.
Game designer Hideo Kojima nails this perfectly. He warns we'll be "unknowingly led into a predetermined lifestyle" by algorithms that optimise away life's beautiful chaos. Those chance encounters at coffee shops, wrong turns that become adventures? Current AI treats these as bugs to be fixed.
When companies announce "AI-driven efficiency gains," people don't hear innovation. They hear: "Your judgment no longer matters. Your choices are inefficient."
World models offer a radically different relationship with technology. By grounding AI in physical reality rather than linguistic patterns, we'd build systems that augment human capability instead of replacing human judgment.
The Infrastructure Opportunity Everyone's Missing
Here's what should make every cloud architect salivate: world models demand infrastructure that makes current LLM training look like a primary school science project.
Consider the requirements:
- Massive video datasets requiring distributed storage
- Simulation environments that push cloud compute boundaries
- Real-time sensor integration from millions of devices
- Collaborative platforms where global teams can share models without breaking the internet
These aren't just technical challenges – they're the kind that creates new industries, not just automates old ones.
Choosing Our Future (Before It Chooses Us)
The question isn't whether AI will transform everything. That ship has sailed, caught fire, and is currently doing doughnuts in the harbour.
The question is: what kind of transformation do we want?
Do we want AI that replaces human thinking, or AI that amplifies human capability? Systems that optimise for corporate efficiency, or tools that expand human possibility?
Alan's clients aren't paranoid. They're perceptive. They sense we're at a crossroads, and they're terrified we're choosing the wrong path.
They're right to be worried. But they're wrong about one thing: the path isn't fixed.
The UK's Moment
The current AI bubble will burst – bubbles always do. But the choices we make now about what kind of intelligence we build will echo for generations.
The UK tech sector stands at a unique inflection point. We have the cloud infrastructure expertise, the research excellence, and – crucially – the regulatory wisdom to lead a different vision.
Some of us are already walking this path. At Braidr, we're exploring how data and AI can unlock new kinds of capability rather than simply automate existing tasks. We're not alone – across the UK, teams are choosing to build tools that amplify rather than replace.
So here's my challenge to every developer, architect, and decision-maker reading this: next time someone pitches you an AI solution, ask them one question.
"Does this make humans more capable, or more replaceable?" If they can't answer that, you're not building the future.
You're just automating the past.
For more information, please contact: