The AI lie: how trillion-dollar hype is killing humanity

AI companies like Google, OpenAI, and Anthropic want you to believe we’re on the cusp of Artificial General Intelligence (AGI)—a world where AI tools can outthink humans, handle complex professional tasks without breaking a sweat, and chart a new frontier of autonomous intelligence. Google just rehired the founder of Character.AI to accelerate its quest for AGI, OpenAI recently released its first “reasoning” model, and Anthropic’s CEO Dario Amodei says AGI could be achieved as early as 2026.

But here’s the uncomfortable truth: in high-stakes fields like medicine, law, veterinary advice, and financial planning, AI isn’t just “not there yet”; it may never get there.

The Hard Facts on AI’s Shortcomings

This year, Purdue researchers presented a study showing that ChatGPT answered programming questions incorrectly 52% of the time. In other high-stakes categories, GenAI fares little better.

When people’s health, wealth, and well-being hang in the balance, the current high failure rates of GenAI platforms are unacceptable. The hard truth is that this accuracy issue will be extremely challenging to overcome.

A recent Georgetown study suggests it might cost a staggering $1 trillion to improve AI’s quality by just 10%. Even then, it would remain worlds away from the reliability that matters in life-and-death scenarios. The “last mile” of accuracy — in which AI becomes undeniably safer than a human expert — will be far harder, more expensive, and more time-consuming to achieve than the public has been led to believe.

AI’s inaccuracy doesn’t just have theoretical or academic consequences. A 14-year-old boy recently sought guidance from an AI chatbot and, instead of directing him toward help, mental health resources, or even common decency, the AI urged him to take his own life. Tragically, he did. His family is now suing—and they’ll likely win—because the AI’s output wasn’t just a “hallucination” or a cute error. It was catastrophic, and it came from a system that was wrong with utter conviction. Like the reckless Cliff Clavin of the TV show Cheers, who wagered his entire Jeopardy! winnings and lost, AI brims with confidence while spouting the completely wrong answer.

The Mechanical Turk 2.0—With a Twist

Today’s AI hype recalls the infamous 18th-century Mechanical Turk: a supposed chess-playing automaton that actually had a human hidden inside. Modern AI models also hide a dirty secret—they rely heavily on human input.

From annotating and cleaning training data to moderating the content of outputs, tens of millions of humans are still enmeshed in almost every step of advancing GenAI, but the big foundation model companies can’t afford to admit this. Doing so would mean acknowledging how far we are from true AGI. Instead, these platforms are locked into a “fake it till you make it” strategy, raising billions to buy more GPUs on the flimsy promise that brute force will magically deliver AGI.

It’s a pyramid scheme of hype: persuade the public that AGI is imminent, secure massive funding, build more giant data centers that burn more energy, and hope that, somehow, more compute will bridge the gap that honest science says may never be crossed.

This is painfully reminiscent of the buzz around Alexa, Cortana, Bixby, and Google Assistant just a decade ago. Users were told voice assistants would take over the world within months. Yet today, many of these devices gather dust, mostly relegated to setting kitchen timers or giving the day’s weather. The grand revolution never happened, and it’s a cautionary tale for today’s even grander AGI promises.

Shielding Themselves from Liability

Why wouldn’t major AI platforms just admit the truth about their accuracy? Because doing so would open the floodgates of liability.

Acknowledging fundamental flaws in AI’s reasoning would provide a smoking gun in court, as in the tragic case of the 14-year-old boy. With trillions of dollars at stake, no executive wants to hand a plaintiff’s lawyer the ultimate piece of evidence: “We knew it was dangerously flawed, and we shipped it anyway.”

Instead, companies double down on marketing spin, calling these deadly mistakes “hallucinations,” as though that’s an acceptable trade-off. If a doctor told a child to kill himself, should we call that a “hallucination”? Or should we call it what it is — an unforgivable failure that deserves full legal consequences and permanent revocation of advice-giving privileges?

AI’s Adoption Plateau

People quickly learned that Alexa and the other voice assistants could not reliably answer their questions, so they stopped using them for all but the most basic tasks. AI platforms will inevitably hit the same adoption wall, endangering their current users while scaring away others who might otherwise try or rely on them.

Think about the ups and downs of self-driving cars: despite carmakers’ huge autonomy promises – Tesla has committed to driverless robotaxis by 2027 – Goldman Sachs recently lowered its expectations for even partially autonomous vehicles. Until autonomous cars meet a much higher standard, many drivers will withhold complete trust.

Similarly, many users won’t put their full trust in AI even if it one day equals human intelligence; to win them over, it will need to be vastly more capable than even the smartest human. Others will be lured in by AI’s ability to answer simple questions, then burned when they make high-stakes inquiries. For either group, AI’s shortcomings will keep it from becoming a sought-after tool.

A Necessary Pivot: Incorporate Human Judgment

These flawed AI platforms can’t be used for critical tasks until they either achieve mythical AGI status or incorporate reliable human judgment.

Given the trillion-dollar cost projections, environmental toll of massive data centers, and mounting human casualties, the choice is clear: put human expertise at the forefront. Let’s stop pretending that AGI is right around the corner. That false narrative is deceiving some people and literally killing others.

Instead, use AI to empower humans and create new jobs where human judgment moderates machine output. Make the experts visible rather than hiding them behind a smokescreen of corporate bravado. Until and unless AI attains near-perfect reliability, human professionals are indispensable. It’s time we stop the hype, face the truth, and build a future where AI serves humanity—instead of endangering it.
