By Iryna Tymchenko
I keep hearing it at conferences, in podcasts, and in late-night business chats with friends from tech and marketing:
“Did you hear? Another company fired hundreds of workers and replaced them with AI.” Then, a few weeks later, you hear the other half of the story: the same company quietly re-hiring people to fix the mess the AI left behind.
2025 has become the year when the world learned the hard way that AI can’t run on autopilot. And that the smartest companies aren’t the ones replacing humans; they’re the ones empowering them.
Salesforce: when AI meets supervision
Let’s start with Salesforce. Early this year, they made headlines after rolling out their AI-powered customer service system, Agentforce. Roughly 4,000 support roles disappeared almost overnight as the AI “took over” the chat queues.
But here’s the interesting twist: Salesforce didn’t fall apart. Why? Because they didn’t make the fatal mistake of turning the system loose unsupervised. They introduced something called an “omni-channel supervisor”: real people who monitor and guide the AI, correct it when it slips, and step in when empathy or context is needed.
It wasn’t perfect. Some customers noticed that replies felt colder or overly scripted. But in the big picture, Salesforce demonstrated what hybrid intelligence can look like:
AI handling the repetitive work, and humans keeping it humane.
Klarna: from “AI replaces 700 workers” to “wait, not so fast…”
Then came the Klarna story, which spread like wildfire. In early 2025, the fintech giant proudly announced that its AI assistant now performed the work of 700 employees. The world gasped: had we reached the holy grail of automation?
Well… almost. Within months, small cracks began to show. Customers reported confusing answers. Some account-related cases got mishandled. And behind the scenes, Klarna started quietly re-hiring specialists to review, supervise, and fix what the AI couldn’t handle on its own.
The truth was simple: the assistant did many tasks impressively well, but it lacked domain judgment, and no one had planned for the hundreds of “edge cases” that humans resolve intuitively. Their story became a global lesson: efficiency headlines are easy; long-term quality is hard.
Duolingo: the AI-first shift that upset its biggest fans
Another big name, Duolingo, took the AI-first route too. The company started phasing out hundreds of contractors as it leaned on generative AI to create course material and translations.
At first, users didn’t notice. Then they did. Some lessons started sounding too robotic. Others lost cultural nuance or humor. Suddenly, the language-learning app famous for its personality started feeling mechanical. The backlash was quick and loud.
Duolingo didn’t lose users overnight, but it lost a piece of its soul: that human playfulness and linguistic intuition no AI can yet replicate. It reminded everyone in ed-tech and content industries that replacing humans entirely is a shortcut that often leads to a dead end.
Google’s “AI Overviews”: when hallucinations go viral
Even the biggest names stumbled. Google’s “AI Overviews,” the generative summaries placed at the top of search results, made headlines: unfortunately, for the wrong reasons.
Screenshots went viral: AI confidently telling users to put glue on pizza or eat rocks for minerals. The company insisted such errors were rare (and statistically they were), but public trust doesn’t work on percentages. When millions read something wrong, even a 0.1% error becomes a front-page disaster.
It’s a perfect example of what I call the “trust debt” of AI: every hallucination costs a company credibility it may never fully earn back.
Air Canada: the chatbot that cost a lawsuit
That ruling sent shivers down the spine of every corporate lawyer. From that day on, no company could hide behind “the bot said it.” AI may write the words, but humans own the consequences.
Why these failures happen
As someone who spends her days helping businesses adopt AI responsibly, I see clear patterns behind all these stories:
- Over-automation without safety rails. Companies replace entire departments before setting up confidence thresholds, escalation rules, or human review (see the sketch below).
- Neglecting AI operations. Nobody assigned ownership of quality monitoring, prompt tuning, or post-deployment feedback loops.
- Testing only “happy paths.” Real-world users break systems in ways pilot projects never simulate.
- Misaligned metrics. Executives celebrate reduced costs and response times, while customers care about empathy and resolution accuracy.
- Underestimating legal and brand risk. A single hallucination in a high-visibility channel can wipe out months of trust.
The irony? Every one of these failures could have been prevented by keeping more humans in the loop.
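To make that concrete, here is a minimal Python sketch of the kind of safety rail that was missing in several of these stories: a confidence threshold plus an escalation rule that routes sensitive or low-confidence AI drafts to a human reviewer. The 0.75 threshold, the topic list, and the queue names are my own illustrative assumptions, not any vendor’s real configuration.

```python
from dataclasses import dataclass

# Illustrative values: the 0.75 threshold, topic names, and queue names are
# assumptions for this sketch, not any vendor's real configuration.
CONFIDENCE_THRESHOLD = 0.75
SENSITIVE_TOPICS = {"refunds", "account_closure", "legal_claims"}

@dataclass
class DraftReply:
    text: str
    confidence: float  # scored confidence for this draft, 0.0 to 1.0
    topic: str

def route(draft: DraftReply) -> str:
    """Decide whether an AI draft goes out directly or to a human reviewer."""
    if draft.topic in SENSITIVE_TOPICS:
        return "human_review_queue"   # empathy/context cases always escalate
    if draft.confidence < CONFIDENCE_THRESHOLD:
        return "human_review_queue"   # low confidence: a person checks it first
    return "auto_send"                # only routine, high-confidence replies go out

print(route(DraftReply("Your order shipped yesterday.", 0.92, "shipping")))   # auto_send
print(route(DraftReply("You qualify for a full refund.", 0.95, "refunds")))   # human_review_queue
```

Notice that the sensitive-topic rule fires before the confidence check: a confidently wrong answer about a refund is exactly the kind of case that ends up in a tribunal.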
A better model: AI-supervised work
Here’s my hypothesis, and it’s shared by many people who work on the frontlines of AI adoption:
Instead of firing hundreds of people, companies should train and equip those same people to supervise AI systems.
Imagine if those laid-off customer support agents became AI Quality Supervisors: people who correct outputs, monitor drift, and improve performance over time.
AI can process data at lightning speed, but humans hold the context, the ethics, the emotion.
At Salesforce, this hybrid model works.
At Klarna, they learned it the hard way.
At Duolingo, they’re re-thinking their balance.
At Google, they’re investing millions in “trust & safety” teams.
And at Air Canada, they probably wish they had.
How to make AI work with people, not instead of them
Based on dozens of cases I’ve studied, here’s a short playbook that helps companies avoid disaster:
- Assign accountability. Every AI process must have a named human owner.
- Add guardrails. Use retrieval-based grounding (RAG), confidence scoring, and fallback flows to humans.
- Evaluate constantly. Track not only accuracy and speed but also escalation rates, complaint themes, and trust indicators (a minimal tracking sketch follows this list).
- Train your people. Teach employees when to trust AI and when to override it.
- Be transparent. Tell users when they’re talking to AI and how to reach a person.
- Document everything. Keep logs, version histories, and audit trails for legal protection and learning.
- Reward collaboration. Celebrate humans who improve AI, not those who compete with it.
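As a concrete illustration of the “evaluate constantly” point, here is a minimal Python sketch of a post-deployment tracker that records escalation rates and complaint themes alongside raw reply volume. The field names, themes, and report format are assumptions I made for the example, not a standard.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIOpsTracker:
    """Hypothetical post-deployment tracker; fields are illustrative, not a standard."""
    total_replies: int = 0
    escalations: int = 0
    complaints: Counter = field(default_factory=Counter)

    def record(self, escalated: bool, complaint_theme: Optional[str] = None) -> None:
        # Log every AI reply, whether it escalated, and any complaint it drew.
        self.total_replies += 1
        if escalated:
            self.escalations += 1
        if complaint_theme:
            self.complaints[complaint_theme] += 1

    def report(self) -> dict:
        # Surface trust indicators next to the usual cost/speed numbers.
        rate = self.escalations / self.total_replies if self.total_replies else 0.0
        return {
            "escalation_rate": round(rate, 3),
            "top_complaint_themes": self.complaints.most_common(3),
        }

tracker = AIOpsTracker()
tracker.record(escalated=False)
tracker.record(escalated=True, complaint_theme="tone felt scripted")
tracker.record(escalated=True, complaint_theme="wrong refund information")
print(tracker.report())
# {'escalation_rate': 0.667, 'top_complaint_themes': [('tone felt scripted', 1), ('wrong refund information', 1)]}
```

The point is not the code itself but the habit: if complaint themes and escalation rates never show up on the same dashboard as cost savings, nobody notices the quality slide until customers do.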
Protecting end users
Users shouldn’t be the ones discovering that your AI broke.
That’s why smart businesses now include:
- “Talk to a human” buttons that get users to a real person within two minutes.
- Visible source citations (“This answer is based on…”).
- “AI confidence: low” markers when the system is uncertain (see the sketch below).
- Automatic compensation when AI errors cause harm.
- Public post-mortems for transparency.
It’s about trust, not just technology.
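To show how those elements might fit together in a single reply, here is a minimal Python sketch of a user-facing answer envelope that attaches source citations, a low-confidence marker, and a human handoff link. The 0.6 threshold, the wording, and the /support/agent path are illustrative assumptions, not a real product’s interface.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserFacingAnswer:
    """Hypothetical response envelope; field names and wording are illustrative."""
    text: str
    confidence: float                 # 0.0 to 1.0, from the scoring step
    sources: List[str] = field(default_factory=list)

    def render(self) -> str:
        lines = [self.text]
        if self.sources:
            lines.append("This answer is based on: " + "; ".join(self.sources))
        if self.confidence < 0.6:     # the threshold is an assumption for this sketch
            lines.append("AI confidence: low. Please double-check before acting on this.")
        lines.append("Prefer a person? Talk to a human: /support/agent (illustrative link)")
        return "\n".join(lines)

answer = UserFacingAnswer(
    text="You can change your flight up to 24 hours before departure.",
    confidence=0.55,
    sources=["Change and cancellation policy, section 3"],
)
print(answer.render())
```

The design choice worth copying is that the honesty signals travel with the answer itself, rather than living in a help-center page nobody reads.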
The big lesson of 2025
2025 will go down in business history as the year AI proved its power, but also its fragility.
The year when companies learned that cutting payroll by 40% doesn’t mean cutting responsibility by 40%.
And the year when the best-performing organizations discovered that the future of work is not AI-only, but AI-supervised.
The companies that thrive in 2026 will be those where humans teach AI, and AI teaches humans back.
Because intelligence, whether artificial or real, only scales safely when it’s guided by conscience, context, and care.
When I look at the next wave of automation, I don’t see a battle between humans and machines.
I see a partnership waiting to be designed properly.
AI can work 24/7. But it can’t care.
And care, in the end, is still the greatest business advantage humans have.
