Human in the Loop
It started as a routine upgrade—just some new software meant to replace the piles of paper, plastic buttons, and controlled chaos that had long defined emergency dispatch in London.
The year was 1992. The London Ambulance Service, one of the busiest in the world, was under pressure to modernize. Paramedics were overworked, response times lagged, and the public demanded better. So they turned to technology.
A brand-new Computer-Aided Dispatch system—CAD, they called it—was pitched as the fix. It promised to speed things up, reduce human error, and bring the service into the 21st century.
What it delivered, however, was something no one expected: death by degrees.
From the start, the project was rushed. Politicians and leadership wanted results—fast. What should have been a three-year development cycle was squeezed into barely over one. The vendor, eager to win the contract, overpromised. The managers, eager to impress, didn’t push back.
And the people who actually used the system—the dispatchers, the call handlers, the paramedics on the street? They weren’t invited to the planning table.
No one asked what the old system did well. No one asked what would happen if the new one failed.
When the CAD system went live in October 1992, problems started almost immediately.
Calls were logged but never dispatched. Ambulances were sent to the wrong addresses. Sometimes multiple vehicles were dispatched to the same call while others sat idle; other times, no vehicles were sent at all.
Operators watched helplessly as the screen blinked, froze, then reset—again and again.
To make matters worse, the new system had completely replaced the old one. There was no paper-based backup. No parallel system. When the CAD system failed, there was nothing to fall back on.
In the days that followed, response times collapsed. Emergency calls went unanswered. Patients waited for hours—or died waiting.
A formal investigation found no single bug or sabotage. The system had done what it was told. It just hadn’t been told the right things. It had been designed in isolation, built too quickly, and deployed too broadly, with no safety net.
The headlines were devastating. But the most chilling part? It didn’t take a villain. No hacker. No saboteur. No scandal.
Just overconfidence in a system that hadn’t earned the trust placed in it.
I’ve thought a lot about that story lately, because we’re living through something similar right now—though in a very different domain.
You’ve probably heard the buzzwords: AI-driven financial planning. Robo-advisors. Smart automation. Predictive wealth tech.
It all sounds slick. And to be fair, some of it is incredibly useful. Algorithms can crunch mountains of data faster than any human. They can flag patterns, surface trends, and make tasks like budgeting or portfolio rebalancing more efficient.
But here’s what they can’t do:
They can’t understand your fears.
They can’t navigate your family politics.
They can’t ask the question behind the question.
And they definitely can’t slow down when something feels off.
AI is impressive. But like that dispatch system in London, it’s only as good as the assumptions it’s built on—and the safeguards around it. It doesn’t know if the data is wrong. It doesn’t know if your goals have changed. It doesn’t understand nuance. It just executes.
Which is exactly why it can go very wrong, very quietly.
I’ve already seen people blindly accept AI-generated financial plans that were mathematically elegant but personally disastrous. I’ve seen risk profiles built on six-question quizzes that misread someone’s true tolerance for loss. I’ve seen rebalancing algorithms sell winning assets in the middle of a volatile market—all because “the model said so.”
Just like that ambulance system, these tools don’t pause to ask whether they should.
And if no human is watching the inputs—or challenging the outputs—there’s no one left to say: Wait a minute. This doesn’t make sense.
The London Ambulance disaster didn’t happen because of a bug. It happened because people assumed the system would take care of itself. That good intentions and clean code would be enough.
But systems don’t save lives. People do.
And when it comes to something as personal and complex as your money, your future, your ability to sleep at night—it’s not enough for a tool to be fast or efficient or modern. It has to be right for you.
That means asking hard questions. Challenging assumptions. Building in backup plans. And above all, keeping a human in the loop.
We can embrace innovation—just not blindly.
Technology should serve your values. Not the other way around.