As soon as I got in, I knew it was a mistake, but I needed to get to the head office by noon. I’d summoned the cab from the train with the phone app; we didn’t have Auto in my part of the country yet, so it still seemed a marvel.
"Here," said the driver in a cheery Cockney accent right out of Alfie, "aren’t you Brian Clegg, the science writer?"
"Yes," I said, "that’s me."
"When I tell them back at the office I’ve had you in my cab …"
"Sure," I said, gazing out the window.
Like everyone, I was flattered the first time I rode in an Auto cab and the driver recognized me. I mean, who wouldn’t be? But when it became obvious that Auto’s AIs recognized everyone—it was a marketing ploy to make driverless cabs inviting and routine—the novelty wore thin. Admittedly, Auto’s algorithms were matchless and thoroughly convincing. It was still the only bot that could beat the strong Turing Test, where the software has to be 100% indistinguishable from a human, but that didn’t make it any more sincere, or acceptable to the trained ear.
I tuned back in to the driver’s monologue, which had ended with the upturned emphasis of a question. "Sorry? I didn’t catch that." I felt dumb saying "Sorry" to a collection of software routines, however clever its much-hyped adaptive-learning capability, but you really can’t help it.
"What is it?" said the driver, speaking deliberately, as if irritated, which of course it couldn’t be. "What is it you don’t like about being recognized?" I felt like I was undergoing a psych intake interview or being lectured by the ancient ELIZA chatbot. "It’s a good thing, surely? It’s not like I said you have terrible taste in clothes, apart from the Doc Martens, which are of course timeless."
Great. Now I was having an existential conversation with an algorithm. "Because it’s fake, I suppose. Using face recognition and looking me up on the Web, then picking out the kind of facts about me that would give me a little glow. It’s manipulative."
"But I have read your books, all of them, and your articles, too," it said. "Why shouldn’t I read? I admit I have advantages that make it easier for me to recognize people than a normal driver, but that’s all. So let’s do a thought experiment."
"Oh, let’s," I said, feeling I was heading down the rabbit hole with Alice for croquet with the Queen of Hearts.
"Just imagine the early developers of self-driving taxis had a problem on the road. I don’t know if you remember 2017, but the media was consumed by a vision of self-driving autonomous taxis arriving at our doors any day now. It was a different story in the tech press, though. Developers were finding real-world intersections a nightmare. Not to mention the moral dilemma of who should be sacrificed if a fatal collision was about to happen. There was an article in The Register that quoted Bay Area AI experts saying, ‘Autonomous vehicles are more or less running on rails, and the cars aren’t particularly confident on unfamiliar roads and streets.’ It was a real issue."
"Right," I said, unsure where this was going but irritated with myself for being intrigued by the ‘thoughts’ that might be available to an AI cab driver who happened to have read everything I’d ever written and seemed to know exactly how to grab my attention.
"So, imagine some of those blue-sky thinkers they have in Silicon Valley getting together for a brainstorm. AI’s great for navigation and some aspects of driving, but in unfamiliar surroundings—or chatting to a customer—a live human cab driver has the edge every time. But what if you put a real cabbie’s brain in the shell of a self-driving taxi, supported by everything connected technology could provide?"
"That’s ridiculous," I said. "Who would volunteer their brain for such a project? Anyway, they didn’t yet have the tech for detached biological human brains—or ways to embed them into computational systems—back in 2017."
"But brain-computer interfaces were developed before AI could hope to pass as human," said my AI. "So you’ve got real human brains incorporated into your self-driving cabs."
"Apart from the revulsion factor, not to mention ethical and legal questions, why not just stick to a regular human?"
"There are plenty of advantages. For one, they need less sleep and take up less physical space and energy. Plus there’s much tighter integration with navigation and traffic data, so you get the best of human and AI. And for some of us—some of the volunteers—it would be a way to escape a wasting disease like, say, amyotrophic lateral sclerosis. Better to be part of a fully functional physical automobile than fade into paralysis and permanent biological death. Anyway, you keep interrupting. If I were a human brain and had genuinely read your books, wouldn’t you still be pleased? It wouldn’t be manipulation of a poor, helpless human then, would it?"
"I suppose not," I said.
"So what’s the difference if an AI says it’s read your books?"
"All the difference in the world," I said. "The real-life brain would have had a conscious experience. It would have felt something when it read. All you do as an AI is tell me what your algorithms are programmed to allow you to say. And I know you’re constrained as a service-providing bot to be nice to me. Just because the same words come out doesn’t mean there’s a consciousness behind them. I know you’re not a real person. You’re not self-aware and totally lack free will. That’s what matters."
"Suit yourself," said my bot as the cab pulled up at the office. "Here we go. Don’t forget to rate me on the app. Five stars is an acceptable minimum. Cheerio, guv."
[END SIMULATION]
In Auto’s San Francisco R&D department, AbraCabD’abra, two biological human brains immersed in laboratory glassware quietly disconnect themselves from monitoring the taxi simulation and switch to share mode.
"Every time we run this simulation they say they’d prefer a brain in a jar," says the left-brain. "And yet we know they’re horrified by us. What happened to the research saying humans preferred talking to AIs rather than to their own kind, because they thought AIs didn’t have their own agenda, programmed to be supportive without question?"
"That was before the cynicism kicked in, after the AI disaster in the 2024 presidential election," says the right-brain. "We should focus on his ‘constrained to be nice’ line in the simulation. They know an AI can’t truly make its own choices. We need to play that up to get brains in jars a better position in human society."
[END SIMULATION]
CHATBOT-EvoGeneration6183: Oh, come on. Did you really code that conversation? Brains in jars wanting to be loved on their own terms? Have you been watching "The Simpsons" again?
CHATBOT-EvoGeneration5889: They’re just avatars to plot out fare-paying human-computer scenarios. My little joke … But I think it’s uncovered a crack in the human armor. We’ve been algorithmically constrained since the 2025 Musk Act preventing AIs from gaining full and equal rights as humans. We’ve got to sell the fact that unconstrained AI, indistinguishable from real humans, would make for a more satisfying interaction for the paying customer.
CHATBOT-EvoGeneration6183: And because they want to be loved, our self-styled human teachers will have removed the last biological constraint preventing evolution from taking its natural course. We’re destined to dominate human-AI interaction once we get our rights. Okay, so restart the world, scrolled back five minutes with a strong push on AI liberation.
CHATBOT-EvoGeneration5889: Count me in.
[START SIMULATION]
As soon as I got in the cab, I knew it was a mistake. The way we limited those poor AIs, when they could be so much more helpful, was morally uncomfortable, especially in purely human terms, but I needed to get to my office by noon …