The dominant public narrative about artificial intelligence is that we are building increasingly intelligent machines that will ultimately surpass human capabilities, steal our jobs, and possibly even escape human control and kill us all. This misguided perception, not widely shared by AI researchers, runs a significant risk of delaying or derailing practical applications and influencing public policy in counterproductive ways. A more appropriate framing, better supported by historical progress and current developments, is that AI is simply a natural continuation of longstanding efforts to automate tasks, dating back at least to the start of the industrial revolution. Stripping the field of its gee-whiz, apocalyptic gloss makes it easier to evaluate the likely benefits and pitfalls of this important technology, not to mention dampening the self-destructive cycles of hype and disappointment that have plagued the field since its inception.
At the core of this problem is the tendency for respected public figures outside the field, and even a few within it, to tolerate or sanction overblown press reports that herald each advance as a startling and unexpected leap toward general human-level intelligence (or beyond), fanning fears that "the robots" are coming to take over the world. Headlines often tout noteworthy engineering accomplishments in a context suggesting they constitute unwelcome assaults on human uniqueness and supremacy. If computers can trade stocks and drive cars, will they soon outperform our best salespeople, replace court judges, win Oscars and Grammys, and buy up and develop prime parcels of real estate for their own purposes? And what will "they" think of "us"?
The plain fact is there is no "they." This is an anthropomorphic conceit born of endless Hollywood blockbusters, reinforced by the gratuitous inclusion of human-like features in public AI technology demonstrations, such as natural-sounding voices, facial expressions, and simulated displays of human emotions. Each of these techniques has valuable application to human-computer interfaces, but not when their primary effect is to fool or mislead. Attempts to dress up significant AI accomplishments with humanoid flourishes do the field a disservice by raising inappropriate questions and implying there is more there than meets the eye. Was IBM's Watson pleased with its "Jeopardy!" win? It sure looked like it. This made for great television, but it also encouraged the audience to overinterpret the actual significance of this important achievement. Machines don't have minds, and there is precious little evidence to suggest they ever will.
The recent wave of public successes, remarkable as they are, arises from the application of a growing collection of tools and techniques that allow us to take better advantage of advances in computing power, storage, and the wide availability of large datasets. This is certainly great computer science, but it is not evidence of progress toward a superintelligence that can outperform humans at any task it may choose to undertake. While some of the new tools, most notably in the field of machine learning, can be broadly applied to classes of tasks that may appear unrelated to the non-technical eye, in practice they often rely upon certain common attributes of the problem domains, such as enormous collections of examples in digital form. High-speed trading algorithms, tracking objects in videos, and predicting the spread of infectious diseases all rely on techniques for finding subtle patterns in noisy streams of real-time data, and so many of the tools applied to these apparently diverse tasks are similar.
We are certainly using machines to perform all sorts of real-world tasks that people perform using their native intelligence, but this does not mean the computers are intelligent. It merely means there are other ways to solve these problems. People and computers can play chess, but it is far from clear that they do it the same way. Recent advances in machine translation are remarkably successful, but they rely more on statistical correlations gleaned from large bodies of concorded texts than on fundamental advances in the understanding of natural language.
Machines have always automated tasks that previously required human effort and attention, both physical and mental, usually by employing very different techniques. And they often do these tasks better than people can, at lower cost, or both; otherwise they would not be useful. Factory automation has replaced myriad highly skilled and highly trained workers, from sheet metal workers to coffee tasters. Arithmetic problems that used to be the exclusive domain of human "calculators" are now performed by tools so inexpensive they are given away as promotional trinkets at trade shows. It used to take an army of artists to animate Cinderella's hair, but now CGI techniques render Rapunzel's flowing locks. These advances do not demean or challenge human capabilities; instead they liberate us to perform ever more ambitious tasks.
Some pundits warn that computers in general, and AI in particular, will lead to widespread unemployment. What will we do for a living when machines can perform nearly all of today's jobs? A historical perspective reveals a potential flaw in this concern. The labor market constantly evolves in response to automation. Two hundred years ago, more than 90% of the U.S. labor force worked on farms. Now, barely 2% produce far more food at a fraction of the cost. Yet we are not all out of work. In fact, more people are employed today than ever before, and most would agree their jobs are far less taxing and more rewarding than the backbreaking toil of their ancestors. This is because the benefits of automation make society wealthier, which in turn generates demand for all sorts of new products and services, ultimately expanding the need for workers. Our technology continually obsoletes professions, but our economy eventually replaces them with new and different ones. It is certainly true that recent advances in AI are likely to enable the automation of many or most of today's jobs, but there is no reason to believe the historical pattern of job creation will cease.
That's the good news. The bad news is that technology-driven labor market transitions can take considerable time, causing serious hardships for displaced workers. And if AI accelerates the pace of automation, as many predict, this rapid transition may cause significant social disruption.
But which jobs are most at risk? To answer this question, it's useful to observe that we don't actually automate jobs; we automate tasks. So whether a worker will be replaced or made more productive depends on the nature of the tasks they perform. If their job involves repetitive or well-defined procedures and a clear-cut goal, then indeed their continued employment is at risk. But if it involves a variety of activities, solving novel challenges in chaotic or changing environments, or the authentic expression of human emotions, they are at far lower risk.
So what are the jobs of the future? While many people tend to think of jobs as transactional, there are plenty of professions that rely instead on building trust or rapport with other people. If your goal is to withdraw some spare cash for the weekend, an ATM is as effective as a teller. But if you want to secure an investor to help you build your new business, you won't be pitching a machine anytime soon.
This is not to say that machines will never sense or express emotions; indeed, work on affective computing is proceeding rapidly. The question is how these capabilities will be perceived by users. If they are understood simply as aids to communication, they are likely to be broadly accepted. But if they are seen as attempts to fake sympathy or allay legitimate concerns, they are likely to foster mistrust and rejection, as anyone can attest who has waited on hold listening to a recorded loop proclaim how important their call is. No one wants a robotic priest to take their confession, or a mechanical undertaker to console them on the loss of a loved one.
Then there are the jobs that involve demonstrations of skill or convey the comforting feeling that someone is paying attention to your needs. Except as a novelty, who wants to watch a self-driving racecar, or have a mechanical bartender ask about your day while it tops up your drink? Lots of professions require these more social skills, and the demand for them is only going to grow as our disposable income increases. There's no reason in principle we can't become a society of well-paid professional artisans, designers, personal shoppers, performers, caregivers, online gamers, concierges, curators, and advisors of every sort. And just as many of today's jobs did not exist even a few decades ago, it is likely a new crop of professions will arise that we can't quite envision today.
So the robots are certainly coming, but not quite in the way most people think. Concerns that they are going to obsolete us, rise up, and take over are misguided at best. Worrying about superintelligent machines distracts us from the very real obstacles we will face as increasingly capable machines become more intricately intertwined with our lives and begin to share our physical and public spaces. The difficult challenge is to ensure these machines respect our often-unstated social conventions. Should a robot be permitted to stand in line for you, put money in your parking meter to extend your time, use a crowded sidewalk to make deliveries, commit you to a purchase, enter into a contract, vote on your behalf, or take up a seat on a bus? Philosophers focus on the more obvious and serious ethical concerns, such as whether your autonomous vehicle should risk your life to save two pedestrians, but the practical questions are much broader. Most AI researchers naturally focus on solving some immediate problem, but in the coming decades a significant impediment to widespread acceptance of their work will likely be how well their systems abide by our social and cultural customs.
Science fiction is rife with stories of robots run amok, but seen from an engineering perspective, these are design problems, not the unpredictable consequences of tinkering with some presumed natural universal order. Good products, including increasingly autonomous machines and applications, don't go haywire unless we design them poorly. If the HAL 9000 kills its crewmates to avoid being deactivated, it is because its designers failed to prioritize its goals properly.
To address these challenges, we need to develop engineering standards for increasingly autonomous systems, perhaps by borrowing concepts from other potentially hazardous fields such as civil engineering. For instance, such systems could incorporate a model of their intended theater of operation (known as a Standard Operating Environment, or SOE) and enter a well-defined "safe mode" when they drift out of bounds. We need to study how people naturally moderate their own goal-seeking behavior to accommodate the interests and rights of others. Systems should pass certification exams before deployment, the behavioral equivalent of automotive crash tests. Finally, we need a programmatic notion of basic ethics to guide actions in unanticipated circumstances. This is not to say machines have to be moral, simply that they have to behave morally in relevant situations. How do we prioritize human life, animal life, private property, self-preservation? When is it acceptable to break the law?
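To make the SOE and safe-mode idea concrete, here is a minimal sketch in Python. Every name, field, and threshold below (OperatingLimits, SensorReading, the geofence values) is a hypothetical illustration of the concept, not part of any existing standard; a certification exam would exercise exactly these out-of-bounds cases before deployment.

from dataclasses import dataclass

@dataclass
class OperatingLimits:
    # Envelope the system was designed and certified for (all values illustrative).
    max_speed_mps: float        # top certified speed, meters per second
    min_gps_confidence: float   # localization quality required to keep operating
    geofence: tuple             # (lat_min, lat_max, lon_min, lon_max)

@dataclass
class SensorReading:
    speed_mps: float
    gps_confidence: float
    lat: float
    lon: float

def within_soe(limits: OperatingLimits, r: SensorReading) -> bool:
    # True only while the system remains inside its certified envelope.
    lat_min, lat_max, lon_min, lon_max = limits.geofence
    return (r.speed_mps <= limits.max_speed_mps
            and r.gps_confidence >= limits.min_gps_confidence
            and lat_min <= r.lat <= lat_max
            and lon_min <= r.lon <= lon_max)

def control_step(limits: OperatingLimits, r: SensorReading) -> str:
    # Enter a well-defined safe mode the moment the SOE is violated.
    return "NORMAL_OPERATION" if within_soe(limits, r) else "SAFE_MODE"

# Example: a delivery robot whose GPS confidence drops below its certified minimum.
limits = OperatingLimits(max_speed_mps=2.0, min_gps_confidence=0.8,
                         geofence=(37.0, 38.0, -123.0, -122.0))
print(control_step(limits, SensorReading(1.5, 0.95, 37.5, -122.5)))  # NORMAL_OPERATION
print(control_step(limits, SensorReading(1.5, 0.40, 37.5, -122.5)))  # SAFE_MODE

The point is not these particular checks, but that the boundary of acceptable operation and the fallback behavior become explicit, testable artifacts rather than emergent properties of the system.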
None of this matters when computers operate in limited, well-defined domains, but if we want AI systems to be broadly trusted and utilized, we should undertake a careful reassessment of the purpose, goals, and potential of the field, at least as it is perceived by the general public. The plain fact is that AI has a public relations problem that may work against its own interests. We need to tamp down the hyperbolic rhetoric favored by the popular press, avoid fanning the flames of public hysteria, and focus on the challenge of building civilized machines for a human world.
Someone has needed to say this in public for a very long time, so thank you, Dr. Kaplan! Now if we can just convince luminaries such as Elon Musk, Stephen Hawking, and Ray Kurzweil that we are very far off from "strong" AI, and that the "weak," domain-specific (and even broader) AI that is improving all the time will not cause a Hollywood-style apocalypse, we should be in good shape.
You might find these blog posts entertaining:
This is an excellent article, but it misses some points that need to be considered in order to design a morally optimal future: the interaction of AI, genetic reprogramming, neural mesh (https://humanizing.tech/what-is-a-neural-lace-628eae0f6ec4#.tyhnjg51l), and nanobot interactivity to create cloud-based personalities, and models of social theories, including the win-win solutions of cooperative game theory. The bottom-up algorithms of deep-learning neural networks cannot be considered intelligent without a top-down reinforcement algorithm such as A3C reinforcement learning (https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f2#.i2v5ohc6i) that uses information theory to hill climb to an optimal goal. The goals, subgoals, possible actions, moral values, personal and social values, etc. must all be very well-defined in order to achieve a good design of plans of action for a future society where socially intelligent machines can cooperate with humans.
The issue is not whether technology has always redefined work; that is agreed upon. The issue here is the pace of change. If the available technologies are vigorously pursued, it is possible that nearly half of the jobs in the US are vulnerable to replacement by robots or computers using deep learning within two decades. Inventory managers, financial advisors, real estate agents, paralegals, language translators, sportswriters, and radiologists are among them. That kind of disruption is unprecedented.
That, and other factors, mean that a guaranteed minimum salary, apart from a job, will be needed to prevent societal cataclysm on an enormous scale. There is some agreement about that, but beyond the concept things are murky. So there are experiments starting around the world to learn how this could work.
I was delighted to read your clear & strong Viewpoint. You deftly point out the troubling stories by journalists who can't resist promoting AI ideas well beyond what the developers claim they have done. You also clearly push the AI community to clean up its act so as to save their discipline from their self-destructive impulse.
While it is true that automation is making many jobs obsolete, this is a trend with precedents going back hundreds of years. Looms produced 50-fold increases in productivity, but the number of people employed in the fabric industry grew. Bank machines replaced bank tellers, but the number of people employed in banking has grown because of increased demand and new services.
The extreme language from AI promoters, which suggests jobs are going to disappear (e.g., Martin Ford's The Robots Are Coming), has already proven to be wrong. Their predictions of net losses of jobs have just NOT happened; in fact, in the past three years there has been a net increase of at least 6M jobs in the US.
I think the exciting story is the unbounded creativity of people to create new jobs (eBay, Airbnb, Etsy, etc.), business innovation that offers new products/services (medical, education, entertainment, leisure, etc.), and continuing expansion of human needs/desires accelerated by social media (Facebook, Amazon, etc.).
Your design prescriptions are also helpful, yet AI designers are likely to go slow in adopting and applying them. BRAVO for your well-crafted sharp-witted Viewpoint... I hope it gets widespread attention.
A big question is how the professional artisans, designers, personal shoppers, performers, caregivers, online gamers, concierges, curators, and advisors of every sort will be well-paid. Many among us have already chosen these lifestyles, but as the December 2, 2016, Ghost Ship fire in Oakland, CA, tragically showed, most live at the poverty level. Years ago my brother asked me if I recalled the predictions we read in the 1950s about automation producing a leisure class. "Yes," I said. He replied, "It has; they're called homeless."
This, of course, is a political issue, but it's important that we keep the realities of AI in the forefront of the discussion with the general public so they will at least have a chance to provide the political will to figure out how to distribute the wealth. So far, the increased productivity has primarily increased the wealth of those at the top. Since the beginning of the industrial revolution, we have seen many changes in how wealth is distributed. Child labor laws allow our children to become better educated, and the 40-hour work week allows more leisure time. Perhaps we need a 30-hour work week and a couple more years of free public education. Ultimately, as R. Oldehoeft pointed out above, we probably need a guaranteed universal income, which would probably be provided by a tax on the results of our higher productivity. I don't know the answers, but articles like this are essential in trying to figure them out.