The dominant public narrative about artificial intelligence is that we are building increasingly intelligent machines that will ultimately surpass human capabilities, steal our jobs, possibly even escape human control and kill us all. This misguided perception, not widely shared by AI researchers, runs a significant risk of delaying or derailing practical applications and influencing public policy in counterproductive ways. A more appropriate framing—better supported by historical progress and current developments—is that AI is simply a natural continuation of longstanding efforts to automate tasks, dating back at least to the start of the industrial revolution. Stripping the field of its gee-whiz apocalyptic gloss makes it easier to evaluate the likely benefits and pitfalls of this important technology, not to mention dampen the self-destructive cycles of hype and disappointment that have plagued the field since its inception.
At the core of this problem is the tendency for respected public figures outside the field, and even a few within the field, to tolerate or sanction overblown press reports that herald each advance as a startling and unexpected leap toward general human-level intelligence (or beyond), fanning fears that "the robots" are coming to take over the world. Headlines often tout noteworthy engineering accomplishments in a context suggesting they constitute unwelcome assaults on human uniqueness and supremacy. If computers can trade stocks and drive cars, will they soon outperform our best salespeople, replace court judges, win Oscars and Grammys, buy up and develop prime parcels of real estate for their own purposes? And what will "they" think of "us"?
Someone has needed to say this in public for a very long time, so thank you Dr Kaplan! Now if we can just convince luminaries such as Elon Musk, Stephen Hawking, and Ray Kurzweil that we are very far off from "strong" AI and that the "weak" domain-specific (and even broader) AI that is improving all the time will not cause a Hollywood-style apocalypse, we should be in good shape.
You might find these blog posts entertaining:
This is an excellent article, but it misses some points that must be considered in order to design a morally optimal future: the interaction of AI, genetic reprogramming, neural mesh (https://humanizing.tech/what-is-a-neural-lace-628eae0f6ec4#.tyhnjg51l), and nanobot interactivity to create cloud-based personalities, together with models of social theories, including the win-win solutions of cooperative game theory. The bottom-up algorithms of deep-learning neural networks cannot be considered intelligent without a top-down reinforcement algorithm such as A3C reinforcement learning (https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f2#.i2v5ohc6i) that uses information theory to hill-climb toward an optimal goal. The goals, subgoals, possible actions, moral values, and personal and social values must all be very well defined in order to design good plans of action for a future society in which socially intelligent machines can cooperate with humans.
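The bottom-up/top-down pairing this comment describes can be illustrated with a minimal actor-critic sketch. This is not the A3C algorithm itself (which is asynchronous and uses deep networks); it is a toy single-threaded version on a two-armed bandit, with illustrative learning rates and reward values chosen for this example, showing how a learned value estimate (the "top-down" critic) steers a stochastic policy (the "bottom-up" actor):

```python
import math
import random

random.seed(0)

# Actor: softmax preferences over two arms; Critic: running value estimate.
# Arm 0 pays about 0.2 on average, arm 1 about 1.0 (toy reward model).
prefs = [0.0, 0.0]       # actor logits, one per arm
value = 0.0              # critic's estimate of expected reward
ALPHA, BETA = 0.1, 0.1   # actor and critic learning rates (illustrative)

def softmax(logits):
    exps = [math.exp(l - max(logits)) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

for _ in range(2000):
    probs = softmax(prefs)
    arm = 0 if random.random() < probs[0] else 1
    reward = random.gauss(0.2 if arm == 0 else 1.0, 0.1)
    advantage = reward - value            # critic's error signal
    value += BETA * advantage             # critic update
    for a in range(2):                    # actor update (policy gradient)
        indicator = 1.0 if a == arm else 0.0
        prefs[a] += ALPHA * advantage * (indicator - probs[a])

print(softmax(prefs))  # policy ends up strongly favoring arm 1
```

The critic supplies the goal-directed pressure the commenter calls "top-down": the actor's random exploration is reinforced only to the extent that outcomes beat the critic's current expectation.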
The issue is not whether technology has always redefined work; that is agreed upon. The issue here is the pace of change. If the available technologies are vigorously pursued, nearly half of the jobs in the US could be vulnerable to replacement by robots or computers using deep learning within two decades. Vulnerable occupations include inventory managers, financial advisors, real estate agents, paralegals, language translators, sportswriters, and radiologists. That kind of disruption is unprecedented.
That, and other factors, mean that a guaranteed minimum income, independent of employment, will be needed to prevent societal cataclysm on an enormous scale. There is some agreement on the concept, but beyond that things are murky. So experiments are starting around the world to learn how this could work.
I was delighted to read your clear & strong Viewpoint. You deftly point out the troubling stories by journalists who can't resist promoting AI ideas well beyond what the developers claim they have done. You also clearly push the AI community to clean up its act so as to save the discipline from its self-destructive impulses.
While it is true that automation is making many jobs obsolete, this is a trend with precedents going back hundreds of years. Looms produced 50-fold increases in productivity, but the number of people employed in the fabric industry grew. Bank machines replaced bank tellers, but the number of people employed in banking has grown because of increased demand and new services.
The extreme language from AI promoters, which suggests jobs are going to disappear (e.g., Martin Ford's "Rise of the Robots"), has already proven to be wrong. Their predicted net losses of jobs have just NOT happened; in fact, the past three years have seen a net increase of at least 6M jobs in the US.
I think the exciting story is the unbounded creativity of people to create new jobs (eBay, Airbnb, Etsy, etc.), business innovation that offers new products/services (medical, education, entertainment, leisure, etc.), and continuing expansion of human needs/desires accelerated by social media (Facebook, Amazon, etc.).
Your design prescriptions are also helpful, yet AI designers are likely to be slow in adopting and applying them. BRAVO for your well-crafted, sharp-witted Viewpoint... I hope it gets widespread attention.
A big question is how the “professional artisans, designers, personal shoppers, performers, caregivers, online gamers, concierges, curators, and advisors of every sort” will be “well-paid.” Many among us have already chosen these lifestyles, but as the December 2, 2016, Ghost Ship fire in Oakland, CA, tragically showed, most live at the poverty level. Years ago my brother asked me if I recalled the predictions we read in the 1950s about automation producing a leisure class. “Yes,” I said. He replied, “It has; they’re called ‘homeless.’”
This, of course, is a political issue, but it’s important that we keep the realities of AI in the forefront of the discussion with the general public so they will at least have a chance to provide the political will to figure out how to distribute the wealth. So far, the increased productivity has primarily increased the wealth of those at the top. Since the beginning of the industrial revolution, we have seen many changes in how wealth is distributed. Child labor laws allow our children to become better educated, and the 40-hour work week allows more leisure time. Perhaps we need a 30-hour work week and a couple more years of free public education. Ultimately, as R. Oldehoeft pointed out above, we probably need a guaranteed universal income, which would probably be provided by a tax on the results of our higher productivity. I don’t know the answers, but articles like this are essential in trying to figure them out.