
Who Decides the Future of Work?

Technologists, sociologists, historians, and philosophers gathered at Rice University last week to consider the outlook for "Humans, Machines, and the Future of Work" at the 10th De Lange Conference.

While the technologists participating in the De Lange Conference on Humans, Machines, and the Future of Work at Rice University in Houston last week were bullish on the societal benefits of robotics and artificial intelligence (AI), speakers with backgrounds in economics and the humanities suggested we need to make the right choices to integrate these technologies into the world of work without devaluing the role of people.

Rice president David W. Leebron underscored the increasing importance of education, and continuing education, in keeping up with technology trends in the labor market, observing that of the 11.6 million jobs added to the U.S. workforce since the economic rebound in 2010, “99% went to the educated.”

In the context of the major question underlying the conference, Leebron wondered, “what will we do in a world in which machines make machines … what will humans do in that world?”

Conference organizer and chair Moshe Vardi, Karen Ostrum George Distinguished Service Professor of Computational Engineering, professor of computer science, and director of the Ken Kennedy Institute for Information Technology at Rice (as well as editor-in-chief of CACM), said the issue to consider is not that manufacturing has fled the U.S., but that jobs have been lost as manufacturing has become increasingly automated. Rather than simply allowing this to happen, he said, “the future of work deserves a broad conversation,” and should be an issue of public policy as well as an academic topic.

Conference chair Moshe Vardi makes the point that U.S. manufacturing efficiency has soared as employment has declined.

In a presentation on “Robot Autonomy: Symbiotic Interaction and Learning,” roboticist Manuela Veloso, Herbert A. Simon University Professor in the School of Computer Science at Carnegie Mellon University, said collaborative and consumer-level service robots increasingly are perceived as coexisting with humans; they complement human activity rather than doing the same things humans do.

Vijay Kumar, Nemirovsky Family Dean of the School of Engineering and Applied Science at the University of Pennsylvania, said some fears of AI are irrational; he is optimistic that more jobs will be created, and that jobs will continue to be available to those who are sufficiently educated to take advantage of them. He advised, “Even if you are a liberal arts major, you should expose yourself to as much technology as possible; that’s the surest way to ensure you don’t become irrelevant.”

IBM Research vice president of Cognitive Computing Guruduth Banavar said intelligence in the AI context is “likely to be a portfolio of technologies; we don’t know all the elements yet.”

Banavar said cognitive computing systems will grow to be as essential to decision-making as search engines have been to basic Internet search. He added, “Combining humans and machines leads to better decision-making and to improving the human condition.”

Frank Levy, Rose Professor Emeritus at the Massachusetts Institute of Technology and research associate at Harvard Medical School, said that during the U.S. presidential election, most economists “missed the ball and were asleep at the switch.”

Levy discussed trends in trade with China, and suggested technology and trade are affecting many of the same occupations, such as assembly-line work (much of which has been off-shored), accounting, and call centers. He suggested, “We should expect that rapid expansion of computers in the workplace, when and if it occurs, should have the same social and political implications as we’ve seen in the rapid expansion of Chinese imports. That is to say, we shouldn’t be talking about this as if it’s all going to take place in a stable political environment, because I don’t think there’s much reason to believe that’s true.”

Levy advised, “It is particularly important that we not be asleep at the switch again, and that we understand just where we are in the process of technical evolution. This is particularly true since in most cases, there’s nothing that corresponds to imposing a 35% tariff to slow down the spread of technology when it does get going.”

Richard Freeman, Herbert Ascherman Professor of Economics at Harvard University and director of the Science and Engineering Workforce Project at the National Bureau of Economic Research, said that past scares that automation would eliminate jobs “didn’t pan out.”

He added that in his view, while “technological change is perhaps the single largest force facing the labor market,” there are “clear and present problems right now” to address. “Why worry about something that could be a problem 10 or 15 years from now?”

The impact of robots on the job market, Freeman said, is “more of an income problem than an employment problem.” He pointed out that the effect of such technology on incomes depends on who owns the robots: if you own a robot that does your job, it is an instrument to improve your work, and you benefit. “But if I own a robot that does your job, tough luck!”

To Lawrence Mishel, president of the Economic Policy Institute, “technological change has had little to do with wage stagnation, especially in the 2000s.” He asked, “if technology is going to eliminate jobs, how come we haven’t seen growing unemployment for 150 years, or even the last 10?”

Mishel said there is “no single cause of our economic problems. Technology is not the single driving force of everything, but it has a role to play.”

The concern, Mishel said, is “capital replacing human labor and possibly eroding the total number of jobs and the skill composition of jobs.” Public policy choices “made on behalf of those with the most wealth and power, that have undercut the wage growth of a typical worker,” include excessive unemployment, weakened labor standards, globalization, eroded institutions and collective bargaining, and the concentration of wage and income growth in the top 1%.

Said Diane Bailey, associate professor in the School of Information at the University of Texas at Austin, “technological progress, or lack thereof, is shaped by people making choices.” Bailey said the issues have been framed so that it is “not a question of if, but when AI and robots will take over our jobs,” but this needs to be considered from a number of relevant standpoints, including financial (how will I buy things if I don’t have a job?), political (what power will I have?), psychological (from what will I derive identity?), sociological (what will life be like without a job?), and philosophical (what is the meaning of work, of leisure?) viewpoints.

In the context of the twin market forces of technological innovation and the free market, Bailey said, we need to consider making choices about technology, particularly as it impacts the outlook for jobs. “Maybe we don’t need or want to replace all jobs with automation.”


Bailey pointed to the advent of self-driving vehicles, which have been touted as the Next Big Thing by technology companies and which could put millions of professional drivers out of work overnight. Considering the often-cited justification for the technology, that it would save the millions of lives lost in traffic accidents each year, Bailey pointed out that the preponderance of those deaths occur outside the U.S., and that government statistics show fatal vehicle-related accidents in the U.S. continue to decline year over year as safety-enhancing technologies are incorporated into vehicles. She wondered, “Are self-driving cars fixing a problem that was fixing itself?”

(Interestingly, a week after this conference, Google reportedly shelved its long-standing plan to develop an autonomous vehicle in favor of pursuing partnerships with existing car makers; the company’s parent, Alphabet, then formed a new business unit, Waymo, for its driverless-car technology.)

A distinction needs to be made, Bailey said, between “what is possible and what is desired.” We need to think about choices, she said, wondering, ultimately, “who decides?”

In considering whether technological change is a thing of the past, Joel Mokyr, Robert H. Strotz Professor of Arts and Sciences and professor of economics and history at Northwestern University, and Sackler Professor in the Eitan Berglas School of Economics at Tel Aviv University, said, “you ain’t seen nothin’ yet.”

Mokyr said that, “in a competitive world, no single polity can suppress innovation.” In addition, he said, “The historical evidence for technology-induced long-term unemployment is non-existent,” although short-term disruptions from the advent of new technologies were inevitable.

It matters how technology is integrated into work, Mokyr said; process innovation can make workers redundant, while “product innovation invariably creates new jobs never imagined before.”

He pointed out that “without technological progress, economies will stagnate,” acknowledging that “technological progress is never painless,” and one of its results can be that “work may not be what it used to be.”

Debra Satz, Marta Sutton Weeks Professor of Ethics in Society and professor of philosophy and political science at Stanford University, said we should consider some ethical values regarding the possibility that new technologies can inflict various types of harm on society.

In the context of self-driving vehicles, Satz pointed out that these harms can include harms to workers (in that professional truck, taxi, and other drivers can lose their jobs); to consumers (when the machine substitute is not what they most prefer, such as seniors whose Meals on Wheels may be delivered by drone, depriving them of a potential human interaction with a driver); and to society, in the long term.

Satz said technology has implications for the distribution of skills in society, as the Internet and the Gig Economy have given rise to “casual labor, with less bargaining power than labor has had in the past.”

She said consideration of three basic values (well-being, autonomy, and equality) should be guiding our response to technology, and the outcomes “should reflect our considered judgments.”

Judy Wajcman, Anthony Giddens Professor of Sociology at the London School of Economics and Political Science, pointed out that the consideration of time is cultural, not technological. We often hear about the accelerating speed of life in the Digital Age, and that “robotics and automation embody our desire to save time, to delegate labor, and thus free us for life’s important things,” but Wajcman noted that speed as the ultimate measure of progress is simply “an engineering mindset” rather than a universal one.

Wajcman pointed out that people typically are happier dealing with other people than with technology, and that some of the tasks being taken over by social robots were opportunities for social interactions. “When dead labor replaces living labor, the opportunity for social interactions is lost,” as when social robots assist seniors in the place of human care workers.

In addition, she said, “Work is a source of meaning to people; it’s a source of identity,” and the people designing and making decisions about these technologies “may not be in the best position to make decisions for us.”

Said John Seely Brown, co-chair of Deloitte’s Center for the Edge and adviser to the provost at the University of Southern California, we are experiencing a “big shift,” as “yesterday’s best practices are rapidly becoming outmoded.

“This era is no longer about deepening individual expertise within silos. Instead, it is about participating in and shaping knowledge flows.”

Brown predicted a “rapid set of punctuated jumps” in technology over the next 20 to 40 years. “Given the relentless pace of change and disruption, incremental learning won’t work. We must be willing to regrind our conceptual lenses. We need scalable learning.” 

Said Brown, we need to cultivate the imagination through a blended ontology considering knowledge, work, and play, in the context of AI.

Daniel Castro, vice president of the Information Technology and Innovation Foundation, said, “We should be focused on how to better use AI in society, especially in government.” Castro suggested we need a “National Productivity Policy” to consider how best to balance technology with employment.

David Nordfors, co-founder and co-chair of i4J Innovation for Jobs, stressed the human element above all: “We shouldn’t humanize robots, because that dehumanizes people.” Instead, he said, “We’ve got to raise the value of interpersonal skills, get into the mindset of helping other people.”

“Economy happens,” Nordfors said, “when people need each other. We want to have an economy in which people need each other more, value each other more.”

The topic will not languish; conference chair Vardi said he has received a grant for a one-day summit on the future of work, to be held in 2017 in Washington, D.C.

Lawrence M. Fisher is Senior Editor/News for ACM Magazines.
