The Stanford One Hundred Year Study on Artificial Intelligence, a project that launched in December 2014, is designed to be a century-long periodic assessment of the field of artificial intelligence (AI) and its influences on people, their communities, and society. Colloquially referred to as "AI100," the project issued its first report in September 2016. A standing committee of AI scientists and scholars in the humanities and social sciences working with the Stanford faculty director of AI100 oversees the project and the design of its activities. A little more than two years after the first report appeared, we reflect on the decisions made in shaping it, the process that produced it, its major conclusions, and reactions subsequent to its release.
The inaugural AI100 report [6], called Artificial Intelligence and Life in 2030, examined eight domains of human activity in which AI technologies are already beginning to affect urban life. In scope, it encompasses both domains with emerging products enabled by AI methods and domains raising concerns about the technological impact of potential AI-enabled systems. The study panel members who wrote the report and the AI100 standing committee, the body that directs the AI100 project, intend for it to be a catalyst, spurring conversations on how we as a society might shape and share the potentially powerful technologies AI could deliver. In addition to influencing researchers and guiding decisions in industry and governments, the report aims to provide the general public with a scientifically and technologically accurate portrayal of the current state of AI, along with its potential. It aspires to replace conceptions rooted in science fiction novels and movies with a realistic foundation for these deliberations.
The report focuses on AI research and "specialized AI technologies," that is, methods developed for and tailored to particular applications that are increasingly prevalent in daily activities, rather than deliberating about generalized intelligence, which is often mentioned in the media and is much further from realization. It anticipates that AI-enabled systems have great potential to improve people's daily lives and to benefit economies worldwide, but also to create profound societal and ethical challenges. It thus argues that deliberations about AI technologies and the design, ethical, and policy challenges they raise, involving the broadest possible spectrum of expertise, should begin now to ensure that the benefits of AI are broadly shared and that systems are safe, reliable, and trustworthy.
In the rest of this article, we provide background on AI100 and the framing of its first report, then discuss some of its findings. Along the way, we address several questions posed to us during the years since the report first appeared and catalog some of its uses.
The impetus for the AI100 study came from the many positive responses to a 2008–2009 Association for the Advancement of Artificial Intelligence Presidential Panel on Long-Term AI Futures that was commissioned by then-AAAI President Eric Horvitz (Microsoft Research) and co-chaired by him and Bart Selman (Cornell University). Intending a largely field-internal reflection on the state of AI, Horvitz charged the panel with exploring "the potential long-term societal influences of AI advances." In particular, he asked the panel to consider AI successes and the societal opportunities and challenges they raised; the socioeconomic, ethical, and legal issues raised by AI technologies; proactive steps those in the field could take to enhance long-term societal outcomes; and the kinds of policies and guidelines needed for autonomous systems. The findings of the panel (http://www.aaai.org/Organization/presidential-panel.php) and reactions to it led Horvitz to design AI100, a long-horizon study of how AI advances influence people and society. It is intended to pursue periodic studies of developments, trends, futures, and potential disruptions associated with developments in machine intelligence, and to formulate assessments, recommendations, and guidance on proactive efforts. The new project was to be balanced in its inward-looking (within the AI field) and outward-looking (other disciplines and society at large) faces. The long-term nature of the project is its most novel aspect, as it is intended to periodically (typically every five years) assemble a study panel to reassess the state of AI and its impact on society. A "framing memo" (https://ai100.stanford.edu/reflections-and-framing) laid out Horvitz's aspirations for the project, along with the reasons for situating it at Stanford University.
Assemble a study panel. When the AI100 project was launched in December 2014, the standing committee anticipated that several years would be available for shaping the project, engaging people with expertise across the social sciences and humanities as well as AI, identifying a focal topic, and recruiting a study panel. Within a few months, however, it was clear that AI was entering daily life and garnering intense public interest at a rate that did not allow such a leisurely pace. The standing committee thus defined a compressed schedule and recruited Peter Stone of The University of Texas at Austin (co-author of this article) as the chair of the report's study panel. Together they assembled a 17-member study panel comprising experts in AI from academia, corporate laboratories, and industry, and AI-savvy scholars in law, political science, policy, and economics. Although their goal was a panel diverse in specialty and expertise, geographic region, gender, and career stage, the shortened time frame led it to be less diverse in geography and field than ideal, a point noted by several report readers. In recognition of these shortcomings, and as it considers the design of future studies, the standing committee, which has increased its membership to include more representation from the social sciences and humanities, has developed a more inclusive planning and reporting process.
Design the charge. The standing committee considered various possible themes and scopes for the inaugural AI100 report, ranging from a general survey of the status of research and applications in subfields to an in-depth examination of a particular technology (such as machine learning or natural language processing) or an application area (such as healthcare or transportation). Its final choice of topical focus reflects a desire to ground the report's assessments in a context that would bring to the fore societal settings and a broad array of technological developments. The focus on AI and Life in 2030 arose from recognition of the central role cities have played throughout most of human history, as well as their role as a venue in which many AI technologies are likely to come together in the lives of individuals and communities. The further focus on North American cities followed from recognition that, within the short time frame allowed for the panel's work, it was not possible to adequately consider the great variability of urban settings and cultures around the world. Although the standing committee expects the projections, assessments, and proactive guidance stemming from the study to have broader global relevance, it intends for future studies to have greater international involvement and scope.
The charge the standing committee communicated to the study panel asked it to identify possible advances in AI over 15 years and their potential influences on daily life in urban settings (with a focus on North American cities); to specify the scientific, engineering, policy, and legal efforts needed to realize these developments; to consider actions to shape outcomes for societal good; and to deliberate on the design, ethical, and policy challenges the developments raise. It further stipulated that the study panel ground its examination of AI technologies in a context that highlights the interdependencies and interactions among AI subfields, as well as these technologies' potential influences on a variety of activities.
Create the first report. In the absence of precedent and with a short time horizon for its work, the study panel engaged in a sequence of virtually convened brainstorming sessions in which it successively refined the topics to consider in the report, with the aim of identifying domains, or economic sectors, in which AI seemed most likely to have impact within urban settings between publication of the report and 2030. Then, during a full-day intensive writing session at an in-person meeting at the 2016 AAAI conference in February, they drafted several report sections. They then iteratively revised these drafts with the goal of producing a report that would be accessible to the general public and convey the study panel's key messages. At a final in-person meeting in July at the 2016 International Joint Conference on Artificial Intelligence, the study panel identified the main messages of the report, which appear as callouts in its margins.
The report aims to address multiple audiences, ranging from the general public to AI researchers and practitioners, and thus to be both accessible and substantive. As a result, it has a three-part hierarchical structure: an executive summary, a more expansive five-page overview summarizing the report's core, and the core itself, which provides further details. The core examines eight "domains" of typical urban settings in which AI is likely to have impact over the coming years: transportation, home and service robots, healthcare, education, public safety and security, low-resource communities, employment and workplace, and entertainment. The authors deliberately did not give much weight to positions they considered excessively optimistic or pessimistic, despite the prevalence of such positions in the popular press, as they intended the report to provide a sober assessment by the people at the heart of technological developments in AI.
For each domain the study panel investigated, the report looks back to 2000 to summarize the AI-enabled changes that have already occurred and then projects forward through 2030. It identifies the availability of large amounts of data, including speech and geospatial data, along with cloud computing resources and progress in hardware technology for sensing and perception, as contributing to recent advances in AI research and to the success of deployed AI-enabled systems. Advances in machine learning, fueled in part by these resources, as well as by the development of "deep" artificial neural nets, have played a key role in enabling these achievements. The goal of the study panel's forward-looking assessment, which we summarize briefly, was to call attention to the opportunities the study panel anticipated for AI technologies to improve societal conditions, to lower the barriers to realizing this potential, and to address the realistic risks it likewise anticipated in applying AI technologies in the domains it studied.
The projected time horizons for AI-enabled systems to enter daily life vary across these domains, as do the opportunities for transforming people's lives and the challenges posed in each domain. Moreover, the challenges identified ranged across the full spectrum of computer science, from hardware to human-computer interaction. For instance, improvements in safe, reliable hardware were determined to be essential for progress in transportation and home-service robots. Autonomous transportation, which the report projects may "soon be commonplace," is among today's most visible AI applications; in addition to changing individuals' driving needs, it is expected to affect transportation infrastructure, urban organization, and jobs. Experience with home-service robots illustrates the key role of hardware. Although robotic vacuum cleaners have been in home use for years, technical constraints and the high cost of reliable mechanical devices have limited commercial opportunities to narrowly defined applications, and the report projects they will continue to do so for the foreseeable future. For healthcare, the highlighted challenges include developing mechanisms for sharing data, removing policy, regulatory, and commercial obstacles, and enhancing the ability of systems to work naturally with care providers, patients, and patients' families. The report also identifies capabilities for fluent interactions and effective partnering with people as key to achieving the promise of AI technologies for enhancing education. Major challenges toward realizing the potential of AI to address the needs of low-resource communities include the design of methods to cooperate with agencies and organizations working in those communities and the development of trust in AI technologies by these groups and by the communities they serve. Such challenges also arise in public safety and security.
In the domain of employment and the workplace, while noting that AI-capable systems will replace people in some kinds of jobs, the report also predicts AI capabilities are more likely to change jobs by replacing tasks than by eliminating jobs. It highlights the role of social and political decisions in approaching a range of societal challenges that will arise as work evolves in response to AI technologies and argues these challenges should be addressed immediately.
In assessing "What's next?" in AI research, the report says: " ... as it becomes a central force in society, the field of AI is shifting toward building intelligent systems that can collaborate effectively with people, and that are more generally human-aware." It also identifies several "hot areas" of AI research and applications. For example, in the area of machine learning, it describes efforts in scaling to work with very large datasets, deep learning, and reinforcement learning. Robotics, computer vision, and natural language processing (including spoken language systems), already incorporated into a variety of applications, have made great strides recently and are poised for further advances. Research in two relatively newer areas, collaborative systems and crowdsourcing/human computation, is developing methods, respectively, for AI systems to work effectively with people and for people to assist AI systems with computations that are more difficult for machines than for people. Other research areas the report highlights are algorithmic game theory and computational social choice, the Internet of Things, and neuromorphic computing.
The report concludes with a section on policy and legal issues, summarizing the study panel's views on the state of regulatory statutes relevant to AI technologies and presenting its recommendations to policymakers. It notes that "The measure of success for AI applications is the value they create for human lives. In that light, they should be designed to enable people to understand AI systems, participate in their use, and build their trust." The report encourages "vigorous and informed debate" about AI capabilities and limitations, recommending that much broader and deeper understanding of AI is needed in government at all levels to enable expert assessments of AI technologies, programmatic objectives, and overall societal values. It argues that industry needs to formulate and deploy best practices, and that AI systems should be open or amenable to reverse engineering so they can be evaluated adequately, with respect to such crucial issues as fairness, security, privacy, and social impacts, by disinterested academics, government experts, and journalists. It also notes the importance of bringing expertise from a variety of disciplinary areas to bear in assessing societal impact, and thus the need for increased public and private funding for interdisciplinary studies of the societal impacts of AI.
Here, we list several of the report's most important takeaways and findings. We hope this list conveys a sense of the scope of the report and encourages reading it, at least at one of the levels of detail provided:
General observations. Like other technologies, AI has the potential to be used for good or for nefarious purposes. A vigorous and informed debate about how to best steer AI in ways that enrich our lives and our society is an urgent and vital need. As a society, we are today underinvesting resources in research on the societal implications of AI technologies. Private and public dollars should be directed toward interdisciplinary teams capable of analyzing AI from multiple angles. Misunderstandings about what AI is and is not could fuel opposition to technologies with the potential to benefit everyone. Poorly informed regulation that stifles innovation would be a tragic mistake.
Potential near-term applications and design constraints. While many AI-based systems draw on common research and technologies, all such existing systems are specialized to accomplish particular tasks. Each application requires years of focused research and unique construction. AI-based applications could improve health outcomes and quality of life for millions of people in the coming years but only if they win the trust of doctors, nurses, and patients. Though quality education will always require active engagement by human teachers, AI promises to enhance education at all levels, especially through personalization at scale. With targeted incentives and funding priorities, AI technologies could help address the needs of low-resource communities. Budding efforts (such as those reported in recent workshops on AI and social good [2,3]) are promising.
Societal concerns. As highlighted in the movie Minority Report and subsequently reported by ProPublica [1], predictive-policing tools raise the specter of innocent people being unjustifiably targeted. But well-designed and appropriately deployed AI prediction tools have the potential to remove, or at least reduce, human bias. AI will likely replace tasks rather than jobs in the near term and will also create new kinds of jobs. But imagining what new jobs will emerge is more difficult in advance than is identifying the existing jobs that will likely be lost. As AI applications engage in behavior that, if done by a human, would constitute a crime, courts and other legal actors will have to puzzle through whom to hold accountable and on what theory.
Even more than when the AI100 project was first planned in 2014, we are at a crucial juncture in determining how to deploy AI-based technologies in ways that support societal needs and promote rather than hinder democratic values of freedom, equality, and transparency. The philosopher J.H. Moor wrote [5] that in ethical arguments, people most often agree on values but not on the facts of the matter. This first AI100 report aims to bring AI expertise to the forefront so the challenges, as well as the promise, of technologies that incorporate AI methods can be understood and assessed properly.
Although the report's impact over time remains to be seen, we hope it will establish a strong precedent for future AI100 study panels. We are gratified that, since the report first appeared, it seems to have succeeded in this aim, along with the larger AI100 goals, in several ways. For instance, shortly after its release on September 1, 2016, it was covered widely in the press, including in the New York Times, Christian Science Monitor, NPR, BBC, and CBC radio. It helped shape a series of workshops sponsored by the White House Office of Science and Technology Policy and the reports that emanated from them [4]. Requests for permission to translate the report into several languages demonstrate worldwide interest. Various members of the AI100 standing committee and the inaugural study panel have been asked to organize workshops for governmental and scientific organizations and to give talks in many settings. The study panel chair (and co-author of this article) was invited to speak by the Prime Minister of Finland, Juha Sipilä, on the occasion of his announcement of a new "AI strategy" for Finland in February 2017; http://valtioneuvosto.fi/live?v=/vnk/events-seminars/professori-peter-stonen-puhe-tekoalyseminaarissa. The report is also being used in AI classes in various ways.
AI technologies are becoming ever more prevalent, and opinions on their impact on individuals and societies vary widely, from those the (inaugural) study panel considered overly optimistic to others it considered overly pessimistic. The need for the general public, government, and industry to have reliable information is of increasing importance, and the AI100 project aims to fill that need. This first report is an important initial step, launching a long-term project. It crucially illuminates the enormous technical differences between AI technologies that are developed and targeted toward specific application domains and a "general-purpose AI" capability that could be incorporated into any device to make it more intelligent. The former is the focus of much research and business development, while the latter remains science fiction. It is quite tempting to think that if AI technologies can help drive our cars, they ought also to be able to fold our laundry, but these two activities make very different demands on reasoning and require very different algorithms and capabilities. People do both, along with a full range of equally distinct activities requiring intelligence of various sorts. Current AI applications, however, are based on specialized, domain-specific methods, and the normal human inclination to generalize from one intelligent behavior to seemingly related ones leads some people astray when assessing machine capabilities. This first AI100 report aims to provide insights that enable readers to better assess the implications of any AI success for other open challenges, as well as to alert them to the societal and ethical issues that must be addressed as AI pervades ever more areas of daily life.
Since publishing the inaugural study panel's report, the AI100 project has begun a complementary effort, the Artificial Intelligence Index (AI Index), an ongoing tracking activity led by a steering committee of Yoav Shoham, Ray Perrault, Erik Brynjolfsson, Jack Clark, John Etchemendy, Terah Lyons, and James Manyika. It complements the major studies originally envisioned for AI100 by providing annual reports and, in the future, an ongoing blog to augment the periodic AI100 studies to be produced by future study panels. The AI Index follows various facets of AI, including those related to volume of activity, technological progress, and societal impact, as determined by a broad advisory panel with advice from the AI100 standing committee. As with the study panel reports, the AI Index aims to provide information on the status of AI that is useful both to those outside the field and to those actively engaged in AI research, applications, and technology development, as well as to policymakers, business executives, and the general public. This nascent effort issued its first report in December 2017.
The AI100 project (https://ai100.stanford.edu/) welcomes advice as it plans its next report, as does the AI Index (http://aiindex.org/). We look forward to following, and continuing to help shape, the AI100 trajectory over the coming years.
1. Angwin, J. et al. Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica (May 23, 2016); https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
2. Association for the Advancement of Artificial Intelligence. AAAI Workshop on AI and OR [Operations Research] for Social Good (San Francisco, CA, Feb. 2017); https://www.aaai.org/Library/Workshops/ws17-01.php
3. Computing Community Consortium Workshop on AI and Social Good (Washington, D.C., June 7, 2016); https://cra.org/ccc/events/ai-social-good/
4. Felten, E. and Lyons, T. The Administration's Report on the Future of Artificial Intelligence. The White House, Oct. 12, 2016; https://obamawhitehouse.archives.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence
6. Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., Kalyanakrishnan, S., Kamar, E., Kraus, S., Leyton-Brown, K., Parkes, D., Press, W., Saxenian, A., Shah, J., Tambe, M., and Teller, A. Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence (AI100): Report of the 2015–2016 Study Panel. Stanford University, Stanford, CA, Sept. 2016; http://ai100.stanford.edu/2016-report
Copyright held by authors. Publication rights licensed to ACM.