Artificial Intelligence and Machine Learning

Irrational Exuberance and the ‘FATE’ of Technology


I am sure that many of us still remember the Netscape IPO in 1995 and the fivefold growth in its share value in just four months. Expectations for technology and its impact were in the stratosphere. The then-chairman of the Federal Reserve Board, Alan Greenspan, gave a speech at the American Enterprise Institute in which he famously questioned the "irrational exuberance" in the market and in technology[1]. I believe we are seeing a similar exuberance with technology today.

Are revolutionary technologies for cancer screening — ones that rely on a finger prick drawing just one-thousandth the normal amount of blood — really feasible? Of course, Theranos had everyone believing that such a revolutionary advance was possible[2], not because of new techniques in analytical chemistry, but because the company had developed novel software and new automation technologies! Can we really hope to replace 8 million cars in Los Angeles by boring tunnels[3] — at a fraction of the normal cost, thanks to "automation" — for high-speed pods that will travel at 150 mph for $1 per ride? This is what The Boring Company is selling to the City of Los Angeles. Do recent advances in data science and machine learning really mean that artificial general intelligence is around the corner? This is the pitch of so many startups today.

Indeed, there have been remarkable advances in statistical machine learning, which have had a profound impact in fields like computer vision and speech recognition when the underlying neural networks can be trained on large-enough, representative data sets. What "large enough" means, we don't yet know. Nor do we know when a data set is representative. But there are many interesting cases where deep learning "works." Unfortunately, these success stories are oversold. In my own field, robotics, autonomy remains a challenging problem, especially in tasks involving manipulation and perception-action loops. Despite all the claims you hear, our best robots lack the dexterity of a three-year-old child.

Nowhere is this irrational exuberance more evident than in the field of self-driving cars. Not many people know that the first demonstrations of an autonomous car took place in the late 1980s at the Bundeswehr University Munich and at Carnegie Mellon University. Autonomous vehicles no doubt can have a tremendous social, economic, and environmental impact. This fact, and the exciting technical challenges in realizing such a bold vision, has attracted some of the top talent in science and engineering over the last 30 years. However, so many of us don't remember this history, and many choose to ignore it, since problems that have remained unsolved for three decades are unlikely to attract large amounts of private investment.

Even according to recent predictions[4], fully autonomous cars should have been, or will be, available any day now. Only a few years ago, fully autonomous Audis and Teslas were promised by 2018. Uber has even promised us flying cars powered by clean energy by 2023, even though the basic physics and chemistry underlying battery technology tell us otherwise[5].

It is worrisome when engineers make these claims, and even more so when entrepreneurs use such claims to raise extraordinary amounts of funding. However, the biggest concern should be about embedding software for autonomy in safety-critical systems. There is a difference between running tests and logging data, on the one hand, and verifying that software is guaranteed not to exhibit unwanted and unsafe behaviors, on the other. Can we claim vehicles are safe just because the underlying software has been tested with over a billion miles of data? U.S. National Safety Council statistics suggest that a billion miles of human driving, on average, results in 12.5 fatalities[6], and a billion-mile data set cannot possibly be viewed as either large enough or representative enough to train software to prevent human fatalities.
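The arithmetic behind that statistic is simple enough to check for oneself. A minimal sketch: the 12.5-fatalities-per-billion-miles figure cited above is equivalent to 1.25 fatalities per 100 million vehicle-miles, and scaling it shows just how few fatal events a billion-mile data set actually contains.

```python
# Back-of-the-envelope check of the fatality statistic cited above.
# 12.5 fatalities per billion miles corresponds to 1.25 fatalities
# per 100 million vehicle-miles of human driving.
fatalities_per_100m_miles = 1.25

# One billion miles is ten blocks of 100 million miles.
expected_fatalities = fatalities_per_100m_miles * 10
print(expected_fatalities)  # 12.5

# A billion-mile data set therefore contains, on average, only about a
# dozen fatal events -- far too few examples of the rarest, most
# safety-critical situations for training or statistical validation.
```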

The Uber-Waymo trial led to the release of a treasure trove of documents that were truly shocking in this regard. They reveal a culture[7] that appears to prioritize releasing the latest software over testing and verification, and one that encourages shortcuts. This may be acceptable for a buggy phone operating system that can be patched later, but it should be totally unacceptable for software that drives a car.

The reason for this irrational exuberance may have its roots in the exponential growth in computing and storage technologies predicted by Gordon Moore five decades ago. Just over a decade ago, smartphones, cloud computing, and ride-sharing seemed like science fiction, and technologies like 3D printing and DNA sequencing were prohibitively expensive. That exponential growth has fueled a culture of extrapolation. Advances in programs that can play board games like chess, and recent results with AlphaGo and AlphaZero, have been mind-boggling. But unfortunately, from this comes the extrapolation that it is only a question of time before we conquer general intelligence.

There is at least one argument that we are not making significant progress in understanding intelligence once we take into account the exponential growth in computing due to Moore's law. While computers have achieved superhuman performance in chess, the Elo rating of chess programs has increased only linearly over the last three decades[8]. If we were truly able to exploit the benefits of Moore's law, our chess-playing programs should be a billion times better than the programs of 30 years ago, instead of merely 30 times better. This suggests that the exponential growth of technology may not even apply to algorithmic advances in artificial intelligence[9], let alone to advances in energy storage, biotechnology, automation, and manufacturing.
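To make the extrapolation gap concrete, here is a rough sketch of exponential versus linear growth over three decades. The two-year doubling period and the one-unit-per-year linear gain are illustrative assumptions of mine, not figures from the argument above.

```python
# Rough illustration of the gap between exponential compute growth and
# linear capability growth.
# Assumption (illustrative): compute doubles every 2 years (Moore's law).
years = 30
doubling_period_years = 2
compute_factor = 2 ** (years / doubling_period_years)
print(f"Compute: roughly {compute_factor:,.0f}x over {years} years")

# Assumption (illustrative): capability improves by a fixed amount per
# year, i.e., linearly -- a factor of only ~30x over the same 30 years.
linear_factor = years
print(f"Linear trend: about {linear_factor}x over the same period")
```

Even under this conservative doubling period, compute grows by a factor in the tens of thousands while the linear trend yields a factor of 30 — the mismatch the text describes.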

Unfortunately, this irrational exuberance in technology has led to an even bigger problem: intellectual dishonesty, something every engineer and computer scientist must guard against. As professionals, we have a responsibility to call out this intellectual dishonesty.

Questions of verification, safety, and trust must be central when we embody intelligence in physical systems. Indeed, similar questions of fairness, accountability, transparency, and ethics (FATE) should be addressed for data and information in society. And it is great to see such efforts taking shape both in industry[10] and in academia[11].

As teachers, we have an even bigger responsibility, as technology is no longer taught only to computer scientists and engineers. Indeed, technology is now a new liberal art. It is critical to address the true limitations of what technology can bring about in the near future and the real dangers of extrapolation. And it is important that every university student who designs or creates anything, whether physical objects or software artifacts, is sensitized to fundamental concerns of accountability, transparency, and ethical responsibility. It is critical that we address the FATE of technology, not just in the context of data science but across all activities of design, synthesis, and reduction of technologies to practice.

Guest blogger Vijay Kumar is the Nemirovsky Family Dean of Penn Engineering with appointments in the Departments of Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering at the University of Pennsylvania.
