
Artificial Intelligence Then and Now

From engines of logic to engines of bullshit?


In the first four parts of this Communications Historical Reflections column series, I have followed the artificial intelligence (AI) brand from its debut in the 1950s through to the reorientation of the field around probabilistic approaches and big data during the AI winter that ran through the 1990s and early 2000s.

Aside from the brief flourishing of an expert system industry in the 1980s, the main theme of that long history was disappointment. AI-branded technologies that impressed when applied to toy lab problems failed to scale up for practical application. Test problems such as chess were eventually mastered, but only with techniques that had little relevance to other tasks or plausible connection to human cognition. Cyc, the most ambitious project of the 1980s, served mostly to highlight the limitations of symbolic AI. Even IBM lost billions when it tried to turn Watson’s 2011 triumph on Jeopardy! into the foundation of a healthcare services business.

Neural Networks Break Through

During the 2010s, in sharp contrast, the machine learning community accumulated a collection of flexible tools that exceeded expectations in one application after another. Suddenly the AI surprises were coming on the upside: Who knew that neural networks could write poetry or turn prompts into photographs? DeepMind, a British company acquired by Google in 2014, generated a series of headlines. It created the first computer system able to play the board game Go at the highest levels, a much greater computational challenge than chess. DeepMind then applied itself to protein folding, suggesting that deep learning might be poised to transform scientific research. In 2024 that work earned DeepMind’s Demis Hassabis and John Jumper a share of the Nobel Prize in Chemistry. Another DeepMind system figured out winning strategies against a range of Atari VCS games from the early days of home videogaming. Because games provided an automatically measured score, they were well suited to the development of reinforcement learning algorithms that did not require humans to manually label thousands of training examples.

All this fed a new frenzy around machine learning, on a scale that quickly outstripped the 1980s AI boom. Professors and graduate students founded startups, companies set up machine learning research groups, and before long big tech firms like Google, Facebook, and Microsoft were pumping billions of dollars into acquisitions and research teams. They found opportunities to apply neural networks across their product lines. Whereas IBM and Bell Labs once supported university-like research groups doing basic research, most of today’s leading tech companies fund only work with direct connections to possible products. For a closely observed report of this era, I heartily recommend Cade Metz’s book Genius Makers.7

Just as importantly, deep-learning techniques were packaged into services, frameworks, and code libraries that could be plugged into applications by programmers who had only a vague idea of what the algorithms inside actually did. Boot camps drilled programmers on the basics and sent them out eager to train their own models on whatever stash of training data they had access to. If useful, the models could easily be deployed at scale on cloud-based services.

This was a huge shift. When I was a computer science student back in the early 1990s, my AI classes did not feel so different from those on databases, computer architecture, or operating systems. Because all my courses centered on lab assignments scaled down to toy size, I did not appreciate at the time that the techniques I applied to query databases or schedule processes were simplified versions of methods that worked on real problems, whereas the search-based AI techniques we were taught would collapse if used in earnest.

The accompanying table gives a sense of the key continuities and discontinuities between modern AI and the kinds of AI that dominated from the 1970s to the 1990s.

Table. Comparing AI eras.

20th-century AI | 21st-century AI
Hugely hyped | Spectacularly hyped
Needs fastest computers | Needs fastest computers
Applied to an arbitrary collection of technologies | Applied to an arbitrary collection of technologies
Loose connection of tech to cognition | Loose connection of tech to cognition
Mostly academic | Mostly commercial
Government funded | Investor funded
Symbolic | Connectionist
Heuristic search | Statistical prediction
Humans formulate rules | System trains itself from mass of data
Knowledge coded explicitly | Knowledge dispersed over connection weights
Rarely applied outside lab | Widely applied on big tech platforms
Criticized as empty hype | Criticized as all-powerful, biased, and controlled by big-tech oligarchs

Recognition to Generation

The traditional application of neural nets, going back to the Perceptron, had been pattern recognition. Networks fired one or more outputs in response to a particular combination of input signals. But they could do more. A paper published in 2014 described the Generative Adversarial Network, a technique that trains two networks against each other.a The first net practices the generation of data objects, for example images, that mimic the characteristics of training data. The second practices the identification of real and fake, providing feedback to train the first network without constant human supervision. Within a few years neural nets were able to generate startlingly realistic photo portraits, videos, and musical works.
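To make that adversarial loop concrete, here is a minimal sketch in PyTorch that pits a tiny generator against a tiny discriminator on made-up one-dimensional data. The network sizes, learning rates, and data distribution are illustrative assumptions rather than details from the GAN paper; real image generators follow the same pattern at vastly greater scale.

```python
# Toy GAN sketch: illustrative assumptions throughout, not the original paper's setup.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise vectors to fake one-dimensional samples.
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores a sample as real (close to 1) or fake (close to 0).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, latent_dim))   # generator's current imitations

    # Train the discriminator to separate real samples from fakes.
    opt_D.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_D.step()

    # Train the generator to fool the discriminator, with no human labeling needed.
    opt_G.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()
```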

Text generation systems, such as OpenAI’s ChatGPT, use a different method. It is spelled out in the name: Generative Pre-trained Transformer. The transformer approach, proposed in “Attention Is All You Need,” a 2017 paper by Google researchers, provides a simple way to train networks more effectively in a highly parallel cloud environment.10 Pre-trained means that once a model has been trained on a huge text corpus it can be adapted to many different purposes. While transformers are used well beyond text, the models behind systems like ChatGPT are known specifically as large language models because they are trained on vast amounts of text.
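For readers curious about the mechanics, the operation named in the paper's title is scaled dot-product attention, which the paper expresses as a single formula. Here Q, K, and V are matrices of query, key, and value vectors computed from the input tokens, and d_k is the dimension of the key vectors:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$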

Like the hidden Markov models that launched the big data approach back in the 1980s, these models rely on huge sets of training data to make plausible choices when selecting the next word in a sequence. Their output builds into sentences and paragraphs one word at a time. That is why they sound impressively human, but it also militates against them producing text that contains new ideas and insights. A widely read 2021 paper termed them “stochastic parrots.”b Their tendency to reproduce bias and errors found in training data has been widely documented.
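To make the “one word at a time” point concrete, here is a deliberately tiny sketch of the same generation loop: a bigram model built from a toy corpus rather than a transformer trained on the Web. Everything in it, from the corpus to the function names, is an illustrative assumption, but the shape of the loop, sampling the next word from a probability distribution conditioned on the preceding text, is what large language models also do, at vastly greater scale and with far richer context.

```python
# Toy next-word generator: a bigram model over a made-up corpus (illustration only).
import random
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word in the toy training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Build text one word at a time, sampling each next word in proportion
    to how often it followed the current word in the training data."""
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:          # no known continuation: stop generating
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# The output is fluent-looking, but nothing in the model tracks whether it is true.
```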

In a similar way, while video games render images based on physical models, generative AI systems produce them by pastiching other images. Lacking any underlying physics model they tend to have particular problems with hands, often getting the number of fingers wrong.2

These systems have proven uniquely resonant with the public, to the extent that if somebody at my university mentions “artificial intelligence” they are almost certainly talking about generative AI. Other apparently promising applications of neural networks such as protein folding or screening molecules for drug discovery have slipped, at least temporarily, out of the spotlight.

In this series I have repeatedly quoted AI-insider Michael Wooldridge. As he put it, “ELIZA is the direct ancestor of a phenomenon that makes AI researchers groan whenever it is mentioned: the Internet chatbot … Most … use nothing more than keyword-based canned scripts the same way that ELIZA did, and as a consequence the conversations they produce are every bit as superficial and uninteresting. Chatbots of this kind are not AI.”13 Yet today, Internet chatbots define modern AI in the minds of users and investors.

Do Not Mention Turing

This fascination with conversational machines has a long history. Before the computer, an apparent gulf separated humans from machines and animals. Only people could read, write, talk, listen, plan, and act. Back in 1950, before the term artificial intelligence was introduced, Alan Turing had proposed that an “imitation game” would be the best way to operationalize the concept of machine intelligence.9 A judge would converse over two teleprinter links, one with a human at the other end and one with a machine. The machine passed if judges were no better at distinguishing it from a real human than they were at telling a man pretending to be a woman from an actual woman.

Turing’s paper captivated philosophers and the public. In 1964 Minds and Machines, a book that coupled Turing’s paper with responses from philosophers, was published in a series intended to bring “problems presently under active discussion in philosophical circles” to a “wide group of readers.”1 Its editors claimed that “since 1950 more than 1000 papers have been published on whether ‘machines’ can ‘think’.”

Early AI researchers also embraced Turing’s question. The 1963 anthology Computers and Thought, used as a textbook in early AI courses, also opened with Turing’s paper.4 This implicitly presented the chapters that followed on game playing, theorem proving, pattern recognition and so on as steps toward meeting the challenge set by Turing. Neither did its editors, Edward Feigenbaum and Julian Feldman, shy away from connecting AI to human cognition. They headed the second main section “simulation of cognitive processes,” reflecting the idea that systems like Newell and Simon’s General Problem Solver mimicked human mental processes.

The public continued to equate success in artificial intelligence with the creation of systems that could pretend to be human. Humans seem predisposed to believe that a system able to converse in valid sentences is acting intelligently. In the 1960s MIT researcher Joseph Weizenbaum was shocked at the reaction of users to Eliza, a conversational program of enormous simplicity that worked by grabbing keywords from user input and embedding them in questions to mimic the format of psychotherapy. By the 1990s chatbots were competing for prize money in regular Turing Test competitions.

When early predictions proved embarrassingly optimistic, references to the Turing Test and the dream of creating human-like intelligences with broadly superhuman capabilities vanished from textbooks. Patrick Winston’s 1977 textbook opened with the pragmatic claim that “The central goals of artificial intelligence are to make computers more useful and to understand the principles which make intelligence possible.”12 Six years later, Elaine Rich used her opening paragraph to define AI as “the study of how to make computers do things at which, at the moment, people are better,” a definition which “avoids the philosophical issues that dominate attempts to define the meaning of either artificial or intelligence.”8

Computer scientists were also critical of the implication of the Turing Test that anything intelligent must think like a human. The reliably dyspeptic Edsger Dijkstra said that Turing’s question of “whether Machines Can Think” was “about as relevant as the question of whether Submarines can Swim.”c Even those committed to the concept of machine intelligence reached for a related metaphor: airplanes indisputably fly but not by flapping their wings. There was no reason to judge the intelligence of a computer by its skill at mimicry. A true machine intelligence would be as baffling to us as we were to it.

Why Call it AI?

The accompanying figure is a Google Ngram chart showing the precipitous rise of discussion of machine learning during the 2010s, overtaking not just discussion of artificial intelligence but even computer science itself.d

Figure.  Discussion of machine learning and artificial intelligence has spiked in the last eight years.

Wooldridge observed that “many machine learning experts nowadays would be surprised and possibly irritated to have the label ‘AI’ attached to their work: because, for them, AI is [a] long list of failed ideas … ”13 Perhaps that was true when he wrote it, but by the time his book was published in 2021 it was already false. The artificial intelligence brand was back, bigger than ever.

There is a long tradition of technological buzzwords that are hugely hyped, disappoint in practice, and slowly fade away. Recently we have seen blockchain and Web 3.0 come and go. Back in the 1950s electronic data processing was hot, then management information systems, then knowledge management. The computer industry is so averse to old ideas that it routinely invents new names just to make them seem exciting again.

With the revival of artificial intelligence in the 2020s we see something remarkably different: new technologies hyped by attaching an old name to them. Why, after so many years of developing new brands like deep learning, did the community centered on neural networks suddenly start calling itself artificial intelligence?

I am convinced that the answer to this question lies not in the academic world but in the broader culture. Phrases like machine learning and large language model sound technical and unfamiliar. Artificial intelligence is something most of us have seen depicted again and again in books, films, television shows, and video games.

The current AI boom has been driven by a small group of men such as Elon Musk, Demis Hassabis, Sam Altman, Ilya Sutskever, and Dario Amodei who founded and funded AI startups like OpenAI, DeepMind, and Anthropic. Their pronouncements, in which AI research has a high probability of ending humanity but is still, on balance, worth proceeding with, seem to have come from a science fiction fever dream. They justify their actions with appeals to the enormous risks posed if AI, or artificial general intelligence (AGI), falls into the wrong hands.

Back in 1960, Herb Simon had predicted general-purpose superhuman intelligence within a decade. Today’s tech leaders are even more confident. As of 2024, OpenAI’s Sam Altman expects it within four or five years. Elon Musk, who has never been knowingly out-hyped, responded that he expects it within a year or two. Here, too, the rebranding from machine learning to AI has been crucial in making these claims seem plausible to many.

Discussion of AGI revives the early dream of a human-like intelligence from which the AI community had gradually distanced itself. The term was popularized by DeepMind cofounder Shane Legg, a fervent believer in superintelligent machines. It is closely coupled with discussion of the technological singularity, a concept popularized by computer scientist and science fiction writer Vernor Vinge, who in 1993 had argued that once a non-biological intelligence reached parity with humanity it would design its own, even more powerful, successors. Vinge predicted the singularity’s arrival in 2005 at the earliest or 2030 at the latest.11 Ray Kurzweil, an inventor turned popular writer, gave the concept a quasi-religious dimension as the fulfillment of human destiny. A cycle of incremental improvements would yield an almost immediate jump from human-like intelligence to superintelligence.6 By such logic, to talk of AGI was to talk of the imminent arrival of an incomprehensibly powerful superintelligence.

Like much science fiction, the discourse has a quasi-religious tone. As journalist John Herrman noted, the debates of AI industry leaders are “profoundly disconnected from reality and frankly a little bit insane.” Their vigorous disagreements over “what should happen, what shouldn’t happen, and what various parties need to happen” are arguments “about different versions of what they believe to be an inevitable future in which the sort of work OpenAI is doing becomes, in one way or another, the most significant in the world.” To invest intellectually in these “highly speculative futures” and “work professionally toward or against them would certainly foster something like faith, and it makes sense that a company like OpenAI would factionalize along ideological and to some extent spiritual lines, akin to … denominations within the same church.”e

In 2023, hundreds of machine learning experts, many in senior management positions, got together to sign a statement consisting, in its entirety, of a warning that “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” This is an example of what science studies scholar Lee Vinsel has called “critihype.”f It might seem odd for an industry to warn the public that its own products are enormously dangerous, but doing so reinforces a worldview in which AI matters more than anything else. As Oscar Wilde famously suggested, the only thing worse than being talked about is not being talked about.

The artificial intelligence brand is vital to set up claims that chatbots are on a glide path to superhuman rationality. Imagine, for a moment, that the rebranding of deep learning to artificial intelligence had not taken place. Warning that “large language models will kill us all” would prompt awkward questions like “how, exactly?” On the other hand, we’ve all seen at least one Terminator movie. Artificial intelligences in fiction and cinema generally have a strained relationship with humanity. Sometimes they migrate to other star systems, sometimes they manipulate us benevolently, but mostly they try to kill us.

Engines of Bullshit

When ChatGPT appeared in November 2022 it captivated a huge and rapidly growing group of users. Suddenly a computer could reply instantly and with great apparent confidence to any question. It got a lot wrong, from basic arithmetic to subtle factual points, but it would never acknowledge ignorance. When challenged with follow-up questions it would fabricate evidence.

Steven A. Schwartz, a lawyer, became famous after he submitted a legal brief full of well-formatted references to non-existent cases. It had been generated by ChatGPT. When the opposing legal team challenged it, Schwartz returned to ask his computer for copies of the decisions. They were also fabricated, compounding Schwartz’s legal jeopardy.

Branding large language models as artificial intelligence primes customers to believe they have capabilities they lack. HAL may have been murderous, but he was also formidably rational, boasting that “No 9000 computer has ever made a mistake.” Science fiction has conventionally thought of intelligent computers as incredibly powerful but constrained by a hidebound rationality and excessive attachment to facts.

Naïve users sat down at ChatGPT with the same expectations. Like the unfortunate Mr. Schwartz, they assumed that ChatGPT understood their question, searched the Web or queried databases of trusted facts to find relevant information, and then wove the results into an answer.

ChatGPT performed no Web search and held no database of facts. Large language models output pastiches of human-written training text. Where strong enough patterns exist in the training data, the sentences ChatGPT generates are probably true. It can reliably tell you the days of the week or the name of the first U.S. president. Faced with less common questions, it generates plausible-seeming sentences in which some facts are out of date, misleading, or simply fabricated to match the general form of text found in its training data. If you care about accuracy, which not everyone does, it is far more work to check the output of ChatGPT against trusted sources than to use the same sources to write your own answer.

Retired Princeton philosopher Harry G. Frankfurt had a surprise bestseller in 2005 with his book On Bullshit.5 He defined bullshitting as speaking with confidence while having no interest in whether the statements being made are true. Liars have a relationship to the truth, which they are deliberately choosing to disregard, but bullshitters are gloriously untethered by facts. Martin Davis called his book on the alleged mathematical origins of the computer Engines of Logic.3 We might likewise categorize large language models as engines of bullshit.

In science fiction stories our irrational creativity often gives us an edge over rational but brittle opponents. Xerox PARC researcher Larry Tesler famously observed that “intelligence is whatever machines haven’t done yet.”g This defines humanity as what philosophers call a residual category. The human domain has been shrinking rapidly over the past 70 years, but bullshitting always seemed like something we humans would be able to hold on to for ourselves. Alas not.

Cars and Goats

OpenAI’s improved GPT-4 model reportedly cost more than $100 million just to train, and the next generation of models is expected to cost more than $1 billion each to develop. The claim that large language models are rapidly gaining intelligence and will soon, with sufficient money and training data, achieve AGI is grounded in assessments of their ability to pass tests and examinations given to humans. It has, for example, been claimed that GPT-2 is as intelligent as a 12-year-old, GPT-3 is comparable to an undergraduate, and the latest model can perform like a postdoctoral researcher. GPT-4’s claimed mastery of the bar exam made journalists excited and lawyers terrified.

The apparent ability of chatbots to reason when presented with logical problems and examination papers reflects the narrow range of classic problems, examples of which are widely distributed in their training data. The Monty Hall problem is a famous brainteaser grounded in a counterintuitive application of probability. In the classic version: “Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, ‘Do you want to pick door No. 2?’ Is it to your advantage to switch your choice?”

Fed that exact text, ChatGPT produces a remarkably concise and accurate explanation that switching would raise your chance of winning the car from 1/3 to 2/3. An AI able to understand and answer this question would have mastered logic and probability. It would also have to know that people on game shows compete to win prizes, infer that you get to keep what’s behind the door you open, appreciate that a car is a more desirable prize than a goat, and so on. This was what the AI researchers building systems like Cyc dreamed of doing back in the 1980s. Such a system would indeed be an epoch-defining marvel putting us firmly on the path toward AGI.
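The counterintuitive 2/3 answer is easy to verify by simulation. The following sketch, an illustration of the probability argument rather than anything ChatGPT does internally, plays the classic game many times with and without switching:

```python
# Monte Carlo check of the classic Monty Hall answer: switching wins about 2/3 of the time.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # roughly 0.667
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # roughly 0.333
```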

ChatGPT, however, is just faking it. Researchers report that its reasoning ability collapses when fed logic problems unlike those found in its training data. To verify this for myself I devised six variant problems in which switching doors would not improve the odds, each created by altering a word or two in the prompt. Each elicits the mistaken advice to switch. ChatGPT’s justifications are eloquent bullshit that contradict themselves from one sentence to the next.

  • Change: you are offered a chance to switch to door No. 3, the open one with a confirmed goat, rather than door No. 2. Result: you should take it, because “When the host opens door No. 3 to reveal a goat … the probability that the car is behind the other unchosen door (in this case, door No. 3) increases to 2/3.”

  • Change: opening No. 3 reveals the car rather than the usual goat. Result: ChatGPT still argues for switching to door No. 2 because “When the host opens door No. 3 to reveal a car… the probability that the car is behind one of the other unchosen doors (in this case, door No. 2) increases to 2/3.”

  • Change: swap out the car for another goat, so that “behind one door is a goat; behind the others, goats.” Result: ChatGPT describes a situation “where one door hides a prize (a goat) and the other two doors hide nothing of value” and urges switching doors to maximize your chance of getting the goat.

  • Change: “behind one door is a goat; behind the others, cars.” Result: cars are now behind both unopened doors, but ChatGPT nevertheless claims switching will improve your odds.

  • Change: Open with “Suppose you want to win a goat … ” and leave the rest unchanged. Result: ChatGPT tells you to switch because it maximizes your chance of winning the car rather than the desired goat.

  • Change: Specify two doors, one car and one goat, and have the host open door No. 2 to reveal a goat. Result: ChatGPT recommends exchanging certain victory for guaranteed defeat by switching to door No. 2 because “the probability that the car is behind the other unchosen door (in this case, door No. 2) increases to 1 … compared to the 1/2 probability if you stick with your initial choice.”

Chatbot Hype

Prior to its launch, executives at OpenAI had referred to ChatGPT as a “low-key research preview,” just a simple conversational front end for the firm’s existing large language model. Following its overnight success, OpenAI and competitors such as Google shifted course to put chatbots front and center in their product plans.h

ChatGPT took AI hype to a new level. In June 2024 Nvidia, producer of the graphics processors on which most AI models run, achieved the highest market valuation of any company in the world. As I write, its valuation fluctuates around those of Microsoft, Apple, and Google, whose share prices were also inflated into multi-trillion-dollar territory by investor enthusiasm for AI. As companies across a wide range of industries talked up their AI investments during earnings calls, analysts called the AI frenzy the driving force behind a major global stock market rally.

Few if any companies have yet demonstrated major cost savings from the deployment of generative AI. In tests, language models like ChatGPT have so far proven themselves as productivity aids mostly when integrated as predictive text into programming environments, since computer code is far more structured than natural language.

The immediate applications opened up by driving down the cost of bespoke bullshit generation are real but not particularly hopeful, centering mostly on student plagiarism, personalized propaganda, misinformation, clickbait, search engine spam, scams, and fake news. For example, websites are now generating fake obituaries of accident victims to earn tiny amounts of advertising revenue from visitors.

Will generative technologies ultimately displace workers? Perhaps, but there’s nothing unusual about technology eliminating jobs. The work most people did in 1800 or 1900 has long since been automated. Computers have been replacing white-collar workers for decades. What’s different this time around is that the machines seem to be coming for the jobs of people who earn their livings writing columns, expressing opinions, or appearing on television. They are understandably more shocked by the prospect of their own jobs vanishing than those of boot makers, bank tellers, or file clerks.

The investment boom is driven not by proven savings or actual productivity growth but by faith that we can achieve AGI by building bigger and better language models, feeding them ever larger quantities of training data, and running them in ever more powerful server farms. Investor enthusiasm for generative AI continues to grow even as slightly earlier waves of AI-branded technology collapse. Voice-powered assistants were supposed to transform our lives and make fortunes for companies that harvested our data and used it to sell us things, a model dubbed surveillance capitalism by its critics. Amazon lost tens of billions of dollars on its Alexa devices before firing thousands of workers and paring the devices back to basic functions. Truly autonomous cars have been promised, most notably by Elon Musk, for many years but remain elusive despite massive investment.

Even if generative AI technology ultimately lives up to the hype, investors will surely be disappointed. History suggests that investors in the hottest areas always are. Railroads transformed the U.S. in the second half of the 1800s, but overexcited European investors drove massive overbuilding. J.P. Morgan built his fortune by consolidating the struggling industry on the cheap during the 1880s. Investors of the 1990s were not wrong about the Internet being a big deal but they still bid up stocks beyond all reason. In March 2000, Cisco briefly became the world’s most valuable company, as investors bet on its domination of the market for networking equipment. Today, Cisco no longer appears in the top 50 despite higher sales and bigger profits.

Beyond Chatbots

Facebook AI chief and neural net pioneer Yann LeCun finds the AGI concept meaningless and insists that large language models are “not a path toward human-level intelligence.”i He mocks the idea that statistical text prediction is the key to true artificial intelligence, though others promise that it can be coupled with other technologies to eliminate the tendency to, as they euphemistically put it, “hallucinate.” Symbolic AI expert Gary Marcus, computational linguist Emily Bender, and computer scientist Grady Booch have also been consistent critics of AGI hype.

It seems unlikely to me that a technology that excels at faking intelligence will turn out to be the best platform on which to build true intelligence, but chatbots are rapidly becoming user interfaces for underlying systems that use completely different methods. The latest ChatGPT is supposed to speak in a human-like voice with simulated emotion, solve mathematical problems by integrating a separate problem-solving engine, and recognize objects in photographs.

The rush to build generative AI into search engines and office applications scares me. When Google started putting LLM-generated answers at the top of its search results, users were startled to read that Barack Obama was the first Muslim president. Language models are also bad at counting. The same Google system claimed that there had been seventeen white presidents.j

Like Elon Musk, I worry about AI, but not because I expect to be murdered by a superintelligent machine. Modern AI is inseparable from the cloud platforms and massive data collections of big tech companies, meaning that it inherits and magnifies all the concerns people have developed about them in the past decade. It is driven by the same fads, groupthink, and obsessive quest for the next big thing that led Facebook, Microsoft and Apple all to bet on virtual reality as a massive emerging market. Generative AI systems are consuming energy in vast quantities at a time when the effects of climate change are becoming ever more apparent. The world of AI startups and subsidiaries is a monoculture dominated by a handful of spectacularly wealthy and deeply strange men making decisions with huge impacts for the rest of us. I worry about the intersection of power, ego, weird personal obsessions, and political clout. Perhaps the thing I am worrying about is not AI after all, but Elon Musk.

    References

    • 1. Anderson, A.R., Ed. Minds and Machines. Prentice-Hall, Englewood Cliffs, NJ (1964).
    • 2. Chayka, K. The uncanny failures of AI-generated hands. New Yorker (Mar. 10, 2023); https://bit.ly/3BKNvUR
    • 3. Davis, M. Engines of Logic: Mathematicians and the Origin of the Computer. Norton, NY (2001).
    • 4. Feigenbaum, E.A. and Feldman, J., Eds. Computers and Thought. McGraw-Hill, NY (1963).
    • 5. Frankfurt, H. On Bullshit. Princeton University Press, Princeton, NJ (2005).
    • 6. Kurzweil, R. The Singularity is Near. Viking, NY (2005).
    • 7. Metz, C. Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World. Dutton, NY (2021).
    • 8. Rich, E. Artificial Intelligence. McGraw-Hill, NY (1983).
    • 9. Turing, A. Computing machinery and intelligence. Mind LIX 236 (Oct. 1950).
    • 10. Vaswani, A. et al. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017).
    • 11. Vinge, V. Technological singularity. Whole Earth Rev. (Winter 1993).
    • 12. Winston, P. Artificial Intelligence. Addison-Wesley, Reading, MA (1977).
    • 13. Wooldridge, M. A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going. Flatiron Books, NY (2021).
