Moshe Y. Vardi identified important negative trends in his Editor's Letter "Will MOOCs Destroy Academia?" (Nov. 2012) concerning massive open online courses, saying, "If I had my wish, I would wave a magic wand and make MOOCs disappear..." But we should instead regard MOOCs as part of an early, awkward stage of a shift in education likely to produce something unrecognizable within even our own generation. Like journalism, retail sales, and many other fields, education is undergoing a sea change to something more fluid in time, space, and participation, as well as more peer-oriented. With lifelong learning increasingly critical today, institutions must aim for a vision of the future that finds ways to tap subject experts, as well as a proper business model that keeps both the institutions and the experts relevant. However, one thing the change does not involve is moving the old educational model, with all its flaws, to a new online medium.
Andy Oram, Cambridge, MA
Exploring non-human intelligence, real and artificial, is fascinating. Consider novels like Arthur C. Clarke's 2001: A Space Odyssey and stories like Isaac Asimov's I, Robot, as well as cinematic adaptations like Blade Runner, based on Philip K. Dick's novel Do Androids Dream of Electric Sheep? The plot invariably revolves around machines with an intelligence level comparable to that of humans that communicate with humans, so not far from a Turing Test. Fascinating, because deep down, we, as humans, believe we are unique in our level of cognition and ability to emote.
A credible intelligent agent must be able to relate to human perception, reasoning, communication, and life experience, including emotion. In "Moving Beyond the Turing Test" (Dec. 2012), Robert M. French argued this is impossible, outlining a scenario only a human could truly understand, backed up with an example involving a series of instructions for manipulating one's fingers. He implied that answering a question about a particular step in the sequence is, and always will be, out of bounds for machines. His assertion (about answering out-of-bounds questions) was: "Don't try; accept that machines will not be able to answer them and move on."
I must disagree. My company, North Side Inc. (http://www.northsideinc.com/), pursues research and development toward endowing machines with verbal ability anchored in real-world knowledge. Work in this direction requires that we account for (and simulate) human perception, motor function, cognition, and emotion. Though still far from being able to pass the Turing Test, we are making good progress; for descriptions of our recent work on embodied intelligent agents with conversational ability, see our video at http://www.botcolony.com and my paper at http://lang.cs.tut.ac.jp/japtal2012/special_sessions/GAMNLP-12/papers/gamnlp12_submission_3.pdf. Credible high-fidelity agents with human-like behavior promise great technological and economic benefit in such fields as entertainment, mobile computing, e-commerce, and training. We have also found that an agent attempting to emulate human behavior (often failing) has a quirky, humorous side that makes it endearing. Why go for a humorless computer in a world where marketers dream of intelligent assistants connecting (emotionally) with their human owners? In 1996, Byron Reeves and Clifford Nass offered ample evidence for the theory that people tend to treat computers and other media as if they were real people in The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places (http://csli-publications.stanford.edu/site/1575860538.shtml).
We must keep trying to make intelligent agents as credible and human-like as we know how. However, the premise of French's article was that it is time for the Turing Test to take a bow and leave the stage. Embodied artificial cognition is an extremely difficult (but fascinating) endeavor, and the benefits of success are enormous. It is way too early to even contemplate giving up.
Eugene Joseph, Montréal, Canada
Joseph claims his robots are "making good progress" toward passing a full-blown Turing Test. This is delusional, cynical (perhaps in order to attract financing), or shows he does not fully understand how incredibly difficult it would be for a machine to actually pass a carefully constructed Turing Test. My point in the article was that intelligent robots, capable of meaningful interaction with humans, do not have to be Turing-Test indistinguishable from humans. Just ask Jimmy [North Side's robot] if Ayame [North Side's nominal adult human] can put her little finger all the way up her nose.
Robert M. French, Dijon, France
I was disappointed in Aman Yadav and John T. Korb's Viewpoint "Learning to Teach Computer Science: The Need for a Methods Course" (Nov. 2012). There is no question that teaching anything well requires knowledge of the subject and proper pedagogical technique, both covered nicely. Left out, however, and worse, mischaracterized, were skills. With any human behavior, knowledge is only part of the equation, and typically not the most important part. Yadav and Korb omitted all discussion of skills, except for mistakenly calling pedagogical knowledge a "skill set" (second paragraph of their "Learning to Teach" section). Knowledge is not skill. Skills, or competencies, are the know-how that enables a teacher to assess which method, technique, demo, analogy, illustration, or exercise works best for which students in which circumstances. Competencies cannot be reduced to knowledge.
No amount of content or pedagogical knowledge can substitute for teaching skill. Generalizations, including empirical studies, concerning how to present topic X are great, but doing it well means crafting it to the students and the case at hand. I have, for almost 30 years, taught computer science, from freshman-level intro to computing to advanced graduate courses in software engineering and AI. I focus on the student(s) and what they need to grasp the concept or acquire the skills they need. I ask myself, where are they confused? What distinction are they missing? Where did they get a wrong idea? What do I know about them that would enable me to choose the analogy that works for them, how to say it so it connects, and how to motivate them to keep working on something they likely find difficult and confusing? How can I motivate them to engage with computer science at all? Also, how do I invent new examples when the usual ones don't work? And how do I assess whether students are getting the concept or skill I am teaching? Moreover, how do I respond to the student who says, "I'm just dumb"?
The National Science Foundation CS10K Project (http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf12527) may indeed produce 10,000 teachers by 2016 but will not have much influence on the number of teenagers with knowledge, skills, and, most important, interest in computing if it does not give those teachers the skills that make them teachers, not mere knowledge transmitters.
H. Joel Jeffrey, DeKalb, IL
In the news story "In the Year of Disruptive Education" (Dec. 2012), Paul Hyman explored the challenge of how to award college credit for learning gained from free online courses offered by colleges and universities. The solution may emerge in two ways:
Credit by examination (CBE). Though many colleges already offer CBE as a way to grant credit for knowledge, it has a downside: students typically pay the same tuition as if they were taking the course, and some schools award credit only to students who complete some period of residency at the school; and
Government-sponsored course recognition. Like many states, Ohio has developed pseudo-course designations, called Career-Technical Assurance Guides, or CTAGs. A CTAG identifies the core content of individual courses commonly offered at colleges, technical schools, and secondary schools; private and public colleges in Ohio can choose to tie one of their courses to a CTAG "virtual" course, in which case students earning credit for a tagged course at one institution carry that credit to all colleges with a similar tagged course.
It may be that states or even countries will develop CBE for virtual courses, and colleges that tag their courses will award college credit regardless of how a student gains proficiency. A college willing to reduce the cost of CBE and waive residency requirements could unilaterally implement it. Governments are usually motivated more than the colleges themselves to offer CBE at the lowest cost possible.
Christine Wolfe, Lancaster, OH
Communications welcomes your opinion. To submit a Letter to the Editor, please limit yourself to 500 words or fewer, and send to firstname.lastname@example.org.
©2013 ACM 0001-0782/13/03
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from email@example.com or fax (212) 869-0481.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2013 ACM, Inc.
I just received the paper version of this magazine in the U.S. Mail today (3/14/2013). How long has this online version been available to users? Probably around two weeks.
Several factors could create a delay between an issue's publication online and its delivery by mail. Magazines generally arrive in concentrated metropolitan areas before rural or less-concentrated areas. Issues are published in the ACM Digital Library and on the CACM Web site more quickly than in the past. U.S. Postal Service delays may also be a factor.