Computing Applications BLOG@CACM

Understanding CS1 Students; Defective Software

The Communications Web site features more than a dozen bloggers in the BLOG@CACM community. In each issue of Communications, we'll publish selected posts or excerpts.


Mark Guzdial writes about why teachers must grasp introductory CS students' theories about computing. Bertrand Meyer argues for the necessity of analyzing large-scale software disasters and publishing a detailed technical study.
  1. Mark Guzdial "We're too Late for 'First' in CS1"
  2. Bertrand Meyer "Again: The One Sure Way to Advance Software Engineering"
Mark Guzdial "We're too Late for 'First' in CS1"
December 7, 2010

We in computer science education have long argued about how to start the first course. "If they just see X first, they will understand everything in terms of X," where we might replace X with objects, functions, or recursion. We express concern about what will happen if they don't see the right stuff first. You may recall the Edsger W. Dijkstra quote, "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: As potential programmers, they are mentally mutilated beyond hope of regeneration."

I don’t know if any of those beliefs about "what students need to know first" were once true, but I’m quite confident that none of them are true today. There is no first. There is no objects-first, functions-first, hardware-first, or non-BASIC-first. No, it’s not that I’m suggesting that there already is computer science in high school—there are way too few teachers and classes, on average, for that to be true. Rather, in a world where students live with Facebook, Wi-Fi, and email, they most certainly have theories about how computing works. There’s no way to get to them before they make up their theories. By the time they decide to study computing, they have them.

Leigh Ann Sudol-DeLyser had a nice paper, "Mental Models of Data," at the 2009 Koli Calling conference, where she asked students to talk about the computing in their lives, then she tried to figure out which data structures they were already thinking about. For example, students already realize that a Facebook newsfeed has the newest information on top, and the oldest disappears off the bottom—sounds like these students already recognize a queue, even if they don’t know the term yet.
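The newsfeed behavior those students described can be sketched as a bounded queue. This is a minimal illustration of the mental model, not code from Sudol-DeLyser's paper; the `FeedQueue` name and the capacity of 3 are invented for the sketch.

```python
from collections import deque

class FeedQueue:
    """A fixed-capacity news feed: newest stories on top, oldest drop off."""

    def __init__(self, capacity):
        # deque with maxlen silently discards the oldest item when full
        self.items = deque(maxlen=capacity)

    def post(self, story):
        self.items.appendleft(story)  # newest story appears at the top

    def top_stories(self):
        return list(self.items)  # newest first, oldest last

feed = FeedQueue(capacity=3)
for story in ["graduation", "new job", "vacation", "puppy"]:
    feed.post(story)

# "graduation", the oldest story, has disappeared off the bottom
print(feed.top_stories())  # → ['puppy', 'vacation', 'new job']
```

The student who notices that old stories vanish once enough new ones arrive has, in effect, already internalized the queue's first-in, first-out discipline.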

I thought of this looking at Carsten Schulte's ITiCSE 2010 paper, "Comparison of OOP First and OOP Later: First Results Regarding the Role of Comfort Level," on studying student perceptions of difficulty in an objects-first vs. an objects-later class. This is a follow-up to his earlier ICER paper, where he reported no learning differences between an objects-first and an objects-later class. While I've heard complaints about Schulte's analysis methods, I found his experimental setup to be as careful as one could possibly be. The only difference between his two classes is the order of topics. Objects-first vs. objects-later doesn't matter, but neither does any other "first." His results are really not surprising. We already know that the sequence of topics in a curriculum rarely makes much difference in the learning. Students are really quite adept at getting by with less knowledge, and filling in the gaps as new information comes in.

It’s an important open research question: How do students understand the computing around them? What theories do they have? They might not have any—until an error occurs. How they respond to that error suggests what kind of computational model they have. For example, watch a student do a Google or Bing search and then revise it to get better results. How did she revise it? What did she add to get a better result, and why did she think that that would work?

We know something about how novices develop naive theories of computing. John Pane’s research shows us that novices tend to develop declarative explanations of software that are driven by events, and don’t seem to develop notions of objects at all. The "Common-sense Computing" group has shown us that novices can create algorithms for a variety of problems, although that doesn’t really tell us how they think software and software development works in the world around them.

We are now in the same position as educators in physics (or biology, chemistry, or other sciences). Students have theories about how Wii controllers, voice-driven voicemail menu systems, touch screens, and Google and Bing search work. If these novice theories "mutilate" their minds, then it's done, it's happened to everyone, and we'd best just get on with dealing with it. There is no chance to place a theory in their minds before they learn anything else. We have to start from where the students are, and help them develop better theories that are more consistent and more correct. There is no first, but we can influence next.


Bertrand Meyer "Again: The One Sure Way to Advance Software Engineering"
January 13, 2011

Once again, bad software has struck. From 7:30 A.M. to late afternoon on November 10, 2010, Internet access and email were unavailable to most customers of Swisscom, the main mobile services provider in Switzerland. Given how wired our lives have become, such outages can have devastating consequences. As an example, customers of some of the largest banks in Switzerland cannot access their accounts online unless they type in a one-time access code sent to their cellphone when they log in.

That is all the news we will see: Something really bad happened, and it was due to a software bug. A headline for a day or two, then nothing. What we will miss in this case as with almost all software disasters—most recently, the Great Pre-Christmas Skype Outage of 2010—is the analysis: what went wrong, why it went wrong, and what is being done to ensure it does not go wrong again. Systematically applying such analysis is the most realistic technique available today for breakthrough improvements in software quality. The IT industry is stubbornly ignoring it. It is our responsibility as software engineering professionals to change that self-defeating attitude.


I have harped on this theme before [1, 2, 3] and will continue to do so until the attitude changes. Quoting from the first reference:

Airplanes today are incomparably safer than 20, 30, 50 years ago: 0.05 deaths per billion kilometers. That’s not by accident.

Rather, it’s by accidents.

What has turned air travel from a game of chance into one of the safest modes of traveling is the relentless study of crashes and other mishaps. In the U.S. the National Transportation Safety Board has investigated more than 110,000 accidents since it began its operations in 1967. Any accident must, by law, be analyzed thoroughly; airplanes themselves carry the famous "black boxes" whose only purpose is to provide evidence in the case of a catastrophe. It is through this systematic and obligatory process of dissecting unsafe flights that the industry has made almost all flights safe.

Now consider software. No week passes without the announcement of some debacle due to "computers"—meaning, in most cases, bad software. The indispensable Risks Digest Forum [4] and many pages around the Web collect software errors; several books have been devoted to the topic. A few accidents have been investigated thoroughly; two examples are Nancy Leveson's milestone study of the Therac-25 patient-killing medical device [2], and Gilles Kahn's analysis of the Ariane 5 crash, which Jean-Marc Jézéquel and I used as a basis for our 1997 article [6]. Both studies improved our understanding of software engineering, but these are exceptions. Most of what we have elsewhere is made of hearsay, partial information, and plain urban legends—like the endlessly repeated story about the Venus probe that supposedly failed because a period was typed instead of a comma, most likely a canard.

Part of the solution is to use the legal system. For any large-scale software failure in which public money is involved, a law should require the convocation of an expert committee and the publication of a detailed technical analysis. The software engineering community should lobby for the passage of such a law and should not rest until it is enacted.

For private businesses the legal approach may be harder to pursue as some might view it as undue government interference, but it may still be pushed given the obvious public interest in software that works. The scenario would be for the industry to adopt, as a voluntary standard, the principle that every large-scale mishap must automatically lead to an exhaustive and public post-mortem analysis; in Rahm Emanuel’s immortal words, "You never want a serious crisis to go to waste."

Until that happens, software will remain brittle. Think of the last time you stepped into a plane, and how different you would have felt if aircraft manufacturers had been allowed, disaster after disaster in the past 70 years, to keep the embarrassing details to themselves and continue business as usual.

