Approximately 20 years ago, a physicist at Harvard read studies of students who had passed physics courses, yet showed little gain in their conceptual understanding of Newtonian physics. This "conceptual understanding" was measured by a newly developed test called the Force Concept Inventory (FCI). What this showed was that students could "plug and chug" using appropriate equations to solve standard problems on physics tests. But, when asked about a situation where a large truck runs into a small car, students could not correctly identify that the force exerted by the car on the truck was the same as that exerted by the truck on the car (applying Newton's Third Law). The professor, Eric Mazur, thought to himself, "oh, but not my students!" However, when he gave his students the FCI test he saw similar results. Even more poignantly, he recalls how, during the exam, a student asked "How should I answer these questions? According to the way you taught me, or according to the way I usually think about these things?"4
Essentially, Mazur found that though students could solve the standard problems physicists generally accepted as demonstrating understanding of physics concepts, this did not mean students really saw the world as a physicist does. Students could "do" as the instructor did, but did not deeply understand.
Just like physics instructors, we want computing students not merely to parrot our actions, but to understand the concepts deeply and to apply them.
So, do computing students struggle with deep understanding? Since our field has a notable lack of concept inventories3 or other commonly accepted assessments7 with which to document levels of such thinking, we cannot say with certainty. However, we do know we have serious issues in CS education. The high failure rates and lack of students interested in computing, especially from diverse backgrounds, suggest that we are failing to transfer our ways of thinking to a broad audience.
Moreover, there are important parallels between physics and computing instruction. Similar to physics problem solving, the most engaged part of many computing courses, whether that be operating systems or introductory programming, is when students develop programs embodying the concepts of the course. Many instructors strongly value code writing, including out-of-class programming, laboratory assignments, and/or program writing on exams, as an assessment of deep understanding of computing concepts.
Is it possible that students "plug and chug" in computing, not really understanding the concepts as we would like them to? That is, are we absolutely certain that, in the process of writing a program that exercises concept X, students fully grasp the behavior of X and the various appropriate decisions regarding X we assume occurred in producing a working program? In short, what guaranteed evidence does a working program provide us about student understanding? We propose it can only assure us that our students can produce a working program; other inferences on our part are mostly that: just inferences.
With the FCI, physics faculty were shown, through a "new" type of assessment question, that their existing assessment approaches did not give them an accurate view of student understanding. We posit that the need exists for computing instructors to design assessments more directly targeting understanding, not just doing, computing. And, of course, to adopt teaching approaches that support student development of these skills.
How can one foster deep understanding in the standard educational environment? Fortunately, Mazur did not throw up his hands at his student's baffling question. He developed a teaching method called Peer Instruction (PI) that has been used in numerous science and mathematics courses. The cornerstone of PI involves students attempting to explain to each other how they understand core physics concepts via a series of deceptively simple-looking problems. The emphasis is not on getting to a right answer via a mechanical process; instead, the right answer is apparent once the students use the appropriate core concepts in their attempts to articulate their understanding of the problem and their solution to it.
In a variety of studies, this approach has been shown to improve learning twofold over the standard lecture format.1,2 These dramatic learning effects, as well as PI's documented use across a variety of disciplines, should make computing educators take note: to foster the desired level of understanding, it is the teaching method that must be addressed. Often the computing community seems focused on what to teach, not how to teach it. While content is important, we need to focus more on theoretically and experimentally grounded methods, such as Peer Instruction, which are designed to support the development of deep understanding.
In Peer Instruction, students gain preparatory knowledge before class (for example, through textbook reading) and complete a pre-lecture quiz, both to incentivize their preparation and to give them feedback on whether they are ready to learn in a lecture format. During class, lecture is interspersed with or largely replaced by multiple-choice questions (MCQs) and discussion. MCQs are designed by instructors to engage students in thinking about deep conceptual issues or common misconceptions. This is instantiated via a four-part process:

1. Solo vote: each student reads the question and individually commits to an answer.
2. Group discussion: students discuss the question in small groups, working toward consensus.
3. Group vote: after discussion, students vote again.
4. Classwide discussion: the instructor facilitates a discussion of the answers, guided by the vote results.
Each of the PI components is important. The initial solo vote (step 1) ensures that each student has committed to an answer and is engaged, at least at some level, with the problem. This is important not only in preparing the brain to learn from explanation, but also in preparing the student to contribute to his or her group's discussion. The group discussion (step 2) engages the students in articulating their understanding of the concepts, so that they can be sure they have arrived collectively at the correct answer. It is common to see students pointing at the projected screen, to hear them cautiously "trying out" new technical terms, and to hear them speaking in partial sentences that other students then try to complete. During this time the instructor can circulate around the room, listening in to prepare to address common issues or perhaps clarifying a point for a specific group. As a means of incentivizing active discussion, each group is asked to reach consensus and agree on an answer.
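As a concrete illustration of the kind of deceptively simple MCQ described above, consider a hypothetical CS1 question targeting a common misconception about list aliasing (this example is ours, not from the article):

```python
# A hypothetical PI-style MCQ for a CS1 class.
# Students vote individually on what this code prints, discuss in
# groups, vote again, then the class discusses the reasoning.
a = [1, 2, 3]
b = a          # Does this line copy the list, or alias it?
b.append(4)
print(a)
# A) [1, 2, 3]        (misconception: assignment copies the list)
# B) [1, 2, 3, 4]     (correct: a and b name the same list object)
# C) It raises an error
```

No mechanical "plug and chug" procedure yields the answer; students must articulate how assignment and object identity actually work, which is exactly the kind of conceptual discussion PI is designed to provoke.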
The second vote guides the instructor's facilitation of the final component, the classwide discussion. Even if nearly all of the class has arrived at the correct answer, it is valuable for students to hear how others explain it. The noise level skyrockets when students in a group agree on the answer, but have completely different reasoning. Additionally, it is very useful to ask students to explain why wrong answers are wrong, both as a means of contrast and to provide additional models of how to understand the problem. The instructor facilitates these contributions, and may offer his or her own way of thinking about and analyzing the question.
Purely as a feedback mechanism, when the vote and the student discussion have not gone well, the instructor can identify that an issue exists and respond immediately, possibly with a mini-lecture on the topic. At this time, expert explanation has much greater value in supporting learning, as the students' brains are primed to connect the explanation with their personal understanding.
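The feedback loop described above can be sketched as simple decision logic: tally the votes and choose a response. The thresholds below are illustrative assumptions of ours, not prescriptions from the article:

```python
from collections import Counter

def next_step(votes, correct, low=0.3, high=0.7):
    """Sketch of an instructor's decision after a PI vote.

    The 30%/70% cutoffs are assumptions for illustration:
    - mostly correct: brief classwide wrap-up, then move on
    - mostly wrong: the concept needs a mini-lecture
    - mixed: fertile ground for peer discussion and a revote
    """
    frac_correct = Counter(votes)[correct] / len(votes)
    if frac_correct >= high:
        return "wrap-up"
    if frac_correct <= low:
        return "mini-lecture"
    return "peer-discussion"

print(next_step(list("ABBBABBB"), correct="B"))  # prints "wrap-up"
```

A mixed vote (say, half the class on each of two answers) signals exactly the situation where peer explanation pays off most, while a mostly wrong vote signals that students lack the raw material for a productive discussion and expert explanation is needed first.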
Peer Instruction has been used successfully across CS subjects, from lower-level CS0 and CS1 classes through to advanced-level computer architecture and theory of computation.5,6 Computing students say they find Peer Instruction valuable for their learning; in our experience it is common for approximately 90% of a computing class to recommend that other instructors adopt PI. Quotes such as the following are common among students reflecting on their use of PI in a computing class: "Discussing my understanding in comparison to my seatmates helped me gain a larger understanding of the material by approaching it from different perspectives."
Peer Instruction is an active-learning teaching method that can be employed to increase student learning. It is an especially important method for the computing education community to embrace because of its emphasis on development of deep understanding of the subject, because of its relative ease of adoption within the standard educational framework, and because of its applicability across a wide spectrum of courses.
Even more importantly, the Peer Instruction process helps instructors learn what it is that is difficult about learning computing. Compared to other fields, we do not have a deep literature on misconceptions and challenges in learning computing. Peer Instruction takes the guesswork out of figuring out what your students are struggling with. You do not have to try to read their minds. You can count their votes and listen in on their discussions.
3. Herman, G., Louie, M., and Zilles, C. Creating the digital logic concept inventory. In Proceedings of the Forty-First ACM Technical Symposium on Computer Science Education (Milwaukee, WI, March 10–13, 2010), 102–106.
5. Porter, L., Bailey-Lee, C., Simon, B., Cutts, Q., and Zingaro, D. Experience report: A multi-classroom report on the value of peer instruction. In Proceedings of the 16th Annual Joint Conference on Innovation and Technology in Computer Science Education (ITiCSE '11), 2011.
6. Porter, L., Bailey-Lee, C., Simon, B., and Zingaro, D. Peer Instruction: Do students really learn from peer discussion in computing? In Proceedings of the 7th International Computing Education Research Conference, Aug. 2011.
7. Tew, A.E. and Guzdial, M. Developing a validated assessment of fundamental CS1 concepts. In Proceedings of the 41st SIGCSE Technical Symposium on Computer Science Education (Milwaukee, WI, 2010), 97–101.
Copyright © 2012 ACM, Inc.