A high school teacher sent me a note asking, "Can you send me cites about theory-heavy/programming-lite approaches and their failures?" Do students really have to program in introductory courses in order to learn computing well?
It’s a great question. Unfortunately, I don’t have a great answer. As far as I know, there hasn’t been a good comparative study of programming vs. non-programming CS1s (introductory courses). However, there is evidence that points toward the need for programming.
The first piece of evidence is general community consensus. The Computing Curricula 2001 (CC2001) report from the ACM and IEEE suggested and promoted an "algorithms-first" CS1. The newly released CS2008 curriculum volume from the ACM and IEEE discourages that approach. Of the approaches described in CC2001, "the algorithms-first approach – in which ‘basic concepts of computer science are introduced using pseudo-code rather than an executable language’ – seems to have received less favor." The sense of the community, as represented by the ACM and IEEE task force that assembled the volume, is that "algorithms-first" has not worked.
The second piece of evidence comes from the American Association for the Advancement of Science's excellent report "Science for All Americans." This report presents not just what science should be taught (aimed mostly at K-12) but also how it should be taught. Let’s consider the possibility that Computer Science is also a science and should be taught similarly. Chapter 13, which discusses approaches to teaching and learning, makes the argument:
In science, conclusions and the methods that lead to them are tightly coupled. The nature of inquiry depends on what is being investigated, and what is learned depends on the methods used. Science teaching that attempts solely to impart to students the accumulated knowledge of a field leads to very little understanding and certainly not to the development of intellectual independence and facility.
In computer science, the way that we investigate computation is with programming. We don’t want to teach computing as a pile of "accumulated knowledge"; we know that approach doesn’t lead to learning. We need to teach computation through exploration and investigation, which implies programming.
The best research study I know of that addresses this question is Chris Hundhausen’s study of algorithm visualization in CS1. He had two groups of students. One group created a visualization of an algorithm using art supplies; these students were learning theory and describing the process without programming. The second group used a visualization system, ALVIS; these students were learning theory and encoding their understanding in order to create a presentation. As Chris says in his paper, "In fact, our findings suggest that ALVIS actually had a key advantage over art supplies: namely, it focused discussions more intently on algorithm details, leading to the collaborative identification and repair of semantic errors." With no computer system, it’s all too easy to say "And magic happens here," and to rely on an intuitive sense of what we think ought to happen. Having to encode a solution in something a computer can execute forces an exactness such that errors can be identified.
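To make that concrete, here is a small illustration of my own (not from Hundhausen’s study or the ALVIS system): a binary search written in Python. Every commented line is a decision that a verbal or art-supply description can leave as "magic," but that executable code forces you to make, and that a quick run of the tests will check.

```python
# A hypothetical illustration: encoding binary search forces decisions
# that pseudocode can leave vague.

def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1          # exact bounds, not "the whole list"
    while low <= high:                     # exact stopping rule, not "until found"
        mid = (low + high) // 2            # exact midpoint, not "the middle"
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1                  # the range must shrink, or we loop forever
        else:
            high = mid - 1
    return -1

# Running it immediately checks the encoding against reality:
assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) == -1
assert binary_search([], 4) == -1          # the empty-list case descriptions often skip
```

A storyboard or a verbal walkthrough can gloss over the midpoint arithmetic or the empty-list case entirely; the interpreter will not.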
The idea isn’t that programming creates barriers or makes learning harder. Rather, using the computer makes it easier to learn it right. Without a computer, it’s easier to learn it wrong: you learn computing as a set of accumulated knowledge (as described in the AAAS report) or with semantic errors (as with the art-supply visualizations). If you don’t use programming in CS1, you avoid tedious detail at the possible (even likely) cost of real learning.