BLOG@CACM
Education

Trip Report on the 2011 International Computing Education Research Workshop

Posted by Mark Guzdial, Professor, Georgia Institute of Technology

The International Computing Education Research (ICER) conference for 2011 was held August 8-9 at Rhode Island College in Providence, RI. ICER is one of my favorite conferences, reliably full of fascinating results. This was the seventh ICER, sponsored by ACM SIGCSE.

Colleen Lewis of Berkeley talked about “Deciding to Major in Computer Science: A grounded theory of students’ self-assessment of ability,” which intrigued me much more for the “self-assessment” part than for the “deciding” part. Colleen told us that a common theme in her interviews with students (at Berkeley and at U. Washington-Seattle) was the tension between a growth mindset and a fixed mindset (drawing on Carol Dweck’s work). Many students decide early on that they’re bad at computing and can’t get better (a fixed mindset), i.e., that they don’t have the “Geek gene.” Those students won’t choose CS, of course, and for such a disappointing reason. Dweck found that students in many disciplines interpret failure in some area as a sign of their own fixed inability, rather than as an outcome that more work (growth) might change.

My student, Mike Hewner, presented work from his dissertation on how CS students choose specializations within their program. He’s using grounded theory, a demanding social science method, and he already has several interesting insights. First, students don’t “begin with the end in mind”: the students he interviewed had little idea what job they wanted, and those who did didn’t really know what the job entailed. Second, students don’t think the choice of specialization is all that important; they figure that they’re at a good school and they trust the faculty, so whatever choice they make will turn out fine. Finally, an engaging, fun class can dramatically influence students’ perception of a field. A “fun” theory class can convince students that they like theory; their opinion of the subject is easily swayed by the qualities of the class and the teacher. “Why are you in robotics (even though it doesn’t have much to do with what you say you want to do for your job)?” “Well, I really liked the robots we used in CS101…”

The best paper award (voted on by the participants, called the “Fool’s Award” at ICER) was won by Sally Fincher, Josh Tenenberg, and Anthony Robins for their paper “Research Design: Necessary Bricolage.” Sally was reflecting on how we go about gathering information about our students’ practices. She said that we rely far too much on semi-structured interviews, and that we should think about combining other methods and instruments to gain more insight. She showed examples of some of her research instruments, which were really wonderful (so wonderful that I plan to steal them as early as this semester!). One of the methods she and her colleagues used was to ask students to keep diaries of their work (not unusual), but also to take digital pictures of where they worked (quite unusual). The photos were surprisingly informative. In the set below, the upper-left picture is a bus seat, and the lower-left is in a lab. Students don’t always work in a computing-rich setting, and they still seek out social situations for their work.

[Photo collage: students’ pictures of the places where they work.]

I enjoyed the papers by Cynthia Bailey-Lee, Beth Simon (presenting the PeerWise paper with lead author Paul Denny; Beth’s name seemed to be on every other paper this year!), and Matt Jadud, because they were all replication studies. Cynthia took a finding from biology education (on using peer instruction) and tested how it worked in CS. Beth and Matt each took earlier CS education papers and checked whether the results still held in new settings. It doesn’t matter what the bottom-line findings were. It’s so cool that our field is starting to go deep and check the work of earlier papers, to explore where results hold and where they don’t, and to develop more general understanding.

Michael Lee presented on “Personifying programming tool feedback improves novice programmers’ learning.” The authors created a programming task (moving a little graphical character around on a board), but “personified” the parser. A mistyped command might get the little character to say, sheepishly, “I’m sorry, but I really don’t know how to do that. I wish I did. I know how to do X. Is that what you would like me to do?” What the authors measured was how long students stuck with the programming tasks. A personified compiler is not nearly as scary as a terse error message, so students stick with it longer and do more.
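To make that design concrete, here is a minimal sketch of personified feedback in Python. It’s my own illustration, not code from Lee’s system; the command set and the difflib-based suggestion are hypothetical stand-ins for whatever the real tool does.

```python
# A sketch of "personified" parser feedback.
# (My illustration, not code from Lee's system.)
import difflib

# Hypothetical commands the little graphical character understands.
KNOWN_COMMANDS = ["left", "right", "up", "down", "grab"]

def standard_feedback(cmd: str) -> str:
    """The usual terse parser error."""
    return f"SyntaxError: unknown command '{cmd}'"

def personified_feedback(cmd: str) -> str:
    """The character takes the blame and offers the closest command it knows."""
    guesses = difflib.get_close_matches(cmd, KNOWN_COMMANDS, n=1)
    suggestion = (f" I know how to do '{guesses[0]}'. "
                  "Is that what you would like me to do?") if guesses else ""
    return f"I'm sorry, but I really don't know how to '{cmd}'. I wish I did.{suggestion}"

print(standard_feedback("rigth"))     # SyntaxError: unknown command 'rigth'
print(personified_feedback("rigth"))  # ...I know how to do 'right'...
```

Both messages carry the same information; the personified version just changes who seems to be at fault, and that is what matters for how long novices persist.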

Our keynote speaker for ICER 2011 was Eric Mazur, the famous Harvard physics education researcher. Mazur maintains a terrific website with his publications and talks, so the slides from his talk are available, as are the papers that provide the content for the talk. His keynote was on “The scientific approach to teaching: Research as a basis for course design.” He presented a series of results from his research at Harvard, and a couple of them changed how I think about our teaching practice in CS.

One set of findings was on physics demonstrations, where teachers make sparks and lights, balance weights, make things explode (if you’re lucky), and do all kinds of things to wake you up and make you confront your misconceptions. Do demonstrations really help? Mazur tried four conditions (rotated around the classes, so students experienced each one across a sequence of demo topics in a semester): no demo; observing a demo; making a prediction of what you thought would happen and then observing the demo; and discussing the result after predicting and observing. The results were pretty much always the same (he showed us a set from one study):

[Table: quiz performance under the four demonstration conditions.]

Observing a demo is worse than having no demo at all! Students do worse on a follow-up quiz. The problem is that you see a demo and remember it in terms of your misconceptions. A week later, you think the demo showed you what you already believed. On some of the wrong answers that students gave in Mazur’s study, they actually wrote “as shown in the demo,” when the demo showed the opposite! The students literally remember it wrong. People remember models, not facts, said Mazur. By recording a prediction, you force yourself to remember when you guessed wrong. The last row of that table holds another really interesting finding: talking about the demo afterward didn’t improve learning beyond just making the prediction. Social doesn’t help all learning.

This result has some important implications for us computing educators. When we run a program in class, we’re doing a demonstration. What do students remember of the results of that program execution? Do they even think about what they expect to see before the program executes? What are they learning from those executions? Live coding (and execution) is very important for CS education, so we need to think through what students are learning from those observations and how to make our demonstrations more productive in terms of learning.
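Here is the kind of predict-first moment I mean, as a small Python sketch of my own (not one of Mazur’s demos): put the code on the screen, have everyone commit to a prediction in writing, and only then run it.

```python
# Predict first: what does this program print?
# Write your answer down before running it.
# (My example of a predict-first demo, not Mazur's.)
a = [1, 2, 3]
b = a            # b is another name for the same list, not a copy
b.append(4)
print(a)         # prints [1, 2, 3, 4]; many novices predict [1, 2, 3]
```

The aliasing trick itself isn’t the point; the recorded prediction is what gives students a chance to notice themselves being wrong, just as in Mazur’s prediction condition.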

Mazur’s last finding was about students’ preference for clarity over confusion. Students praise teachers who give clear lectures and who reduce confusion; student evaluations of teaching reward that clarity. Students prefer not to be confused. Is that always a good thing? Mazur tried an online test on several topics, where he asked students a couple of hard questions (novel situations, things they hadn’t faced previously) and then a meta-question: “Please tell us briefly what points of the reading you found most difficult or confusing. If you did not find any part of it difficult or confusing, please tell us what parts you found most interesting.” Mazur and his colleagues then coded that last question for “confusion” or “no confusion,” and compared it to performance on the first two problems.

Confused students are far more likely to actually understand! It’s better for students to be confused, because it means that they’re trying to make sense of it all. I asked Mazur about the other direction: if a student says they know something, do they really? He said that they tried that experiment, and the answer is that students’ self-reported knowledge has no predictive power for their actual performance. Students really don’t know whether they understand something or not; their self-report is just noise.

ICER 2011 was terrific! Next year’s ICER will be September 10-12 in Auckland, NZ, and I highly recommend it!
