Last week was the 2016 International Computing Education Research (ICER) conference in Melbourne, Australia. The papers are freely available in the ACM Digital Library for another week, so I recommend grabbing them from the Proceedings Table of Contents soon. The conference was terrific, as is usual for ICER, but you don't have to take my word for it. Two of the papers were meta-papers which studied the ICER community itself. They found that the community has healthy levels of newcomers and collaboration and features methodological rigor and strong theoretical foundations.
In this post, I report on three of the papers: the two paper award recipients, and one of my (totally subjective) favorite papers at the conference. ICER has two paper awards: a "people's choice" John Henry award for innovation and new directions, voted on by attendees, and a "Chairs" award selected by the conference chairs based on the paper reviews.
The people's choice award was won by Elizabeth Patitsas (with Jesse Berlin, Michelle Craig, and Steve Easterbrook) from the University of Toronto for the paper Evidence that Computer Science Grades are not Bimodal. Elizabeth's paper had two studies in it and a provocative discussion section.
In general, many CS teachers believe that grades in CS classes are bimodal — some students have innate ability and just "get it." Others don't. There are even research papers presenting explanations for the bimodality effect. But is the effect really there?
In her first study, Elizabeth did a large analysis of 18 years' worth of grade data from one large university CS department, and found less than 5% of the courses had signs of non-normality. Her second study was a "deception study" (which she debriefed here). She asked 60 CS teachers (mostly from the SIGCSE members list) to judge if a number of grade distributions were bimodal. The reality was that none of them were. She also asked teachers if they agreed with the statements "Some students are innately predisposed to do better at CS than others" and "Nearly everyone is capable of succeeding in computer science if they work at it." Both of these statements were strongly correlated with "seeing bimodality" in the distributions, the first positively and the second negatively. If teachers believed in a "Geek Gene" (that some students are innately gifted at programming), they saw bimodality, even if it wasn't there.
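As a rough illustration of the kind of distributional check involved (this is my own sketch, not Elizabeth's actual analysis), one simple screen for bimodality is Sarle's bimodality coefficient, which combines sample skewness and kurtosis; values above about 0.555 (the coefficient of a uniform distribution) are taken as a hint of bimodality. The grade data below is synthetic, invented purely for the demonstration.

```python
import random
import statistics

def bimodality_coefficient(xs):
    """Sarle's bimodality coefficient: (skew^2 + 1) divided by a
    finite-sample-corrected kurtosis. Values above ~0.555 suggest
    the distribution may be bimodal."""
    n = len(xs)
    mean = statistics.fmean(xs)
    sd = statistics.stdev(xs)
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n
    excess_kurt = sum(((x - mean) / sd) ** 4 for x in xs) / n - 3
    correction = 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))
    return (skew ** 2 + 1) / (excess_kurt + correction)

random.seed(0)
# Synthetic "normal-looking" grades, clipped to the 0-100 range.
normal_grades = [min(100, max(0, random.gauss(72, 12))) for _ in range(200)]
# Synthetic bimodal grades: two well-separated clusters.
bimodal_grades = [random.gauss(50, 5) if random.random() < 0.5
                  else random.gauss(90, 5) for _ in range(200)]

print(bimodality_coefficient(normal_grades))   # typically well below 0.555
print(bimodality_coefficient(bimodal_grades))  # typically well above 0.555
```

In practice, Elizabeth's point stands independently of the particular test: under most such checks, very few real CS grade distributions look anything other than unimodal.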
The provocative explanation is that CS teachers see bimodality because they don't teach well. Elizabeth used a social defense theory to explain "seeing-bimodality." If teachers think that they're good at teaching, but the students aren't doing well, it's natural to think that it's the students' fault. Elizabeth is suggesting that our over-confidence in CS teaching leads to seeing bimodality when there is none.
The Chairs' award was particularly exciting because it was won by a team led from a School of Education. Computing education research has been dominated by CS researchers, and it's terrific to see the Education side playing a more prominent role. The paper was Learning to Program: Gender Differences and Interactive Effects of Students' Motivation, Goals, and Self-Efficacy on Performance by Alex Lishinski, Aman Yadav, Jon Good, and Richard Enbody from Michigan State University. Self-efficacy is a person's own rating of their ability to succeed or perform in a particular discipline. We knew from prior work that women tend to have low self-efficacy ratings at the start of CS classes, while men have high self-efficacy ratings. What hadn't been studied previously was how these change with feedback. As students get grades back on homework and exams, what changes? Alex showed that women adapt their self-efficacy ratings much more quickly than men: their scores rise dramatically. It takes a long time (more feedback) for men to downgrade their over-estimated skills to match their performance.
One of my favorite papers at ICER 2016 was Some Trouble with Transparency: An Analysis of Student Errors with Object-oriented Python by Craig S. Miller and Amber Settle. Anyone who writes object-oriented programs in Python knows that methods in Python classes must explicitly declare a self parameter. Craig and Amber call that "transparency." References to the receiving object are available in Java (for example) methods, too, but not as an explicit parameter. Is that a problem? Amber presented evidence that it really is. In a study of object-oriented programming in Python (where students were asked to code a particular method for a given class), some errors (like returning too early from a method, or forgetting to loop through all items in a list) occurred relatively frequently: 19% and 31%, respectively. The self-related errors were far more common: 53% involved missing the self parameter in the method declaration, 39% involved missing self in an object reference, and 39% involved using self incorrectly. That's a cost of using Python for novice students that had not been previously measured.
There were lots of other great papers that I'm not going to discuss here. I recommend Andy Ko's excellent ICER 2016 trip report for another take on the conference. You can also see the Twitter live feed from hashtag #ICER2016.