Computing education is suddenly everywhere. Numerous U.S. states and many countries around the world are creating requirements and implementing programs to bring computing to their students. Tech innovators have jumped in, too, sometimes to "disrupt" the educational system. Opinion pieces stoke parental anxiety that children are not being properly prepared for the future; products claim to allay these anxieties (while perhaps simultaneously amplifying them). Academics, looking to address the Broader Impact criteria of funding agencies, are eager to burnish their credentials by giving guest lectures at local schools. In certain neighborhoods, toy stores feel compelled to stock a few products that claim to enhance "computational thinking."
Unfortunately, a lot of current discussion about curricula is caught up in channels (including in-school versus after-school courses), media (such as blended versus online learning), and content (for example, Java versus Python). As computer scientists, we should recognize this phenomenon: a focus on implementation before specification. Instead, in sober moments, we should step back and ask what the end goals are for this flurry of activity. Is a little exposure good for everyone? How many Hours of Code will prepare a child for a digital future? If a few requirements are good, are more requirements better? In short: What does it mean for computing education to succeed?