BLOG@CACM

Protecting the Power Grid, and Finding Bias in Student Evaluations

The Communications Web site, http://cacm.acm.org, features more than a dozen bloggers in the BLOG@CACM community. In each issue of Communications, we'll publish selected posts or excerpts.

Follow us on Twitter at http://twitter.com/blogCACM

http://cacm.acm.org/blogs/blog-cacm

John Arquilla considers the growth of cyberattacks on infrastructure, while Mark Guzdial wonders how beginning computer science students can possibly evaluate their teachers fairly.
John Arquilla: The Rise of Strategic Cyberwar?

http://bit.ly/2htUUe5 September 25, 2017

Over the past few years, a troubling hacking trend has emerged, characterized by serious intrusions into electric power infrastructures. Most of this activity has been system-mapping across several countries, ranging from the U.S. to Ireland, and on to Switzerland and Turkey. There is evidence of actual attacks, notably in Ukraine’s Ivano-Frankivsk region in December 2015, when power was knocked out. The prime suspects in these intrusions appear to be Russia-friendly hacker groups known variously as "Dragonfly" and "Energetic Bear," among other names.

The attention to power grids seems to have emerged hand in hand with growing hacker interest in the broader realm of automated system controls, commonly called SCADA (supervisory control and data acquisition), whose uses are expanding across the spectrum of activities essential to a modern society's smooth functioning. This focus on mapping infrastructure, and occasionally attacking it, may herald the coming rise of strategic cyberwarfare as a means of striking at an adversary in costly, disruptive ways without having to defeat its military forces. Further, the possibility that such attacks can be launched anonymously, or at least "deniably" via proxies, may reduce the risk of retaliatory escalation.

Cyberwar seems to be following a path similar to that followed during the rise of air warfare a century ago, when military thinkers like the American Billy Mitchell and the Italian Giulio Douhet were holding forth with their views about the independent, war-winning potential of strategic attack from the air. Douhet went so far as to encourage the use of chemical weapons in aerial bombing of population centers, to hasten the psychological breaking-point he was sure would follow. While Douhet’s call for chemical attack from the air was almost completely rejected worldwide, there was still broad acceptance of his notion that civilian populations would not bear up under bombardment.

Strategic bombing campaigns, from World War II to Korea, Vietnam, and beyond, have been launched repeatedly—with very few successes, per Robert Pape's study "Bombing to Win" (http://bit.ly/2iU3zLH). NATO's successful 78-day air war against Serbia over Kosovo in 1999 may be the lone clear exception that proves the rule about how difficult it is to win by means of aerial bombardment.

"Shock and awe" from the air just does not work. On the other hand, the wars of the past 75-plus years have repeatedly seen the close air support of military and naval forces by attack aircraft fundamentally transform and dominate warfare on land and at sea.

What if cyberwar follows a similar path? Recent indicators of hacker interest in infrastructure may be a sign cyber attack is being viewed primarily in strategic terms—that is, as a way of inflicting material and psychological costs on the enemy—instead of as a means of improving the performance of forces in battle. In World War II, Germany and Japan were the first to focus, respectively, on the tremendous combat value of close air support on land and of carrier operations at sea. Their opponents were slow off the mark, and the outcome of the war hung in the balance for years.

If interest in mapping power infrastructures is a sign that cyber is viewed as a form of strategic attack, then attackers are pursuing the same wrongheaded path that misled so many about which aspect of air power to emphasize. If the widespread destruction of strategic aerial bombardment has seldom worked, "mass disruption" from cyber attacks on infrastructure is even less likely to achieve the desired psychological effects. Such attacks will instead kindle great rage among those affected, leading to conflict escalation. In that larger conflict, the side that has learned to use cyber at the tactical level will prevail.

It may seem reassuring that the apparently Russia-friendly hacker groups are focusing on infrastructure targets, the implication being an emphasis on developing strategic, rather than tactical, cyberwar capabilities. But this is not an either-or situation. Aggressors might be cultivating battlefield cyber capabilities as well. How might one tell? One clue is that infrastructure probes and attacks to date have generally not used zero-day exploits; almost all have been simple, employing watering-hole techniques (lying in wait at frequented sites), man-in-the-middle attacks (rerouting individuals' Internet traffic), and other basic methods. The world's cyber aggressors may have a whole other gear we have not seen, one that will be revealed only in a shooting war.

It is this latter sort of militarized conflict that David Ronfeldt and I envisioned when we wrote "Cyberwar Is Coming!" (http://bit.ly/2AtTlbt) a quarter-century ago. It is in its effects on the course of battles—on land, at sea, in the air, and in outer space—that cyber will show its true potential to transform warfare in the 21st century.

Cyberwar is not simply a lineal descendant of strategic air power; rather, it is the next face of battle.


Mark Guzdial: Evaluating Computer Science Undergraduate Teaching: Why Student Evaluations Are Likely Biased

http://bit.ly/2AwBT3H April 23, 2017

Our campus has been having discussions about student evaluations of teaching. Our Center for Teaching and Learning circulated a copy of an article by Carl Wieman from Change magazine, "A Better Way to Evaluate Undergraduate Teaching" (http://bit.ly/2ipatVy).

Wieman argues we need a better way to evaluate teaching; student evaluations do not correlate with desirable outcomes (as described at http://bit.ly/2iXrn17) and are biased.

"To put this in more concrete terms, the data indicate that it would be nearly impossible for a physically unattractive female instructor teaching a large required introductory physics course to receive as high an evaluation as that of an attractive male instructor teaching a small fourth-year elective course for physics majors, regardless of how well either teaches."

Wieman suggests a Teaching Practices Inventory (http://bit.ly/2ioK5Le) as a better way to evaluate undergraduate teaching. Using practices that are evidence-based is likely to lead to better outcomes. This hasn’t been an easy sell, as Wieman discovered at the White House Office of Science and Technology Policy (http://bit.ly/2B1giUo). It has not gone over well on my campus, either.

Scholars like Nira Hativa argue student evaluations are an effective way to recognize good teaching (see http://amzn.to/2ingr94). Student evaluation of teaching is also easy, and it is the current standard practice, which is difficult to change. Wieman's Teaching Practices Inventory has been called "radical" on my campus.

I am not a scholar of studies about student evaluation of teaching. I study computing education. From what I know about computer science and unconscious bias, the quote from Wieman is likely just as true in computer science.

Unconscious bias is a factor in women's underrepresentation in STEM generally, and in computer science specifically. The idea is that we all have biases that influence how we make decisions. Unconsciously, many of us (at least in the Western world) are biased to think computer scientists are mostly male. Unless we consciously recognize our biases, we are likely to express them in our decisions. A 2013 multi-institutional study (http://bit.ly/2jUJj9p) found undergraduates see computer scientists as male. That's a source of bias.

Women in computer science (CS) report biases that keep them from succeeding in the field (http://bit.ly/2BH6N9P). Studies show female science students are more likely to be interrupted and less likely to receive instructors' attention (http://for.tn/2A7ZIlu). The National Center for Women and IT (NCWIT) has developed a video titled "Unconscious bias and why it matters for women and tech" (http://bit.ly/2zPxyHW). A recent report from Google and researchers at Stanford University (http://bit.ly/2A8WiPL) presents evidence that unconscious bias influences teachers' decisions in CS classrooms; the authors recommend professional development to help teachers reduce their expression of bias. Google is funding the development of a simulation to help teachers address unconscious bias (http://bit.ly/2jhpEkp).

The tech industry recognizes unconscious bias is a significant problem. Microsoft is making its unconscious bias training available worldwide (http://bit.ly/2AsUOyu). Google is asking 60,000 employees to train to recognize unconscious bias (http://read.bi/2kp144m).

So here’s the question: If unconscious bias is pervasive in computing, and training is our best remedy, how can untrained students evaluate their CS teachers without bias?

Computing Research News raised concerns about bias in student evaluations of CS teaching as far back as 2003 (http://bit.ly/2koz7tk). A recent study found students are biased against female instructors (http://bit.ly/2AVRdJZ). There is also evidence that online students evaluate instructors more highly if they believe the instructor is male (http://bit.ly/2AZuk95).

I have not seen a study showing bias in CS students' evaluations of their teachers, but the evidence that it is there is pretty overwhelming. How could the students avoid it? We know that, without training, students evaluate teachers with bias. We have found unconscious bias across computing. How could undergraduates evaluate a female CS instructor fairly? What might lead them to evaluate teaching without gender bias?

We have too few women in computer science. We need to recruit more female faculty in CS and retain them. We need to encourage and reward good teaching. Relying on biased student evaluations as our only measure of undergraduate teaching quality helps with neither need.

