BLOG@CACM

Why I Don’t Recommend CSRankings.org: Know the Values You Are Ranking On

Posted by Professor Mark Guzdial

It’s the season for promotion and tenure packets, and for starting to talk about hiring priorities. Twice in one day last week, I heard people using CSRankings.org to make decisions. In one case, a candidate was praised for publishing in the top conferences, as indicated by CSRankings.org. In another, someone said that hiring a particular person might raise a department’s position in CSRankings.org because that person published in all the right places.

I don’t recommend CSRankings.org, and I discount letters that reference it.

CSRankings.org aims to be a "GOTO Ranking" of the world’s top computer science departments. It uses "Good data" that is "Open" and available to all, with a "Transparent process," and it is "Objective" because the ranking is clearly measurable and computable. I understand the claim of "objective" in the sense that the ranking is measurable. What’s more, the standards used are described clearly in the FAQ, and even the source code that does the computation is freely available. However, I argue that it is "subjective" because it is "based on or influenced by personal feelings, tastes, or opinions." It is the view of those who contribute to the source code for how to rank CS departments. If you agree with those values, CSRankings.org is a great site. I can understand why many computer scientists use it.

I disagree with the values of the site.

It is America-first. For a research area to be included in the rankings, 50 top United States (R1) institutions must have publications in the top conferences in that area in the last 10 years. While non-US institutions are included in the ranking, which venues and research areas count is determined by where US institutions publish.
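
To make concrete how much value judgment hides inside an "objective" rule, here is a minimal sketch of what an inclusion criterion like the one just described looks like once it is written down. This is illustrative Python, not CSRankings.org’s actual code; the names, thresholds, and data shapes are assumptions made for the example.

```python
# A minimal sketch of an area-inclusion rule like the one described above.
# Illustrative only; not CSRankings.org's actual code. All names and data
# shapes here are hypothetical.

MIN_US_R1_INSTITUTIONS = 50   # a value choice: only US R1 schools gate inclusion
WINDOW_YEARS = 10             # a value choice: a decade-long publication window

def area_is_included(pubs, current_year):
    """pubs: iterable of (institution, is_us_r1, year) tuples, one per
    publication in the area's top conferences."""
    recent_us_r1 = {
        institution
        for institution, is_us_r1, year in pubs
        if is_us_r1 and current_year - year < WINDOW_YEARS
    }
    return len(recent_us_r1) >= MIN_US_R1_INSTITUTIONS
```

Every contested choice lives in those two constants and in the `is_us_r1` flag. Change which institutions count, or how long the window is, and a different set of research areas becomes "real" computer science.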

It is anti-progressive. The research areas and conferences included are the ones that are valued by the top researchers in the top institutions. These are the established conferences. New areas and up-and-coming conferences have to wait 10 years to see if they shake out. It’s a conservative ranking. If you agree with the established values in computer science, then this is a reasonable choice. If you think that maybe computer science has gone wrong, then you probably don’t. I just recently read Emily Chang’s Brotopia, so I am particularly leaning toward the latter perspective these days.

It is anti-interdisciplinary. If you collaborate with scientists and engineers, the only publications that count toward your department’s ranking are the ones that appear in the most traditional CS venues. Any publications in other disciplines’ venues don’t count toward the quality and ranking of your department.

These are the opposite of my values. I value and publish in international venues like the International Computing Education Research conference (ICER), Koli Calling, and the Workshop in Primary and Secondary Computing Education (WiPSCE). At some of these venues, there might not even be 50 Americans in the room, let alone 50 R1 institutions over 10 years. I publish in my collaborators’ venues, which range from educational psychology to history education. I believe we should promote CS faculty who take risks, who explore new areas, who work with people in other disciplines and publish in those disciplines’ venues, and who engage with other parts of the world. Those people are a credit to their departments, and I personally rate more highly the departments that have such people.

Computing education is an example of a research area that is hard hit by this system. It’s new (ICER was only established in 2005). The US is not the center of computing education research. The field is inherently interdisciplinary. People who do work in computing education literally don’t count toward their department’s quality in this scheme.

As a computing application, CSRankings.org is pretty cool. It does something hard (ranking academic departments) entirely through computation. My values would be hard to include in a ranking system based entirely on computable data. But that doesn’t make me wrong.
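
For readers who want to see the shape of such a system, here is a toy, purely computational ranking: count each department’s publications in a fixed list of "top" venues and sort. This is a deliberately simplified sketch, not CSRankings.org’s actual metric; the venue names and data are invented for illustration.

```python
# A toy illustration of ranking departments entirely through computation.
# Deliberately simplified; not CSRankings.org's actual metric. Venue names
# and data are invented.

from collections import Counter

TOP_VENUES = {"Conference A", "Conference B"}  # the value-laden choice

def rank_departments(pubs):
    """pubs: iterable of (department, venue) pairs, one per publication."""
    counts = Counter(dept for dept, venue in pubs if venue in TOP_VENUES)
    return counts.most_common()  # highest count first

pubs = [
    ("Dept X", "Conference A"),
    ("Dept X", "ICER"),          # a newer, interdisciplinary venue: ignored
    ("Dept Y", "Conference A"),
    ("Dept Y", "Conference B"),
]
print(rank_departments(pubs))    # [('Dept Y', 2), ('Dept X', 1)]
```

The computation itself is perfectly objective; everything contested is packed into TOP_VENUES. That is the point: the values come first, and the code merely executes them.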

A common mistake in computer science is to take the easily computed answer as the correct answer to a hard problem. We develop a new technology and then ask, "Maybe this technology X can solve this problem Y." What we don’t ask often enough is, "Is this a good solution to that problem? What have we lost by using the technological solution?" Ranking academic departments is a hard problem. The 1988 Denning et al. report "Computing as a Discipline" defines the fundamental research question of computing as, "What can be (efficiently) automated?" Implicit in that question is the possibility that the answer is "Not everything." Ranking academic quality may not be computable in a way that addresses our values. Computable solutions are not the only way to solve hard problems. The work of Joy Buolamwini and others shows us that simply making something computable does not make it value-free or bias-free. Artifacts have politics.

I won’t discourage you from using CSRankings.org. It may completely align with your values, in which case the site is providing you a great service. But I strongly encourage you to consider your values, and to ask about the values behind any rankings you use. I find that CSRankings.org does not rank departments in accordance with my values, so I won’t recommend it.

Mark Guzdial is professor of electrical engineering and computer science in the College of Engineering, and professor of information in the School of Information, at the University of Michigan.
