Opinion
Letters to the Editor

Learn to Live with Academic Rankings


No one likes being reduced to a number. For example, there is much more to my financial picture than my credit score alone. There is even scholarly work on weaknesses in the system used to compute this score. Everyone may agree the number is far from perfect, yet it is used to make decisions that matter to me, as Moshe Y. Vardi discussed in his Editor’s Letter "Academic Rankings Considered Harmful!" (Sept. 2016). So I care what my credit score is. Many of us may even have made financial decisions taking into account their potential impact on our credit scores.

As an academic, I also produce such numbers. I assign grades to my students. I strive to have the assigned grade accurately reflect a student’s grasp of the material in my course. But I know this is imperfect. At best, the grade reflects the student’s knowledge today. When a prospective employer looks at it two years later, it is possible an A student had crammed for the exam and has since completely forgotten the material, while a B student deepened his or her understanding substantially through a subsequent internship. The employer must learn to get past the grade to develop a richer understanding of the student’s strengths and weaknesses.

As an academic, I am also a consumer of these numbers. Most universities, including mine, look at standardized test scores. No one suggests they predict success perfectly. But there is at least some correlation—enough that they are used, often as an initial filter. Surely there are students who could have done very well if admitted but were not considered seriously because they did not make the initial cutoff in test scores. A small handful of U.S. colleges and universities have recently stopped considering standardized test scores for undergraduate admission. I admire their courage. Most others have not followed suit because it takes a tremendous amount of work to get behind the numbers. Even if better decisions might result, the process simply requires too much effort.

As an academic, I appreciate the rich diversity of attributes that characterize my department, as well as peer departments at other universities. I know how unreasonable it is to reduce it all to a single number. But I also know there are prospective students, as well as their parents and others, who find a number useful. I encourage them to consider an array of factors when I am trying to recruit them to choose Michigan. But I cannot reasonably ask them not to look at the number. So it behooves me to do what I can to make it as good as it can be, and to work toward a system that produces numbers that are as fair as they can be. I agree it is not possible to come anywhere close to perfection, but the less bad we can make the numbers, the better off we all will be.

H.V. Jagadish, Ann Arbor, MI


Author Responds:

My Editor’s Letter did not question the need for quantitative evaluation of academic programs. I presume, however, that Dr. Jagadish assigns grades to his students rather than merely ranking them. These students then graduate with a transcript, which reports all their grades, rather than just their class rank. He argues that we should learn to live with numbers (I agree) but does not address any of the weaknesses of academic rankings.

Moshe Y. Vardi, Editor-in-Chief


More Negative Consequences of Academic Rankings

I could not agree more with Moshe Y. Vardi’s Editor’s Letter (Sept. 2016). The ranking systems, whether U.S.-focused (such as U.S. News and World Report) or global (such as the Times Higher Education World University Reputation Ranking, the QS University Ranking, and the Academic Ranking of World Universities, or ARWU, compiled by Shanghai Jiaotong University in Shanghai, China), have all acquired lives of their own in recent years. These rankings have attracted the attention of governments and funding bodies and are widely reported in the media. Many universities worldwide have reacted by establishing staff units to provide the diverse data requested by the ranking agencies and by boosting their communications and public relations activities. There is also evidence that these league tables are beginning to (adversely) influence resource-allocation and hiring decisions despite their glaring inadequacies and limitations.

I have been asked to serve on the panels of two of the ranking systems but had to abandon my attempts to complete the questionnaires because I simply did not have sufficient information to give honest answers to the kinds of difficult, comparative questions asked about such a large number of universities. The agencies seldom report how many "experts" they actually surveyed or their survey-response rates. The relatively "objective" ARWU ranking uses measures like the number of alumni and staff winning Nobel Prizes and Fields Medals, the number of highly cited researchers selected by Thomson Reuters, the number of articles published in the journals Nature and Science, the number of articles indexed in the Science Citation Index and Social Science Citation Index, and the "per capita performance" of a university. It is not at all clear to what extent these six narrowly focused indicators can capture the overall performance of modern universities, which tend to be large, complex, loosely coupled organizations. Moreover, the use of measures like the number of highly cited researchers named by Thomson Reuters/ISI can exacerbate known citation malpractices (such as excessive self-citation, citation rings, and journal-citation stacking). As Vardi noted, the critical role of commercial entities in the rankings (notably Times, QS, USNWR, and Thomson Reuters) is also a concern.

Joseph G. Davis, Sydney, Australia


Acknowledge Crowdworkers in Crowdwork Research

Crowdwork promises to help integrate human and computational processes while also providing a source of paid work for those who might otherwise be excluded from the global economy. Daniel W. Barowy et al.’s Research Highlight "AutoMan: A Platform for Integrating Human-Based and Digital Computation" (June 2016) explored a programming language called AutoMan designed to integrate human workers recruited through crowdwork markets like Amazon Mechanical Turk alongside conventional computing resources. The language breaks new ground in how to automate the complicated work of scheduling, pricing, and managing crowdwork.

While the attempt to automate this managerial responsibility is clearly of value, we were dismayed by the authors’ lack of concern for those who carry out the actual work of crowdwork. Humans and computers are not interchangeable. Minimizing wages is quite different from minimizing execution time. For example, the AutoMan language is designed to minimize crowdwork requesters’ costs by iteratively running rounds of recruitment, with tasks offered at increasing wages. However, what counts as optimal looks very different from the workers’ perspective than from the requesters’. The process is clearly not optimized for economic fairness. Systems that minimize payments could exert downward economic pressure on crowdworker wages, failing to account for the complexities of, say, Mechanical Turk as a global labor market.
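To illustrate the mechanism being criticized, here is a minimal, hypothetical sketch of an iterative-pricing loop of the kind described above; the market calls (post_tasks, collect_answers) and the acceptance model are placeholders, not AutoMan’s actual API or behavior.

```python
import random

def post_tasks(task, wage, count):
    """Placeholder for a market call that posts `count` copies of a task."""
    return {"task": task, "wage": wage, "count": count}

def collect_answers(posting):
    """Placeholder: each offered slot is accepted with probability rising in wage."""
    accept_prob = min(1.0, posting["wage"] / 2.0)   # toy acceptance model
    return [f"answer@{posting['wage']:.2f}"
            for _ in range(posting["count"])
            if random.random() < accept_prob]

def recruit_with_rising_wage(task, base_wage, needed,
                             raise_factor=1.25, max_rounds=6):
    """Re-post a task at gradually rising wages until enough workers accept.

    Each round starts from the lowest wage not yet accepted, which keeps the
    requester's spend low -- the very dynamic the letter argues pushes
    worker earnings toward the floor.
    """
    wage = base_wage
    answers = []
    for _ in range(max_rounds):
        posting = post_tasks(task, wage, needed - len(answers))
        answers += collect_answers(posting)
        if len(answers) >= needed:
            break               # enough agreement reached; no further raises
        wage *= raise_factor    # too few takers at this price; raise the offer
    return answers, wage

print(recruit_with_rising_wage("label this image", base_wage=0.05, needed=3))
```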

Recent research published in the proceedings of the Computer-Human Interaction and Computer-Supported Cooperative Work conferences by Lilly Irani, David Martin, Jacki O’Neill, Mary L. Gray, Aniket Kittur, and others shows how crowdworkers are not interchangeable cogs in a machine but real humans, many dependent on crowdwork to make ends meet. Designing for workers as active, intelligent partners in the functioning of crowdwork systems has great potential. Two examples where researchers have collaborated with crowdworkers are the Turkopticon system, as introduced by Irani and Silberman,1 which allows crowdworkers to review crowdwork requesters, and Dynamo, as presented by Salehi et al.,2 which supports discussion and collective action among crowdworkers. Both projects demonstrate how crowdworkers can be treated as active partners in improving the various crowdwork marketplaces.

We hope future coverage of crowdwork in Communications will include research incorporating the perspective of workers in the design of such systems. This will help counteract the risk of creating programming languages that could actively, even if unintentionally, accentuate inequality and poverty. At a time when technology increasingly influences political debate, social responsibility is more than ever an invaluable aspect of computer science.

Barry Brown and Airi Lampinen,
Stockholm, Sweden


Authors Respond:

We share the concerns Brown and Lampinen raise about crowdworker rights. In fact, AutoMan, by design, automatically addresses four of the five issues raised by workers, as described by Irani and Silberman in the letter’s Reference 1: AutoMan never arbitrarily rejects work; pays workers as soon as the work is completed; pays workers the U.S. minimum wage by default; and automatically raises pay for tasks until enough workers agree to take them. In our experience, workers appreciate AutoMan, consistently rating AutoMan-generated tasks highly on Turkopticon, the requester-reputation site.

Daniel W. Barowy, Charles Curtsinger,
Emery D. Berger, and
Andrew McGregor, Amherst, MA


Computational Biology Is Parallel

Bonnie Berger et al.’s article "Computational Biology in the 21st Century: Scaling with Compressive Algorithms" (Aug. 2016) described how modern biology and medical research benefit from intensive use of computing. Microbiology has become data rich; for example, the volume of sequence data (such as strings of DNA and RNA bases and protein sequences) has grown exponentially, particularly since the initial sequencing of the human genome at the start of the third millennium. Berger et al. pointed out that the biologist’s growth exponent is greater even than that of Moore’s Law. This growth has led to an increasing fraction of medical research funding being directed to data-rich ‘omics. But the gap between what Moore’s Law makes affordable and data that grows with a larger exponent is itself, in the long term, exponential. Berger et al. proposed smarter algorithms to plug the gap.
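To make the growth argument concrete, here is a back-of-the-envelope comparison; the doubling times are purely illustrative assumptions, not figures taken from Berger et al.

```latex
% Suppose sequence data doubles every $T_d$ months while affordable compute
% (Moore's Law) doubles every $T_m$ months, with $T_d < T_m$ (assumption).
% After $t$ months the shortfall is
\[
  \frac{\mathrm{data}(t)}{\mathrm{compute}(t)}
  = \frac{2^{t/T_d}}{2^{t/T_m}}
  = 2^{\,t\left(\frac{1}{T_d} - \frac{1}{T_m}\right)},
\]
% which itself grows exponentially in $t$; hence the case for smarter
% (for example, compressive) algorithms rather than hardware alone.
```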

The article described bioinformatics’ ready adoption of cloud computing, but the true parallel nature of much of e-biology went unstated; for example, Illumina next-generation sequencers can generate more than one billion short DNA strings, each of which can be processed independently in parallel. Top-end graphics hardware (GPUs) already contains several thousand processing cores and delivers considerably more raw processing power than even multi-core CPUs. So it is no wonder that bioinformatics has turned to GPUs.
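As a minimal sketch of that embarrassingly parallel structure (the per-read function and the sample reads below are hypothetical placeholders, not any particular pipeline), each read can be handled with no coordination between workers:

```python
# Minimal sketch: short reads are independent, so a batch can be spread
# across CPU cores (or, analogously, across thousands of GPU threads).
from multiprocessing import Pool

def gc_content(read: str) -> float:
    """Toy per-read computation standing in for alignment or filtering."""
    return (read.count("G") + read.count("C")) / len(read)

def process_reads(reads):
    # No read depends on any other, so a plain data-parallel map suffices.
    with Pool() as pool:
        return pool.map(gc_content, reads)

if __name__ == "__main__":
    reads = ["GATTACA", "CCGGTA", "ATATATGC"]   # placeholder data
    print(process_reads(reads))
```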

Berger et al. did mention BWA’s implementation of the Burrows-Wheeler compression transform. BarraCUDA is an established port of BWA to Nvidia’s hardware, optimized for modern GPUs; results were presented at the 2015 ACM Genetic and Evolutionary Computation Conference.3 Also, Nvidia maintains a list of the many bioinformatics applications and tools that run on its parallel hardware.
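For readers unfamiliar with the transform these tools build on, the following naive Python version illustrates the idea; production aligners such as BWA and BarraCUDA build the index far more efficiently and combine it with compressed data structures.

```python
def bwt(text: str, sentinel: str = "$") -> str:
    """Naive Burrows-Wheeler transform: sort all rotations, keep the last column.

    Grouping similar characters together is what makes compressed full-text
    indexes practical; this O(n^2 log n) version is for illustration only.
    """
    s = text + sentinel                    # unique end-of-string marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)

print(bwt("GATTACA"))                      # prints ACTGA$TA
```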

After CPU clocks maxed out at about 3 GHz more than 10 years ago, Moore’s Law pushed 21st-century computing to be parallel. GPUs and GPU-style many-core hardware are today at the center of the leading general-purpose parallel computing architectures. Much of microbiology data processing is inherently parallel. Computational biology and GPUs are a good match and set to continue to grow together.

W.B. Langdon, London, U.K.

References

    1. Irani, L.C. and Silberman, M.S. Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Paris, France, Apr. 27–May 2). ACM Press, New York, 2013, 611–620.

    2. Salehi, N., Irani, L.C., Bernstein, M.S. et al. We are Dynamo: Overcoming stalling and friction in collective action for crowd workers. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Seoul, Republic of Korea, Apr. 18–23). ACM Press, New York, 2015, 1621–1630.

    3. Langdon, W.B. et al. Improving CUDA DNA analysis software with genetic programming. In Proceedings of the 2015 ACM Genetic and Evolutionary Computation Conference (Madrid, Spain, July 11–15). ACM Press, New York, 2015, 1063–1070.

    Communications welcomes your opinion. To submit a Letter to the Editor, please limit yourself to 500 words or less, and send to letters@cacm.acm.org.
