Architecture and Hardware News

Always Improving Performance

Jack J. Dongarra is the recipient of the 2021 ACM A.M. Turing Award for his pioneering contributions to numerical algorithms and libraries that enabled high-performance computational software to keep pace with exponential hardware improvements for over four decades.
Figure. 2021 ACM A.M. Turing Award recipient Jack Dongarra.

As a young man, Jack Dongarra thought he would probably teach science to high school students. That was his plan when he enrolled at Chicago State College, which had become Chicago State University by the time he graduated in 1972. Over the course of his studies, he became fascinated by computers. In his senior year, physics professor Harvey Leff suggested he apply for an internship at nearby Argonne National Laboratory, where he could gain some computing experience.

There, Dongarra joined a group developing EISPACK, a software library for calculating eigenvalues, quantities in linear algebra that are central to simulations in chemistry and physics. It was a heady experience. “I wasn’t really a terrific, outstanding student,” Dongarra recalls. “I was thrown into a group of 40 or 50 people from around the country who came from top universities and I got to mix with them.” Project leader Brian Smith became his mentor. “He was very, very patient with me. I didn’t have a very extensive background in computing, and he gave me attention and guided me along.”

The experience changed his plans. After earning his degree in mathematics, he began a master’s program in computer science at the Illinois Institute of Technology. This was the beginning of a career in which he helped usher in high-performance computing by creating software libraries that allowed programs to run on various processors. It was for that work that Dongarra has been named recipient of the 2021 ACM A.M. Turing Award.

He continued to work at Argonne one day a week while in graduate school and, once he graduated, took a full-time job at the laboratory, where he continued to work on EISPACK. The software was intended to be portable, so it could run on different machines. “We sort of expect that to happen today as a matter of course,” he says, “but in those days, it wasn’t so easy to do.”

Back then, there was no standardization among computers. Today, the IEEE 754 floating-point standard defines how numbers are handled by computers, but in the 1970s, a machine from IBM would not use the same number of bits to represent a number as did a machine from Control Data Corporation, and a UNIVAC computer would be different from both. EISPACK had to be designed to work across those machines with only minor changes.
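
Portable libraries of that era handled such differences by probing the machine's arithmetic at run time rather than hard-coding vendor-specific constants. A minimal sketch of that idea in C (a toy illustration, not EISPACK's actual routine):

    #include <stdio.h>

    /* Estimate machine epsilon at run time instead of hard-coding it
       for each vendor's word size. */
    double machine_epsilon(void) {
        volatile double eps = 1.0, one_plus = 2.0;
        while (one_plus > 1.0) {   /* halve until 1 + eps rounds to 1 */
            eps /= 2.0;
            one_plus = 1.0 + eps;
        }
        return eps * 2.0;          /* smallest eps with 1 + eps > 1 */
    }

    int main(void) {
        printf("machine epsilon: %g\n", machine_epsilon());
        return 0;
    }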

Dongarra followed that project with LINPACK, a software library for linear algebra, designed to solve systems of linear equations. “What we do in LINPACK and EISPACK is really the basis for much of scientific computing,” says Cleve Moler, then a colleague on the project and a professor at the University of New Mexico (UNM), who later went on to found the computing software company MathWorks. Moler convinced Dongarra to come to New Mexico and study with him. Dongarra took a leave of absence from Argonne and moved to Albuquerque, where in 1980 he earned a Ph.D. in applied mathematics from UNM. While working on his doctorate, he also worked at Los Alamos National Laboratory, where the first Cray supercomputer had been installed, presenting computer scientists with the challenge of making algorithms run on its novel architecture. Using a test program that came to be known as the LINPACK benchmark, Dongarra discovered a timing error in the Cray that was causing it to give the wrong answer.

In 1989, Dongarra was offered a joint position at the University of Tennessee and Oak Ridge National Laboratory. He accepted, and remains there today. The move allowed him to do some teaching, and being in academia lets him be entrepreneurial, he says, in a way that a national laboratory, with its defined projects, does not.

Dongarra has been successful at what he does thanks to both his intelligence and personality, says Moler, a longtime friend. “It’s a beautiful marriage of scientific competence and a kind of humility,” Moler says. “He doesn’t have any hidden agenda. He’s not out to prove himself. He just marches on, old Jack.”

Over the course of his career, Dongarra has been involved in the creation of many libraries. LAPACK, for instance, combined LINPACK and EISPACK into a unified package. Another, BLAS, for Basic Linear Algebra Subprograms, was named by the journal Nature last year as one of 10 computer codes that transformed science. Dongarra chuckles at that designation. “I’m not sure it’s quite as they made it out, but I’m willing to take that,” he says. BLAS are “sort of the computational kernels, if you will, the fundamental building blocks of these other libraries.”
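
To make "building block" concrete: a Level-3 BLAS kernel such as dgemm performs a general matrix-matrix multiply, and higher-level libraries like LAPACK are built from calls like this one. A minimal sketch using the CBLAS interface (assuming a linked BLAS implementation such as OpenBLAS):

    #include <stdio.h>
    #include <cblas.h>  /* link with a BLAS library, e.g. -lopenblas */

    int main(void) {
        /* Compute C = alpha*A*B + beta*C with 2x2 row-major matrices. */
        double A[4] = {1, 2, 3, 4};
        double B[4] = {5, 6, 7, 8};
        double C[4] = {0, 0, 0, 0};
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2,         /* M, N, K          */
                    1.0, A, 2,       /* alpha, A, lda    */
                    B, 2,            /* B, ldb           */
                    0.0, C, 2);      /* beta, C, ldc     */
        printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
        return 0;
    }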


High Performance, High Standards

BLAS eventually became a de facto standard, thanks to the work of many, Dongarra says. “A group of people in the community got together and said, ‘What’s the best way to do these things?’ We argued, fought, drank beer together, and ultimately came up with a package and made it available to the community, and then had further input on how to refine it before it was cast in stone.”

Three qualities have always been important in the software libraries he has designed, says Dongarra. The first is that they can become standards. The second is that they should be portable, able to work on different machines with different architectures, including single processors, parallel computers, multicore nodes and, most recently, nodes containing multiple graphics processing units.

The third quality is that they must run efficiently, which is not always easy to achieve when computer hardware keeps evolving. “Every few years, the hardware changes, and if you don’t make changes to the software to accommodate those hardware changes, your software will become inefficient,” he says. “We’re always sort of in a catch-up game, trying to redesign the software to match the architectural features.”


One such challenge arose in the 1990s, with the growth of parallel computing. Originally, computing took place on a single processor, which performed operations sequentially. Later came parallel processors that shared memory. Those gave way to distributed parallel processors, each of which had its own memory. That raised the question of how to pass messages between processors, and each computer company answered it differently. “From a standpoint of writing software that was going to be used by other people, that was going to be a disaster,” says Dongarra, who solved the problem by creating the Message Passing Interface with an international group of collaborators.
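
What MPI standardized is visible in even the smallest program: the same send and receive calls run unchanged regardless of the vendor underneath. A minimal sketch in C (assuming an MPI implementation such as MPICH or Open MPI), in which rank 0 sends an integer to rank 1:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            value = 42;
            /* portable point-to-point message passing */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with mpirun -np 2, the same source runs on a laptop or on a cluster of thousands of nodes.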

Another way he dealt with hardware differences was through auto-tuning. Developed in his Automatically Tuned Linear Algebra Software library project, auto-tuning probes different chip designs to discover their basic features, such as how much memory cache they have. It then uses machine learning to create thousands of versions of a program, each with slight variations, and runs all of them on each architecture, to find out which is the most efficient.
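
The search itself can be sketched in a few lines: time each candidate variant on the machine at hand and keep the fastest. The toy below tunes a single parameter, the block size of a blocked matrix multiply, where a real auto-tuner such as ATLAS explores many parameters and generated code variants:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define N 512
    static double a[N][N], b[N][N], c[N][N];

    /* blocked matrix multiply; the block size is the tunable parameter */
    static void matmul_blocked(int bs) {
        for (int ii = 0; ii < N; ii += bs)
            for (int kk = 0; kk < N; kk += bs)
                for (int jj = 0; jj < N; jj += bs)
                    for (int i = ii; i < ii + bs; i++)
                        for (int k = kk; k < kk + bs; k++)
                            for (int j = jj; j < jj + bs; j++)
                                c[i][j] += a[i][k] * b[k][j];
    }

    int main(void) {
        int candidates[] = {16, 32, 64, 128};
        int best = 0;
        double best_time = 1e30;
        /* empirical search: run every candidate, keep the winner */
        for (int t = 0; t < 4; t++) {
            memset(c, 0, sizeof c);
            clock_t start = clock();
            matmul_blocked(candidates[t]);
            double s = (double)(clock() - start) / CLOCKS_PER_SEC;
            printf("block %3d: %.3f s\n", candidates[t], s);
            if (s < best_time) { best_time = s; best = candidates[t]; }
        }
        /* print c[0][0] so the compiler cannot discard the work */
        printf("best block size here: %d (c[0][0]=%g)\n", best, c[0][0]);
        return 0;
    }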

In his unending quest for efficiency, Dongarra developed batch computation, which breaks the large matrix calculations used in simulation and data analysis into smaller blocks that can be solved on different processors.
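
The pattern reduces to one parallel loop over many independent small operations. A toy illustration in C with OpenMP (the shape of the idea, not the actual batched-library interface):

    #include <stdio.h>

    #define BATCH 1000
    #define M 8  /* each small problem: an 8x8 matrix times a vector */

    static double A[BATCH][M][M], x[BATCH][M], y[BATCH][M];

    int main(void) {
        for (int b = 0; b < BATCH; b++)
            for (int i = 0; i < M; i++) {
                x[b][i] = 1.0;
                for (int j = 0; j < M; j++)
                    A[b][i][j] = i + j + 1;
            }
        /* the batched pattern: independent small problems dispatched
           across processors in one shot (compile with -fopenmp) */
        #pragma omp parallel for
        for (int b = 0; b < BATCH; b++)
            for (int i = 0; i < M; i++) {
                y[b][i] = 0.0;
                for (int j = 0; j < M; j++)
                    y[b][i] += A[b][i][j] * x[b][j];
            }
        printf("y[0][0] = %g\n", y[0][0]);  /* 1+2+...+8 = 36 */
        return 0;
    }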

He also developed mixed-precision arithmetic. The standard way of performing numerical computations has been to use 64-bit floating point arithmetic, which produces results with high accuracy. However, in the growing area of artificial intelligence, that sort of accuracy is not always required, and some work can be done with 16-bit precision and completed in about a quarter of the time. Mixed-precision arithmetic helps programmers figure out which parts of their work need 64-bit accuracy and which can be done in only 16 bits, rendering the whole system more efficient.
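
A classic recipe in this vein is mixed-precision iterative refinement: do the expensive solve in low precision, then compute residuals in high precision and correct the answer. A toy sketch of that pattern (using 32-bit float for the solve and 64-bit double for residuals; production codes may push the fast part down to 16 bits on suitable hardware):

    #include <stdio.h>

    /* cheap, low-precision solve of the 2x2 system A d = r (Cramer's rule) */
    static void solve2x2_float(const float A[2][2], const float r[2],
                               float d[2]) {
        float det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
        d[0] = (r[0] * A[1][1] - r[1] * A[0][1]) / det;
        d[1] = (A[0][0] * r[1] - A[1][0] * r[0]) / det;
    }

    int main(void) {
        double A[2][2] = {{4.0, 1.0}, {1.0, 3.0}};
        double b[2]    = {1.0, 2.0};
        float  Af[2][2] = {{4.0f, 1.0f}, {1.0f, 3.0f}};
        double x[2] = {0.0, 0.0};

        for (int it = 0; it < 3; it++) {
            /* accurate part: residual r = b - A x, in double */
            double r[2];
            for (int i = 0; i < 2; i++)
                r[i] = b[i] - (A[i][0] * x[0] + A[i][1] * x[1]);
            /* fast part: correction solve in float */
            float rf[2] = {(float)r[0], (float)r[1]}, d[2];
            solve2x2_float(Af, rf, d);
            x[0] += d[0];
            x[1] += d[1];
        }
        printf("x = (%.15f, %.15f)\n", x[0], x[1]);  /* exact: 1/11, 7/11 */
        return 0;
    }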

Working with colleagues, Dongarra created the Top500, a list of the world’s 500 most powerful supercomputers, ranked by their performance on the LINPACK benchmark. The ranking comes out twice a year, and he predicts the June list will include the first exascale computer, which can perform at least one quintillion (billion-billion) calculations per second. In anticipation of that, he has been working with colleagues around the world to develop a roadmap of what software for such powerful machines should look like.

The Turing Award comes with a $1-million cash prize, and Dongarra says he is not sure what he will do with that. He is still wrapping his head around the honor. “It’s an overwhelming situation,” he says. “These guys who have won this award are leaders in the field. I’ve got their books on my bookshelf, I’ve read their papers, used their techniques. It’s incredible. I must give credit to the generations of colleagues, students, and staff whose work and ideas influenced me over the years and I hope I can live up to all the greatness that the Turing Award has recognized and become a role model, as many of the recipients have been, to the next generation of computer scientists.”

Figure. Watch Dongarra discuss his work in the exclusive Communications video. https://cacm.acm.org/videos/2021-acm-turing-award
