Government-funded academic research (GoFAR), lately the subject of across-the-board cuts in the U.S., is one of the engines that truly makes America great. When I started as a new assistant professor in 1976, I was advised to aim my research at “home runs.” In an environment in which ambitious projects with high potential were not penalized if they fell short, my colleagues and I strove for high impact. The thinking was that you are more likely to hit home runs by swinging for the fences than by bunting for singles. So, we swung. NSF and DARPA used grants, contracts, and fellowships to sponsor our research, alongside some smaller donations from industry.
Five Home-Run Projects as Case Studies of GoFAR
Here is a quick summary of the five highest-impact projects of the 11 that span my academic career; they are summarized in the table and detailed in the second half of this article:
Reduced Instruction Set Computer (RISC).25 Simplified instruction sets enabled faster microprocessors. Today, 99% of all computers follow RISC principles; the R in the ubiquitous ARM architecture—with nearly 300 billion chips built—stands for RISC.
Redundant Array of Inexpensive Disks (RAID).30 Strength in numbers: redundant arrays of small disks provide better cost, performance, and reliability than large disks, leading to more than $100 billion in sales.
Network of Workstations (NOW).2 Networked clusters of off-the-shelf workstations laid the foundation for Internet service infrastructure, whose descendants power hyperscalers worth trillions today.
Reliable Adaptive Distributed Systems Lab (RAD Lab). Machine learning (ML) combined with systems expertise led to the Spark analytics engine40—used by 40,000 companies—and the startup Databricks.
Parallel Computing Lab (ParLab). The lab developed RISC-V,38 an open instruction-set architecture that any company can use for free. In 2024, two billion RISC-V chips shipped, projected to grow to 20 billion by 2031.
| Lab | RISC | RAID | NOW | RAD Lab | ParLab |
|---|---|---|---|---|---|
| When | 1980-1984 | 1988-1992 | 1993-1998 | 2005-2011 | 2007-2013 |
| Key result/artifact | RISC-I and RISC-II microprocessors | RAID-I and RAID-II storage servers, Postgres DBMS, Log-structured file systems | NOW-I and NOW-II clusters, Inktomi search engine | Mesos cluster manager, Spark data analytics system | RISC-V open architecture, Roofline performance model, Selective embedded just-in-time specialization |
| Startups | MIPS, Pyramid Technology, Ridge Computers | Array Technologies Corporation, NetApp | Inktomi, Scale 8 | Databricks, Mesosphere | AheadComputing, Akeana, Bina Technologies, Codasip, Condor Computing, Cortus, FuriosaAI, Rivos, SiFive, Tenstorrent, Ventana |
| Companies influenced | Advanced RISC Machines (ARM), Sun Microsystems, HP, and more | DEC, EMC, Hitachi, IBM, NCR, StorageTek, Sun Microsystems, and more | Google, HP, Sun Microsystems, and more | Amazon, Cloudera, Google, Microsoft, plus 40,000 companies that use Spark | Andes Technology, MIPS, Nvidia, Qualcomm, Samsung, Synopsys, and more |
| Open source/open standard | Magic ECAD tool | Postgres DBMS | xFS cloud file system | | RISC-V specification, Rocket chip Open HW,5 GPUSVM |
| Awards | ACM dissertation award; ACM Turing award; ACM/IEEE Eckert-Mauchly award; IEEE von Neumann medal; NAE Draper prize | Two ACM dissertation awards; IEEE Johnson storage award; Four Test of time / Hall of fame awards (One SIGMOD, two from SIGOPS, one from IFIP 10.4) | First cluster in Top 500 supercomputer list; Three Graysort records; One Test of Time award (HPDC); Two Best paper awards (SOSP, Hot Interconnects) | ACM dissertation award; ACM Weiser award; Frontiers of Science award; Five Test of Time awards (ICDE, ICML, NSDI, SIGCOMM, SOCC) | ACM Athena award; ACM Ken Kennedy award; IEEE Charles Babbage award; Two Best paper awards (IPDPS, SPAA); One Test of Time award (DAC) |
| Highest-cited paper, # citations | Patterson,25 748 | Patterson,30 4742 | Anderson et al.,2 1426 | Armbrust et al.,3 15034 | Asanović et al.,4 3172 |
The companies in the table—with offices and employees across 44 statesa that hold 98% of the total U.S. population—sold hundreds of billions of dollars of products based on these breakthrough technologies from GoFAR, which also found homes in thousands of military systems that enhanced national security.
Eight principles guided these home-run projects, which were structured as “labs” with a collaborative team of students and faculty in a shared space (see Patterson27):
Multidisciplinary teams of three to five faculty experts in different fields, as there are more chances for impact across fields than within individual fields.
Demonstrable, usable artifacts require genuine cross-discipline collaboration, which leads to breakthroughs. Rather than toy demos whose goal is to produce papers, these artifacts are realistic enough to win over skeptics and thus help technology adoption.
Seven- to ten-year impact horizons instead of distant futures. In our incredibly fast-moving field, no one can see accurately 15 to 20 years ahead.
Five-year sunset clauses create urgency and allow for new opportunities. Many five-year labs over a career offer more chances for home runs than a few 10- or 20-year projects. Deadlines are rare in academia, so the timeline also gives a real target by which to demonstrate lab goals. It is easier as well to get commitments from experts in several fields to collaborate for five years than for longer.
Sunset clauses lower opportunity costs. It takes a decade to determine the level of a lab’s success. Most are not a home run; six of my other projects did not make the table. But any good entrepreneur knows you do not achieve breakthroughs without risk and the possibility of failure. Without a time limit, projects can linger until everyone loses interest, a potentially enormous opportunity cost. Sunset clauses help researchers and funders move on to the next promising project.
Twice-yearly three-day offsite retreats offer regular honest feedback, provide deadlines, and build team spirit. The most important feature is the praise and constructive criticism in the last session from external practitioners and researchers, which are deeper, more thorough, more thoughtful, and more frequent than most paper reviews.
Physical proximity of collaborators, ideally in one large physical space. Multi-university projects are less successful, while multidisciplinary projects at a single university excel.15
Leadership focused on team success rather than individual recognition. I led about half of the 11 labs over my career and was happy to have colleagues lead others. Leaders build team spirit, focus work on lab goals, and allow delegation of lab administrivia to a benevolent decision maker.
Why Should the Government Partner with Academia?
After World War II, Vannevar Bush argued that investing in scientific research at universities would have a tremendous return to the economy, to healthcare, and to national defense.9 The goal is for government, academia, and industry to be synergistic partners, all playing to their strengths. Eighty years later, here are my top ten reasons why this GoFAR partnership has been so effective:
Universities publish their research results, whether the work succeeds or fails, so everyone can learn from the effort. For example, RISC papers inspired ARM, and RAID papers encouraged EMC to produce successful products, in both cases without any direct contact with the original inventors. Most companies have no such tradition.
Successful university research projects can lead to new companies. Over my career there have been numerous examples of university research projects that led to startups that grew to be major corporations. It can be challenging to form a successful startup spun off from a large company when it owns the intellectual property.
Multidisciplinary research is an inherent strength of top universities. Where else can one gather experts from all areas of science, engineering, arts, and so on in one location and have them talk to each other? Research at the intersection of computer science (CS) and other disciplines has led to advances that would have been difficult to achieve in industry, where groups are often more narrowly focused. For example, top experts in CS and neuroscience collaborated to read minds from MRI data.23
Academic freedom enables exploration of unconventional ideas. Academic researchers have the intellectual freedom to pursue high-risk, high-reward ideas that may not have immediate commercial applications. This exploratory nature can lead to unexpected paradigm-shifting breakthroughs. For example, RAID was a byproduct of curiosity-driven research. Industry, in contrast, is typically more risk-averse and goal-oriented, understandably more focused on deliverables with short- to medium-term payoff.
It can be awkward for companies to develop and adopt new technologies that disrupt current product lines. An academic’s sole concern is advancing the state of the art. For many of the home-run labs in the table, the leading companies in the area were the last to embrace the innovation, as they were highly profitable in the current marketplace and had little desire for change. Christensen refers to this as The Innovator’s Dilemma.12
Industry has reduced the amount of internal pure research it funds, especially the high-risk research that is the raison d’être of the high-impact labs. Computer science research labs in industry that played vital roles in the 20th century are shadows of their former selves. Almost all of a company’s R&D in this century is advanced product development. Our society relies much more on GoFAR for foundational research today than when I started my career.
Top universities attract top people from around the world. Academia attracts very bright people worldwide who want to earn advanced degrees. Some who receive advanced degrees become leaders of existing corporations (for example, AMD, Google, and Microsoft) or found new ones (for example, Hewlett Packard, Intel, Nvidia, Netflix). Ensuring that U.S. universities can draw from the brightest of the global population of eight billion—25 times larger than the U.S. domestic population—has been and is vital to U.S. success in science and engineering.
Even projects that are not home runs train students. After graduation they can become innovators and make their own contributions. While universities produce novel ideas and transfer technology to existing companies and to startups, their most important product is people.26 Industry research projects that do not pan out offer no such silver lining.
Computer science is a young person’s field. Our technology changes so rapidly that the state of the art 20 or even 10 years ago can be nearly irrelevant today. A student may have better knowledge of the most critical material than someone with decades of experience. Having a research project staffed by brilliant, hard-working, up-to-date, young people with less experience is not necessarily a huge disadvantage in computer science.
Funds go much further at universities than in industry. Faculty salaries are lower than in industry, research funds pay only a small part of faculty salary—primarily summer support—and students receive much smaller salaries than industry employees. Industry overhead, commonly greater than 100%, is roughly double the university overhead on GoFAR grants, in part because industry has more layers of management to pay for and in part because universities partially subsidize their overhead. This lower cost means GoFAR can explore more topics for the same investment.
GoFAR Funds Primarily Support Students
Most of the budget of a GoFAR grant is for the students doing the work, not for equipment, staff, or faculty. Such labs have a major positive impact on technology and the industry, and are tremendous training grounds given their team orientation and collaborative multidisciplinary goals. Renowned computer architect Burton Smith called the Par Lab team in the 2010s “the best group of Ph.D. students that I have ever seen,” which echoed computer visionary Mark Weiser’s comment almost word for word in the late 1980s about the fourth RISC project. These labs also uncover treasure troves of Ph.D. topics, as the ACM dissertation awards in the table attest. Beyond technical innovations, a byproduct of these labs is future leaders of our field. Lab alumni have gone on to found billion-dollar startups, become technical leads at large corporations, and become successful researchers and leaders at top universities.
Exploring Five GoFAR Home Runs
Next are more detailed histories of my five home run labs and their impact, starting with RISC.28
Reduced Instruction Set Computer. When Stanford’s John Hennessy and I were assistant professors in 1980, conventional wisdom held that computer instruction sets—the vocabulary that software uses to talk to hardware—were too low-level, burdening programmers and causing software failures. The trend was toward complex instruction sets to bridge the gap between people and machines.
In the 1970s, microprocessors were only found in home appliances. We believed microprocessors would become computing’s foundation, following Moore’s Law of doubling transistor counts every year or two. The question was: What instruction set would best serve these rapidly improving microprocessors?
The success of the UNIX operating system, written in a high-level language, changed perspectives. The issue was no longer the burden on programmers writing machine code, even for operating systems, but whether compilers could produce efficient programs for an instruction set.
Together with my colleague Carlo Séquin, I argued for a reduced instruction set computer (RISC), keeping instructions simple rather than complex.29 We termed the conventional approach complex instruction set computers (CISC). We believed RISC would be easier to build and easier for compilers to use.
The debate centered on performance: While CISC might require fewer instructions due to their sophistication, each instruction might take longer to execute than a RISC instruction—like a page of polysyllabic words potentially taking longer to read than simpler words.
I sent a draft of the case-for-RISC paper to friends in industry who were building CISC minicomputers. Instead of sending me comments, they wrote a rebuttal to appear alongside our paper.13
This scientific question became emotionally charged in the computer-design community. CISC advocates believed RISC would complicate software; RISC advocates argued compilers could hide these details from programmers. Despite our universities’ on-the-field athletics rivalry, John Hennessy and I joined forces to advocate for RISC.
Industry debates grew heated at conferences beginning in 1982. Similar discussions had occurred earlier at IBM around the 801 project led by John Cocke concerning minicomputers, though IBM management delayed publicly sharing their views until later.
Research ultimately showed that while RISC needed about 30% more instructions, it processed them approximately five times more quickly, making RISC three to four times faster overall. Additionally, RISC microprocessors required less hardware and power—a crucial advantage as computing became mobile and battery-powered.
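The speedup claim above follows from the classic processor-performance equation: execution time is the product of instruction count, cycles per instruction, and clock cycle time. A minimal sketch, with illustrative numbers chosen only to match the ratios in the text (not measured data):

```python
# Classic CPU-performance ("iron law") equation:
#   time = instruction_count * cycles_per_instruction * clock_cycle_time
# The figures below are illustrative, picked to reflect the article's ratios:
# RISC executes ~30% more instructions, but each completes ~5x more quickly.

def execution_time(instructions, cycles_per_instruction, cycle_time_ns):
    """Total run time in nanoseconds."""
    return instructions * cycles_per_instruction * cycle_time_ns

cisc_time = execution_time(1_000_000, cycles_per_instruction=5.0, cycle_time_ns=1.0)
risc_time = execution_time(1_300_000, cycles_per_instruction=1.0, cycle_time_ns=1.0)

print(f"RISC speedup over CISC: {cisc_time / risc_time:.1f}x")  # ~3.8x, within 3-4x
```

Dividing 5.0 by 1.3 gives roughly 3.8, which is how 30% more instructions at five times the rate nets out to the three-to-four-fold overall speedup.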
In 1983, Hennessy’s Ph.D. students, including Chris Rowen and Norm Jouppi, and our Ph.D. students Robert Sherburne and Manolis Katevenis, presented their RISC microprocessors at the major microchip conference, stunning the audience by creating designs arguably superior to industry state of the art.32,33
Max Planck said that scientific truth does not triumph by convincing opponents and making them see the light, but that science advances one funeral at a time. Computer architecture benefits from the commercial market that tests new ideas, so we do not have to wait for funerals to change the field.
For example, in 1983, Cambridge-based Steve Furber and Sophie Wilson created a new microprocessor for the Acorn personal computer. Inspired by our RISC papers, they developed the Acorn RISC Machine (ARM) with two advantages: no money and no engineers. These constraints prioritized simplicity, aligning perfectly with RISC philosophy. The ARM1 debuted in 1985 as the first commercial RISC processor, outperforming all microprocessors in the market.
Apple approached Acorn in 1990, interested in ARM for its new Newton handheld device. Only RISC could meet Newton’s performance, power, and cost requirements. Acorn agreed to Apple’s request to spin off ARM as a joint venture, rebranding it as Advanced RISC Machine. While the Newton failed commercially, ARM’s efficiency made it ideal for cell phones. At that time, Nokia was the leading supplier of cell phones, so the selection of ARM for the Nokia GSM phone (global system for mobile communications) in 1998 was a major boost. The Nokia experience helped ARM understand system-on-chip requirements, positioning it to dominate the smartphone and embedded computing revolution for the following decades.
With almost 300 billion ARM chips shipped—nearly 40 per person globally—99% of processors today are RISC-based, which traces its roots back to GoFAR. RISC’s simplicity was more efficient in silicon use and power consumption, driving its success. Beyond providing faster and more economical computing for the world, RISC generated substantial economic benefits through job creation and tax revenue.b
Redundant Array of Inexpensive Disks. This project started with a question. My colleague Randy Katz was an early user of Macintosh computers, which, when announced in 1984, relied on floppy disks for storage. A few years later, the first small hard disk drive (HDD) was developed for the Mac and the IBM PC, which was a godsend for personal computer users. Randy’s question was, “I wonder what else we could do with these small disks?” His curiosity sparked a revolution in storage.

At the time, HDDs for mainframes were the size of dishwashers, and those for minicomputers the size of microwave ovens. Katz, our Ph.D. student Garth Gibson, and I speculated that we could replace one mainframe hard drive with 100 small PC drives (and one minicomputer drive with 10). We wrote a draft showing it would be cheaper for the same capacity but offer much greater performance, given we had 100 drives accessing data versus one large drive, even if one large drive was much faster than one small PC drive. We sent it to a friend at IBM. His feedback highlighted our critical oversight: reliability. With 100 drives, failure rates multiplied dramatically, as it was at least 100 times more likely that one drive would fail and we would lose data. This insight inspired us to add redundancy—and to name our project RAID, for Redundant Array of Inexpensive Disks.
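The reliability concern can be made concrete with a back-of-the-envelope calculation. This sketch assumes independent drive failures and a made-up per-window failure probability; both the numbers and the simple parity model are illustrative, not figures from the RAID paper:

```python
# Back-of-the-envelope sketch of the RAID reliability argument.
# Assumes independent failures; p_fail is an illustrative assumption.

p_fail = 0.001  # chance a given drive fails during some window (assumed)

def p_any_failure(n_drives, p=p_fail):
    """Probability that at least one of n independent drives fails."""
    return 1 - (1 - p) ** n_drives

def p_data_loss_with_parity(n_drives, p=p_fail):
    """A RAID-5-style parity group survives any single failure; data is
    lost only if two or more drives fail within the same window."""
    p_none = (1 - p) ** n_drives
    p_exactly_one = n_drives * p * (1 - p) ** (n_drives - 1)
    return 1 - p_none - p_exactly_one

print(f"1 drive:                 {p_any_failure(1):.4f}")
print(f"100 drives, no parity:   {p_any_failure(100):.4f}")  # ~100x worse
print(f"100 drives, with parity: {p_data_loss_with_parity(100):.4f}")
```

Without redundancy, 100 drives are nearly 100 times more likely to lose data than one drive; adding a parity drive drops the data-loss probability by more than an order of magnitude, which is the essence of the RAID insight.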
We found some related projects offering reliable storage via redundancy at other companies. To clarify our work, the paper that debuted RAID identified five levels of sophistication of redundancy that had increasing benefits in cost-performance. Digital Equipment Corporation (DEC) and Tandem Computers offered RAID level 1, Thinking Machines sold RAID level 2, IBM filed a patent on RAID level 4, and we built a prototype of RAID level 5 we dubbed RAID-I. RAID-II was next, which had 144 disk drives under a single storage controller attached to a high-speed network. RAID-II is now housed at the Computer History Museum.
Our bottom line demonstrated strength in numbers; a RAID system was about ten times better in cost/performance/reliability than a mainframe drive and about five times better than a minicomputer drive. When the paper was published,30 it received immediate attention. It even led to a tutorial in a magazine for personal computers,1 which was a much different market than we expected to show interest.
The paper also landed on the desks of executives at EMC Corporation, who were facing a crisis. Their main product was cheaper memory modules for IBM mainframe computers. They had recently been squeezed out of that market due to changes at IBM and needed a new product. EMC decided to embrace the RAID ideas to offer reliable storage for IBM mainframe computers using arrays of more cost-effective smaller drives, which saved the company.
After discussions with Katz, marketers later changed “inexpensive” to “independent” in the RAID name for pricing flexibility. The numbered RAID levels had an unintended consequence: companies invented higher levels (beyond five) to suggest superiority, sometimes with technical merit but often as marketing ploys. These events helped inspire companies to form the RAID Advisory Board to advocate for and evolve RAID technology.21
GoFAR-based RAID became tremendously successful, with EMC alone generating $25 billion in seven years, suggesting industry-wide revenues exceeding $100 billion. Two associated projects also achieved significant impact: RAID’s limited write performance inspired the development of log-structured file systems,31 now foundational in many products; and the open source POSTGRES object-relational database,35 which evolved into PostgreSQL, the open source database used by thousands of companies worldwide.
Network of Workstations. Going back at least to the 1980s, some supercomputers were built using many processors.16 In 1995, we proposed building more cost-effective supercomputers using off-the-shelf workstations connected via emerging switch-based networks like Myrinet.7 Our Network of Workstations (NOW) project2 competed philosophically with Stanford’s DASH,20 which bet on cache-coherent shared memory in large-scale multiprocessors to simplify parallel programming. This rivalry between what became known as clusters versus large-scale shared-memory multiprocessors was a popular discussion topic in computer architecture circles in the 1990s.
In April 1997, NOW set two sorting records. That same month, NOW demonstrated versatility by becoming the first cluster ranked in the Top500 supercomputers list. It is rare for the same hardware to be great at both data processing and number crunching. Despite some supercomputers lasting six years in the Top500, within five years 20% of the world’s fastest computers were clusters and in ten years 90% of the new entries and 80% overall were clusters (see Figure 1). We also developed xFS, a precursor of cloud file systems.37
However, the most influential application proved to be Internet services. While AltaVista—the leading search engine in the late 1990s—ran on large-scale shared memory multiprocessors, new assistant professor Eric Brewer and his Ph.D. student Paul Gauthier recognized clusters offered better cost/performance, scalability, and fault isolation. Their Inktomi search engine, built on NOW principles, became more popular than AltaVista despite running on a university campus. They then started an eponymous company that dominated search until Google’s debut. Google and many other companies followed Inktomi’s lead of delivering Internet services on such clusters of many inexpensive computers. Today, descendants of these GoFAR clusters in hyperscaler datacenters are the computing foundation for companies worth trillions of dollars employing hundreds of thousands.
Reliable Adaptive Distributed Systems Lab. The Reliable Adaptive Distributed Systems (RAD) Lab combined machine learning with systems expertise to enable rapid development of revolutionary Internet services and for datacenters to become self-healing and self-managing. Pursuing this agenda led us to become early Amazon Web Services customers. Our workloads tested early cloud-scalability limits, positioning us to author a definitive vision paper explaining cloud computing’s importance and research directions to improve it.3
It was also the first lab to face the problem that computers and the Internet were just as fast at home as they were on campus. Students and faculty started working more from home to avoid interruptions. Our solution to combat isolation was remodeling workspaces to encourage on-site collaboration with free drinks and attractive meeting rooms.27 The surge in productivity from spontaneous interactions outweighed occasional interruptions.
In this collaborative open environment, systems Ph.D. students sat near ML Ph.D. students struggling to scale their algorithms using MapReduce. This cross-disciplinary interaction inspired Matei Zaharia to create Spark,40 an efficient programming system for datacenter-scale ML and other algorithms. Spark is now used by more than 40,000 organizations worldwide. A few years later, some graduate students and faculty from the RAD Lab founded Databricks, starting from Spark, which now employs more than 7,000 people.
Parallel Computing Lab. The Parallel Computing (Par) Lab was a multidisciplinary research project exploring the future of parallel processing inside a microprocessor. We projected a future with thousands of cores per chip.4 Today, server microprocessors have hundreds of cores and GPUs support thousands of hardware threads. We developed the roofline performance model39—widely used to improve designs of both parallel hardware and parallel software—and one of the earliest demonstrations that GPUs were also a good match for ML. GPUSVM is an open source program that used GPUs in a novel way to deliver ~100x improvement over conventional ML libraries running on CPUs.10 All popular programming frameworks for AI—TensorFlow, JAX, PyTorch—are built on Python using selective embedded just-in-time specialization (SEJITS) to create kernels for parallel computers, an approach the ParLab also pioneered.11
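The roofline model caps a kernel’s attainable performance by whichever is lower: the machine’s peak compute rate, or its memory bandwidth times the kernel’s arithmetic intensity (floating-point operations per byte moved). A minimal sketch, with illustrative hardware numbers rather than figures from the paper:

```python
# Minimal sketch of the roofline performance model: attainable GFLOP/s is
# bounded by peak compute or by bandwidth * arithmetic intensity, whichever
# is smaller. Peak and bandwidth values below are illustrative assumptions.

def roofline(arithmetic_intensity, peak_gflops=100.0, bandwidth_gbs=25.0):
    """Attainable GFLOP/s for a kernel with the given flops/byte ratio."""
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

# A kernel at 0.5 flops/byte is memory-bound; at 16 flops/byte it hits
# the compute "roof". The ridge point here is at 100/25 = 4 flops/byte.
for ai in (0.5, 4.0, 16.0):
    print(f"AI = {ai:5.1f} flops/byte -> {roofline(ai):6.1f} GFLOP/s")
```

Plotting this min() over arithmetic intensity produces the characteristic slanted-then-flat “roofline,” which tells designers at a glance whether a kernel should be optimized for memory traffic or for compute.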
For our hardware research, Krste Asanović proposed we create a new, clean RISC instruction set architecture leveraging what was learned since the 1980s. The goal was an instruction-set architecture we could use both in our classes and in our research. Following departmental traditions, we made it an open standard in the hopes that other academics would use it in their research and thereby form a community to share software development and other tools. Ph.D. students Andrew Waterman and Yunsup Lee led the design and built RISC chips. To honor the four generations of our RISC chips in the 1980s, they named it RISC-V (“RISC five”).38
A few years later, we discovered a desire for a free open instruction set standard that any company could use. Surprisingly, industry—not academia—embraced RISC-V. We then did whatever we could to make RISC-V succeed in that role. Following the 1980 RISC playbook, we wrote a position paper for open ISAs and invited authors at ARM to write a rebuttal. For many, their first introduction to RISC-V was this debate inspired by “The Case for Open Instruction Sets”6 versus “The Case for Licensed Instruction Sets.”34 A few months later, we held the inaugural RISC-V workshop and started a non-profit foundation to popularize and evolve RISC-V.
The vision of a universal computing instruction set standard available to everyone without charge inspired an almost religious fervor for RISC-V,18 similar to that for open source software in the 1990s. Today, RISC-V International includes more than 400 organizations with annual summits across three continents. Two billion RISC-V chips shipped in 2024—projected to reach 20 billion by 2031—and RISC-V spawned a dozen startups. Once again, high-impact GoFAR created billions in economic value and numerous jobs.
If GoFAR Is Not Broken, Why Change It?
History shows GoFAR works. The home-run labs in the table are just those associated with a portion of the faculty in one department at one university. A 25-year series of reports from the National Academies documents the proud track record of the symbiotic relationship among computer science researchers, government, and the computer industry.8,17,19,22 They created a “virtuous cycle” where new technologies inspire new research, which creates new ideas and new technologies, over and over. Examples of computer science breakthroughs from this virtuous cycle that benefited the U.S. economy and national defense include:
Broadband and mobile in digital communications
Microprocessors
Personal computing
The Internet and the World Wide Web
Cloud computing
Database management systems
Computer graphics for design and entertainment
AI and robotics
Of course, GoFAR discoveries are not limited to computer science: CRISPR gene editing, the Human Genome Project, lasers, mRNA vaccine technology, magnetic resonance imaging (MRI), positron emission tomography (PET), and solar panels are just a few examples from other fields. Imagine what the world would be like without such innovations, which were driven in part by GoFAR at dozens of universities.
Conclusion: A Thousandfold Return on GoFAR Investments
My best estimate of the total funding from the U.S. government for all 11 projects I was involved in over my 40-year career—that supported hundreds of students and dozens of faculty—is less than $100M, even accounting for inflation. Much more than $100B of products were shipped based substantially on technology developed in these GoFAR labs, and likely more than $1,000B in today’s dollars. Note that this metric doesn’t account for the economic benefits of contributions from lab alumni over their careers.
The ratio of product sales to government research investment would then be roughly 10,000:1. Beyond whatever financial benefits innovative products provide to society, an organizational self-interest question is how much is directly returned to the government in terms of taxes on these products. There is no single definitive number, but a conservative estimate is that more than 10% of sales go to the federal government in taxes.c If so, the direct tax return to the federal government would be at least 1000 times its research funds that supported our 11 five-year labs.
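The arithmetic behind these ratios, using the article’s own round figures, is simple enough to check directly:

```python
# Return-on-investment arithmetic from the article's round numbers:
# ~$100M of federal funding, >$1,000B ($1T) of products shipped, and a
# conservative 10% of sales assumed to return to the government as taxes.

funding = 100e6          # total U.S. government funding, dollars
product_sales = 1_000e9  # products based on the labs' technology, dollars
tax_rate = 0.10          # conservative federal-tax share of sales (assumed)

sales_ratio = product_sales / funding
tax_return_multiple = (product_sales * tax_rate) / funding

print(f"Product sales per research dollar: {sales_ratio:,.0f}:1")       # 10,000:1
print(f"Tax returned per research dollar:  {tax_return_multiple:,.0f}x")  # 1,000x
```

Even if the sales or tax estimates were off by a factor of several, the return would still dwarf the original research investment.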
Government funding proved essential to these home-run labs in which I was fortunate to participate with colleagues and generations of graduate students. Our successes, combined with similar achievements by others at my university and elsewhere, indicate the critical importance of continued research investment for our nation’s prosperity and for strengthening our national defense. It is the responsibility of those of us whose careers benefited from GoFAR to try to preserve the GoFAR opportunity for future generations of researchers.
America since its founding has capitalized on innovation as the engine of wealth creation and global prosperity. History is full of nations that failed to innovate and faded. Neither Ancient Rome nor Medieval China started an industrial revolution. Despite having brilliant scientists and mathematicians, the Soviet Union fell behind on technology and lost the Cold War. Heeding history, China has steadily increased the fraction of its GDP invested in research and development every year since 2010, even as its GDP tripled over that period.36 If instead the U.S. forgets the past and decimates GoFAR, it could follow in the footsteps of the former “Cities on the Hill” that gave up their place as beacons of talent and innovation and whose positions in the world ultimately fell.24
Acknowledgments
A paper like this makes one reflect on one’s academic career. As proud as I am of our research accomplishments, it’s the personal relationships that one treasures. Thanks go to the hundreds of students and postdocs—too numerous to list—who worked hard to realize the visions of these labs, many of whom have made large marks of their own. The labs also relied on dozens of long-serving staff, including Kattt Atchley, Damon Hinson, Roxanne Infanti, Tami Johnson, Jon Kuroda, Ken Lutz, Bob Miller, and Terry Lessard-Smith. And special thanks to the dozens of faculty colleagues whom I was honored to work with over four decades: Tom Anderson, Krste Asanović, Eric Brewer, Bob Brodersen, David Culler, Jim Demmel, Al Despain, Richard Fateman, Armando Fox, Mike Franklin, Joey Gonzalez, Joe Hellerstein, Paul Hilfinger, Dave Hodges, Mike Jordan, Anthony Joseph, William Kahan, Dick Karp, Randy Katz, John Kubiatowicz, Kurt Keutzer, Bora Nikolić, John Ousterhout, Raluca Popa, Jan Rabaey, Koushik Sen, Carlo Séquin, Scott Shenker, Ion Stoica, Mike Stonebraker, John Wawrzynek, Kathy Yelick, and Matei Zaharia. I also thank Jeff Dean, Mark Hill, Jiantao Jiao, Ed Lazowska, Jasper Rine, Ray Simar, K. Tighe, Laura Waller, and Cliff Young who, in addition to some of my collaborators above, gave feedback that improved this paper.