Over the past century, U.S. federal research funding has been an engine of innovation, driving revolutionary progress across many areas of science. From the transistor and global positioning systems (GPS) to the Internet and COVID-19 vaccines, federally funded research has underpinned the nation’s leadership in science, technology, and economic competitiveness. One of the largest beneficiaries of this investment is the field of high-performance computing (HPC) and computational science. Once relegated to niche uses, computational simulation has become a third pillar of scientific inquiry—alongside theory and experiment—powering advances in physics, climate modeling, medicine, and artificial intelligence (AI).
This article traces the wide-ranging impacts of U.S. federal investment in computing and computational science. It argues that this investment has not just built basic infrastructure and software environments, but also created a trained workforce, enabled industrial competitiveness, and supported national security. Drawing on current reports, retrospective histories, and recent calls for a renewed strategy, we look at how long-term federal investment has shaped the course of computing research—and why its future now hangs in the balance.
Foundations of Scientific Discovery
The federal government played a critical role in creating computational science. Beginning with early Cold War-era funding for nuclear weapons simulation, federal agencies such as the Department of Energy (DOE), the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the Defense Advanced Research Projects Agency (DARPA) have provided the long-term, high-risk investment necessary to build computing infrastructure and advance algorithmic research.
The vision for computational modeling as a peer to theory and experiment began to emerge in the latter half of the 20th century. As recounted in my ACM Turing Award lecture,9 strategic investments in scientific computing established a vast ecosystem of mathematical software, numerical libraries, and performance benchmarks. This effort, driven by DOE and NSF investments, paved the way for a new model of discovery in which complex physical phenomena could be simulated with increasing realism.
Projects such as LINPACK,10 EISPACK,18 and LAPACK2 demonstrated the power of numerical algorithms in exploiting new hardware features. These federally funded software libraries underpinned an entire generation of scientific software. The legacy of this investment can still be seen today in libraries such as ScaLAPACK,4 MAGMA,1 and SLATE,13 which remain vital to simulations in earth-system modeling, chemistry, materials design, defense science, combustion, and beyond.
In tandem, publicly funded research has underpinned the creation of integrated modeling frameworks enabling large-scale multidisciplinary simulations. For example, the earth-system models used by climate researchers now include atmospheric chemistry, ocean circulation, ice dynamics, and human feedbacks—each requiring numerically robust coupling between components. The software frameworks for these systems, such as E3SM14 and CESM,15 are the result of decades of collaborative development built on public investment. Their impact is global, guiding policy and shaping United Nations climate change reports.
Early Government Funding for Software and Its Uptake over Time
The development of foundational scientific software in the U.S. owes much to early and sustained investment by government agencies, particularly beginning in the 1960s and 1970s. At a time when the commercial software industry was still embryonic and the concept of reusable scientific software was only beginning to emerge, national laboratories such as Argonne, Los Alamos, and Oak Ridge became vital hubs for computational research. Their work, heavily funded by the DOE and defense-related agencies with occasional funding from the NSF, laid the groundwork for much of today’s computational science.
One of the earliest and most influential efforts was EISPACK, developed at Argonne National Laboratory under tight federal-funding constraints during the early 1970s. EISPACK produced reliable, portable Fortran implementations of methods for solving eigenvalue and eigenvector problems—critical tools for physics, engineering, and, later, even web-search algorithms such as Google’s PageRank. Despite budget cuts, Argonne leadership made strategic choices to protect EISPACK, understanding its long-term value. With government support, EISPACK was not only developed but extensively tested across a range of hardware (IBM, CDC, Univac, DEC, Amdahl, Honeywell), ensuring broad portability and robustness. The project became part of a larger national initiative, the National Activity to Test Software,5 which emphasized creating dependable computational tools as infrastructure for American scientific leadership.
Following EISPACK, the government supported another key project: LINPACK, launched in the mid-1970s with NSF funding. LINPACK aimed to produce a similar package for solving systems of linear equations and least-squares problems—core computational tasks across scientific fields. Although initially seen as a set of utilities, LINPACK’s impact became transformational. Its design philosophy incorporated modularity, clear documentation, and an early commitment to the Basic Linear Algebra Subprograms (BLAS)11 standard, a key innovation that allowed high performance across diverse machine architectures. BLAS enabled LINPACK to serve as both a tool for application developers and a performance driver, later underpinning the now-famous TOP500 supercomputer benchmarking project.
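As a simple illustration (not code from LINPACK itself, which was written in Fortran against the Level-1 BLAS), the sketch below calls the Level-3 BLAS routine DGEMM through the later CBLAS C binding. The same call runs unchanged on top of the reference BLAS, OpenBLAS, Intel MKL, or a vendor library, which is precisely the portability-with-performance idea the BLAS standard introduced; the file name and link flag in the comment are illustrative assumptions.

```c
/* Illustrative sketch: C = alpha*A*B + beta*C via the Level-3 BLAS routine
 * DGEMM, called through the CBLAS C binding. Only the tuned implementation
 * underneath changes from machine to machine; the call does not.
 * Compile with, for example: cc dgemm_demo.c -lopenblas
 */
#include <stdio.h>
#include <cblas.h>

int main(void) {
    /* Two 2x2 matrices stored in row-major order. */
    double A[4] = {1.0, 2.0,
                   3.0, 4.0};
    double B[4] = {5.0, 6.0,
                   7.0, 8.0};
    double C[4] = {0.0, 0.0,
                   0.0, 0.0};

    /* C = 1.0 * A * B + 0.0 * C */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,      /* M, N, K       */
                1.0, A, 2,    /* alpha, A, lda */
                B, 2,         /*        B, ldb */
                0.0, C, 2);   /* beta,  C, ldc */

    printf("C = [ %g %g ; %g %g ]\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```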
These early investments were not simply about writing programs; they created a culture of rigorous testing, documentation, standardization, and distribution that enabled widespread adoption. Initially distributed through physical media (such as magnetic tapes or punched cards), these software packages reached broader audiences through initiatives like Netlib, arguably one of the first electronic software repositories, founded in the mid-1980s. Supported again by government research institutions, Netlib8 enabled researchers around the world to access high-quality mathematical software by email and, later, online, democratizing access and further amplifying the impact of the original federal investments.
Over time, the impact of these early projects has been profound and cumulative. EISPACK and LINPACK were succeeded by even more sophisticated packages, such as LAPACK and ScaLAPACK, funded in part by NSF and DOE grants. Each generation incorporated advances in computing hardware—vector processors, shared-memory systems, and eventually massively parallel clusters—while maintaining a commitment to openness and portability. Notably, the success of these projects depended not just on technology but also on communities: small, closely knit interdisciplinary teams of mathematicians, computer scientists, and domain experts collaborating across government laboratories and universities.
The influence of government-funded software development extended beyond scientific research itself. Standards such as BLAS, practices for benchmarking and portability, and the idea of open software libraries fundamentally shaped how scientific and engineering software was built across industries. Commercial packages such as IMSL and NAG incorporated government-developed software, and machine manufacturers tuned their systems against the LINPACK benchmark. Later, these practices seeded expectations for open source scientific computing, collaborative development models, and reproducibility that persist today.
In short, the early and strategic funding of mathematical and scientific software by U.S. government agencies played a critical enabling role. These investments fostered the creation of reusable computational infrastructure, catalyzed the growth of a global research community, and ensured that American science and engineering remained at the technological frontier. Their success offers a powerful example of how thoughtful public investment in software—especially software developed for broad scientific use—can have cascading, multi-generational impacts across academia, industry, and national innovation ecosystems.
Government Funding and the Development of the Message Passing Interface
The Message Passing Interface (MPI)7 standard represents one of the most consequential achievements in the history of HPC, establishing a scalable and portable programming model for distributed memory systems. Its creation and continued evolution were made possible through strategic investments and leadership from the U.S. government, particularly through the DOE and its Advanced Scientific Computing Research (ASCR) program, alongside support from the NSF.
During the late 1980s and early 1990s, the landscape of computational science was undergoing a profound shift. The advent of massively parallel processing systems, which interconnected thousands of processors each with its own local memory, introduced unprecedented computational potential—but also significant software complexity. Lacking a common standard, scientists and engineers were forced to develop system-specific communication libraries, leading to fragmentation, limited portability, and high development costs. The lack of interoperability across platforms risked stalling progress in scientific computing at a time when demands for large-scale simulations and modeling were rapidly increasing.
An important precursor to MPI was the development of the Parallel Virtual Machine (PVM)20 system, pioneered in the late 1980s by researchers at Oak Ridge National Laboratory, the University of Tennessee, and Emory University, with DOE support. PVM provided a software framework that allowed a heterogeneous collection of computers to appear as a single, unified parallel machine. While PVM was a major step forward and gained widespread adoption in early parallel computing projects, it became clear that a more comprehensive, standardized, and vendor-neutral solution was necessary to meet the needs of increasingly complex scientific applications and to ensure future scalability.
Recognizing this need, the DOE’s ASCR office played a pivotal role by supporting and convening the community to establish a formal message-passing standard. Through DOE and NSF sponsorship, a working group was assembled in 1992, consisting of approximately 20 representatives from national laboratories (such as Argonne and Oak Ridge), leading universities, and key industry stakeholders (including IBM, Intel, and Cray). The working group, which included prominent figures such as Jack Dongarra, William Gropp, Ewing Lusk, and Marc Snir, operated under a consensus-driven model to ensure broad community buy-in.
The resulting MPI 1.07 specification, released in 1994, defined a consistent and comprehensive set of communication protocols, including point-to-point messaging, collective communications, process groups, and synchronization mechanisms. The development of early prototype implementations—critical to validating the standard and demonstrating its viability—was also funded through federal initiatives. DOE’s ASCR program provided sustained support for MPI implementations, particularly through laboratory research programs and competitive grants aimed at advancing scalable scientific computing infrastructure.
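To give a sense of what the standard provides, here is a minimal, illustrative MPI program in C that uses only MPI-1 features: initialization, rank and size queries, and a collective reduction. It is a generic sketch rather than an excerpt from any production code, and it runs unchanged under any conforming implementation (MPICH, Open MPI, vendor MPIs), which is exactly the portability the working group set out to guarantee. The build and launch commands in the comment are conventional but implementation-dependent.

```c
/* Minimal sketch of the MPI-1 programming model: every process contributes a
 * partial value, and the collective MPI_Reduce combines them on rank 0.
 * Typical (implementation-dependent) build and launch:
 *   mpicc sum_demo.c -o sum_demo && mpirun -np 4 ./sum_demo
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's identifier */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

    /* Each rank contributes one term; real codes would partition their data. */
    double local = (double)(rank + 1);
    double total = 0.0;

    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %g\n", size, total);

    MPI_Finalize();
    return 0;
}
```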
Following the release of MPI 1.0, successive versions—MPI-2, MPI-3, and MPI-4—introduced additional capabilities, such as parallel I/O, one-sided communication, dynamic process management, and hybrid programming models compatible with multicore and manycore architectures. The ASCR program continued to sponsor efforts to adapt and optimize MPI for evolving hardware architectures, ensuring that U.S. researchers maintained access to cutting-edge computational tools even as technology advanced.
The impact of MPI has been profound. Today, MPI serves as the foundational programming model for nearly all large-scale scientific computing endeavors. It is deployed on virtually every major supercomputer worldwide, including DOE flagship systems used for climate modeling, materials science, fusion energy research, and nuclear stockpile stewardship.
The development of MPI illustrates a highly successful model of government intervention: identifying a strategic need, funding coordinated community-driven development, and sustaining long-term support through research programs like DOE’s ASCR. Without this vision and investment, the scientific community would have faced far greater barriers in harnessing the power of parallel computing. MPI’s continued relevance three decades after its inception underscores the transformative impact of early and sustained federal funding for critical scientific software infrastructure.
The Software Infrastructure of Discovery
While the public might marvel at the supercomputers that top the TOP50019 list, the actual enabler of progress lies elsewhere: in software. Federally supported research has cultivated the development of sound, reusable, and performance-portable software libraries that encapsulate hardware complexity and enable domain scientists to focus on modeling rather than memory layouts.
The BLAS, MPI, and IEEE 754 floating-point standard16 all grew out of federally funded partnerships. These projects provided low-level building blocks for higher-level applications with rigorous requirements for accuracy, portability, and reproducibility. When architectures evolved from scalar to vector to hybrid multicore-GPU designs, federal funding enabled the continuous updating of these libraries so that they remained useful for decades.
This durable software infrastructure has yielded multiplier benefits. It allows application developers to test new models rapidly, ensures continuity between generations of hardware, and supports education by making stable tools available to students and researchers. Above all, this software tends to outlast the hardware for which it was originally designed—a testament to the merits of long-term investment.
Moreover, scientific libraries have played a major role in closing the hardware-software gap. As computing architectures grow more heterogeneous and memory hierarchies become deeper and more intricate, such libraries shield users from low-level optimization. Packages like PETSc,3 Trilinos,21 and AMReX22—sustained by DOE and NSF support—illustrate the strategic utility of software abstractions capable of evolving with the hardware.
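As an illustrative sketch only (assuming a recent PETSc installation, version 3.18 or later for the PetscCall macro), the code below assembles a small linear system and solves it through PETSc’s KSP interface. Nothing in the application code names the hardware; the solver, preconditioner, parallel layout, and even accelerator backends are selected at run time through command-line options, which is the kind of abstraction these libraries provide.

```c
/* Illustrative sketch: solve a small tridiagonal system A x = b with PETSc's
 * KSP linear-solver interface. Solver choices are made at run time, e.g.:
 *   ./solve -ksp_type cg -pc_type jacobi -ksp_monitor
 */
#include <petscksp.h>

int main(int argc, char **argv) {
    Mat      A;
    Vec      x, b;
    KSP      ksp;
    PetscInt i, Istart, Iend, n = 8;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

    /* Assemble a 1-D Laplacian: 2 on the diagonal, -1 on the off-diagonals. */
    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
    PetscCall(MatSetFromOptions(A));
    PetscCall(MatSetUp(A));
    PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
    for (i = Istart; i < Iend; i++) {
        PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
        if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
        if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    }
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

    /* Right-hand side b = 1; x will hold the solution. */
    PetscCall(MatCreateVecs(A, &x, &b));
    PetscCall(VecSet(b, 1.0));

    /* The library hides data layout, communication, and kernel tuning. */
    PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
    PetscCall(KSPSetOperators(ksp, A, A));
    PetscCall(KSPSetFromOptions(ksp));
    PetscCall(KSPSolve(ksp, b, x));

    PetscCall(KSPDestroy(&ksp));
    PetscCall(VecDestroy(&x));
    PetscCall(VecDestroy(&b));
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
}
```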
Economic Competitiveness and Industrial Innovation
Federal investment in computing research has driven not only scientific innovation but also economic transformation. Much of the software and hardware underlying the digital economy of today originated in federally funded research. The Internet emerged from DARPA’s ARPANET, semiconductors and chip-fabrication techniques used today were invented in federally funded labs, and machine learning (ML) frameworks like TensorFlow and PyTorch rely on fundamental linear algebra subroutines that were initially implemented under DOE- and NSF-funded research.
Open-source software libraries such as LAPACK and FFTW12 are routinely embedded in commercial offerings, from engineering design software to financial modeling applications. These libraries, while open source, yield tremendous downstream economic benefit by accelerating product development and reducing software engineering costs.
The CHIPS and Science Act of 2022 reflects a renewed appreciation of this value. In allocating nearly $280 billion for R&D and semiconductor manufacturing, the Act reasserts the role of federal investment in maintaining U.S. leadership in foundational technologies. Yet as recent reports have suggested, hardware alone is insufficient. Long-term investment in software, education, and codesign ecosystems is also needed if these funds are to yield sustainable returns.
Historically, some of the largest U.S. tech firms—Google, NVIDIA, Intel, and IBM, for example—built on early federal R&D that was later commercialized into market-driving products. This public-to-private pipeline demonstrates how federal investment de-risks foundational innovation. Moreover, the widespread adoption of scientific libraries by cloud providers highlights how this infrastructure quietly powers scalable services throughout the economy.
Workforce Development and Community Building
Federal funding has supported generations of scientists, engineers, and computer programmers. Graduate fellowships, postdoctoral appointments, and early-career investigator awards have provided the stability for young researchers to enter the field. Computational science leaders today can trace their roots to NSF Graduate Research Fellowships, DOE Computational Science Graduate Fellowships, or summer internships at the national labs. Critically, these programs not only nurture individual talent but also foster collaborative communities.

Leadership-class facilities such as the National Center for Supercomputing Applications (NCSA), Texas Advanced Computing Center (TACC), Argonne Leadership Computing Facility (ALCF), Oak Ridge Leadership Computing Facility (OLCF), National Energy Research Scientific Computing Center (NERSC), and Lawrence Livermore National Laboratory’s Livermore Computing (LC) bring together interdisciplinary teams spanning computer science, mathematics, and domain science. These centers serve as incubators for innovation, where new algorithms, programming models, and applications are co-developed in the context of real-world problems.
The payoff from this community-building is immense. It enables transfer of knowledge across disciplines, promotes diversity in research teams, and supports an open culture of reuse. Without sustained support, though, such communities are likely to disintegrate—a threat to innovation and national leadership.
The importance of training the next generation of researchers cannot be overstated. In this era of burgeoning technological innovation, students trained in algorithm design, numerical methods, data-intensive applications, and software engineering will form the backbone of national competitiveness. Many go on to careers at universities, at national laboratories, and in industry, amplifying the national returns on these early investments.
National Security and Strategic Autonomy
HPC is a cornerstone of national security. Applications including nuclear stockpile stewardship, cryptography, pandemic modeling, and climate resilience all depend on the ability to simulate complex systems with high fidelity and reliability. Federal investment through the DOE National Nuclear Security Administration (NNSA), the National Labs, and the Department of Defense has ensured that the U.S. can conduct these simulations independently and securely.
However, this strategic advantage is being challenged. China has rapidly built its HPC capabilities through indigenous chip design and vertically integrated computing stacks. The European Union’s EuroHPC initiative aims to secure data sovereignty and leadership in AI-driven science. Japan’s Fugaku supercomputer exemplifies what sustained public funding can achieve.
Meanwhile, the U.S. currently lacks a coordinated HPC roadmap beyond the Exascale Computing Project. While exascale machines such as Frontier and Aurora are historic achievements, the absence of a long-term plan puts continued leadership at risk. As the Science article “High-Performance Computing at a Crossroads”6 explains, the U.S. must immediately develop a post-exascale plan that integrates AI, simulation, software, and workforce development into an overarching national strategy.
Not only scientific progress but strategic independence—sovereignty over one’s technological destiny—is on the line. As global tensions rise, control of supply chains, semiconductor design and manufacture, and data sovereignty are at the center of geopolitical influence. Robust domestic HPC capability ensures freedom from foreign disruption and preserves the U.S.’s freedom of action in matters of critical consequence.
Challenges and Structural Gaps
Despite these successes, significant challenges remain. Among the most long-standing is underinvestment in software relative to hardware. While the acquisition of leadership-class systems often makes headlines and attracts political commitment, the ongoing maintenance, refactoring, and enhancement of software receives far less attention—and fewer dollars.
This asymmetry is compounded by the rapid evolution of hardware architectures. With GPUs, tensor cores, and domain-specific accelerators becoming ubiquitous, existing codebases must be rewritten or reconfigured on an ongoing basis. Without long-term, stable funding, this work falls on already overburdened research groups or gets neglected, diminishing the utility of new systems.
Moreover, the market push toward low-precision computing for AI training threatens to sideline the needs of traditional scientific simulations, which require 64-bit floating-point precision. If commercial chip architectures pivot entirely to AI markets, the scientific community may be left without hardware suited to its needs—a risk heightened by the declining representation of academia in hardware design.
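A generic numerical example (not drawn from any of the cited codes) illustrates what is at stake: repeatedly accumulating the same value in 32-bit and 64-bit IEEE 754 arithmetic yields visibly different results, and the low-precision formats favored for AI training (FP16, BF16, FP8) carry far less precision still. Long time-stepping loops in simulations repeat this pattern billions of times.

```c
/* Toy demonstration of precision loss: sum 0.1 ten million times.
 * The exact answer is 1,000,000. The single-precision accumulator drifts
 * visibly because each addition is rounded to roughly seven significant
 * decimal digits; the double-precision accumulator stays accurate to many
 * more digits. Compile with, for example: cc precision_demo.c -o demo
 */
#include <stdio.h>

int main(void) {
    const int n = 10000000;
    float  sum_f = 0.0f;   /* 32-bit IEEE 754 accumulator */
    double sum_d = 0.0;    /* 64-bit IEEE 754 accumulator */

    for (int i = 0; i < n; i++) {
        sum_f += 0.1f;
        sum_d += 0.1;
    }

    printf("exact  : 1000000\n");
    printf("float  : %.4f\n", sum_f);
    printf("double : %.4f\n", sum_d);
    return 0;
}
```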
In addition, measures of software impact remain uneven. Unlike publication citation indices, success in software is often unmeasured and is therefore underweighted in funding and tenure decisions. Improved measures of usage, reproducibility, and user adoption are needed to reward high-impact work and guide future investment.
Policy Recommendations
To fill these gaps and seize future opportunities, the U.S. must act on an aggressive, coordinated computing research strategy. We recommend the following:
Create a national HPC and computational science roadmap: A 10-year roadmap spanning hardware, software, applications, and workforce that is coordinated across federal agencies and in cooperation with academia, national labs, and industry.
Invest in codesign and prototype systems: Fund exploratory systems that incorporate custom silicon, new programming models, and AI integration to prepare for post-exascale challenges.
Boost funding for software and algorithms: Elevate the status of software development in funding calls, provide long-term grants to assist library sustainability, and embrace software as a central piece of scientific infrastructure.
Advance access through initiatives such as NAIRR:17 Grant academic and emerging research institutions access to the computing resources, data, and tools they need to participate in advanced science.
Enhance education and training pipelines: Increase graduate fellowships, undergraduate research experiences, and interdisciplinary institutes that prepare students to operate at the intersection of computing and science.
Strengthen public-private partnerships: Engage cloud providers, chip vendors, and software companies in co-funded R&D partnerships that align national needs with commercial innovation.
Develop software sustainability efforts: Fund long-term maintenance of key software infrastructure, career tracks for research software engineers, and evaluation frameworks for software impact.
Enable cross-agency coordination: Create alignment between DOE, NSF, DOD, and other agencies to combine funding, avoid duplication, and enhance overall impact.
Conclusion
U.S. federal investment in research has been a driving force behind innovation in both computing and computational science. From the algorithms fueling today’s AI revolution to simulations that inform climate policy and pandemic response, this support has built a lasting legacy of talent, technological advancement, and scientific discovery.
Yet the path forward requires more than nostalgia. It requires deliberate, strategic, and sustained action. If the U.S. is to remain a leader in science and technology, it must recommit to investing in the whole ecosystem of computing—from chips to software, from education to applications. The future of innovation depends on it.
A strong national plan will not only guarantee scientific leadership but prepare the next generation to solve society’s most pressing problems—from health to energy, from climate to security. With the right investments, America can continue to lead the world in shaping the digital frontiers of the future.
There are several important lessons learned from decades of U.S. federal research funding in computing that should inform our approach moving forward.
Long-term, stable investment enables lasting impact.
Lesson: Foundational software such as EISPACK, LINPACK, LAPACK, MPI, and BLAS was only possible because of sustained, multi-decade investment from agencies like DOE and NSF.
Implication: Future efforts should prioritize durability and continuity in funding—short-term, project-based funding fails to build reusable infrastructure or cultivate long-term talent.
Public investment seeds broad ecosystems.
Lesson: Federal funding didn’t just produce code—it enabled standards (like IEEE 754), portable libraries, benchmarking frameworks (for example, TOP500), and knowledge transfer mechanisms (for example, Netlib).
Implication: Invest not just in individual projects but in ecosystem-wide efforts—documentation, testing, dissemination, and community-building must be part of any software initiative.
Codesign and interdisciplinary teams are crucial.
Lesson: The success of scientific libraries arose from close collaboration between mathematicians, computer scientists, domain experts, and hardware architects.
Implication: Future programs must embrace codesign—bringing software, hardware, and application teams together from the start—and fund it explicitly.
Software outlasts hardware.
Lesson: Many scientific codes (for example, LAPACK, PETSc) have endured across generations of hardware because of attention to portability and software engineering.
Implication: Investment in software maintainability, performance portability, and refactoring is essential to maximize return on hardware acquisition.
Workforce development is a strategic asset.
Lesson: Programs like DOE Computational Science Graduate Fellowships and national lab internships trained generations of HPC leaders.
Implication: Building a pipeline of well-trained computational scientists—including research software engineers—is a prerequisite for sustained competitiveness.
Software is undervalued and underfunded.
Lesson: There has been persistent underinvestment in scientific software, especially in maintenance and sustainability—even as it becomes more essential.
Implication: Elevate software to first-class infrastructure. Fund career paths, impact metrics, and long-term support—not just flashy new tools.
Strategic coordination matters.
Lesson: MPI, EISPACK, and others succeeded because of coordinated efforts across labs, agencies, and vendors—guided by shared goals.
Implication: Future initiatives should prioritize cross-agency alignment and community-driven development efforts rather than fragmented, one-off projects.
Software enables sovereignty and security.
Lesson: High-end simulation software underpins strategic domains like nuclear stockpile stewardship, climate forecasting, and pandemic planning.
Implication: Control over HPC software is a matter of national capability and autonomy—outsourcing or neglecting software poses strategic risks.
These lessons argue for a holistic approach to funding scientific computing: one that views software not as an afterthought, but as the core enabler of discovery, innovation, and security. In this context, investments in translational HPC research—bridging algorithmic advances and real-world applications—are becoming increasingly important, ensuring that breakthroughs in computing are rapidly transferred to pressing societal and scientific challenges.