
NSF Accelerator is Open for Business


Douglas Maughan has been funding government research for a quarter-century, but his latest endeavor, the National Science Foundation (NSF) Convergence Accelerator, is taking a fresh and dynamic approach to seeding groundbreaking ideas.

"It's a little different, it's essentially a startup inside the government," said Maughan, the program office head. "If you talk to industry and other academic institutions, most people really like this approach."

The accelerator is an attempt to harness the NSF's 10 Big Ideas, big-picture areas into which the organization is putting significant effort and funding.

"We are not one of the Big Ideas," Maughan said. "We are intended to be a consumer of the basic research coming out of them."

That basic research is, by design, not confined to any single entity. Each research team funded by the accelerator includes cross-disciplinary experts from academia, industry, and other research agencies.

Accelerator as Shark Tank

"It's all based on deliverables, but it's quite different from traditional NSF basic research where deliverables are papers and grad students," he said. "We are talking about things like demonstrations and proofs of concept and prototypes, and if something is successful and makes it all the way to commercialization, great. But that's not what we are focused on. We are focused on not only having industry as part of the team, but getting them interested in this research; the government is making this investment because there are places for industry to participate."

In fact, he said, the thrust behind the accelerator is to shift the focus of the research into "use-inspired" or applied research in which industry must be involved. The complexities of research problems in the modern connected world make that approach necessary.

The accelerator solicited proposals around two of the Big Ideas — "Harnessing the Data Revolution" (Track A) and "Future of Work at the Human-Technology Frontier" (Track B) — early in 2019, and announced the 43 projects funded for the first phase of research in September.

That initial phase will end in May. At that time, Maughan said, "We'll do a pitch competition and we will select 10 of those initial 43 for Phase II. For Phase II they will get up to $5 million and up to 24 months to make their idea real, to make it something that can be what we call use-inspired research or transition to practice. It's not necessarily focused on a commercialization angle. If something goes that far, all the better, but that's not the primary focus."

Paramount in all the work is a collaborative melding of existing computational capabilities, research into extending or integrating them with one another, and employing them to address problems and opportunities that existing, discrete research approaches have not.

Maughan said he thought the projects being done under Track A, many of which are exploring methods of building multi-discipline, public-private open knowledge networks, could be more likely to include advances in fundamental computer science. The Track B approaches, he said, which explore the human-technology interface, may be approached from a higher level of abstraction, "but there are some really significant opportunities there."

Researchers stress integration, nimbleness

One example of the Track A work is being pursued by University of Cincinnati researcher Lilit Yeghiazarian. Yeghiazarian said the project, The Urban Flooding Open Knowledge Network (UF-OKN), is intended to converge data collected or generated across numerous systems (precipitation amounts and forecasts, transportation networks, inland navigation systems, dams, power grid, drinking water and storm water systems, sewer network, surface water and groundwater), as well as urban socioeconomic and public health information.

"Currently, these data are in silos," Yeghiazarian said. "To solve big societal problems such as urban flooding, data must be linked."

In Phase I, she said, the research team is developing vocabularies to semantically link data from the various sub-systems. On completion, according to its abstract, the Phase I work will lay the groundwork for a production-scale urban flooding open knowledge network in Phase II, and will also make the prototype UF-OKN and its applications publicly available. The outcome of a potential Phase II project would be a fully functional UF-OKN that would respond to plain-English Internet queries with actionable information on which infrastructure across the affected urban multiplex area would be impacted during storms and flooding.
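The UF-OKN's actual vocabularies are still being developed, but a minimal sketch suggests what linking siloed data semantically can look like in practice. The Python fragment below uses the rdflib library to tie a stream gauge (hydrology silo), a drainage basin, and a road segment (transportation silo) into one graph, so a single query can cross both silos; the "ufokn" namespace, all entity names, and the flood-stage threshold are invented for illustration and are not the project's schema.

```python
# Hedged sketch of semantic linking across data silos, in the spirit
# of the UF-OKN. The "ufokn" vocabulary is a hypothetical placeholder.
from rdflib import Graph, Literal, Namespace, RDF

UFOKN = Namespace("http://example.org/ufokn/")  # hypothetical namespace

g = Graph()
g.bind("ufokn", UFOKN)

# A stream gauge (hydrology silo) and a road segment (transportation
# silo) are linked through a shared drainage basin.
gauge = UFOKN["gauge/ohio-123"]
basin = UFOKN["basin/mill-creek"]
road = UFOKN["road/I-75-seg-40"]

g.add((gauge, RDF.type, UFOKN.StreamGauge))
g.add((gauge, UFOKN.observesBasin, basin))
g.add((gauge, UFOKN.stageMeters, Literal(4.2)))
g.add((road, RDF.type, UFOKN.RoadSegment))
g.add((road, UFOKN.drainsTo, basin))

# The kind of question a natural-language front end might compile to:
# which road segments share a basin with a gauge above flood stage?
q = """
PREFIX ufokn: <http://example.org/ufokn/>
SELECT ?road WHERE {
  ?gauge ufokn:observesBasin ?basin ;
         ufokn:stageMeters ?stage .
  ?road  a ufokn:RoadSegment ;
         ufokn:drainsTo ?basin .
  FILTER (?stage > 3.0)
}
"""
for row in g.query(q):
    print(row.road)
```

Once both silos point at the same basin node, the cross-silo join comes for free; that shared vocabulary is exactly what the Phase I work is building.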

Another Track A project tackles one of the holy grails of data-dependent research: enabling end users with little or no knowledge of programming to acquire and use large, complex datasets from the Web, so anyone can contribute to an open knowledge network and help create a shared resource of broad value.

"Our team has already developed a tool, Helena, that empowers non-programmers to write Web scrapers by demonstrating how to collect a sample of the target dataset," one of the team's researchers, University of California at Berkeley professor Sarah Chasins, said. "In the Convergence Accelerator, we're integrating Helena with OKN tooling, making it easy for domain experts to populate the OKN with new datasets and integrate datasets from many different sources."

One of the projects in the Track B queue is exploring how to create a universal model of microcredentials that can be employed at any institution of higher education. Such microcredentials, which document that holders have mastered a tightly targeted set of skills, are expected to be increasingly desirable from numerous perspectives in the fast-moving economy of the 21st century.

"If you start with standards, try to create degrees or online certificates that come out of those, and match them to industry needs and align them with pedagogy experts in a specific content area, it takes a while," said the project's principal investigator, SUNY-Buffalo professor Samuel Abramovich. "That's still worth doing and still very important. But microcredentials offer a holder the means to quickly communicate their competence when there becomes an immediate need."

In terms of building a scalable computing infrastructure around microcredentials, Abramovich said system requirements still need to be defined before ontological features can be designed. The parameters of the NSF program, he said, helped expose the wide variety of perspectives around microcredentials and the need to focus design efforts.

"The grant program encouraged us to really talk to people, and in one way we are running up against the challenge that everyone has their own view and understanding of what a microcredential is and where it can best be used," he said. "But we think we can still leverage all we know about learning and assessment and measurement and provide a system that allows local learning organizations to learn about the microcredential. We need to first do that, and then we can get to the ontology."

For instance, Abramovich said, the microcredential work could also help address some of the unintended consequences of existing human resource algorithms. One example is the discarding of candidates who might be otherwise qualified for a position due to lack of a monolithic certificate such as a bachelor's degree.

"We have to develop and study these systems more," he said, "because we don't want to develop the wrong algorithms for this. We don't want to reproduce the same challenges on a new system."

Two years, maybe more

Maughan said the accelerator is formally budgeted through next year and tentatively funded through 2023. He contends the multi-stakeholder approach is popular with industry as well as government agencies and academic researchers, and he is confident the approach will prove durable.

"We'll do some new topics in 2020 and 2021," he said. "From 2021 onward, the accelerator will be full. We'll have 10 teams from 2019 that will be in Year Two of Phase II, 10 teams from 2020 in Year 1 of Phase II and 30 teams in Phase I. We're trying to build an accelerator that has 50 teams each year going forward after 2021. We're also trying to push the basic research farther down the field, and doing it in partnership with industry is the right way to approach it."

Gregory Goth is an Oakville, CT-based writer who specializes in science and technology.
