Opinion
Kode Vicious

Pickled Patches

On repositories of patches and tension between security professionals and in-house developers.

Dear KV,

I recently came upon a software repository that was not a repo of code, but a repo of patches. The project seemed to build itself out of several other components and then had complicated scripts that applied the patches in a particular order. I had to look at this repo because I wanted to fix a bug in the system, but trying to figure out what the code actually looked like at any particular point in time was baffling. Are there tools that would help in working like this? I have never come across this type of system before, where there were more than 100 patches, some of which contained thousands of lines of code.

Pick a Peck of Pickled Patches


Dear Pickled,

The appropriate tools for such a system do exist, but they require a background check in many states in the U.S. and are banned outright in the more developed countries of the world.

What you are faced with is a project that probably ought to have forked the projects it was working with, but, instead, started with one patch, then two patches, then four patches, until you have what you see before you. When a project is developing quickly and has not started out with the understanding that it is a significant derivative work, the proper use of source code control tools may not occur soon enough in the development process. It requires discipline to spend some upfront time thinking about how to integrate existing code with new development, and if that work does not get done early, it often does not get done at all.

If you want to fix a single bug in the system, I suggest you contact the developers, because they should understand what they have done—and the mess they are in—sufficiently well to be able to address your problem more quickly than you can sort out what they have done with their system. On the other hand, if you need to do significant work on the system you are looking at, you may have to take more extreme measures.

You mention that the project you are looking at is a repo of patches for use with another project. That being the case, you need to lay down what I will refer to as a base track. The ultimate, upstream software the system is based on has to be the base layer. It also needs to be placed into a source code control system that allows you to update that base layer from the ultimate source.

With the base layer in place, you should create a branch per patch from the derivative system. You could do this blindly, but it is probably best—although possibly quite frustrating—to read through the project build scripts carefully beforehand. I will wager that some of the patches you see in the derived project are not standalone, but instead depend on each other to fix some underlying bug or to implement a complex feature of the system. Once you have collected the patches into groups, you can then create the patch branches and import the patches from the derived repo.

With code now properly contained in a source code control system, you should branch the base layer into its own development branch. Never directly modify the base layer in your project repository, as this will make integrating changes from the ultimate upstream repository nearly impossible. Let me say that again, never directly modify the base layer in your project repository, as this will make integrating changes from the ultimate upstream repository nearly impossible.
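
If the upstream and the derived project both happen to be reachable with git, the bookkeeping might look something like the following sketch. The upstream URL, directory layout, branch names, and patch groupings here are hypothetical placeholders; the point is the shape of the workflow, not the particular commands.

    #!/usr/bin/env python3
    """Sketch: lay down a pristine base layer and one branch per group of
    related patches.  The upstream URL, paths, and patch groupings are
    hypothetical; adjust them to match the real derived project."""
    import subprocess
    from pathlib import Path

    UPSTREAM = "https://example.org/upstream/project.git"   # hypothetical
    REPO = Path("project")

    # Hypothetical groupings gleaned from reading the derived project's
    # build scripts: each group is a set of patches that depend on each other.
    PATCH_GROUPS = {
        "fix-threading": ["0001-locking.patch", "0002-locking-tests.patch"],
        "feature-ipv6":  ["0003-ipv6-core.patch", "0004-ipv6-cli.patch"],
    }

    def git(*args):
        subprocess.run(["git", *args], cwd=REPO, check=True)

    # 1. The pristine base layer: a clone of the ultimate upstream source.
    subprocess.run(["git", "clone", UPSTREAM, str(REPO)], check=True)
    git("checkout", "-b", "base/pristine")

    # 2. One branch per patch group, each cut from the pristine base.
    for group, patches in PATCH_GROUPS.items():
        git("checkout", "-b", f"patches/{group}", "base/pristine")
        for patch in patches:
            # git am preserves authorship for mailbox-format patches; for
            # plain diffs, fall back to git apply followed by a commit.
            git("am", str(Path("../derived-repo/patches") / patch))

    # 3. Development happens on its own branch; the pristine base layer is
    #    never modified directly, so upstream updates can still merge cleanly.
    git("checkout", "-b", "base/devel", "base/pristine")

Keeping one branch per group of related patches also makes it obvious, later on, which patches can be retired once the upstream absorbs the underlying fix.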

For the base layer you will always have at least two branches: the pristine branch, which includes only changes coming from the upstream, and the development branch, which takes code from the pristine branch and merges it with the patches. You can now integrate patches into the development branch and test them one by one to make sure they work individually before trying to make them all work together. KV often goes on about testing, but in your case it bears a good deal of emphasis. Unless you are in close communication with your upstream providers, you have no idea how they are testing these patches, and accepting them wholesale without incremental tests is a great way to wind up paying a lot of money to someone who puts you on a couch and asks you questions about your childhood.
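
Continuing the same hypothetical layout, folding the patch groups into the development branch one at a time, with a test run after each merge, might look like this; the branch names and the test command are stand-ins for whatever the real project uses.

    #!/usr/bin/env python3
    """Sketch: merge each patch branch into the development branch one at a
    time and run the tests after every merge, so a failure points at a
    single group of patches.  Branch names and test command are hypothetical."""
    import subprocess

    PATCH_BRANCHES = ["patches/fix-threading", "patches/feature-ipv6"]
    TEST_CMD = ["make", "test"]

    def git(*args):
        subprocess.run(["git", *args], check=True)

    git("checkout", "base/devel")

    for branch in PATCH_BRANCHES:
        git("merge", "--no-ff", branch)
        if subprocess.run(TEST_CMD).returncode != 0:
            # Back out the failed merge and stop; this group needs attention
            # before anything else is stacked on top of it.
            git("reset", "--hard", "HEAD~1")
            raise SystemExit(f"tests failed after merging {branch}")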

Of course your best bet is to find the people creating patches of patches and then give them this response. You might want to inscribe it in golden fire on tablets first, but that is up to you.

KV


Dear KV,

Many organizations may be institutionalizing tension between security professionals and in-house developers. In organizations where the security professionals and the in-house developers sit in different organizational units, and where the security professionals bear ultimate responsibility for security, it is natural for the security perspective to dominate the dialogue between the two camps.


Security professionals have a clear mandate to protect the organization, and their toolset necessarily includes rigidly standardized computer settings, policies, and enforcement mechanisms.

In-house developers frequently require exceptions to security policies because their work may require access to software tools excluded from the standard office suite (for example, integrated development environments), security testing tools (such as OWASP Zap), and/or elevated privileges. Also, developers who work with multiple projects (as most of us do) may need multiple virtual machines in order to manage multiple development-project contexts.

As a software developer who attempts due diligence with respect to security, I am often disappointed when security professionals seem to pay so little attention to the concerns of in-house software developers. When security policies are inflexible, useful tools or approaches are disallowed, and in-house developers are unable to apply software development best practices, the organization is not necessarily more secure.

I am hoping you will use your voice to stimulate debate on this topic. Please consider a blog article discussing how security professionals might collaborate with in-house developers to the benefit of all. You might discuss alternative approaches for reconciling corporate policies (or baselines such as the USGCB, the U.S. Government Configuration Baseline) with developers executing security probes against their own code, for example with OWASP Zap.

Zapped


Dear Zapped,

Once upon a time I was one of those in-house security professionals; I still have cards with my title, “Paranoid,” printed right on them. I never keep old business cards, but I kept those because they are the only ones I have ever had that were that honest. I could print more honest cards, but then I could not hand them out in polite company.

I have never been a fan of blanket bans of, well, just about anything, and definitely not software that would help developers produce better and more secure code. Blanket bans usually come from a misguided belief that the rules of engagement can be defined by a small group and that if everyone sticks to those rules nothing can possibly go wrong. That belief is not only mistaken, but incredibly dangerous. Any security team with a clue knows you set out general guidelines and then work with the development group in an advisory role to ensure the guidelines make sense. Only an idiot would create a set of rules that apply to both the development and the accounting teams. Alas, the world is filled with idiots as well as those who simply fear the unknown.

A cursory glance at the software you mention does not show it to be more dangerous than any other piece of software a developer might use, right down to a compiler or a debugger. What is important in any of these discussions is an ability to come up with reasonable boundaries and safeguards so the software in question can be used without causing accidental damage to the systems. Reasonable guidelines are developed in concert with the teams doing the work. Members of a good security team know they are playing a supporting role and they must gain the trust of the people they are working with in order to do their job.

When developers need to do something considered particularly dangerous, for instance attack their own systems, it often makes sense to do this in a lab environment, at least at first. Unleashing the latest penetration testing toy on the company website might be amusing, in much the way that some people consider car accidents fun to watch, but it is not going to improve site reliability or security.

Large companies have dedicated teams of pen testers—often within the security team. Having a small group that can work with developers to create appropriate security test scenarios and schedule them to run at times that are convenient for testing is a good solution if sufficient resources are available. Startups and smaller companies will need to have their developers do this type of work, just as they have them do everything else, from coding to testing and documentation. The rise of cheap cloud computing should actually help in this area because a service can be cloned, walled off, and “attacked” with tools without harming the active service. One has to be careful with this type of testing, as some cloud providers may flag your attack testing as a real attack and shut you down. You mention virtual machines in your letter, and this is another way to achieve the same ends as the cloud solution, by spinning up a cluster of virtual machines on a virtual network on a large server dedicated to security testing.
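
To make the walled-off idea concrete, here is one possible sketch using containers on an internal-only network. The image names, network name, and scanner invocation are hypothetical, and the same pattern works just as well with a cluster of virtual machines on a private virtual network.

    #!/usr/bin/env python3
    """Sketch: stand up a copy of a service on an internal-only container
    network and attack the copy, never the production deployment.  Image
    names, the network name, and the scanner command are hypothetical."""
    import subprocess

    def run(*args):
        subprocess.run(list(args), check=True)

    # An internal network has no route to the outside world, so neither the
    # cloned service nor the attack traffic can leak out of the lab.
    run("docker", "network", "create", "--internal", "pentest-lab")

    # A throwaway copy of the service under test (hypothetical image name).
    run("docker", "run", "-d", "--rm", "--name", "target",
        "--network", "pentest-lab", "example/our-service:latest")

    # Run the scanning tool inside the same walled-off network
    # (hypothetical scanner image and flags).
    run("docker", "run", "--rm", "--network", "pentest-lab",
        "example/scanner:latest", "--target", "http://target:8080")

    # Tear the lab down when the test run is over.
    run("docker", "rm", "-f", "target")
    run("docker", "network", "rm", "pentest-lab")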


The case for elevated privileges is another one that comes up frequently between security and development teams. The basic problem is that software systems, including operating systems, but also extending to databases and other pieces of critical software, are not engineered with the idea of security in mind. The now-famous XKCD comic (https://xkcd.com/149/) about the sudo command pretty much says it all. Most software is written to run either as an individual user, who usually has sufficient privilege to run the service but not enough to test or debug it, or as root. When developers are debugging or testing, they want to run the software as “root” or with a similar superuser-type power because then “it just works.” While these two levels of power are easy to understand—user vs. root—and while they do underpin a lot of modern computing thanks to their use in the original Unix operating system, they are insufficiently expressive. But that is a topic for another, much longer discussion.

Short of rewriting a ton of existing software, we come down to needing test environments that are walled off from most of the rest of the system, or that require special commands to get external access, to give developers a safe sandbox in which to work.

If elevated privileges are absolutely required to get a job done, those privileges must have a timeout. To me it seems reasonable to give a developer root powers on a production box so long as they are working with one other person, and all their commands are written to a file. The sudo program can actually do all of this. I would either set the timeout for a day or to when the problem was fixed, whichever came first. That statement is not meant to be a policy to be blindly adopted by all of my willing thralls; it is simply an example of how security and development teams can work together to get a job done.
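
As an illustration of the sudo point, a time-limited, fully logged grant might be scripted along the following lines. The user name, file name, and one-day window are made up for the example, and the NOTAFTER expiration option assumes a reasonably recent version of sudo, so treat this as a sketch rather than a policy.

    #!/usr/bin/env python3
    """Sketch: grant a developer temporary root access with full I/O logging
    by installing a short-lived drop-in under /etc/sudoers.d.  The user name,
    file name, and expiration window are hypothetical, and the NOTAFTER
    option requires a reasonably recent sudo."""
    import os
    import subprocess
    import tempfile
    from datetime import datetime, timedelta, timezone

    USER = "alice"                               # hypothetical developer
    DROPIN = f"/etc/sudoers.d/temp-{USER}"       # hypothetical file name
    expires = datetime.now(timezone.utc) + timedelta(days=1)

    rules = (
        f"# Temporary grant, expires {expires:%Y-%m-%d %H:%M} UTC.\n"
        f"# Input and output are recorded for replay with sudoreplay.\n"
        f"Defaults:{USER} log_input, log_output\n"
        f"{USER} ALL=(root) NOTAFTER={expires:%Y%m%d%H%M%S}Z ALL\n"
    )

    # Write to a temporary file first: sudo ignores files whose names contain
    # a '.', so the half-written file is inert until renamed into place.
    fd, tmp = tempfile.mkstemp(dir="/etc/sudoers.d", suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        f.write(rules)

    # visudo -cf syntax-checks a single file; a malformed sudoers file can
    # lock everyone out, so never skip this step.
    subprocess.run(["visudo", "-cf", tmp], check=True)
    os.chmod(tmp, 0o440)
    os.rename(tmp, DROPIN)

Deleting the file, or simply letting the NOTAFTER timestamp lapse, ends the grant, and sudoreplay can later show what was done with it.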

KV

Related articles
on queue.acm.org

Painting the Bike Shed
Kode Vicious
http://queue.acm.org/detail.cfm?id=1557897

Security in the Browser
Thomas Wadlow
http://queue.acm.org/detail.cfm?id=1516164

Patching the Enterprise
George Brandman
http://queue.acm.org/detail.cfm?id=1053344

