Dear KV,
I am working on a project that has been selected for an external security review by a consulting company. They are asking for a lot of information but not really explaining the process to me. I cannot tell what kind of review this is—pen (penetration) test or some other thing. I do not want to second-guess their work, but it seems to me they are asking for all the wrong things. Should I point them in the right direction or just keep my head down, grin, and bear it?
Reviewed
Dear Reviewed,
I have to say that I am not a fan of keeping one’s head down, or grinning, or bearing much of anything on someone else’s behalf, but you probably knew that before you sent this note. Many practitioners in the security space are neither as organized nor as original in their thinking as KV would like. In fact, this is not just in the security space, but let me limit my comments, for once, to a single topic.
Overall, there are two broad types of security review: white box and black box. A white-box review is one in which the attackers have nearly full access to the code, design documents, and anything else that will make it easier for them to design and carry out a successful attack. A black-box review, or test, is one in which the attackers can see the system only in the same way a normal user or consumer would.
Imagine you are attacking a consumer device such as a phone. In a white-box situation, you have the device, the code, the design docs, and everything else the development team came up with while building the phone; in a black-box case, you have only the phone itself. The pen test is currently in vogue in security circles, but, candidly, it is just a black-box test of a system. In point of fact, the goal of any security test or review is to determine whether an attacker can carry out a successful attack against the system.
Determining what is or is not a successful attack requires the security tester to think like the attacker, a trick KV finds easy, because at heart (what heart?) I am a terrible person whose first thought is, “How can I break this?” Security testing is often quite easy because of the incredibly low overall quality of software and the increasingly large number of software modules used in any product. To paraphrase Weinberg’s Second Law, “If architects designed buildings the way programmers built programs, the first woodpecker that came along would destroy all of society.” The difficult parts of security work are constraining the attacks to those that matter and getting past those developers with a modicum of clue who are able to build systems that at least resist the most common script kiddie attacks.
Your letter seems to imply your external reviewers are interested in a white-box review, since they are asking for a great deal of information rather than just taking your system at face value and trying to violate it. What to expect from a white-box security review, at least at a high level, should not surprise anyone who has ever participated in a design review, as the two processes are reasonably similar. The review works in top-down fashion: the reviewer asks for an overall description of the system, hopefully enshrined in a design document (please have a design document); failing that, the same information can be extracted, painfully, through a series of meetings.
Extracting a design in a review meeting takes a great deal longer in the absence of a design document but, again, looks similar to a design review. First, there must be a lot of coffee in the room. How much coffee? At least one pot per person, or two if you have KV in the room. With the coffee in place, you need a large whiteboard, at least two meters (six feet) long.
Then we have the typical line of interrogation: “What are the high-level features?” “How many distinct programs make up the system?” “What are they called?” “How do they communicate?” and for each program, “What are the major modules of this program?” KV once asked a software designer after he had filled a four-meter whiteboard with named boxes, “What’s the architecture that holds all this together?” to which the answer was, “This system is too complex to have an architecture.” The next sound was KV’s glasses clattering on the table and a very heavy sigh. Needless to say, that piece of software was riddled with bugs, and many were security related.
A good reviewer will have a minimal checklist of questions to ask about each program or subsystem, but nothing too prescriptive. A security review is an exploration, a form of spelunking, in which you dig into the dirty, unloved corners of a piece of software and push on the soft parts. Overly prescriptive checklists always miss the important questions. Instead, the questions should start broad and then get more focused as issues of interest appear—and trust me, they always will.
When issues are found, they should be recorded, though perhaps not in an easily portable form, since you never know who else is reading your ticketing system. An attacker who gets inside a system will head straight for the bug database. If you have a bad apple or two inside the company (and what company is free of rotten apples?) and they do a search on "Security P1," they are going to walk away with a lot of fodder for zero-day attacks against your system.
Once the system and its modules have been described, the next step is to look at the module application programming interfaces (APIs). You can learn a lot about a system and its security from looking at its APIs, though some of what you learn can never be unseen. It can be pretty scarring, but it has to be done.
The APIs have to be looked at, of course, because they show what data is being passed around and how that data is being handled. There are security scanning tools for this type of work, which can direct you toward where to perform code reviews, but it is often best to spot-check the APIs yourself if you have any ability or intuition around security.
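To make that concrete, here is a small, hypothetical C sketch of the kind of thing an API spot-check turns up. Neither function comes from any real system; the point is that a signature alone tells you how much trust travels along with the data.

/* Hypothetical example of what an API spot-check can reveal; neither
 * function is drawn from any real code base. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Red flag: no buffer size and no way to report failure.  The function
 * must simply trust that "name" is NUL-terminated and short enough. */
static void set_user_name_risky(char *dst, const char *name)
{
    strcpy(dst, name);          /* classic overflow waiting to happen */
}

/* Better: the buffer size travels with the buffer, and the caller can
 * see when the input was rejected rather than silently mangled. */
static int set_user_name_checked(char *dst, size_t dst_len, const char *name)
{
    size_t n = strnlen(name, dst_len);
    if (n >= dst_len)
        return -1;              /* too long: refuse, do not truncate */
    memcpy(dst, name, n + 1);   /* copy the terminating NUL as well */
    return 0;
}

int main(void)
{
    char buf[8];
    if (set_user_name_checked(buf, sizeof(buf), "kv") == 0)
        printf("ok: %s\n", buf);
    if (set_user_name_checked(buf, sizeof(buf), "averylongname") != 0)
        printf("rejected oversized input\n");
    (void)set_user_name_risky;  /* deliberately never called */
    return 0;
}

The second signature guarantees nothing by itself, but it at least gives the reviewer, and the compiler, something to check.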
Lastly, we come to the code reviews. Any reviewer who wants to start here should be fired out of a cannon immediately. The code is actually the last thing to be reviewed—for many reasons, not the least of which is that unless the security-review team is even larger than the development team, they will never have the time to finish reviewing the code to sufficient depth.
Code reviews must be targeted and must look deeply at the things that really matter. It is all of the previous steps that have told the reviewers what really matters, and, therefore, they should be asking to look at maybe 10% (and hopefully less) of the code in the system. The only broad view of the code should be carried out, automatically, by the code-scanning tools previously mentioned, which include static analysis. The static analysis tools should be able to identify hot spots that the other, human reviews have missed.
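For the sake of illustration, here is a contrived C fragment of the sort of hot spot a static analysis pass will flag for human follow-up; the structure, function, and bugs are all invented here and come from no real code base.

/* A contrived hot spot of the kind a static analyzer hands to a human
 * reviewer; everything below is invented for illustration. */
#include <stdlib.h>
#include <string.h>

struct session {
    char  *token;
    size_t token_len;
};

/* Findings a typical analyzer would report here:
 *  1. The malloc() result is used without a NULL check.
 *  2. "len" arrives from the peer and is trusted, so len + 1 can wrap
 *     around and the allocation size is attacker-controlled. */
int session_set_token(struct session *s, const char *buf, size_t len)
{
    s->token = malloc(len + 1);     /* unchecked, possibly overflowed */
    memcpy(s->token, buf, len);     /* NULL dereference if it failed  */
    s->token[len] = '\0';
    s->token_len = len;
    return 0;
}

A finding like this is exactly where a targeted human review earns its keep: the tool points at the line, and the reviewer decides whether an attacker can actually reach it with hostile input.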
With the review complete, you should expect a few outputs, including summary and detailed reports, bug-tracking tickets that describe issues and mitigations (all while being secured from prying eyes), and hopefully a set of tests the QA team can use to verify that the identified security issues are fixed and do not recur in later versions of the code.
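As a sketch of what such a test might look like, here is a minimal C regression test for a hypothetical fix in which parse_port() must reject anything outside 1 through 65535; the function name and the inputs are invented for illustration.

/* Minimal security regression test for a hypothetical parse_port() fix. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the fixed routine under test: reject, never clamp. */
static int parse_port(const char *s, unsigned *port_out)
{
    char *end = NULL;
    long  v   = strtol(s, &end, 10);
    if (end == s || *end != '\0' || v < 1 || v > 65535)
        return -1;
    *port_out = (unsigned)v;
    return 0;
}

int main(void)
{
    unsigned port;

    /* Inputs from the original finding must keep failing... */
    assert(parse_port("0", &port) == -1);
    assert(parse_port("65536", &port) == -1);
    assert(parse_port("-1", &port) == -1);
    assert(parse_port("8080; rm -rf /", &port) == -1);

    /* ...and legitimate inputs must keep working. */
    assert(parse_port("443", &port) == 0 && port == 443);

    printf("security regression tests passed\n");
    return 0;
}

The point is that the inputs from the original finding keep failing, and the legitimate ones keep working, on every build from now on.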
It is a long process littered with broken hearts and coffee mugs, but it can be done if the reviewers are organized and original in their thinking.
KV
Related articles
on queue.acm.org
How to Improve Security?
Kode Vicious
https://queue.acm.org/detail.cfm?id=2019582
Security Problem Solved?
John Viega
https://queue.acm.org/detail.cfm?id=1071728
Pickled Patches
Kode Vicious
https://queue.acm.org/detail.cfm?id=2856150