As for many colleagues of my generation, my path into computer security research was anything but academic. In the late 1990s, there were very few academic research centers focused on security, and even fewer dedicated courses or degrees, especially outside the U.S. We—many of us, at least—came from a hacking background, experimenting with things such as vulnerabilities and exploitation on our own. We would find our brethren in obscure alleyways of the Internet, browse through e-zines, and (if lucky) attend hacker meetups with a score of attendees. I remember fondly the feeling of attending DefCon for the first time, 20 years ago, and seeing a few thousand kindred souls together.
It should not come as a surprise that for the "hackademics," as a colleague once half-jokingly described us, offensive security research holds a definite thrill. This makes rational sense in a discipline that lacks a fundamental, unified theory of how to build "secure things,"1 and where in fact most properties are defined in negative terms (how "not to build"). After all, we define the robustness of encryption by its resilience to attacks: We routinely propose attacks first, and only then offer mitigations. Even in the applied, corporate world, we use penetration testing and red-teaming exercises to assess security levels. The strictest security evaluation standards, such as the Common Criteria, define security on the basis of resilience to attack attempts. As the saying goes, in security, defense is the child of offense. Not the other way around.