In 1983, a Silicon Valley startup called Hunter & Ready offered a Volkswagen Beetle as a reward to anyone who could find a "bug" in the firm's Versatile Real-Time Executive (VRTX) operating system.
The joke was that the Beetle, with its curvy insect-like carapace, was nicknamed "the bug."
That reward-driven, crowdsourced idea struck a chord, and it has since grown into an effective way to spot computer vulnerabilities. Today, hundreds of organizations worldwide run "bug bounty" schemes in which teams of ethical hackers scrutinize code to identify exploits or other vulnerabilities, in exchange for cash rewards that range from around $600 to $4,000 for run-of-the-mill bugs, to as much as $250,000 for rare, high-severity vulnerabilities.
Bug bounty programs are run either directly, by an organization whose technology needs probing, or via an online platform that farms out the task. Direct programs are run by the likes of Intel, Google, Apple, and Microsoft, firms with the means to cope with the administration such bounties require.
Since 2012, however, firms that do not want to manage such programs—like Twitter, Slack, and Pinterest—have been able to outsource them to online platforms HackerOne and Bugcrowd, each of which has more than 100,000 freelance white hats signed up, ready to reap cash rewards and, of course, hacker bragging rights.
"It used to take an immense amount of effort to develop portals for managing inbound vulnerability reports," explains HackerOne cofounder Michiel Prins. "That's because it involves validating and triaging reports, getting them to the right team to resolve them, all while tracking metrics and response times."
Both types of scheme can pay out big money. Google last year paid out almost $3 million in bug rewards, for instance, which included $112,000 for one researcher who discovered a major flaw in Google Pixel smartphones, and $100,000 to another researcher who discovered a chain of five bugs that allowed an attacker to take remote control of Chrome.
Says Casey Ellis, founder and CTO of Bugcrowd, which runs 800 bounty programs for companies, "Over the past year we have had 37,000 vulnerabilities reported, a 21% increase year on year, and an increase of 36% in bug bounty payouts, with the largest single-bug payout of $114,000 occurring just a few weeks ago."
It's a similar story at HackerOne, which runs 1,000 customer bug-bounty programs. To date, says Prins, HackerOne has paid out $31 million to its crowd of hackers, including $7 million in 2016 and $12 million in 2017. In terms of bugs, Prins says, HackerOne white hats resolved 20,000 incidents in 2016 and 27,000 in 2017. However, he points out that numbers alone don't tell the whole story; severity counts, too: "About 24% of the vulnerabilities reported on HackerOne are rated as 'high' or 'critical' severity," Prins says.
One place where severity could be life-critical is the military. If a vote of confidence were needed for bug bounties, the fact that the U.S. Department of Defense (DoD) successfully used HackerOne's services in 2016, in a pioneering program called Hack the Pentagon, should quell any concerns. The program demonstrated that the crowd is the fastest way to reveal bugs, says Prins. "The U.S. Air Force is highly secure, but it took our hackers only eight minutes to find the first vulnerability," he says. More than 3,000 vulnerabilities have since been resolved by the DoD thanks to the HackerOne team, according to Prins.
More recently, in August, hackers in a HackerOne bug bounty program identified 75 vulnerabilities in the U.S. Marine Corps Enterprise Network in just nine hours, earning them more than $80,000.
Now governments outside the U.S. are investigating bug bounties, notably the EU, which in April revealed plans to launch a bug bounty platform for open source software.
Why do they work? "Cybersecurity is an intractably human problem, but there are more good guys than bad guys. Bug bounties, and crowdsourced security in general, allow defenders to tilt the playing field back in their favor," says Bugcrowd's Ellis.
Market researcher Gartner agrees. "Crowdsourced security testing is rapidly approaching critical mass and ongoing adoption and uptake by buyers is expected to be rapid," the company said in a June report.
Bounty programs are not sitting still; they are moving into new areas. After Cambridge Analytica improperly harvested data on millions of Facebook users, the social network in March announced a bounty program for "data abuse" by apps that leak user information. In late June, the first data abuse bounty was awarded, after a quiz app was found to have exposed the data of millions of Facebook users online. In July, Microsoft followed suit, launching a bounty program that can pay from $500 to $100,000 for identifying bugs in mobile/cloud apps and APIs that might leak user identity data.
There are probably more rewards to come. "With the introduction of GDPR and other privacy legislation, we've seen an increase in privacy-related research," says Prins.
It is not all about software, however; the bounty concept is also being applied to hardware. HackerOne is running two specific programs in which white hats hack Intel CPUs and Qualcomm Snapdragon mobile processors. In light of the severity of the Meltdown and Spectre vulnerabilities in its CPUs, Intel has boosted its top bounty to $250,000.
Another hardware sector receiving bug-bounty-based examination is the Internet of Things (IoT). According to Ellis, "IoT devices are often rushed to market and security is de-prioritized in favor of speed of development and production; the result is a wide variety of vulnerability types and criticality." Bugcrowd has built a lab where IoT firms can submit their hardware for security testing.
HackerOne's Lauren Koszarek says most IoT products being examined on that platform are in private programs. She adds that there are a few IoT vendors with public hardware programs, including Nintendo for its Switch console, Amazon and Google for their Alexa and Home speech assistants, and Linksys and Netgear for their routers.
Whether the target is software, CPUs, or IoT hardware, hackers have to know they will be safe from legal action if they disclose a vulnerability, but that is not always the case. Aware of the threat posed by data breaches, ransomware attacks, and election hacks, the U.S. Department of Justice has published a legal framework that organizations can adhere to, under which vulnerabilities can be safely disclosed, yet not enough companies have adopted it.
Says Ellis, "The ambiguity and lack of a framework surrounding protocols for 'good faith' hackers have resulted in legal threats, unlawful criminal punishment and even jail for researchers who are only trying to improve global security."
One touchstone case highlights the risks: last fall, security researcher Kevin Finisterre found that Chinese drone maker DJI had left cryptographic keys to customer cloud storage accounts on an AWS server, exposing drone flight logs and images of users' uploaded personal data, including driving licenses and passports.
Under its bug bounty program, DJI agreed Finisterre's finding could earn a reward of $30,000, but attached nondisclosure conditions he was unwilling to accept, seeing them as an infringement of his right to free speech. In explaining why he walked away from the $30,000 bounty, Finisterre says DJI had also suggested his accessing of its servers could be considered a violation of the U.S. Computer Fraud and Abuse Act.
What is needed, says Ellis, is for companies to contractually exempt ethical hackers from litigation, an idea that is gaining traction. "In general, legal frameworks are lagging behind and often chilling security research. But the standardization and acceptance of contractual exemptions that provide safe harbor is a promising area of growth," he says.
Gartner cautions, however, that a lack of trust between companies and hackers "has the potential to slow or stall growth of the bug bounty market." Overall, though, the market researcher expects bounty programs will prosper, and by 2022 will "fundamentally disrupt" traditional penetration testing firms, "which will be forced to adapt or die."
In addition, says Gartner, advances in the AI-based simulation of data breaches and cyberattacks could help software houses to more effectively eradicate bugs internally, posing "a competitive threat" to the bug bounty business model.
Paul Marks is a technology journalist, writer, and editor based in London, U.K.