The 2009 DARPA Red Balloon Challenge (also known as the DARPA Network Challenge) explored how the Internet and social networking can be used to solve a distributed, time-critical, geo-location problem. Teams had to find 10 red weather balloons deployed at undisclosed locations across the continental U.S. The first team to correctly identify the locations of all 10 would win a $40,000 prize. A team from the Massachusetts Institute of Technology (MIT) won in less than nine hours (http://networkchallenge.darpa.mil/). Here, we reflect on lessons learned from the strategies used by the various teams.
The Challenge commemorated the 40th anniversary of the first remote log-in to the ARPANet (October 29, 1969), an event widely heralded as the birth of the Internet. The Challenge was designed to identify how more recent developments (such as social media and crowdsourcing) could be used to solve challenging problems involving distributed geo-locations. Since the Challenge was announced only about one month before the balloons were deployed, it was not only a timed contest to find the balloons but also a time-limited challenge to prepare for the contest. Both the way word of the Challenge spread to prospective teams and the teams' solutions themselves demonstrated the relative effectiveness of mass media and social media.
The surprising efficiency of applying social networks of acquaintances to solve widely distributed tasks was demonstrated in Stanley Milgram’s celebrated work9 popularizing the notion of “six degrees of separation”; that is, it typically takes no more than six intermediaries to connect any arbitrary pair of people. Meanwhile, the Internet and other communication technologies have emerged that increase the ease and opportunity for connections. These developments have enabled crowdsourcing—aggregating bits of information across a large number of users to create productive value—as a popular mechanism for creating encyclopedias of information (such as Wikipedia) and solving other highly distributed problems.1
The Challenge was announced at the “40th Anniversary of the Internet” event (http://www.engineer.ucla.edu/IA40/index.html). On December 5, 2009, at 10:00 A.M. Eastern time, 10 numbered, eight-foot-diameter red weather balloons were deployed at moored locations across the continental U.S. (see Figure 1). DARPA selected readily accessible public sites where the balloons would be visible from nearby roads, each staffed by a DARPA agent who would issue a certificate validating each balloon location. While general information about the Challenge (such as date and time of the deployment, with a picture of a balloon) had already been distributed, some details were not announced (such as DARPA’s banner on each balloon and an attendant at each balloon issuing certificates). Teams submitted their guesses to a DARPA Web site and were given feedback as to which balloons had been identified correctly. While the balloons were scheduled to be taken down at 5:00 P.M. local time, DARPA was prepared to re-deploy them a second day and leave the submission process open for up to a week until a team identified all 10.
The MIT team correctly identified the locations of all 10 balloons in eight hours, 52 minutes, and 41 seconds. A team from the Georgia Tech Research Institute (GTRI) placed second by locating nine balloons within nine hours. Two more teams found eight balloons, five teams found seven balloons, and the iSchools team (representing Pennsylvania State University, University of Illinois at Urbana-Champaign, University of Pittsburgh, Syracuse University, and University of North Carolina at Chapel Hill) finished tenth by locating six balloons.
Two months later, at the Computer-Supported Cooperative Work Conference (http://www.cscw2010.org/) in Savannah, GA, a special session dedicated to lessons learned from the Challenge brought together representatives from the winning MIT team, the GTRI team, and the iSchools team to compare and contrast among the strategies and experiences across the teams. There, members of the MIT and iSchools teams reflected on their strategies, how they validated their balloon sightings, and the role of social networking tools in their process. While the GTRI team was unavailable for this article, we report on what they shared at the CSCW session and published elsewhere.6,11,12
MIT Team
The MIT team learned about the Challenge only a few days before the balloons were deployed and developed a strategy that emphasized both speed (in terms of number of people recruited) and breadth (covering as much U.S. geography as possible). They set up a platform for viral collaboration that used recursive incentives to align the public’s interest with the goal of winning the Challenge. This approach was inspired by the work of Peter S. Dodds et al.5 that found that success in using social networks to tackle widely distributed search problems depends on individual incentives. The work of Mason and Watts7 also informed the use of financial incentives to motivate crowdsourcing productivity.
The MIT team's winning strategy was to use the prize money as a financial incentive structure rewarding not only the people who correctly located balloons but also those connecting the finder to the MIT team. Should it win, the team would allocate $4,000 of the prize money to each balloon. It promised $2,000 per balloon to the first person to send in the correct balloon coordinates, $1,000 to the person who invited that balloon finder onto the team, $500 to whoever invited the inviter, $250 to whoever invited that person, and so on. Any remaining reward money would be donated to charity.
Figure 2 outlines an example of this recursive incentive structure. Alice joins the team and is given an invite link, like http://balloon.mit.edu/alice. Alice then emails her link to Bob, who uses it to join the team as well. Bob gets a unique link, like http://balloon.mit.edu/bob, and posts it on Facebook. His friend Carol sees it, signs up, then twitters about http://balloon.mit.edu/carol. Dave uses Carol’s link to join, then spots one of the DARPA balloons. Dave is the first person to report the balloon’s location to the MIT team, helping it win the Challenge. Once that happens, the team sends Dave $2,000 for finding the balloon. Carol gets $1,000 for inviting Dave, Bob gets $500 for inviting Carol, and Alice gets $250 for inviting Bob. The remaining $250 is donated to charity.
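To make the arithmetic of this halving scheme concrete, here is a minimal sketch (not the MIT team's actual code) that walks up an invite chain, paying the finder half of a balloon's $4,000 allocation, halving the payout at each step, and donating the remainder to charity; the function and names are purely illustrative.

```python
def split_reward(invite_chain, balloon_reward=4000.0):
    """Split one balloon's reward up an invite chain.

    invite_chain lists people from the balloon finder back to the
    root inviter, e.g. ["Dave", "Carol", "Bob", "Alice"].
    The finder gets half the balloon reward; each inviter up the
    chain gets half of what the person they invited received.
    Whatever is left over goes to charity.
    """
    payouts = {}
    share = balloon_reward / 2          # $2,000 to the finder
    for person in invite_chain:
        payouts[person] = share
        share /= 2                      # halve at each step up the chain
    payouts["charity"] = balloon_reward - sum(payouts.values())
    return payouts

# Example from Figure 2: Dave found the balloon, invited by Carol,
# who was invited by Bob, who was invited by Alice.
print(split_reward(["Dave", "Carol", "Bob", "Alice"]))
# {'Dave': 2000.0, 'Carol': 1000.0, 'Bob': 500.0, 'Alice': 250.0, 'charity': 250.0}
```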
The recursive incentive structure differed from the direct-reward option of giving $4,000 per balloon found in two key ways: First, a direct reward might actually deter people from spreading the word about the MIT team, as any new person recruited would be extra competition for the reward. Second, it would exclude people living outside the U.S., as there was no possibility of them spotting a balloon. These two factors played a key role in the success of the MIT approach, as illustrated by the fact that invite chains reached depths of up to 15 people, and approximately one in three tweets spreading information about the team originated outside the U.S. Distributing the reward money more broadly motivated a much larger number of people (more than 5,000) to join the team, including some from outside the U.S. who could be rewarded for simply knowing someone who could find a balloon. This strategy combined the incentive of personal gain with the power of social networks to connect people locating each balloon with the MIT team.
The MIT team received more than 200 submissions of balloon sightings, of which 30 to 40 turned out to be accurate. Given the considerable noise in the submission data, including deliberate attempts at misdirection, the team had to develop a strategy to accurately identify the correct sightings. It did not have time to build a sophisticated machine-learning system to automate the process, nor did it have access to a trusted human network to verify balloon sightings. Instead, most of its strategies relied on human reasoning to analyze the information submitted with the balloon sightings and eliminate submissions with inconsistencies.
The first strategy was to observe the patterns of submissions about a certain balloon site. Since the balloons were all located in public spaces, each one tended to elicit multiple submissions. Multiple submissions at a specific location increased the probability of a report being accurate. However, those deliberately falsifying balloon sightings also submitted multiple sightings for each false location. To filter out these submissions, the team observed differing patterns in how balloon locations were reported (see Figure 3). Multiple submissions about a real balloon location tended to differ a little from one another, reflecting natural variation in representing a certain location: address, crossroads, nearby landmarks. Malicious submissions tended to have identical representations for a single location, making them suspicious.
Another simple strategy the team used involved comparing the IP address of the submission with where a balloon was reported found; for example, one submission reporting a balloon in Florida came from an IP address in the Los Angeles area. A simple IP trace and common sense filtered out such false submissions.
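The two filters just described, checking for suspiciously identical location text and comparing a submission's IP geolocation with the reported balloon location, could be sketched roughly as follows. This is an illustrative reconstruction, not the MIT team's code; the geolocate_ip() lookup, field names, and thresholds are assumptions.

```python
import math
from collections import Counter

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3959.0  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def suspicious_submissions(submissions, geolocate_ip, max_ip_distance_miles=500):
    """Flag submissions that look inconsistent.

    Each submission is a dict with 'location_text', 'lat', 'lon', and 'ip'.
    Two heuristics: (1) several byte-identical descriptions of the same spot
    are suspicious, since independent reporters tend to describe a location
    differently; (2) a submitter whose IP geolocates far from the reported
    balloon location is suspicious.
    """
    flagged = []
    text_counts = Counter(s["location_text"] for s in submissions)
    for s in submissions:
        if text_counts[s["location_text"]] > 2:
            flagged.append((s, "identical location text repeated"))
            continue
        ip_lat, ip_lon = geolocate_ip(s["ip"])   # hypothetical lookup service
        if haversine_miles(ip_lat, ip_lon, s["lat"], s["lon"]) > max_ip_distance_miles:
            flagged.append((s, "IP address far from reported location"))
    return flagged
```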
Many submissions included pictures, some contrived to confirm misleading submissions. Most altered pictures involved shots of a balloon from a distance and lacked the DARPA agent holding the balloon and the DARPA banner (an unannounced detail). Figure 4 shows examples of authentic and contrived confirmation photos.
While the MIT team succeeded in quickly collecting balloon-sighting data, it relied on human analysis of the data to detect inconsistencies or patterns reflecting attempts to mislead with false sightings. These analyses exposed bogus sightings, allowing the team to identify the accurate ones by a process of elimination.
GTRI Team
In contrast, the GTRI team (also known as the “I Spy a Red Balloon” team) was one of the quickest to launch a Web site and start recruiting a network of participants, eventually growing to about 1,400 people. It even explored partnering with a major shipping company to leverage its network of drivers covering the U.S. to find and report balloon locations. Ultimately, the company declined over “concerns about driver safety and focus on the job.”12 The GTRI team instead promoted its own Web page, registered a Google Voice number, and formed a Facebook group to communicate with participants searching for balloons for the team.
A major aspect of the GTRI strategy was to promote the visibility of the team so anyone spotting a balloon would be more likely to report it to the team. Besides activating the team Web site three weeks before launch day, it leveraged mass media coverage of the team and search-engine-rank optimization of the Web site to make its participation in the Challenge readily discoverable. This approach capitalized on the team's longer lead time, as it began preparing for the contest well before launch. The team also declared it would donate all prize money to charity, appealing to the intrinsic motivation of altruism to encourage people to help the team.
While the GTRI team also had to validate the accuracy of reported sightings, it believed its charitable intentions deterred submission of false reports to the team.11 The strategy focused on personally confirming balloon sightings. Where possible, the team had a direct conversation with the balloon spotter to verify a report, creating a social situation whereby it was more difficult to fabricate balloon sightings. If the team could not personally contact a balloon spotter, it called nearby businesses to solicit help validating sightings. Such cold calls produced mixed results; some were obliging, while others simply dismissed the request. In essence, the GTRI team largely relied on social persuasion of strangers, either of potential balloon spotters or of people in the vicinity of a balloon sighting, to validate balloon locations.
While the GTRI team correctly identified nine balloons, it had no record of a report of the tenth balloon (in Katy, TX) being submitted to the team. The mechanism for personally validating balloon sightings (and perhaps its charitable intentions) seemed to engender more social cooperation, but the effort fell short of eliciting a report of all 10 balloons.
iSchools Team
The iSchools team formed about two weeks before the launch date, recruiting observers from member organizations for direct search for the balloons and employing Open Source Intelligence methods8 for cyberspace search. Confirmation techniques were a key element of the iSchools team's ability to locate six of the 10 balloons, helping it claim tenth place. Most of the six were located through the cyberspace-search approach, using humans as sensors in a participatory sensing experiment,2 an approach that is valuable when directly recruiting observers in advance of an event is problematic or impossible.
The team sought to leverage the wide geographic footprint of its member organizations. Since it included colleges and universities from across the continental U.S., it had a good chance of recruiting observers wherever DARPA placed balloons. Current students, faculty, and staff, as well as alumni, were recruited through messages sent to email lists, Twitter feeds, and Facebook groups, when available. Only a handful of pre-registered observers actively participated during launch day, however, yielding just a single valid balloon location through direct search.
In the cyberspace-search approach, a group of analysts sought evidence of balloon-sighting reports that were accessible on publicly available Internet sites, including public Twitter feeds, Web sites of competing teams, and any other source they could access without hacking. This approach was the primary source of data for finding the other balloon locations. Evidence was gathered from all sources, compiled, and manually evaluated. The validity of evidence was assessed to include the content of the data, as well as the reputation of its source; for example, solitary tweets without detail sent from new Twitter accounts with no followers were discounted, while those from established users with geo-tagged photographs (attached) were given a higher-reliability assessment.
The cyberspace-search approach also used a Twitter-capture system to store and search tweets about the Challenge offline, as well as a custom Web crawler set to record data from the publicly accessible parts of Web sites of competing teams. Since analysis of the data captured by the crawler required more time, it would have been more helpful had the Challenge continued longer than the one day the balloons were deployed. The Twitter-capture system turned out to be more helpful, as it revealed locations from users who allowed their smartphones to embed geo-data with their tweets. Unlike manually geotagged photos, such embedded geo-data was difficult to falsify with the tools available at the time and was therefore deemed more reliable.
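A rough sketch of this kind of reliability weighting appears below; it is not the iSchools team's actual system, and the field names and weights are assumptions chosen to reflect the signals described above (account reputation, embedded geo-data, attached photos, and level of detail).

```python
def score_report(report):
    """Assign a rough reliability score to a captured balloon report.

    'report' is a dict with fields such as 'account_age_days',
    'follower_count', 'has_geotag', 'has_photo', and 'detail_words'
    (number of descriptive words in the text). Higher scores mean the
    report is treated as more credible; the weights are illustrative.
    """
    score = 0.0
    # Established accounts are harder to fabricate on short notice.
    if report["account_age_days"] > 90:
        score += 2.0
    if report["follower_count"] > 50:
        score += 1.0
    # Device-embedded geo-data was considered harder to falsify
    # than manually geotagged photos.
    if report["has_geotag"]:
        score += 3.0
    if report["has_photo"]:
        score += 1.5
    # Solitary, detail-free tweets were discounted.
    if report["detail_words"] < 5:
        score -= 2.0
    return score

# Reports scoring above a chosen threshold would be queued for
# secondary confirmation by a dispatched observer.
```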
Several reports were confirmed as false through a combination of photograph analysis and secondary confirmation by dispatched observers. Observers from the pre-event recruiting effort were used where possible (such as detecting the fabricated photo in Figure 5 of the balloon over Albany, NY). Where no pre-recruited observer was available, the command staff and cyberspace search staff called and recruited observers from the iSchools Caucus member organizations, family, and friends known to be near unconfirmed sightings. This technique was used to confirm the valid location in Portland, OR, and disqualify the fabricated image of Providence, RI, in Figure 4, right, and the non-DARPA balloon over Royal Oak, MI, in Figure 4, center.
In one case, a competing team unintentionally leaked details on its Web site of an accurate balloon sighting in Scottsdale, AZ. An attempt to cover up the leak and misdirect others to think the balloon report was in another state created an inconsistency in the story posted with the photograph. To identify the true location of the sighting, the iSchools team triangulated information across many social networking sites. Following geographical clues in the original posting,3,10 the team confirmed the true identity and likely home location of the original poster. The location of the balloon was then confirmed by matching the original text description (in the park near the poster’s house) and comparing the poster’s photograph of the balloon with photographs of the park on Panoramio (see Figure 6). This illustrates the potential of piecing together bits of publicly available information across disparate sources in a timely way to solve a piece of the puzzle.
The iSchools team found that cyberspace-search techniques are effective and inexpensive. Especially in situations where observers could not be recruited in advance, existing observer networks and publicly available information can be leveraged to address intelligence tasks. Moreover, the iSchools team’s approach can be leveraged in intelligence and law enforcement, especially where grassroots organizations are more able to recruit and motivate observers.
The team also learned that secondary confirmation techniques must be employed to overcome deception. During the Challenge, secondary observers, photograph analysis, and metadata analysis were combined to assess the validity of scarce data. Social networking tools have provided public access to large numbers of people and enough data to enable both discovery and independent verification of intelligence information.
Reflections
This experience generated insights at several levels. Diffusion of the Challenge itself demonstrated the complementary roles of traditional mass media and social media. Comparing the strategies of the three teams at the CSCW panel yielded interesting contrasts and implications for how to validate submitted information, adding to DARPA’s reflection across all participating teams.
Diffusion of the Challenge through mass media and social media channels provided a good comparison of the relative roles of traditional and social media methods in network mobilization. The initial announcement at the "40th Anniversary of the Internet" event in October and some widely circulated blog posts (at mssv.net and Slashdot) generated a steady trickle of traffic to the DARPA Web site, averaging about 1,000 hits per day. Initial expectations that the diffusion of the Challenge would progress virally were not realized until the final week before balloon launch. A steep increase in Web-site hits corresponded with the appearance of a story in the New York Times on November 30, 2009, with traffic climbing to an average of 20,000 hits a day in the last week before the balloons were launched.
Diffusion of the Challenge (itself an experiment in social media) showed how traditional mass media and social media channels are complementary. At least for this target audience and in this time frame, it took a combination of mass media and social media to effectively disseminate information to the intended audience. While mass media played a key role in making the general public aware of the Challenge, social media were an important factor in viral diffusion of Challenge information, especially among the teams relying on them to quickly recruit and connect participants.
Reflecting across the three teams revealed similarities and interesting differences in strategy and implications for each team’s operations. All three set up co-located operations centers where a core team assembled on launch day to actively monitor the real-time chatter in social networking feeds to learn of balloon sightings and possible clues of their validity. The bulk of the effort involved analyzing the balloon sighting information to determine which reports were accurate. Beyond these similarities, the table here summarizes the main differences concerning how the teams motivated participants, validated balloon sightings, and used social and mass media.
The MIT team aligned individual incentives with connecting a social network so it would grow quickly and autonomously. Financial incentives served as extrinsic motivation to work with strangers, both in quickly recruiting the network and in activating the network to locate the balloons. The MIT team also developed strategies for verifying the accuracy of reported balloon sightings largely by analyzing the balloon sighting information submitted to the team.
The GTRI team took advantage of its early start and relied on a combination of social media and mass media coverage to make the team's quest visible to the vast audience of potential participants. But GTRI's network size from three weeks of recruiting was far smaller than the network the MIT team recruited in three days. While it is difficult to determine the causes (such as motivational incentives and social connections), the wide range of responses to the MIT and GTRI teams shows the great variability in dissemination that is somewhat characteristic of social media today.
The iSchools team mined publicly available information through Twitter to identify balloon sightings. In this sense, the team did not offer any motivation or incentives to attract people to help the team but exploited information people made public voluntarily. The advent of social media tools has made a wealth of information publicly available, and the iSchools team’s strategy demonstrated that this information could be mined to tackle a time-urgent problem. While the strategies of the MIT and GTRI teams relied on social media tools to quickly extend their reach to people who could help solve the problem, the iSchools’ data-mining strategy would have been impossible without the social networking tools that elicited data to be made publicly available in the first place.
The Challenge demonstrated that geospatial intelligence is potentially available to anyone with an Internet connection, not just to government intelligence analysts.
However, since the information providers had no motivation to help the iSchools team win, the team had perhaps the most challenging job of identifying accurate sightings among the wide range of noisy information circulating through Twitter. The team was able to identify five balloons simply through publicly available information, performing better than many teams that actively recruited members. Its approach is most relevant for tackling problems where advance preparation, direct recruiting, and financial incentives are inappropriate.
Together, the three teams exhibited a range of strategies that relied on intrinsic or extrinsic motivation and proactive recruiting or reactive data mining. While social networking tools played a role (to varying degrees) in data collection for all teams, the data generated could not be trusted without first verifying its accuracy. The teams’ strategies for validation also varied but relied largely on analyzing the internal consistency of the data or independently verifying balloon sightings, often through social networking tools or trusted social connections. The MIT team’s approach enabled it to solve the game-like problem within a day, while the iSchools team had planned for more extensive data-mining tools that would be useful in a more long-lived challenge. Comparing the teams highlights the different ways social media were used to recruit participants, collect balloon sightings, and validate balloon sighting data.
DARPA View
While the DARPA Web site registered more than 4,000 individuals (from 39 countries) as participants in the Challenge, interview data and team estimates of network size indicate that more than 350,000 people participated in some way. Based on the 922 balloon sighting submissions to the DARPA Web site and team interview data, DARPA tracked 58 teams that were able to correctly locate at least two balloons. Following the Challenge, DARPA conducted 53 interviews with team leaders who had competed in the Challenge. These interviews supplemented the quantitative submission-log data collected on the DARPA Web site with qualitative data about participating team strategies, social and technical tools used, network size, mobilization speed, and important social dynamic factors. This data enabled DARPA to reflect on the experience of teams beyond the three in the CSCW session.4
The Challenge clearly demonstrated the variety, efficiency, and effectiveness of crowdsourcing solutions to a distributed, geo-located, time-urgent problem. The network mobilization time was far faster than DARPA program managers expected, requiring days instead of weeks. The MIT team constructed a motivated network exceeding 5,000 individuals from four initial nodes in just a few days. Other teams that built around existing networks were able to mobilize them in a day. In one case, a highly connected individual successfully mobilized his contacts through Twitter in less than an hour. As impressive as the teams' use of their networks to discover balloons was their use of those networks for precise, targeted dispatching to verify balloon sightings. Balloon verification, from initial report to confirmation by a targeted dispatch, typically took less than two hours.
While the power of social networks and the manner in which they are poised to transform our society have been gaining attention, the Challenge revealed several promising means for using them to mobilize groups of people for a specific purpose. It also demonstrated the speed at which social networks could be used to solve challenging, nationwide geo-location problems. This potential has profound implications for a variety of applications, from natural-disaster response to quickly locating missing children. However, the Challenge also demonstrated that this wealth of data is very noisy, reflecting the need for better search methods and verification algorithms.
Much of the transformative potential of social networks lies in the promise of democratization of information and capabilities that had previously been the exclusive purview of privileged government or corporate institutions. The Challenge demonstrated that geospatial intelligence is potentially available to anyone with an Internet connection, not just to government intelligence analysts. Social media and crowdsourcing practices have given almost any individual the potential to tap the inherent power of information. However, along with that power comes the need to cultivate a concomitant sense of responsibility for its appropriate and constructive use. As indicated by recent events, like information disclosed through WikiLeaks and the role of social networking in civil uprisings, appropriate use of these new tools reflects an evolving debate.
Acknowledgments
We thank the Computer Supported Cooperative Work 2010 conference co-chairs Kori Inkpen and Carl Gutwin for supporting its "Reflecting on the DARPA Red Balloon Challenge" session. The MIT team thanks Alex (Sandy) Pentland, Riley Crane, Anmol Madan, Wei Pan, and Galen Pickard from the Human Dynamics Laboratory at the MIT Media Lab. The iSchools team is grateful for the support and advice provided by John Yen, David Hall, Wade Shumaker, Anthony Maslowski, Gregory Traylor, Gregory O'Neill, Avner Ahmad, Madian Khabsa, Guruprasad Airy, and Leilei Zhu of Penn State University; Maeve Reilly and John Unsworth of the University of Illinois; Martin Weiss of the University of Pittsburgh; Jeffrey Stanton of Syracuse University; and Gary Marchionini of the University of North Carolina. We also acknowledge Ethan Trewhitt and Elizabeth Whitaker from GTRI for their participation in the CSCW session. We also thank Peter Lee and Norman Whitaker from DARPA and the DARPA Service Chiefs' Program Fellows who conceived and executed the Challenge: Col. Phillip Reiman (USMC), CDR Roger Plasse (USN), CDR Gus Gutierrez (USN), MAJ Paul Panozzo (USA), Maj. Jay Orson (USAF), Timothy McDonald (NGA), Capt. Derek Filipe (USMC), and CPT Deborah Chen (USA).
Figures
Figure 1. Locations in the DARPA Red Balloon Challenge.
Figure 2. Example recursive incentive-structure process for the MIT team.
Figure 3. Typical real (top) and false (bottom) locations of balloons, with bottom map depicting five submissions with identical locations.
Figure 4. Typical real (left) and contrived (center and right) pictures of balloons.
Figure 5. Fabricated photo posted during the Challenge (left).
Figure 6. Photo mapping with Google Maps and Panoramio (location: Chaparral Park, Scottsdale, AZ); photo report from Twitter.