
Communications of the ACM

Viewpoint

An Embarrassment of Riches



Credit: Examiner.com

Open innovation systems represent an emerging collective intelligence success story. In such systems, a customer describes a problem they want to solve (for example, "we want ideas for new beverage products") and provides an online tool that allows the crowd to submit proposed solutions, as well as rate (and sometimes critique) the solutions proposed by others. Many open innovation platforms have emerged (such as IdeaScale, Spigit, and Imaginatik) and have been used widely by organizations ranging from IBM to Starbucks, from the Danish central government to the White House. One recent survey2 found that one in four companies plans to use open innovation systems within the next 12 months, and this figure is growing. Such systems have proven they can elicit substantive contributions at very large scale and very low cost. In the early weeks of his first term, for example, President Obama asked U.S. citizens to submit and vote on questions on the website change.gov, promising to answer the top five questions in each category in a major press conference. This initiative engaged over 100,000 contributors, who submitted over 70,000 questions and four million votes. Google's 10 to the 100th project received over 150,000 suggestions on how to channel Google's charitable contributions. In IBM's Idea Jam in 2006, 150,000 contributors generated 46,000 ideas for possible IBM products and services. Such large-scale participation enables, in turn, such powerful emergent phenomena as:

  • The long tail: Crowds can generate a much greater diversity of ideas, including potentially groundbreaking "out of the box" contributions, than we could easily access otherwise.
  • Idea synergy: Crowds can rapidly develop huge volumes of novel ideas by recombining and refining the ideas proposed by other participants.
  • Many eyes: Crowd participants can check and correct each other's contributions, enabling remarkably high-quality results very inexpensively.
  • Wisdom of the crowd: Crowds can collectively make better judgments than the individuals that comprise them, often exceeding the performance of experts.
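The "wisdom of the crowd" effect noted above can be illustrated with a toy simulation: individual guesses of a true quantity are noisy, yet their average typically beats most individuals. The true value, noise level, and crowd size below are invented purely for the demonstration.

```python
import random

random.seed(7)

# Invented setup: 500 people independently guess a true quantity,
# each with normally distributed error.
TRUE_VALUE = 100.0
guesses = [TRUE_VALUE + random.gauss(0, 20) for _ in range(500)]

# The crowd's collective judgment is the average guess.
crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)
individual_errors = [abs(g - TRUE_VALUE) for g in guesses]

# Fraction of individuals whose error exceeds the crowd's error.
beaten = sum(e > crowd_error for e in individual_errors) / len(guesses)
print(round(crowd_error, 2), round(beaten, 2))
```

With independent, unbiased errors, the average's error shrinks roughly with the square root of the crowd size, which is why the collective estimate beats the large majority of individual guesses in this sketch.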

Open innovation systems face, however, serious challenges that, paradoxically, are largely a result of how successful they have been at eliciting huge volumes of participation. In this Viewpoint, we review these challenges and propose some promising directions for moving forward.


Challenges

The challenges faced by open innovation systems occur both with idea generation and idea evaluation. Key challenges with idea generation include:

  • Harvesting costs: Open innovation engagements tend to generate idea corpuses that are large, disorganized, and highly redundant. Pruning such a list to find the best ideas can be a massive undertaking. Google's 10 to the 100th project, for example, had to engage 3,000 employees to prune the 150,000 ideas they received, putting them nine months behind schedule. IBM flew 100 senior executives into New York from around the world to prune the results of their Idea Jam.
  • Unsystematic coverage: Open innovation systems have no inherent mechanism for ensuring the ideas submitted comprehensively cover the most critical facets of the problem at hand, so the coverage is hit-or-miss and may not align with the customer's needs.
  • Shallowness: Open innovation systems tend to generate large numbers of relatively shallow ideas. A major reason for this, we believe, is that collaborative idea development, and accurate credit assignment, is typically not well supported in current tools.

Open innovation systems also face challenges with crowd-sourced idea evaluation: there is often a disconnect between what the customer wants and what the crowd selects. This can occur for several reasons:

  • Shallow evaluations: Current tools provide little support for crowd members to build on each other's evaluative expertise, so users rarely examine and correct each other's facts and reasoning.
  • Rating lock-in: When there are thousands of ideas, many potentially valuable ones may never be evaluated in sufficient depth, and the system can quickly "lock" into a fairly static, and arbitrary, ranking in which the winning ideas are inferior to others on the list.



Open innovation systems thus face critical challenges in terms of ensuring the potentially massive contributions of the crowd provide high value to the customer without incurring prohibitive harvesting costs.


Promising Directions

How can we meet these challenges and more fully achieve the promise of open innovation systems? Progress will require, we believe, advances on the following two key fronts.

Better open innovation processes. New open innovation processes are needed that provide more guidance about how the crowd can best contribute, help crowd members build on each other's inputs, and make it easier to harvest their contributions:

  • Collaborative idea definition: Helping the crowd make more deeply considered contributions will require progress on incentive schemes and collaborative authoring structures. Participants, for example, can be asked to structure their contributions as deliberation maps1: trees made up of problems to solve, potential solutions to these problems, and arguments for and against each potential solution, with every node singly authored. Participants can then compose proposals from the best solution ideas in each map. Credit assignment becomes straightforward because each proposal is built from components with clear authorship.
  • Novel rating mechanisms: These can help ensure the crowd evaluates ideas quickly and accurately with respect to the criteria the customer cares about. One possibility, for example, is a kind of prediction market in which participants are given a limited budget to buy and sell stocks, each representing a different idea, and receive a payoff when ideas they "bought" are successfully implemented by the customer. This gives users an incentive to evaluate ideas carefully from the customer's perspective. Deliberation maps can also help here, by allowing crowds to check and build upon each other's reasoning through chains of supporting and rebutting arguments.
  • Creativity enhancement techniques: These techniques, which have to date been developed almost exclusively for face-to-face team settings, can be used offline to feed open innovation engagements as well as adapted to crowd-scale online contexts.
  • Interleaving ideation and evaluation: Across multiple open innovation rounds, the crowd can be asked to create new ideas built upon those that survived the previous round of selection, so idea generation is more likely to focus on what the customer wants.
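The deliberation-map structure described above can be sketched as a tree of typed, single-authored nodes. The class and function names here are our own illustration; only the node types (problems, solutions, pro/con arguments) and the single-author credit rule come from the text.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One single-authored node in a deliberation map."""
    kind: str          # "problem" | "solution" | "pro" | "con"
    text: str
    author: str
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

def proposal_credits(solution_nodes):
    """A proposal is composed from solution nodes; because every
    node has exactly one author, credit assignment is direct."""
    return sorted({n.author for n in solution_nodes})

# Example map: one problem, two competing solutions with arguments.
root = Node("problem", "Reduce beverage packaging waste", "alice")
s1 = root.add(Node("solution", "Refillable bottles", "bob"))
s1.add(Node("pro", "Cuts plastic use", "carol"))
s2 = root.add(Node("solution", "Compostable cartons", "dave"))
s2.add(Node("con", "Higher unit cost", "alice"))

# Compose a proposal from the best solution ideas in the map.
print(proposal_credits([s1, s2]))
```

Because arguments attach beneath the solutions they support or rebut, the same tree also lets evaluators extend each other's reasoning rather than restate it.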
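The prediction-market rating mechanism mentioned above can likewise be sketched in a few lines: each participant gets a fixed budget to spend on idea "stocks" and is paid off only for ideas the customer implements. The class name, prices, and payout rule are illustrative assumptions, not part of any deployed system.

```python
class IdeaMarket:
    """Minimal sketch of a prediction-market rating scheme."""

    def __init__(self, budget=100):
        self.budget = budget
        self.holdings = {}   # (user, idea) -> shares held
        self.spent = {}      # user -> money spent so far

    def buy(self, user, idea, shares, price=1):
        # The limited budget forces participants to prioritize.
        cost = shares * price
        if self.spent.get(user, 0) + cost > self.budget:
            raise ValueError("over budget")
        self.spent[user] = self.spent.get(user, 0) + cost
        key = (user, idea)
        self.holdings[key] = self.holdings.get(key, 0) + shares

    def ranking(self):
        """Ideas ranked by total shares bought: the crowd's bet on
        what the customer will actually implement."""
        totals = {}
        for (_, idea), shares in self.holdings.items():
            totals[idea] = totals.get(idea, 0) + shares
        return sorted(totals, key=totals.get, reverse=True)

    def payoff(self, user, implemented, payout=3):
        """Pay out only for shares in ideas the customer implemented."""
        return sum(s * payout for (u, i), s in self.holdings.items()
                   if u == user and i in implemented)

m = IdeaMarket(budget=100)
m.buy("ann", "refill-stations", 40)
m.buy("ann", "new-flavors", 10)
m.buy("bob", "refill-stations", 25)
print(m.ranking())
print(m.payoff("ann", {"refill-stations"}))
```

Since a payoff arrives only when the customer implements an idea, participants are rewarded for predicting the customer's judgment rather than for simply upvoting what they personally like.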

Deeper computer support. Crowds (of people) and clouds (computers) have synergistic capabilities. Crowds are able to create, understand, and evaluate ideas in ways that computers cannot match, but are best suited for performing relatively small and quick tasks that require little context. Computers, by contrast, excel at rapid analysis of large swaths of data to get "the big picture" of what is (and is not) happening in a crowd. Combining these strengths will require bridging the semantic gap between the natural language that crowds use and the formal languages that computers require. Someday, this will be achieved by advanced algorithms that allow computers to deeply understand natural language. But this achievement seems to remain far off. In the meantime, our goal, we believe, must be to find ways that crowds can do the minimum formalization needed to enable significant computer support, for example by:

  • Semantic tagging: Crowds can annotate natural language idea corpuses with semantic cues (for example, idea and argument boundaries, topic keywords). This can be a mixed-initiative process, in which computers propose possible tags that crowd members correct, and machine learning improves the computer algorithms over time based on this human feedback.
  • Design tools can allow users to express their ideas as semi-formal models built on domain-specific primitives, rather than just as natural language text. Design tools aimed at the masses (for example, Google SketchUp) are already becoming ubiquitous, but have yet to be incorporated into open innovation platforms.
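The mixed-initiative tagging loop can be sketched as follows. The frequency-based tag proposer and the simple correction log are illustrative stand-ins for a trained machine-learning component; all names are our own.

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "for", "and", "of", "to", "we", "our",
             "so", "are"}

def propose_tags(idea_text, k=3):
    """Computer's move: propose topic keywords by raw term
    frequency (a stand-in for a trained model)."""
    words = re.findall(r"[a-z]+", idea_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]

feedback = []  # (text, accepted_tags) pairs, kept for retraining

def correct_tags(idea_text, proposed, accepted):
    """Crowd's move: accept or fix the proposed tags; the correction
    is logged as training data to improve the proposer over time."""
    feedback.append((idea_text, accepted))
    return accepted

idea = "Offer refill stations so bottles are reused and packaging waste drops"
tags = propose_tags(idea)
final = correct_tags(idea, tags, ["packaging", "reuse"])
print(final)
```

The key design point is the feedback log: each human correction is cheap for the crowd member but accumulates into exactly the labeled data a learning algorithm needs.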


As the semantic gap between crowds and clouds narrows, we can create powerful new forms of computer support for open innovation, such as:

  • Analysis tools that take advantage of semi-formal idea representations to help evaluate the strengths and weaknesses of contributed ideas;
  • Semantic compression algorithms that remove duplicates, and cluster related ideas, in order to compress idea corpuses; and
  • Visualization tools that summarize what the crowd has done so far, so customers can determine what the gaps/promising areas are and use this information to guide future crowd contributions, for example via focused incentives.
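A semantic compression step of the kind listed above might, as a rough sketch, cluster near-duplicate ideas by word overlap. Real systems would use stronger text representations; the similarity measure, threshold, and greedy clustering pass here are all simplifying assumptions.

```python
import re

def words(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def jaccard(a, b):
    """Word-overlap similarity between two idea texts."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb)

def compress(ideas, threshold=0.5):
    """Greedy single-pass clustering: each idea joins the first
    existing cluster whose representative it resembles, otherwise
    it starts a new cluster. Returns one representative per cluster."""
    clusters = []  # each cluster: [representative, members...]
    for idea in ideas:
        for cluster in clusters:
            if jaccard(idea, cluster[0]) >= threshold:
                cluster.append(idea)
                break
        else:
            clusters.append([idea])
    return [c[0] for c in clusters]

corpus = [
    "add refill stations in stores",
    "refill stations in our stores",
    "sell a new ginger flavor",
]
print(compress(corpus))
```

Even this crude pass shows the payoff: duplicates collapse into one representative, so the customer harvests clusters rather than wading through a redundant list.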


A Call to Arms

Open innovation systems, as we have seen, have the potential to harness the collective intelligence of the crowd for problem solving in areas ranging from business to government, from science to education. This potential is far from fully realized, however, largely because of our inability to deal effectively with the massive levels of user contributions that these systems can elicit. Advances in this area will require contributions from many disciplines, including computer science, cognitive science, social psychology, computational linguistics, and economics. Will you join us in addressing these important challenges?


Further Reading

Bailey, B.P. and Horvitz, E.
What's your idea? A case study of a grassroots innovation pipeline within a large software company. In Proceedings of CHI 2010, ACM Press, NY, 2010.

Bason, C.
Leading Public Sector Innovation: Co-creating for a Better Society. Policy Press, 2010.

Bjelland, O.M. and Chapman Wood, R.
An inside view of IBM's innovation jam. MIT Sloan Management Review 50, 1 (2008), 32–40.

Chesbrough, H., Vanhaverbeke, W., and West, J., Eds.
Open Innovation: Researching a New Paradigm. Oxford University Press, Oxford, U.K., 2006.

Gulley, N.
Patterns of innovation: A web-based MATLAB programming contest. Human Factors in Computing Systems (2001), 337–338.

Jouret, G.
Inside Cisco's Search for the Next Big Idea. Harvard Business Review 87, 9 (2009), 43–45.

Lakhani, K.R. and Jeppesen, L.B.
Getting unusual suspects to solve R&D puzzles. Harvard Business Review 85, 5 (2007), 30–32.

von Hippel, E.
Democratizing Innovation. MIT Press, 2005.


References

1. Klein, M. and Iandoli, L. Supporting collaborative deliberation using a large-scale argumentation system: The MIT Collaboratorium. Directions and Implications of Advanced Computing; Conference on Online Deliberation (DIAC-2008/OD2008). University of California, Berkeley, 2008.

2. Thompson, V. IDC MarketScape: Worldwide Innovation Management Solutions 2013 Vendor Analysis, (2013); http://idcdocserv.com/240823_spigit.


Authors

Mark Klein (m_klein@mit.edu) is a principal research scientist at the MIT Center for Collective Intelligence, an affiliate at the MIT Computer Science and AI Lab, the New England Complex Systems Institute, and the Dynamic and Distributed Information Systems Group at the University of Zurich in Switzerland.

Gregorio Convertino (gconvertino@informatica.com) is a senior user researcher at Informatica Corporation in Redwood City, CA.


Figures

Figure. A deliberation map1 can help enable better organizational decision making.



Copyright held by authors.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.


 
