What if practices rather than ideas are the main source of innovation?
I think your interesting article missed some of the reasons for this awkward situation of a high idea-production rate and a low innovation-adoption rate.
The first is an economic one. I have an old friend with whom I was a colleague in the Academy of Sciences decades ago, and since that time I have considered him a genius in programming. For many years he generated ideas, turned them into programming products, and the results were amazing. In addition to his ability to generate new ideas, he is a programmer of the highest level with the capacity to work much harder than most other people. Then, many years ago, he started to work for Microsoft, and in a private discussion three years ago he mentioned that the marketing department of that company now decides what is needed and what he has to work on. Do you think the brightest people in the computing world work in marketing departments and should decide the future? Microsoft invites the best programmers from around the world (at least it declares them to be the best), fits them into an already existing system of product development, and puts the marketing department in charge as the main deciding body. Knowing this, I am not surprised at all that for many years in a row I have seen nothing really revolutionary or really amazing from Microsoft; a marketing department can evaluate only what is already on the market.
The second reason is linked to WHERE and HOW those ideas are produced. A lot of new ideas are generated by research work at numerous universities. In a recent review by an MIT professor (so as not to break the CACM comment rules, I am not mentioning his name here, but I can send it to you in a private letter if you want), I was really amazed to read the phrase: "It should be noted that the essence of research is ideas, not implementation." I had to explain that the essence of any research is the cognition of the world. Ideas are the source of any research but definitely not its essence. I then added that if he meant that ideas, and not implementation, are the main thing in research, then he was fundamentally wrong. That was true in ancient Greece, but since Galileo Galilei every scientific statement must be proved by experiment, and its correctness must be demonstrated in a way that can be duplicated and rechecked by everyone else. In programming this definitely means that each statement must be demonstrated in a working program. If an MIT professor (and he is considered a bright professor) holds such views on ideas and implementation, then why expect any outcome from his ideas? And he publishes many papers every year. I don't think his view of ideas and implementation is exceptional.
Thank you for your excellent article.
My argument is simply that generating ideas and putting them on the table is not sufficient to achieve the adoption of those ideas into practice. If your purpose, say as a university researcher, is to generate ideas and publish them in the research literature, then the work of adoption will be of no concern to you. All this means is that you have no right to expect that your work will ultimately become a standard practice within the field. If you do want your ideas to be adopted, you have your work cut out for you.
Many ideas are generated by university researchers. Many more are generated by businesses looking to make new offers in the marketplace. The biggest challenge for business is to select which of the great ideas on their table they have the time and resources to pursue. As I said, we are idea rich, selection challenged, and adoption poor.
Your ideas deserve a wider audience: for example, if true, they undermine much of the justification for the monopolies granted by the patent system.
It's a pity ACM policy means this is premium subscription content.
The following letter was published in the Letters to the Editor in the June 2012 CACM (http://cacm.acm.org/magazines/2012/6/149799).
Peter J. Denning's Viewpoint "The Idea Idea" (Mar. 2012) resonated with me due to discussions I had at Hewlett-Packard on how best to lead process improvement and innovation. New practices capable of delivering at the bleeding edge must be scouted and deployed/replicated. It is usually only after such an effort that a practice gains a theoretical basis, a two-step sequence that goes like this:
* It works (replicate/deploy/leverage); and
* This is why it works (institutionalize/educate).
During my stint, 1997–2000, as a member of the Software Engineering Process Group (SEPG) at Hewlett-Packard India Software Operations facilitating process improvement across all business lines, SEPG and line management concluded that the best ideas originate as new practices in working teams. Teams were therefore encouraged to submit their own best practices to help identify such ideas, which were then analyzed for soundness and selected for deployment and replication. The teams were encouraged to maintain documentation using Deming/Shewhart's Plan, Do, Check, Act (PDCA) at the project level and the Initiating, Diagnosing, Establishing, Acting, Leveraging (IDEAL) model at the SEPG level so they could quantify and demonstrate improvement. (PDCA and IDEAL both overlap the Prime Innovation Pattern outlined by Denning.) Selected best practices were then engineered as repeatable processes and institutionalized through associated training and tools. What we wished to guard against was innovation for its own sake.
Ideas generated by research teams often entail too much risk to deploy as-is in industry without first undergoing feasibility studies. For one thing, they often do not scale well, as when, say, a toy programming language is used to make a (perhaps valid) point in a research paper. For another, they may be solutions looking for a problem, or inventions without a context, as yet.