I love innovations in technology. What distinguishes a meaningful innovation from merely a good idea is the people who actually implement the idea and then measure the results to confirm it made something better. For something to be innovative it must (a) be different from something you are doing now, (b) actually get implemented and used by your customers, and (c) measurably improve the system it was meant to improve. If you don't implement the idea, there is no improvement to analyze. If you don't measure the impact of your implementation, it may be a nice-to-have; but if nobody uses it, or worse, it annoys customers, how did spending your time on it really help?
Back in 1998, Amazon.com rendered every page using software written in C and deployed it daily. While writing software to render the web pages, I noticed we were making several hundred calls to the system library's malloc()/free() to allocate memory for the HTML. Periodically we would call the library function fwrite() to send the HTML fragments to the customer. When I asked why we did that, the answer was that we wanted to get partial results back to the customer as soon as possible. I agreed with the goal of getting results to customers quickly, but I didn't believe all those system calls were helping. From my grad school days, I recalled that every system call is expensive: not only does it move the program from user space into kernel space, it also typically acquires global locks on shared system resources (memory, network devices, disks).
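The pattern I saw can be sketched roughly like this. This is a hypothetical reconstruction, not Amazon's actual renderer, and emit_fragment is an invented name; the point is that every fragment of HTML pays for its own allocation, write, and free:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the per-fragment pattern: each piece of HTML
 * is allocated, written to the client, and freed individually. A page
 * built from several hundred fragments therefore makes several hundred
 * malloc()/free() calls and many fwrite() calls. */
void emit_fragment(FILE *out, const char *text)
{
    size_t len = strlen(text);
    char *buf = malloc(len + 1);    /* one allocation per fragment */
    if (buf == NULL)
        return;                     /* error handling elided for brevity */
    memcpy(buf, text, len + 1);
    fwrite(buf, 1, len, out);       /* one write per fragment */
    free(buf);                      /* one free per fragment */
}
```

Each emit_fragment call does get partial output to the customer sooner, which was the stated goal, but at the cost of repeated trips through the allocator and the stdio layer.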
The hypothesis I wanted to test was that we could render web pages faster by allocating a chunk of memory once, filling it with the HTML to send to the customer, and sending it to the network only when we were done rendering the page. The implementation was simple at the time because Amazon's code base was relatively small: a developer could walk through the whole code path, remove all the system calls, and replace them with a single string buffer. Measurement was also easy because (a) you could attach a code profiler and confirm that the number of system calls decreased, and (b) Amazon had already developed an infrastructure and culture for measuring time to serve pages and the amount of CPU time used by the software. When the change rolled out on the normal daily push, there was some concern that the site was broken, because CPU usage on the website dropped by 30% and the time to render each page dropped by many milliseconds. The change not only improved the customer experience by getting each page to the customer faster, it also saved enough CPU time that we could defer the cost of buying new machines. The change was propagated through the code base, and eventually the majority of the site was rewritten to take advantage of the new insight.
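The single-string-buffer approach can be sketched as follows. This is a minimal illustration under names I invented for it (page_buf, pb_append), not the code we actually shipped: one allocation up front, cheap in-memory appends while rendering, and one fwrite() when the page is done.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A growable string buffer: allocate once, append fragments with memcpy,
 * and hand the whole page to fwrite() in a single call at the end. */
typedef struct {
    char  *data;
    size_t len;
    size_t cap;
} page_buf;

void pb_init(page_buf *pb, size_t initial_cap)
{
    pb->data = malloc(initial_cap);   /* the one up-front allocation */
    pb->len  = 0;
    pb->cap  = initial_cap;
}

void pb_append(page_buf *pb, const char *text)
{
    size_t n = strlen(text);
    while (pb->len + n > pb->cap) {   /* grow geometrically, so reallocs
                                         are rare even on large pages */
        pb->cap *= 2;
        pb->data = realloc(pb->data, pb->cap);
    }
    memcpy(pb->data + pb->len, text, n);
    pb->len += n;
}

void pb_flush(page_buf *pb, FILE *out)
{
    fwrite(pb->data, 1, pb->len, out);  /* single write for the page */
    free(pb->data);                     /* single free */
    pb->data = NULL;
    pb->len = pb->cap = 0;
}
```

Rendering a page becomes pb_init once, pb_append for every fragment, and pb_flush at the end, so hundreds of allocator and I/O calls collapse into a handful. (Error handling for malloc/realloc failure is elided here for brevity.)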
In this example, following my definition of what is innovative: the "Good Idea" was asking how much faster we could get the page to the customer if we reduced the number of system calls. The "Implementation" was actually walking through the code, profiling its usage, and making sure we captured the majority of the system calls. The "Measurement" was tracking the performance improvement in time spent rendering each page, and overall system time versus user time. The "Innovation" ended up saving Amazon millions by not having to buy new machines as quickly as we would have had we kept doing things the way "we always" did. This is just one anecdote, but it really helped solidify my thinking about what it means to innovate in building a product. There were other innovations that were not as successful, but the culture of implementation and measurement really helped spur an innovation mindset.
When looking at any new idea that claims to be innovative, I recommend asking three questions: (a) Does it help the intended customer? (b) Has it been implemented? (c) Has it been measured?