Opinion
The Business of Software

Don’t Bring Me a Good Idea

How to sell process changes.

“You want to know how to get my attention?” Jason Kalich asked the audience rhetorically. “First off, don’t bring me a good idea—I’ve already got plenty of good ideas.” Kalich, the general manager of Microsoft’s Relationship Experience Division, was participating in the keynote panel at the Quest Conference in Chicago.a The three industry experts on the panel were addressing the question asked by the moderator Rebecca Staton-Reinstein: “How can I get my manager’s buy-in (to software quality and process change initiatives)?” The audience consisted of several hundred software professionals, most of them employed in the areas of software quality, testing, and process management. Kalich had clearly given the topic a lot of thought, and he warmed to the theme: “Don’t even bring me cost savings. Cost savings are nice, but they’re not what I’m really interested in.” He paused for emphasis. “Bring me revenue growth and you’ve got my ear. Bring me new value, new products, new customers, new markets: then you’ve got my attention, then you’ve got my support. Don’t bring me a good idea. Not interested.”

Sponsorship

Obtaining sponsorship for software development process changes is essential. The first of Watts Humphrey’s Six Rules of Process Change is “start at the top”: get executive sponsorship for whatever change you are trying to make.1 Without solid and continuing executive commitment, changes usually wither on the vine. But just how does a software quality professional get this support? Kalich and the other panelists were adamant that a good idea, even one supported by possible cost savings, just doesn’t cut it in the current economic climate.

What the panel was saying is that good ideas are just that. And cost reduction, while valuable, runs up against hard limits: once the first 10%–20% of cost savings are achieved, further savings usually become increasingly difficult to get. Reducing costs is like compressing a spring: it takes more and more energy for less and less movement.

I looked around the audience and, while there were nods of understanding, there were also many blank stares as people tried to figure out: How can I turn my process initiative into a profit center? Making money is not a typical goal of process change as it is usually practiced, which, according to the panel, may be why it does not always get the support it deserves.

But how to actually do it? That afternoon, I attended a presentation that showed how it can be done, and what critical success factors are needed to make it work.

SmartSignal

“Predictive analytics is a really complex data set,” said George Cerny. “Our systems predict the possible failure of commercial aircraft, power stations, and oil rigs, sometimes weeks before a failure might actually occur.” Cerny is the quality assurance manager at SmartSignal,b an Illinois-based data analytics company.

To manage predictive analytics, large and complex systems must be instrumented and enormous amounts of complicated data must be collected from many different sources: pumps, power meters, pressure switches, maintenance databases, and other devices. Sometimes data is collected in real time, sometimes it is batched. Simple data is monitored for threshold conditions and complex interactive data is analyzed for combinational conditions. The analysis system must recognize patterns that indicate the future possibility of component, subsystem, or systemic failure and what the probability of that failure might be. And then it needs to report what it finds. Sometimes these reports are large and detailed; sometimes they are urgent and immediate.
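
The two kinds of monitoring are easy to picture in code. Here is a minimal Python sketch of the idea; the Reading type, the thresholds, and the “pump-17” pattern are hypothetical illustrations, not SmartSignal’s actual system:

    from dataclasses import dataclass

    @dataclass
    class Reading:
        source: str     # e.g., "pump-17" or "power-meter-3"
        metric: str     # e.g., "vibration" or "outlet-pressure"
        value: float

    # Simple data: per-metric threshold conditions.
    THRESHOLDS = {"vibration": 4.0, "outlet-pressure": 250.0}

    def threshold_alerts(readings):
        for r in readings:
            limit = THRESHOLDS.get(r.metric)
            if limit is not None and r.value > limit:
                yield f"{r.source}: {r.metric}={r.value} exceeds {limit}"

    # Complex interactive data: a combinational condition across measurements.
    def combinational_alert(readings):
        latest = {(r.source, r.metric): r.value for r in readings}
        vib = latest.get(("pump-17", "vibration"), 0.0)
        psi = latest.get(("pump-17", "outlet-pressure"), float("inf"))
        # Hypothetical pattern: rising vibration plus falling outlet pressure.
        if vib > 3.0 and psi < 200.0:
            return "pump-17: combined vibration/pressure pattern signals risk"
        return None

    readings = [Reading("pump-17", "vibration", 3.5),
                Reading("pump-17", "outlet-pressure", 180.0)]
    print(list(threshold_alerts(readings)))  # no single reading is out of range
    print(combinational_alert(readings))     # but the combination is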

“But before all this happens, the analytic system must be set up.” Cerny said. “This setup was manual and data-entry intensive. A single power station might have hundreds of items of equipment that need to be monitored. Each item might have hundreds of measurements that must be taken over short, medium, and long time-frames. Each measurement might be associated with many similar or different measurements on the same device or on other equipment.” The screen flashed with list after list of data items. “So how could we test this? How could we make sure the system works before we put it in?”
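
The scale of that setup problem is easier to appreciate with a sketch of the hierarchy Cerny describes: a plant holds many items of equipment, each item many measurements, and each measurement several time frames and links to related measurements. This is a hypothetical Python model, not SmartSignal’s schema:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Measurement:
        name: str                  # e.g., "bearing-temp"
        time_frames: List[str]     # short, medium, long: e.g., ["1s", "1h", "30d"]
        related: List[str] = field(default_factory=list)  # associated measurements

    @dataclass
    class Equipment:
        name: str
        measurements: List[Measurement] = field(default_factory=list)

    @dataclass
    class Plant:
        name: str
        equipment: List[Equipment] = field(default_factory=list)

        def point_count(self) -> int:
            # Hundreds of items x hundreds of measurements each: the size
            # of the manual data-entry problem the team faced.
            return sum(len(e.measurements) for e in self.equipment)

    plant = Plant("station-12", [
        Equipment("feedwater-pump-2", [
            Measurement("bearing-temp", ["1s", "1h", "30d"], related=["vibration"]),
            Measurement("vibration", ["1s", "1h"]),
        ]),
    ])
    print(plant.point_count())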

Testing a System

Testing is the interaction of several knowledge-containing artifacts, as shown in the accompanying figure. Some of these artifacts must be in executable software form, but many others are often on paper and are processed manually.

Cerny described this: “We realized early we had to test using virtual machines, but how could we test these? And how could we ensure scalability with both the numbers and the complexities of environments and inputs?” To the testing group at SmartSignal, test automation was clearly a good idea. But how to get sponsorship for this good idea?

Jim Gagnard, CEO of SmartSignal, put it this way: “We are a software company whose products measure quality and everything is at risk if we aren’t as good as we can be in everything we do. Leaders can help define and reinforce the culture that gets these results, but if it’s not complemented with the right people who truly own the issues, it does not work.”

Dave Bell, vice president of Application Engineering, and Stacey Kacek, vice president of Product Development at SmartSignal, concurred. “We always have to be looking to replace what people do manually with the automated version of the same,” said Kacek, “…once it works.” Bell added: “While the management team understood the advantages of test automation and we all have an engineering background, we had to keep asking: how to get buy-in?” “Our driver was to find creative ways for our customers to make decisions, what we call ‘speed to value,’” Kacek asserted.

Some Steps

Cerny described a few of the steps they took at SmartSignal to build their automated test system (a sketch of several of these ideas follows the list):

  • Build to virtual machines, using virtualization to isolate device dependence;
  • Start simply, using comma-delimited scripts and hierarchical tree data views;
  • Build up a name directory of functions;
  • Separate global (run in any environment) from local variables;
  • Keep object recognition out of scripts and use both static and dynamic binding; and
  • Initially automate within the development team to prove the concept before moving to production.
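
Several of these ideas can be combined in a very small data-driven harness. The following Python sketch is hypothetical (the verbs, the name directory, and the $ENV variable are invented for illustration): scripts are comma-delimited, functions are reached through a name directory rather than referenced directly, and global variables are kept apart from script-local ones.

    import csv
    import io

    # Name directory: script verbs are bound to functions by name at run
    # time, so scripts never reference code or UI objects directly.
    def set_value(target, value):
        print(f"set {target} = {value}")

    def check_value(target, value):
        print(f"check {target} == {value}")

    DIRECTORY = {"set": set_value, "check": check_value}

    # Globals run in any environment; locals belong to one script.
    GLOBALS = {"ENV": "staging"}

    SCRIPT = "set,pump-17.threshold,4.0\ncheck,pump-17.env,$ENV\n"

    def run(script_text, local_vars=None):
        # Merge globals (valid anywhere) with this script's local values.
        variables = dict(GLOBALS, **(local_vars or {}))
        for verb, target, value in csv.reader(io.StringIO(script_text)):
            if value.startswith("$"):          # simple variable substitution
                value = variables[value[1:]]
            DIRECTORY[verb](target, value)     # dynamic binding by name

    run(SCRIPT)

Because scripts refer to functions only by name, object-recognition details stay inside the directory’s functions and can change without touching any script.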

Test Knowledge is System Knowledge

These steps are typical engineering design actions anyone might take in automating testing or, indeed, in automating any process or any system. But in this case there was a difference.

“Asset configuration is a big issue in the power industry,” Cerny said in his presentation at Quest. “Imagine setting up a power station: what equipment should go where? Which pumps are used and connected to which other equipment? Where are sensors to be placed? What is the ‘best’ configuration of equipment that will most likely reduce the overall failure rate of the plant?”

The analytics test system is designed to prove that the analytical system itself works. To do this, the test system must be set up (automatically, of course) to the appropriate target system configuration. The normal test function is meant to show that the analytical system will work as built for that particular target system. But what if we turn this around? What if we use our testing capability to find out which configuration would show the lowest likely failure rate? Doing this allows field engineers and power plant designers to model different configurations of systems for the least likelihood of failure before they actually build and install them.
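
A minimal sketch of that turnaround, with hypothetical stand-ins (build_config for the automated setup step, estimated_failure_rate for the analytics), might look like this: enumerate candidate configurations, score each one, and keep the one with the lowest estimated failure rate.

    import itertools

    PUMPS = ["pump-A", "pump-B"]
    SENSOR_SITES = ["inlet", "outlet", "bearing"]

    def build_config(pump, sites):
        # Stand-in for the automated test-setup step: in the real system
        # this is the knowledge of how to configure the target plant.
        return {"pump": pump, "sensors": list(sites)}

    def estimated_failure_rate(config):
        # Stand-in for running the analytics against the configured model.
        # Toy scoring: more sensor coverage lowers the estimated risk.
        base = 0.10 if config["pump"] == "pump-A" else 0.12
        return base - 0.02 * len(config["sensors"])

    candidates = [build_config(pump, sites)
                  for pump in PUMPS
                  for n in (1, 2, 3)
                  for sites in itertools.combinations(SENSOR_SITES, n)]
    best = min(candidates, key=estimated_failure_rate)
    print("lowest-risk configuration:", best)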


The knowledge in the test system is the same as the knowledge in the target system. The knowledge of how to set up the test system is also the knowledge of how to set up the target production system. Automating this knowledge allows simulation of a system before it is built.

Back to Top

A Really Good Idea

This is what Jason Kalich and the panel at Quest were looking for. Automating the test system at SmartSignal ended up being not simply about speeding things up a bit, making the testers’ lives easier, or saving a few dollars. It was not just about cranking through a few more tests in limited time or reducing test setup, analysis, and reporting time. It became something different—it became a configuration simulator and that’s a new product.

If we automate knowledge in the right way, even internal software process knowledge, it can be put to many different uses; it can even create new functionality and new products that our customers will pay for.

Now that’s a good idea.

Figures

Figure. Knowledge-containing artifacts involved in testing.
