Vinton G. Cerf’s Cerf’s Up column "Responsible Programming" (July 2014) raised the interesting question of whether it is responsible to produce software without using state-of-the-art defect-detection-and-validation tools. Research on such tools has been under way for decades, yet despite progress made possible by improved SAT solvers and theorem provers, the tools remain known primarily within their research community and are rarely used in development projects, open or closed source. Perhaps a more important question is how computer science can accelerate the development of effective tools and motivate their widespread adoption.
The answer is legal, not technical. The standard software license, disclaiming all liability and suitability for any purpose, should be prohibited, with software subject to the same liability and suitability terms as any other manufactured product. If software is insecure or simply does not work, then the manufacturer must show it was constructed with sufficient care to avoid liability. This financial and legal pressure would eventually end egregious practices, including failure to use the best available tools and practices, shipping bug-ridden code, and using customers as beta testers.
The transition from the current state of software to where it should be will take time, financial investment, new ideas, and great effort. It cannot happen overnight but might never happen if software producers avoid assuming responsibility for the correct, secure operation of their products.
James Larus, Lausanne, Switzerland
Make Security the Predominant Architecture
Just before reading Seda Gürses’s Viewpoint "Can You Engineer Privacy?" (Aug. 2014), I had been reading the latest reports on hacking car control units by manipulating the software that controls the engine, the steering, and other car components,1 and pondering the need for a new approach to security and privacy.
Why are intruders so successful? For one thing, computer science and engineering often simplify attacks: appliances and application systems are built from standardized, generalized algorithms, protocols, and component systems. These concepts underpin the software industry’s ability to quickly develop new systems that remain open for further development, but they likewise let intruders create generalized tools for unwelcome manipulation.
What new paradigm of computer science would allow software developers to improve system security and personal privacy? How about one that is application-specific, employs nonstandard protocols and address schemes (such as on the LANs inside cars), and eliminates features like algorithms and data structures "reserved for future use" or algorithms more general than an application actually needs ("upward compatibility"), as in a companywide hardware and software platform for car computers? (This is not to say I advocate handcrafting all future secure systems.)
Rather than make it easy for would-be intruders to develop generalized tools, application engineers should develop standardization-variation generators that create strategic complexity specific to families of applications or even to individual appliances. In the case of cars, such a generator must be able to produce a specific protocol for communication among sensors, steering actuators, and control processors, even though that protocol is derived from a general class of protocols. Dynamic solutions, such as protocol variations that depend on car-key identification, are especially promising, not as a substitute for encryption and information hiding but as another self-contained obstacle to foil intruders.
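To make the idea concrete, here is a minimal sketch in Python, with all names and values hypothetical, of how a protocol variant might be derived deterministically from a car-key identifier: components holding the same key agree on a vehicle-specific message layout, while a generic attack tool cannot assume fixed message IDs. It is an obstacle in the sense described above, not a replacement for encryption or authentication.

import hashlib
import random

# The general protocol family a tool vendor might publish (hypothetical).
STANDARD_MESSAGES = ["brake_pressure", "engine_rpm", "steering_angle"]

def derive_variant(key_id: bytes) -> dict:
    # Seed a deterministic generator from a digest of the key identifier,
    # then permute a pool of candidate message IDs. Every component that
    # holds the same key derives the same mapping; another vehicle's key
    # yields a different, incompatible one.
    seed = int.from_bytes(hashlib.sha256(key_id).digest()[:8], "big")
    rng = random.Random(seed)
    candidate_ids = list(range(0x20, 0x20 + 64))
    rng.shuffle(candidate_ids)
    return {name: candidate_ids[i] for i, name in enumerate(STANDARD_MESSAGES)}

# Usage: the same key always produces the same variant; a different key
# almost certainly produces a different message-ID layout.
variant_a = derive_variant(b"KEY-A")
assert variant_a == derive_variant(b"KEY-A")
print(variant_a)
print(derive_variant(b"KEY-B"))

Because the mapping is derived from the key itself, no per-vehicle protocol description has to be distributed or stored beyond the key already present in the system.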
Privacy and security can be engineered, even in highly sensitive systems, but such engineering works only if application-system architects and software developers treat security as the predominant architecture on which applications are developed, not merely as added functionality.
Georg E. Schaefer, Ulm, Germany
Hold the Politics
For Moshe Y. Vardi to state as fact, as he did in his Editor’s Letter "Openism, IPism, Fundamentalism, and Pragmatism" (Aug. 2014), that "Only the drastic measures taken by the U.S. government…" averted catastrophe following the Lehman Brothers bankruptcy and that "…this event shredded the dogma of capitalism…" runs counter to the view of many, including me, that the Lehman collapse was an offshoot of national, state, county, and municipal government structure and policy that sucks up 45% of the economy in the form of taxes. Vardi’s characterizing this view as "fundamentalism," then claiming "…history shows that fundamentalist ideas rarely work…," misdirects the argument by associating criticism of the current economic mess with religious fundamentalism, William Jennings Bryan, and the Scopes trial rather than with the actual substance of the view.
The core of the column seemed to be whether the "reader-pays" or "author-pays" publication model is "more sustainable" for the ACM Digital Library. Fine. That can be done without an anti-capitalist diatribe. It should be remembered that technology is the core interest of the membership, including both liberals and conservatives, and all have access to political discussion elsewhere.
Bryan Batten, Carlsbad, CA
Math and the Computing Paradigm
Regarding Peter J. Denning and Peter A. Freeman’s Viewpoint "Computing’s Paradigm" (Dec. 2009), Simone Santini’s letter to the editor "Computing Paradigm Not a Branch of Science" (Apr. 2010) said computing can be categorized as both a branch of mathematics and a branch of engineering, claiming, "The abstract problem of symbol manipulation is mathematical…" and "The instantiation of the symbol-manipulation model in useful systems is a problem for the engineering of computing." In response, Denning and Freeman said, "Computing does not separate neatly into math and engineering, as Santini claims." But what is indeed wrong with Santini’s distinction, which many others have endorsed over the years? Denning and Freeman even predicted, "Santini’s desire to parse computing into separate elements will fail, just as all such previous attempts have failed."
Virtually all software development is done through trial and error, (unit) testing, and never-ending patching after delivery, at stupendous cost to productivity; recall the classic Microsoft software alert: "You may have to reboot your system." There are good reasons for this practice. One is that there are no tools (or supporting theory) for systematic, top-down, iterative formal development from requirements to running system. Another is that most software products do not need meaning-preserving transformations or formal verification.
This state of the art does not mean we can dismiss a mathematical approach to development, validation, and the annotation of library elements for machine-assisted reuse. It is actually a failure of computer science, better called "informatics," not to have developed a mathematical approach to the software development life cycle. Consider recent unwelcome consequences of the lack of formal verification: the Heartbleed flaw in OpenSSL, the "goto fail" bug in Apple’s operating systems, and the CVE-2014-1776 vulnerability in Internet Explorer.
Though sorting algorithms are among the oldest in computer science, the Cygwin library (part of a Unix-like command-line environment for Microsoft Windows) had (still has?) an "improved" but defective version of qsort with guaranteed quadratic behavior on certain inputs. In any case, I have never encountered even an informal proof that the output of a sorting algorithm is a permutation of the input.
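To show what even a lightweight version of that obligation looks like, here is a minimal sketch in Python (function names hypothetical) of a property check stating both halves of sorting correctness, the ordering property and the permutation property, exercised against the built-in sorted(). A randomized check is no substitute for a proof, but it at least makes the permutation requirement explicit.

from collections import Counter
import random

def is_sorted(xs):
    # Every adjacent pair must be in nondecreasing order.
    return all(a <= b for a, b in zip(xs, xs[1:]))

def sort_is_correct(sort_fn, xs):
    ys = sort_fn(list(xs))
    # Ordering alone is not enough: [0, 0, 0] is "sorted" for any input of
    # length 3. Multiset equality rules out dropped or duplicated elements.
    return is_sorted(ys) and Counter(ys) == Counter(xs)

# Usage: a lightweight randomized check of Python's built-in sorted().
for _ in range(1000):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 30))]
    assert sort_is_correct(sorted, xs)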
This is not to say I think computer science should be viewed as a branch of mathematics; rather, it is a way to urge more research in formal techniques, hopefully yielding tools for all phases of the development life cycle. Redeveloping the Linux operating system this way would be a genuine advance, making it possible to maintain the system at a high level of abstraction instead of tinkering exclusively with its code.
Denning and Freeman’s response should not have demeaned Santini’s distinction while endorsing, yet again, the pathologically optimistic approach to software development (such as Scrum and agile methods). In the meantime, see my Technical Opinion "Software Engineering Considered Harmful" (Nov. 2002).
Dennis de Champeaux, San Jose, CA