Communications of the ACM

Letters to the Editor

Provenance of British Computing



David Anderson's viewpoint "Tom Kilburn: A Tale of Five Computers" (May 2014) on the pioneering computers built at the University of Manchester was fascinating and informative, but no article on the history of British computing can avoid the precedence controversy between the Universities of Manchester and Cambridge. For example, Cambridge advocates will put the case for the EDSAC computer and its many achievements, including nearly 10 years of service to the scientific community starting in 1949. But the Manchester Baby was operational more than 10 months earlier.

It is in this spirit we should examine Anderson's remark, "Starting in 1963, [Kilburn] spent several years establishing and organizing a new Department of Computer Science, the first of its kind in the U.K." It is no criticism of Manchester's fine School of Computer Science to ask, what is the word "first" doing here? (Let us ignore the qualifier "of its kind," which would guarantee uniqueness of almost anything.) The Cambridge department had already been in existence for 27 years. The central authorities at Cambridge published the university's Report on the Establishment of a Computing Laboratory in 1936, with the aim of providing a computing service to the sciences while also conducting research on computational techniques. Initially called the Mathematical Laboratory, it developed the EDSAC and other computers, taught programming to Edsger W. Dijkstra in 1951, and established the world's first course in computing (at the master's level) in 1953.

Another point of note: Many people imagine the development of computing was driven by the demands of war, but the Mathematical Laboratory (now known as the Computer Laboratory) was created from the outset to meet the needs of science.

Lawrence C. Paulson, Cambridge, England


A Programming Language Is Not a User Interface

A programming language is not a user interface but rather an expert-only tool, comparable to, say, a command line, and not the language-as-interface concept outlined by Mark Guzdial in his blog "The Difficulty of Teaching Programming Languages, and the Benefits of Hands-on Learning" (July 2014), written in response to Andy Ko's earlier blog. Viewing a language as a user interface reflects a flawed understanding of what a language is. How readily a programmer learns a language depends on whether the programmer has an accurate model of the language, of the underlying machine, and of what he or she is trying to accomplish.

Consider a musical instrument as a physical analogy to a programming language, where the instrument is the "interface" to the realm of creating music. Mastering the instrument is one thing; understanding music is something else. Without understanding, learning to play may be futile. No musician lacking talent or a deep understanding of music will ever be a truly accomplished player. Switching to an instrument that is easier to play does not change the connection between understanding and performance.

The instrument's role in the process is minor. Yet with programming languages, some say we simply have not yet found a language that is easy to teach, and that once we do, the inherent difficulty of learning to write good code will magically disappear. Most bad software is produced by programmers with a limited understanding of what they are trying to accomplish and of the tools they are trying to use. Programming languages play only a minor role in that personal struggle, and the choice of a particular language is at best a question of convenience. Sure, choosing a language with a clear representation of specific concepts helps in teaching and learning those concepts, but it does not guarantee understanding.

Unless teachers acknowledge the inherent difficulty of programming and its dependence on talent and dedication, there can be no end to the software crisis. Moreover, trying to teach programming to students who lack that talent will continue to produce incompetent programmers.

Reflecting on my own experience in commercial projects, I can say that paying more for competent programmers pays off. Some programmers actually represent negative productivity, in that cleaning up after them costs more than any value they might have created. Though many who call themselves programmers may have to quit the profession, the same would happen to talentless musicians pursuing musical performance as a career. The difference is that most people recognize badly played music (it hurts), while, apparently, not all teachers of computer science recognize why so much bad code continues to be produced.

Arno Wagner, Zürich, Switzerland


Release the Source Code

A welcome addition to the 16 items Chuck Huff and Almut Furchert recommended in their Viewpoint "Toward a Pedagogy of Ethical Practice" (July 2014) would be the release of source code. Few practices could do as much to give users confidence that the code they depend on functions as intended, meets requirements, and reflects the choices they approve. Whether an open license is used (permitting code redistribution or alteration) is a separate matter based on the goals and business plan of the coding organization. But allowing outside experts to freely view the code would be a natural step for organizations developing software in the public interest.

Andy Oram, Cambridge, MA


Toward a Clear Sense of Responsibility

Vinton G. Cerf's Cerf's Up column "Responsible Programming" (July 2014) should be echoed wherever software is used, procured, or developed. Dismal software quality hinders the economy, national security, and quality of life. Every organization is likely rife with process error, and if you have not been affected by a cyberattack, you soon could be. Software industry analyst Capers Jones (http://www.spr.com) reported that deployed software systems, circa 2012, contained approximately 0.4 latent faults per function point. As a measure of the urgency of moving to responsible programming, consider that this statistic has improved only approximately 300% since the 1970s; compare that to automobile engineers, who achieved a 3,000% reduction in emissions in less time.

Almost all operational errors and successful cyberattacks can be traced to faulty code. Responsible programming must therefore extend beyond individual programs to the whole set of programs that interoperate to accomplish a user's purpose, even in the context of nondeterministic situations. Responsible programming could thus ensure each program supports system principles concerning safety properties.

An early example involved Antonio Pizzarello, who co-founded a company in 1995 to commercialize a fault-detection-and-correction theory developed by Edsger W. Dijkstra et al. at the University of Texas. As described in Pizzarello et al.'s U.S. Patent No. 6,029,002, "Method and Apparatus for Analyzing Computer Code Using Weakest Precondition," the code-analysis method starts from a user-identified, unacceptable post-condition. An analyst writes the desired result, then calculates the weakest precondition backward through the code until reaching a contradiction that highlights the statement containing the logic, arithmetic, or semantic fault. However, though Pizzarello's method was a technical success, it could not scale economically to larger systems of programs containing numerous possible paths because it was prohibitively labor-intensive and slow, even for highly trained analysts.
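The backward calculation the patent describes rests on Dijkstra's weakest-precondition rule for assignment, wp(x := e, Q) = Q with x replaced by e. A minimal sketch of that rule, using a toy two-statement program and illustrative names (none of which come from the patent or Pizzarello's tool), might look like this:

```python
# Toy weakest-precondition calculator. Predicates are Python functions
# over a program state (a dict); each statement transforms a
# postcondition into the precondition that guarantees it.

def wp_assign(var, expr, post):
    """wp(var := expr, post): evaluate post in the state after the assignment."""
    return lambda state: post({**state, var: expr(state)})

def wp_seq(transformers, post):
    """wp(S1; S2, Q) = wp(S1, wp(S2, Q)): fold the postcondition backward."""
    for t in reversed(transformers):
        post = t(post)
    return post

# Program:  y := x + 1;  z := y * 2   with desired postcondition z >= 0
prog = [
    lambda Q: wp_assign("y", lambda s: s["x"] + 1, Q),
    lambda Q: wp_assign("z", lambda s: s["y"] * 2, Q),
]
pre = wp_seq(prog, lambda s: s["z"] >= 0)  # equivalent to x + 1 >= 0

print(pre({"x": 0}))   # True: this input satisfies the weakest precondition
print(pre({"x": -2}))  # False: exposes an input that violates the postcondition
```

An analyst following the patented method would instead start from the *unacceptable* postcondition and push it backward until the calculated precondition contradicts what is known at that point in the code, pinpointing the faulty statement; the labor-intensive step is doing this by hand along every path.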

The promise of hardware for massively parallel, conditional processing prompted a complete reconceptualization in 2008; typical is the Micron Automata Processor (http://www.micron.com/about/innovations/automata-processing). A new software-integrity assessment method thus enables proofreading computer code as text, applying deep reasoning to the software's predicates to check logic, arithmetic, and semantic coherence at a constant, predictable rate of approximately 1Gb/sec. Software faults are detected systemwide, automatically.

Informal polls of software developers find they spend approximately 50% of their project time and budget defining, negotiating, and reworking program interfaces and interoperation agreements, and then waste approximately 40% of test time and budget awaiting diagnosis of, and fixes for, test aborts. Software-integrity assessment can preclude much of this wasted time and money. Moreover, software maintainers and developers may be able to find and nullify faults more quickly than cyberattackers can create them.

Jack Ring, Gilbert, AZ


A Plea for Consistency

Although Dinei Florêncio et al. made several rather grand claims in their Viewpoint "FUD: A Plea for Intolerance" (June 2014), including "The scale of the FUD problem is enormous," "While security is awash in scare stories and exaggerations," and "Why is there so much FUD?," they offered no evidence to support them. Odd, given that they also said, "We do not accept sloppy papers, so citing dubious claims (which are simply pointers to sloppy work) should not be acceptable either."

Alexander Simonelis, Montréal, Canada


Authors' Response:

We offered many examples but could not include references for everything. Typing "digital Pearl Harbor," "trillion-dollar cybercrime," or other terms into a search engine will easily produce examples of who has been saying and repeating what.

Dinei Florêncio, Cormac Herley, and Adam Shostack


Correction

An editing error in "From the President" (June 2014) resulted in an incorrect awards citation. Susan H. Rodger received the Karl V. Karlstrom Outstanding Educator Award for contributions to the teaching of computer science theory in higher education and the development of computer science education in primary and secondary schools.


Footnotes

Communications welcomes your opinion. To submit a Letter to the Editor, please limit yourself to 500 words or fewer, and send to letters@cacm.acm.org.


©2014 ACM  0001-0782/14/09

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from permissions@acm.org or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.


Comments


Richard Cichelli

I liked Arno Wagner's letter "A Programming Language Is Not a User Interface." It seems written from the point of view that if you want to learn to write better programs, you need to learn to write in a better programming language. I agree but differ.

As the Tools Section Editor of Pascal News and a Lehigh University teacher nearly 40 years ago, I confronted the same issue. General semanticists argue persuasively that language influences thought. I wanted to support good algorithmic thinking. To learn to write better programs I concluded one needed to read good ones first.

I believe Pascal News was one of the earliest publications to show long, well-written, well-edited, and well-composed programs. Knuth did this later with his TeXbook.

Perhaps improving program writing starts with improving program reading. Reading well-written programs in a readable programming language is, in my opinion, where good programming skills are learned most easily.