Free Speech For Algorithms?

Letters to the Editor

In "Regulating the Information Gatekeepers" (Nov. 2010), Patrick Vogl and Michael Barrett said a counterargument against the regulation of search-engine bias is that "Search results are free speech and therefore cannot be regulated." While I have no quarrel as to whether this claim is true, I’m astounded that anyone could seriously make such a counterargument—or any judge accept it.

Search results are the output of an algorithm. I was unaware the field of artificial intelligence had advanced to the point that we must now consider granting algorithms the right of free speech. To illustrate the absurdity, suppose I were clever enough to have devised an algorithm that could crawl the Web and produce opinionated articles, rather than search results, as its output. Would anyone seriously suggest the resulting articles be granted all the constitutional protections afforded the works of a human author? Taking the analogy further, suppose, too, my algorithm produced something equivalent to shouting "Fire!" in a crowded theater. Or, further still, perhaps it eventually produced something genuinely treasonous.

If we accept the idea that the output of an algorithm can be protected under the right of free speech, then we ought also to accept the idea that it is subject to the same limitations we place on truly unfettered free speech in a civilized society. But who would we go after when these limitations are exceeded? I may have created the algorithm, but I’m not responsible for the input it found that actually produced the offensive output. Who’s guilty? Me? The algorithm? (Put the algorithm on trial?) The machine that executed the algorithm? How about those responsible for the input that algorithmically produced the output?

Unless humans intervene to modify the output of algorithms producing search results, arguments involving search results and free speech are absurd. At least until artificial intelligence has advanced to where machines must indeed be granted the same rights we grant our fellow humans.

Roger Neate, Seattle, WA

Authors’ Response:

Neate touches a nerve concerning the increasingly complex relationship between humans and material technologies in society. Accountability in these sociomaterial settings is challenging for judge and regulator alike. In the 2003 case Search King v. Google Technology, a U.S. District Court noted the ambiguity of deciding whether PageRank is mechanical and objective or subjective, ruling that PageRank represents constitutionally protected opinions. Whether search results are indeed free speech remains controversial, meaning we can expect the debate to continue.

Patrick Vogl and Michael Barrett, Cambridge, U.K.

Science Has 1,000 Legs

It’s great to reflect on the foundations of science in Communications, as in Tony Hey’s comment "Science Has Four Legs" (Dec. 2010) and Moshe Y. Vardi’s Editor’s Letter "Science Has Only Two Legs" (Sept. 2010), but it is also worth considering how the philosophy of science sheds light on questions involving the number of legs in a natural science.

Willard Van Orman Quine’s 1951 paper "Two Dogmas of Empiricism" convincingly argued that the attempt to distinguish experiment from theory fails in modern science because every observation is theory-laden; for example, in a Large Hadron Collider experiment, scientists do not perceive, say, muons or other particles, but rather visual input originating from the computer screen displaying the experimental data. The interpretation of this perception depends on the validity of many nonempirical factors, including physics theories and methods.

With computation, even more factors are needed, including the correctness of hardware design and the validity of the software packages being used, as argued by Nick Barnes in his comment "Release the Code" (Dec. 2010) concerning Dennis McCafferty’s news story "Should Code Be Released?" (Oct. 2010).

For such a set of scientific assumptions, Thomas S. Kuhn coined the term "paradigm" in his 1962 book The Structure of Scientific Revolutions. Imre Lakatos later evolved the concept into the notion of "research program" in his 1970 paper "Falsification and the Methodology of Scientific Research Programs."

In this light, neither the two-leg nor the four-leg hypothesis is convincing. If we use the leg metaphor at all, science is perhaps more accurately viewed as a millipede.

Wolf Siberski, Hannover, Germany

Certify Software Professionals and Their Work

As a programmer for the past 40 years, I wholeheartedly support David L. Parnas’s Viewpoint "Risks of Undisciplined Development" (Oct. 2010) concerning the lack of discipline in programming projects. We could be sitting on a time bomb and should take immediate action to prevent the potentially catastrophic consequences of careless software development. I agree with Parnas that undisciplined software development must be curbed.

I began with structured programming, moved on to objects, and now do Web programming, and I find that software today is a mess. When I travel on a plane, I hope its embedded software does not execute some untested loop in some exotic function never previously recognized or documented. When I conduct an online banking transaction, I likewise hope nothing goes wrong.

See the Web site "Software Horror Stories" (http://www.cs.tau.ac.il/~nachumd/horror.html) for evidence that the facts can no longer be ignored. Moreover, certification standards like CMMI do not work. I have been part of CMMI-certification drives and find that real software-development processes have no relation to what is ultimately certified. Software development in real life starts with ambiguous specifications. When a project is initiated and otherwise unrelated employees are assembled into a team, the project manager creates a process template and fills it with virtual data for the quality-assurance review. But the actual development is an uncontrolled process in which programs are assembled from random collections of code available online, often taken verbatim from earlier projects.

Most software winds up with an unmanageable set of bugs, a scenario repeated in almost 80% of the projects I’ve seen. In such projects, software from dropped efforts might be revived, fixed by a new generation of coders, and deployed in new computer systems and business applications ultimately delivered to everyday users.

Software developers must ensure their code puts no lives at risk, and a licensing program should be enforced for all software developers: proof of professional discipline and competency must be provided before anyone is allowed to write, modify, or patch software to be used by the public.

As suggested by Parnas,1,2 software should be viewed as a professional engineering discipline. Science is limited to creating and disseminating knowledge. When a task involves creating products for others, it becomes an engineering discipline and must be controlled, as it is in every other engineering profession. Therefore, software-coding standards should be written into penal codes and national laws, as are the standards that guide other engineering, as well as medical, professions. Moreover, software developers should be required to undergo periodic relicensing, perhaps every five or 10 years.

Basudeb Gupta, Kolkata, India

Unicode Not So Unifying

Poul-Henning Kamp’s attack on ASCII as the basis of modern programming languages in "Sir, Please Step Away from the ASR-33!" was somewhat misplaced. While, as Kamp said, most operating systems support Unicode, a glance at the keyboard shows that users are stuck with an ASCII subset (or regional equivalent).

I had the dubious honor of learning and using APL* while at university in the 1970s; it required a special "golf ball" typing element and stick-on key labels for the IBM Selectric terminals that supported it. A vexing challenge in using the language was finding one of the many Greek or other special characters required to write even the simplest code.

Also, while Kamp mentioned Perl, he failed to mention that the regular expressions made popular by that language—employing many special characters as operators—are virtually unintelligible to all but the most diehard fans. The prospect of a programming language making extensive use of the Unicode character set is a frightening proposition.
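
To make the point concrete, here is a minimal illustrative sketch in Python, whose re module borrows Perl-style regular-expression syntax; the pattern and test strings are invented for illustration, not taken from Kamp’s article or this letter.

import re

# An invented pattern of the kind described above: it matches a signed
# decimal number with an optional exponent (such as "-.5e10"), yet nearly
# every character is a metacharacter or operator rather than literal text.
FLOAT = re.compile(r'[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?')

for text in ("3.14", "-.5e10", "1e-3", "e10"):
    print(text, bool(FLOAT.fullmatch(text)))  # only "e10" fails to match

Even this short, conventional pattern takes effort to decode; a language leaning on the full Unicode symbol repertoire would only amplify the problem.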

William Hudson, Abingdon, U.K.

The Merchant Is Still Liable

In his Viewpoint "Why Isn’t Cyberspace More Secure?" (Nov. 2010), Joel F. Brenner said that in the U.K. the customer, not the bank, usually pays in cases of credit-card fraud. I would like to know the statistical basis for this claim, since for transactions conducted in cyberspace the situation in both the U.K. and the U.S. is that liability generally rests with the merchant, unless it provides proof of delivery or has used the 3-D Secure protocol to enable the card issuer to authenticate the customer directly. While the rates of uptake of the 3-D Secure authentication scheme may differ, I have difficulty believing that difference translates into a significant related difference in levels of consumer liability.

The process in the physical retail sector is quite different in the U.K. as a result of the EMV (Europay, MasterCard, and Visa) protocol, or "Chip & PIN," though flaws in EMV and hardware mean, in practice, the onus is still on the bank to demonstrate its customer is at fault.

Alastair Houghton, Fareham, England

Author’s Response:

The U.K. Financial Services Authority took over regulation of this area on November 1, 2009, because many found the situation, as I described it, objectionable. In practice, however, it is unclear whether the FSA’s jurisdiction has made much difference. While the burden of proof is now on the bank, one source (see Dark Reading, Apr. 26, 2010) reported that 37% of credit-card fraud victims get no refund. The practice in the U.S. is not necessarily better but is different.

Joel F. Brenner, Washington, D.C.

Format Migration or Unforgiving Obsolescence

David S.H. Rosenthal’s response (Jan. 2011) to Robin Williams’ comment "Interpreting Data 100 Years On" said he was unaware of a single widely used format that has actually become obsolete. Though I understand the sentiment, it brought to mind Apple’s switch from PowerPC to Intel architecture about six years ago. Upgrading the computers in my company in response to that switch required migrating all our current and legacy data to the new format used by Intel applications at the time. Though we didn’t have to do it straightaway, as we could have kept running our older hardware and software, we had no choice but to begin migrating over time.

This decision directly affected only my company, not the entire computing world, but when addressing data exchange and sharing, it was an additional factor we had to consider. Rather than facing some general obsolescence, we may all inevitably have to address the format obsolescence that is a natural consequence of IT’s historically unforgiving evolution.

Bob Jansen, Erskineville, NSW, Australia
