
Communications of the ACM

Letters to the editor

Hennessy and Patterson on the Roots of RISC


Letters to the Editor, illustration. Credit: Getty Images

ACM's 2017 A.M. Turing Award to John Hennessy and David Patterson was richly deserved and long overdue, as described by Neil Savage in his news story "Rewarded for RISC" (June 2018). RISC was a big step forward. In their acceptance speech, Patterson also graciously acknowledged the contemporaneous and independent invention of the RISC concepts by John Cocke, another Turing laureate, at IBM, as described by Radin.1 Unfortunately, Cocke, the principal inventor, rarely published and was not included as an author of that paper; it would have been good had Savage mentioned his contribution.

It is noteworthy that RISC architectures depend on and emerged from optimizing compilers. So far as I can tell, all the RISC inventors had strong backgrounds in both architecture and compilers.

Fred Brooks, Chapel Hill, NC, USA

Back to Top

No Inconsistencies in Fundamental First-Order Theories in Logic

Referring to Martin E. Hellman's Turing Lecture article "Cybersecurity, Nuclear Security, Alan Turing, and Illogical Logic" (Dec. 2017), Carl Hewitt's letter to the editor "Final Knowledge with Certainty Is Unobtainable" (Feb. 2018) included a number of misleading statements, the most important being that: "Meanwhile, Gödel's results were based on first-order logic, but every moderately powerful first-order theory is inconsistent. Consequently, computer science is changing to use higher-order logic." Computer science is based on logic, mostly first-order logic, and programmers make their coding decisions using logic every day. The most important results of logic (such as Kurt Gödel's Incompleteness Theorems) are taught in theory courses and are among the fundamentals on which computer science and software engineering are based. No inconsistencies have ever been found in any of the standard first-order theories used in logic, ranging from moderately powerful to very powerful, and none are believed to be inconsistent.

Harvey Friedman, Columbus, OH, USA, and Victor Marek, Lexington, KY, USA

Back to Top

Author Responds:

Powerful first-order theories of intelligent information systems are inconsistent because these systems are not compact, thus violating a fundamental principle of first-order theories. Meanwhile, the properties of self-proof of inferential completeness and formal consistency in higher-order mathematical theories are the opposite of incompleteness and the self-unprovability of consistency Gödel showed for first-order theories. Differing properties between higher-order and first-order theories are reconciled by Gödel's "I'm Unprovable" proposition's nonexistence in higher-order theories. First-order theories are not foundational to computer science, which indeed relies on the opposite of Gödel's results.

Carl Hewitt, Palo Alto, CA, USA

Back to Top

More Accurate Text Analysis for Better Patient Outcomes

David Gefen et al.'s article "Identifying Patterns in Medical Records through Latent Semantic Analysis" (June 2018) endorsed the latent semantic analysis (LSA) method of text analysis for its ability to identify links among mentions of medical terms, including the strengths of their relative associations. In practice, however, a single-keyword mention in a clinical narrative note might not capture the factual meaning of the passage around it. A disease may be mentioned only in the context of being ruled out as a diagnosis or of documenting family history. A disease mention could even lack any meaning at all, being just part of a template generated by the electronic health-records system of a particular provider's care system. And many clinical narrative notes include content copied and pasted from other notes, which can inflate the apparent importance of certain mentions once they are fed into machine-learning algorithms.

Even incorporating standard International Classification of Diseases (ICD) codes, as defined and published by the World Health Organization, into text-processing methods, as Gefen et al. discussed, could be misleading. For a variety of everyday conditions (such as insomnia), such codes do not definitively indicate the existence or nonexistence of a particular condition. Another example of ICD codes yielding potentially misleading results concerns nonalcoholic fatty liver disease (NAFLD), a common yet underdocumented disease often mentioned in notes without any ICD code. Given physicians' subjective and idiosyncratic billing styles, a patient record might include a code for NAFLD that reflects only a biopsy, even when the odds are that the patient's liver is functioning normally. Incorporating codes without associated dates likewise obscures their meaning and reduces their usefulness in text-based association studies. A code in a patient's problem list (a standard record of the most important health problems a patient may be facing) has a very different meaning from the same code appearing in the same patient's encounter-diagnosis record.
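The distinction drawn here, that a code's meaning depends on when and where in the record it appears, can be made concrete. The sketch below is a hypothetical illustration (the `CodedEvent` fields, source labels, and function name are my assumptions, not from the article): it keeps only codes recorded as encounter diagnoses on or after a given date, treating problem-list entries as weaker evidence.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CodedEvent:
    """One ICD code occurrence in a patient record (hypothetical schema)."""
    icd_code: str
    recorded: date   # when the code was entered
    source: str      # e.g., "problem_list" or "encounter_diagnosis"

def encounter_diagnoses(events, code_prefix, since):
    """Keep codes recorded as encounter diagnoses on/after `since`.

    Problem-list entries may carry the same code but a different,
    weaker meaning, so they are excluded here.
    """
    return [e for e in events
            if e.icd_code.startswith(code_prefix)
            and e.source == "encounter_diagnosis"
            and e.recorded >= since]
```

For example, filtering with the prefix "K76" (ICD-10 codes for fatty-liver conditions) would discard an old problem-list entry while keeping a recent encounter diagnosis; a real system would of course need far richer provenance than these two source labels.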

To improve classification accuracy, text-processing methods focused on health care (such as LSA, as Gefen et al. explored) would strongly benefit from much more specific representations than single keywords, using expressions that indicate or negate a condition. For instance, instead of the one-keyword mention "hypertension," as in Gefen et al.'s Figure 1, the methods should use specific non-negated and time-dependent expressions like "Current visit: Hypertension is in excellent control" or, in the context of a cardiac-related condition, as in Gefen et al.'s Figure 2, "No evidence of coronary artery disease."

LSA and other advanced techniques have the potential to truly represent the level of strength in the connections among textual concepts. However, to deliver accurate results that most serve the patient, the features within them must be more descriptive. Such features should thus be based on commonly used multi-keyword expressions and their variations.
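As a rough sketch of the context-aware matching advocated above (the cue list and function name are illustrative assumptions, far short of a real clinical NLP pipeline), a matcher can at minimum distinguish a term that is asserted for the patient from one that is negated or mentioned only as family history:

```python
# Cues indicating a term is mentioned but not asserted for the patient
# (negations, rule-outs, family history). Illustrative, not exhaustive.
NON_ASSERTED_CUES = (
    "no evidence of",
    "ruled out",
    "negative for",
    "denies",
    "family history of",
)

def mention_status(sentence: str, term: str):
    """Classify a term mention as 'asserted' or 'non-asserted'.

    Returns None if the term does not appear. Only the text shortly
    before the mention is scanned; a crude stand-in for real
    negation-scope rules.
    """
    s = sentence.lower()
    idx = s.find(term.lower())
    if idx == -1:
        return None
    window = s[max(0, idx - 40):idx]
    if any(cue in window for cue in NON_ASSERTED_CUES):
        return "non-asserted"
    return "asserted"
```

On the letter's own examples, "No evidence of coronary artery disease" would be classified as non-asserted, while "Hypertension is in excellent control" would be asserted, exactly the distinction a single-keyword feature cannot make.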

Uri Kartoun, Cambridge, MA, USA

Back to Top

References

1. Radin, G. The 801 minicomputer. IBM Journal of Research and Development (1983), 237–246.

Back to Top

Footnotes

Communications welcomes your opinion. To submit a Letter to the Editor, please limit yourself to 500 words or fewer, and send to [email protected].


©2018 ACM  0001-0782/18/10

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from [email protected] or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.

