While Stuart Russell’s review article "Unifying Logic and Probability" (July 2015) provided an excellent summary of a number of attempts to unify these two representations, it also gave an incomplete picture of the state of the art. The entire field of statistical relational learning (SRL), which was never mentioned in the article, is devoted to learning logical probabilistic models. Although the article said little is known about computationally feasible algorithms for learning the structure of these models, SRL researchers have developed a wide variety of them. Likewise, contrary to the article’s statement that generic inference for logical probabilistic models remains too slow, many efficient algorithms for this purpose have been developed.
The article mentioned Markov logic networks (MLNs), arguably the leading approach to unifying logic and probability, but did not describe them accurately. While the article conflated MLNs with Nilsson’s probabilistic logic, the two are quite different in a number of crucial respects. For Nilsson, logical formulas are indivisible constraints; in contrast, MLNs are log-linear models that use first-order formulas as feature templates, with one feature per grounding of the formula. This novel use of first-order formulas allows MLNs to compactly represent most graphical models, something previous probabilistic logics could not do. This capability contributes significantly to the popularity of MLNs. And since MLNs subsume first-order Bayesian networks, the article’s claim that MLNs have problems with variable numbers of objects and irrelevant objects that Bayes-net approaches avoid is incorrect. MLNs and their variants can handle not only object uncertainty but relation uncertainty as well. Further, the article said MLNs perform inference by applying MCMC to a ground network, but several lifted inference algorithms for them exist.
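The log-linear view of MLNs described above can be made concrete with a minimal sketch. The formula, weight, and constants below are hypothetical, chosen only to illustrate the idea that a first-order formula is a feature template, contributing one weighted feature per grounding, and that a world's unnormalized probability is the exponential of its weighted feature count:

```java
import java.util.Map;

// Minimal sketch of the MLN idea (illustrative, not a real MLN engine).
// One formula with weight w: Smokes(x) => Cancer(x), grounded over a
// domain of constants; each grounding is one binary feature.
public class MlnSketch {
    static boolean implies(boolean p, boolean q) { return !p || q; }

    // Unnormalized weight of a world: exp(w * number of true groundings).
    static double worldWeight(double w, Map<String, Boolean> smokes,
                              Map<String, Boolean> cancer) {
        int trueGroundings = 0;
        for (String x : smokes.keySet()) {
            if (implies(smokes.get(x), cancer.get(x))) trueGroundings++;
        }
        return Math.exp(w * trueGroundings);
    }

    public static void main(String[] args) {
        // World: A smokes without cancer (formula false for A),
        // B neither smokes nor has cancer (formula vacuously true for B).
        Map<String, Boolean> smokes = Map.of("A", true, "B", false);
        Map<String, Boolean> cancer = Map.of("A", false, "B", false);
        System.out.println(worldWeight(1.5, smokes, cancer)); // exp(1.5 * 1)
    }
}
```

Normalizing these weights over all possible worlds yields the MLN's probability distribution; the sketch omits the normalization constant, which is what lifted and approximate inference algorithms compute efficiently.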
Another major strand of research in this area that the article did not portray accurately is probabilistic logic programming. The article said the "… first significant probabilistic programming language was Pfeffer’s IBAL." While IBAL is definitely significant, Poole’s ICL and Sato’s PRISM were developed much earlier and have had a significant impact on the field. ICL and PRISM essentially extend the Prolog programming language by labeling facts with probabilities. They then use these probabilistic facts in the same way probabilistic databases use labeled tuples—to define a probability distribution over possible worlds. They can represent Bayesian networks, as well as cope with infinite possible worlds and an unknown number of objects. Sato received the test-of-time award from the International Conference on Logic Programming in 2015 for his seminal 1995 paper on PRISM.
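The possible-worlds semantics shared by ICL, PRISM, and probabilistic databases can be sketched in a few lines. The facts and query below are hypothetical: two independent probabilistic facts, written in Prolog-like notation as `0.6 :: edge(a,b)` and `0.7 :: edge(b,c)`, and a query `path(a,c)` that holds when both edges are present. The probability of the query is the total probability of the worlds in which it is true:

```java
// Minimal sketch of distribution semantics (illustrative):
// each probabilistic fact is independently in or out of a world,
// and a query's probability sums over the worlds where it holds.
public class PossibleWorlds {
    // pAB, pBC: probabilities of the facts edge(a,b) and edge(b,c).
    public static double queryProb(double pAB, double pBC) {
        double total = 0.0;
        for (int world = 0; world < 4; world++) {   // all 2^2 worlds
            boolean ab = (world & 1) != 0;
            boolean bc = (world & 2) != 0;
            double pWorld = (ab ? pAB : 1 - pAB) * (bc ? pBC : 1 - pBC);
            if (ab && bc) total += pWorld;  // path(a,c) true in this world
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(queryProb(0.6, 0.7)); // ≈ 0.42
    }
}
```

Real systems such as PRISM avoid this exponential enumeration by exploiting the structure of the program, which is what makes inference over large or infinite sets of possible worlds feasible.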
The article concluded with, "… these are early days in the process of unifying logic and probability." On the contrary: with developments like MLNs, probabilistic logic programming, lifted inference, statistical relational learning, and more generally statistical relational AI, we are well on our way to solving this longstanding problem.
Pedro Domingos, Seattle, WA,
Kristian Kersting, Dortmund, Germany,
Raymond Mooney, Austin, TX, and
Jude Shavlik, Madison, WI
Author’s Response:
Domingos et al. take me to task for my cautious claims about the state of the art. Given the history of occasional overstatement in artificial intelligence, caution seems appropriate. Domingos et al. claim "many efficient algorithms" exist for generic inference in these languages, as well as "a wide variety … of computationally feasible algorithms for learning the structure of these models." If true, this is excellent news for computer science, since these problems subsume the Halting Problem, as well as general program synthesis.
I am also accused of ignoring "the entire field of statistical relational learning." The excellent book Introduction to Statistical Relational Learning, compiled by my former student Lise Getoor and the late Ben Taskar (MIT Press, 2007), has 13 chapters on SRL languages and systems. My article referred to 10 of them.
My comment that IBAL was the first probabilistic programming language (PPL) was in no way intended as a slight to Sato’s PRISM and Poole’s ICL, contributions for which I have the highest respect. My article placed these approaches, along with BLOG, within the tradition of languages for defining probability distributions over logical worlds, as did Sato and Poole. For example, the PRISM website (http://rjida.meijo-u.ac.jp/prism/) says, "The program defines a probability distribution over the set of possible Herbrand interpretations." My article clearly distinguished this approach from the PPL tradition based on distributions over execution traces of an arbitrary programming language, due to Koller, McAllester, and Pfeffer. Perhaps this is just a matter of terminology, although it seems worthwhile to point out that execution traces need have no relational structure at all. At the time of writing, the extensive bibliography at http://probabilistic-programming.org/research/ does not include the early papers by Sato and Poole, but a broader notion of PPL might well include them.
Stuart Russell, Berkeley, CA
Give Me ‘Naked’ Braces
I was appalled by A. Frank Ackerman’s letter to the editor "Ban ‘Naked’ Braces!" (Oct. 2015), which recommended programmers adopt a policy of always following the closing brace of each code block (presumably, in Algol-like languages like C and Java) with a comment intended to make it clear exactly which code block the closing brace belongs to. However, assuming one’s code is properly indented and the size and complexity of every function is below some reasonable limit, the best way (in my firmly held opinion) to determine the code block closed by any given brace is simply to follow the indentation up to find its matching brace. Although I readily admit a "reasonable limit" is a subjective matter (and may be strongly influenced by the particular development tools being used), the mere presence of a code block large and complex enough to cause potential confusion should serve as a red flag, indicating the code in question ought to be decomposed into smaller functions. In such cases, it may indeed be expedient to use comments in the manner Ackerman described, if one wishes to defer the decisions of exactly which parts to split out, how to pass parameters to and/or from the resulting functions, or what names to give these functions. However, this is a rather poor (and, one hopes, temporary) reason to add such comments. Every developer should perceive them as being unusual in production-quality code, rather than the normal way to end a code block.
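For concreteness, the practice under debate can be illustrated with a small, hypothetical Java fragment: the inner braces carry the end-of-block comments Ackerman recommended, while the outer structure relies on indentation alone, as Markham prefers:

```java
// Illustrative only: contrasting tagged and "naked" closing braces.
public class BraceStyles {
    public static int sumPositives(int[] xs) {
        int sum = 0;
        for (int x : xs) {
            if (x > 0) {
                sum += x;
            } // if (x > 0)      <- tagged closing brace
        }     // for (int x : xs)
        return sum;
    } // end of sumPositives

    public static void main(String[] args) {
        System.out.println(sumPositives(new int[]{1, -2, 3})); // prints 4
    }
}
```

Note that nothing checks the tags against the code: if the `for` loop were later rewritten, the comment `// for (int x : xs)` could silently become wrong, which is exactly the failure mode the letter describes.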
The main problem with a comment that is supposed to document which code block is ended by a given brace is that it merely documents its author’s belief about the purpose of each closing brace, and therefore opens up the possibility that the comments are inaccurate. In the absence of any support by the development tools, every bug that was possible before adopting such a coding practice is still possible afterward. So, any degree to which subsequent developers trust the comments simply increases their surprise when a bug happens. As a result, the comments actually communicate their author’s skepticism, rather than confidence, as to whether the braces agree with the intended structure of the code.
Broadly speaking, I would not support an attempt to convince coders they can provide a significant benefit to the quality of code by simply adding comments to indicate what they believe the code is doing. Absent a way to validate those beliefs, such attempts are generally misguided, particularly in the case of closing braces, as advocated by Ackerman.
Weston Markham, Pittsburgh, PA
Author’s Response:
In matters of style there are always a variety of opinions. I personally find the practice I recommended useful both in development and in improving readability for maintenance. I have used it for many years and taught it to my students in several different procedural languages. Code has a tendency to move around quite a bit during development, and my proposed tags help keep the logic blocks straight when that happens.
A. Frank Ackerman, Butte, MT
I found an interesting juxtaposition between A. Frank Ackerman’s letter to the editor (Oct. 2015) and George V. Neville-Neil’s Viewpoint "Storming the Cubicle" (Oct. 2015), with Neville-Neil implying Ackerman’s suggestion would not work in practice. Ackerman reasonably recommended developers tag their construct terminators (closing braces, say) with comments to help avoid programming errors and seemed to assume developers would code this way if asked. But Neville-Neil illustrated the well-known fact that uneducated and/or unmotivated developers routinely ignore quality in favor of doing their work as quickly and easily as possible. Fortunately, Ackerman ended by noting that educating beginners about code quality is the one true solution to such problems. I would like to add that education can take many forms, not only for beginners in an educational environment but even for experienced programmers through on-the-job training, continuing education, and, most important, peer mentoring.
Geoffrey A. Lowney, Issaquah, WA