While Stuart Russell's review article "Unifying Logic and Probability" (July 2015) provided an excellent summary of a number of attempts to unify these two representations, it also gave an incomplete picture of the state of the art. The entire field of statistical relational learning (SRL), which was never mentioned in the article, is devoted to learning logical probabilistic models. Although the article claimed that little is known about computationally feasible algorithms for learning the structure of these models, SRL researchers have developed a wide variety of such algorithms. Likewise, contrary to the article's statement that generic inference for logical probabilistic models remains too slow, many efficient inference algorithms for these models have been developed.
The article mentioned Markov logic networks (MLNs), arguably the leading approach to unifying logic and probability, but did not describe them accurately. The article conflated MLNs with Nilsson's probabilistic logic, yet the two differ in crucial respects. For Nilsson, logical formulas are indivisible constraints; in contrast, MLNs are log-linear models that use first-order formulas as feature templates, with one feature per grounding of the formula. This novel use of first-order formulas allows MLNs to compactly represent most graphical models, something previous probabilistic logics could not do, and it contributes significantly to the popularity of MLNs. And since MLNs subsume first-order Bayesian networks, the article's claim that MLNs have problems with variable numbers of objects and irrelevant objects that Bayes-net approaches avoid is incorrect. MLNs and their variants can handle not only object uncertainty but relation uncertainty as well. Further, the article said MLNs perform inference by applying MCMC to a ground network, but several lifted inference algorithms for them exist.
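The "one feature per grounding" point can be made concrete. In an MLN, a world x is scored as P(x) proportional to exp(sum over formulas i of w_i * n_i(x)), where n_i(x) counts the true groundings of formula i. The following sketch illustrates this semantics on a tiny, hypothetical two-person domain with invented predicates Smokes and Friends and an invented weight; it is an illustration of the standard MLN definition, not any particular MLN system's implementation:

```python
# Minimal illustration of MLN feature counting: one Boolean feature per
# grounding of a weighted first-order formula. Predicates, constants,
# and the weight below are hypothetical examples.
import math
from itertools import product

people = ["Anna", "Bob"]

# One possible world: an assignment of truth values to all ground atoms.
world = {
    ("Smokes", "Anna"): True,
    ("Smokes", "Bob"): False,
    ("Friends", "Anna", "Anna"): False,
    ("Friends", "Anna", "Bob"): True,
    ("Friends", "Bob", "Anna"): True,
    ("Friends", "Bob", "Bob"): False,
}

def implies(a, b):
    return (not a) or b

# Weighted formula template: Friends(x, y) => (Smokes(x) => Smokes(y)),
# with an (invented) weight of 1.5.
w = 1.5

def n_true_groundings(world):
    """Count the groundings of the formula that are true in this world.
    Each (x, y) pair yields one ground feature."""
    count = 0
    for x, y in product(people, repeat=2):
        f = world[("Friends", x, y)]
        sx = world[("Smokes", x)]
        sy = world[("Smokes", y)]
        if implies(f, implies(sx, sy)):
            count += 1
    return count

# Unnormalized log-linear score for this world: exp(w * n(x)).
# The MLN probability divides this by the partition function Z,
# the sum of such scores over all possible worlds.
score = math.exp(w * n_true_groundings(world))
```

Here three of the four groundings are satisfied (the grounding with x = Anna, y = Bob is violated because Anna smokes, Bob does not, and they are friends), so the world's unnormalized score is exp(1.5 * 3). Worlds that violate more groundings become less probable, not impossible, which is exactly how MLNs soften indivisible logical constraints into features.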