
Technical Perspective: Markov Meets Bayes


Probabilistic sequence models have become so pervasive in science and engineering that we often act as if the sequence modeling problem is basically solved. On the Internet, such models power the large-scale systems for statistical machine translation, such as Google Translate, that receive millions of queries each day. These systems use a probabilistic sequence model—a language model—to select among candidate translations in the target language. For example, consider translating the English phrases "to show interest" and "to charge interest" into Portuguese. In the first phrase, the word "interest" should be translated as "interesse" (curiosity) while in the second it should be translated as "juro" (return on lending). In each case, a Portuguese language model would assign higher probability to the correct translation based on the context established by the preceding verb. Today, such language models are derived from the statistics of large corpora with as many as 10¹⁰ sentences of text. Moreover, in practical systems for machine translation or speech recognition, they may assign probabilities to sequences that contain as many as 10⁶ distinct words.
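
To make the translation example concrete, the following is a minimal sketch of how a bigram language model would prefer one candidate over the other. The Portuguese verbs and all counts are invented for illustration; a real system would estimate such statistics from a large corpus.

```python
# Toy illustration: a bigram language model choosing between candidate
# translations.  The verbs and counts below are invented, not corpus data.
from collections import defaultdict

bigram_counts = {
    ("mostrar", "interesse"): 120,  # "show interest" (curiosity)
    ("mostrar", "juro"): 1,
    ("cobrar", "juro"): 80,         # "charge interest" (return on lending)
    ("cobrar", "interesse"): 2,
}

context_totals = defaultdict(int)
for (context, _), count in bigram_counts.items():
    context_totals[context] += count

def prob(word, context):
    """Maximum-likelihood estimate of P(word | context)."""
    return bigram_counts.get((context, word), 0) / context_totals[context]

# The model assigns higher probability to the correct sense in each context.
print(prob("interesse", "mostrar"), prob("juro", "mostrar"))  # prefers "interesse"
print(prob("juro", "cobrar"), prob("interesse", "cobrar"))    # prefers "juro"
```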

The history of probabilistic sequence models, too long and complex to review here, dates back to Markov at the turn of the last century. To a first approximation, today’s practice is to memorize a subset of cw patterns occurring in the training data, where w is a token (for example, Unicode code point, DNA base, word) and c is a context of preceding tokens. The statistics of these patterns are then used to estimate the probability of a token appearing after a given sequence of tokens. For modelers, the critical questions are how to select which patterns to store and, from these patterns, how to estimate the probability that a token w follows a context c when w has never or rarely been seen in that context. Though informed by decades of research, current practices are still something of a black art. They often reflect the particular data structures and algorithms used to create sequence models and, for large-scale systems, the need for distributed implementations. We practitioners seem for the most part resigned to these complications, although naturally we wonder if there are more elegant ways to manage the balance between memorizing old patterns and generalizing to new ones.
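
A minimal sketch of this memorize-and-smooth recipe appears below: (context, token) patterns up to a fixed order are stored as counts, and the probability of a token in a rare or unseen context is obtained by backing off to shorter contexts. The class name, the fixed order, and the absolute-discounting scheme are illustrative choices, not any particular published method.

```python
# Sketch of a fixed-order n-gram model with recursive backoff smoothing.
# The discount value and order are arbitrary illustrative settings.
from collections import defaultdict

class BackoffNGramModel:
    """Stores counts of (context, token) patterns up to a fixed order and
    estimates P(token | context) by backing off to shorter contexts."""

    def __init__(self, order=3, discount=0.5):
        self.order = order
        self.discount = discount
        self.counts = defaultdict(lambda: defaultdict(int))  # context -> token -> count
        self.vocab = set()

    def train(self, tokens):
        for i, w in enumerate(tokens):
            self.vocab.add(w)
            for k in range(self.order):                 # contexts of length 0 .. order-1
                context = tuple(tokens[max(0, i - k):i])
                self.counts[context][w] += 1

    def prob(self, w, context=()):
        """P(w | context) with absolute discounting and recursive backoff."""
        context = tuple(context)[-(self.order - 1):] if context else ()
        seen = self.counts.get(context, {})
        total = sum(seen.values())
        if not context:                                 # unigram base case
            if total == 0:
                return 1.0 / max(len(self.vocab), 1)
            return seen.get(w, 0) / total
        if total == 0:                                  # context never observed
            return self.prob(w, context[1:])
        discounted = max(seen.get(w, 0) - self.discount, 0) / total
        reserved = self.discount * len(seen) / total    # mass shifted to shorter context
        return discounted + reserved * self.prob(w, context[1:])

# Example: train on a tiny token sequence and query a context.
model = BackoffNGramModel()
model.train("to show interest in the idea and to charge interest".split())
print(model.prob("interest", ("to", "charge")))
print(model.prob("interest", ("to", "show")))
```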

The Sequence Memoizer (SM) detailed in the following paper addresses this balance in a novel way by extending several previous ideas: suffix trees for storing prediction contexts of unbounded length [2]; factorable priors for integrating contexts of different lengths into a final probability estimate [5]; and nonparametric Bayesian methods for incorporating the uncertainty over which contexts best predict the next token [4]. The authors had the critical insight to combine a particular form of hierarchical Pitman-Yor process with standard techniques for linear-time, context-tree creation. This combination yields the first linear-time, unbounded-context sequence model based on principled Bayesian techniques.
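
For intuition, here is a hedged sketch of the hierarchical Pitman-Yor predictive recursion that such a model builds on. The Chinese-restaurant seating statistics (customer and table counts per context) are taken as given here, whereas the SM infers them; the function and parameter names are mine, and the discount, concentration, and uniform base distribution are placeholder choices rather than the paper's settings.

```python
def hpy_prob(w, context, nodes, discount=0.8, concentration=0.0, vocab_size=256):
    """Predictive probability P(w | context) in a hierarchical Pitman-Yor model.

    `nodes` maps a context tuple to its Chinese-restaurant statistics:
    {'customers': {token: count}, 'tables': {token: count}}.
    Each context backs off to its longest proper suffix; the empty
    context backs off to a uniform base distribution over the vocabulary.
    """
    if context is None:                        # past the root: uniform base
        return 1.0 / vocab_size
    context = tuple(context)
    parent = context[1:] if context else None  # drop the most distant token
    node = nodes.get(context)
    if node is None:                           # context never observed: defer to suffix
        return hpy_prob(w, parent, nodes, discount, concentration, vocab_size)
    n_w = node['customers'].get(w, 0)
    t_w = node['tables'].get(w, 0)
    n_total = sum(node['customers'].values())
    t_total = sum(node['tables'].values())
    shorter = hpy_prob(w, parent, nodes, discount, concentration, vocab_size)
    return (max(n_w - discount * t_w, 0.0)
            + (concentration + discount * t_total) * shorter) / (concentration + n_total)

# Toy statistics: "interest" seen three times after "charge" and overall.
nodes = {("charge",): {'customers': {'interest': 3}, 'tables': {'interest': 1}},
         (): {'customers': {'interest': 3}, 'tables': {'interest': 1}}}
print(hpy_prob("interest", ("to", "charge"), nodes))
```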

How well does it work? On standard test collections using standard metrics, the SM matches or outperforms all previous methods that store the same context statistics. Why does it work so well? One likely reason is that the hierarchical Pitman-Yor process is better at modeling the power law statistics of natural sequence data: in many types of sequences, patterns that are individually rare still account for a substantial fraction of all observed patterns. By modeling these statistics, the SM is better able to integrate the predictions from contexts of different lengths.
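
The power-law point is easy to check numerically, as in the short sketch below; the Zipf exponent, vocabulary size, and rarity threshold are arbitrary illustrative choices, not measurements from any corpus.

```python
# Numerical illustration: under a Zipf-like distribution, types that are
# individually rare still carry a non-trivial share of the total mass.
V = 100_000                                   # vocabulary size (illustrative)
weights = [rank ** -1.1 for rank in range(1, V + 1)]
total = sum(weights)
probs = [w / total for w in weights]

rare = [p for p in probs if p < 1e-4]         # "individually rare" types
print(f"{len(rare)} of {V} types are rare, "
      f"yet together they hold {sum(rare):.0%} of the probability mass")
```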

The SM is an important advance in probabilistic sequence modeling that finally makes nonparametric Bayesian methods practical for large data sets. Is sequence modeling therefore solved? Hardly. First, there remains a bit of black art in the SM, whose probability estimates cannot be computed in closed form but must instead be approximated with stochastic (Monte Carlo) methods for inference. Not only do these methods require some empirical tuning, but they also create a trade-off between predictive power and computational complexity. We lack any theoretical understanding of this trade-off in SMs when stochastic inference is taken into account. Finally, for many applications, long contiguous patterns of tokens seem a very inefficient representation of context. For instance, natural language text often has a hierarchical structure, where relatively distant words (say, the main verb and the head noun of the direct object) may be strongly correlated while the words in between are less correlated to either [1]; similarly, DNA sequences can exhibit strong correlations between bases in two consecutive exons but much weaker correlations with the bases in the intervening intron. Furthermore, in certain contexts it seems a reasonable approximation to interchange sets of tokens or token sequences that belong to particular syntactic or semantic classes [3]. An interesting question is whether more refined nonparametric Bayesian models can capture these statistical properties.


    1. Chelba, C. A structured language model. In Proceedings of the 35th Annual Meeting of the ACL (Morristown, NJ, 1997), Assoc. Computational Linguistics, 498–500.

    2. Cleary, J.G., and Teahan, W.J. Unbounded length contexts for PPM. The Computer J. 40 (1997), 67–75.

    3. Lin, D. An information-theoretic definition of similarity. In Proceedings of the 15th International Conference on Machine Learning (1998), 296–304; http://dblp.uni-trier.de/.

    4. Mochihashi, D., and Sumita, E. The infinite Markov model. Advances in Neural Information Processing Systems 20 (2008), 1017–1024.

    5. Willems, F., Shtarkov, Y., and Tjalkens, T. The context-tree weighting method: Basic properties. IEEE Trans. Info. Theory 41, 3 (May 1995), 653–664.

    DOI: http://doi.acm.org/10.1145/1897816.1897841
