Research and Advances

Likelihood ratio gradient estimation for stochastic systems

Consider a computer system in which a CPU feeds jobs to two input/output (I/O) devices having different speeds. Let θ be the fraction of jobs routed to the first I/O device, so that 1 - θ is the fraction routed to the second. Suppose that α = α(θ) is the steady-state amount of time that a job spends in the system. Given that θ is a decision variable, a designer might wish to minimize α(θ) over θ. Since α(·) is typically difficult to evaluate analytically, Monte Carlo optimization is an attractive methodology. By analogy with deterministic mathematical programming, efficient Monte Carlo gradient estimation is an important ingredient of simulation-based optimization algorithms. As a consequence, gradient estimation has recently attracted considerable attention in the simulation community. Our goal in this article is to describe one efficient method for estimating gradients in the Monte Carlo setting, namely the likelihood ratio method (also known as the efficient score method). This technique has been described previously (in less general settings than those developed in this article) in [6, 16, 18, 21]. An alternative gradient estimation procedure is infinitesimal perturbation analysis; see [11, 12] for an introduction. While it is typically more difficult to apply to a given application than the likelihood ratio technique of interest here, it often turns out to be statistically more accurate. In this article, we first describe two important problems that motivate our study of efficient gradient estimation algorithms. Next, we present the likelihood ratio gradient estimator in a general setting in which the essential idea is most transparent. The section that follows specializes the estimator to discrete-time stochastic processes; we derive likelihood ratio gradient estimators for both time-homogeneous and non-time-homogeneous discrete-time Markov chains. Later, we discuss likelihood ratio gradient estimation in continuous time. As examples of our analysis, we present the gradient estimators for time-homogeneous continuous-time Markov chains; non-time-homogeneous continuous-time Markov chains; semi-Markov processes; and generalized semi-Markov processes. (The analysis throughout these sections assumes that the performance measure defining α(θ) corresponds to a terminating simulation.) Finally, we conclude the article with a brief discussion of the basic issues that arise in extending the likelihood ratio gradient estimator to steady-state performance measures.
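
The abstract describes the likelihood ratio (score function) idea: for α(θ) = E_θ[h(X)], where X has density or mass function f(x; θ), differentiating under the integral gives dα/dθ = E_θ[h(X) · (∂/∂θ) log f(X; θ)], so the gradient can be estimated by averaging the simulation output times the score of the θ-dependent input distribution. The Python sketch below applies this to a drastically simplified version of the routing example: one job per replication (a terminating simulation), a Bernoulli(θ) routing decision, and hypothetical exponential service rates mu1 and mu2. It illustrates the technique only and is not one of the estimators derived in the article.

```python
import numpy as np

def lr_gradient_estimate(theta, mu1=2.0, mu2=1.0, n=100_000, seed=0):
    """Likelihood ratio (score function) estimate of d/dtheta E[service time].

    Simplified illustration: each job goes to I/O device 1 with probability
    theta (service ~ Exp(mu1)) and to device 2 otherwise (service ~ Exp(mu2)).
    Only the routing distribution depends on theta, so the score of the
    routing decision is the only term that appears in the estimator.
    """
    rng = np.random.default_rng(seed)
    to_dev1 = rng.random(n) < theta                      # Bernoulli(theta) routing
    service = np.where(to_dev1,
                       rng.exponential(1.0 / mu1, n),
                       rng.exponential(1.0 / mu2, n))    # observed service times
    # d/dtheta log p(routing; theta): 1/theta for device 1, -1/(1-theta) for device 2
    score = np.where(to_dev1, 1.0 / theta, -1.0 / (1.0 - theta))
    return np.mean(service * score)

# In this toy model E[service] = theta/mu1 + (1-theta)/mu2, so the true
# gradient is 1/mu1 - 1/mu2 = -0.5 for the rates above; the estimate
# should be close to that value.
print(lr_gradient_estimate(theta=0.3))
```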



Research and Advances

Computing Poisson probabilities

We propose an algorithm to compute the set of individual (nonnegligible) Poisson probabilities, rigorously bound truncation error, and guarantee no overflow or underflow. Work and space requirements are modest, both proportional to the square root of the Poisson parameter. Our algorithm appears numerically stable. We know of no other algorithm with all of these features. Our algorithm speeds the generation of truncated Poisson variates and the computation of expected terminal reward in continuous-time, uniformizable Markov chains. More generally, our algorithm can be used to evaluate formulas involving Poisson probabilities.
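
As a rough illustration of the numerical issue the algorithm addresses, the Python sketch below computes Poisson probabilities on a window around the mode by recursing outward from an unnormalized weight of 1 and normalizing at the end, which avoids the overflow and underflow that a naive e^(-λ) λ^k / k! evaluation can incur. This is only a sketch of the mode-centered idea under an assumed window width; the published algorithm additionally chooses the truncation points so as to rigorously bound the error and scales intermediate values to guarantee no overflow or underflow.

```python
import math

def poisson_window(lam, k_lo, k_hi):
    """Poisson(lam) probabilities on [k_lo, k_hi], computed by recursing
    outward from the mode with unnormalized weights and normalizing last.

    Sketch only: it assumes the window is wide enough to hold essentially
    all of the probability mass, so normalizing over the window is accurate.
    """
    mode = int(lam)
    if not (k_lo <= mode <= k_hi):
        raise ValueError("window must contain the mode for this sketch")
    w = {mode: 1.0}                          # unnormalized weight at the mode
    for k in range(mode, k_lo, -1):          # downward: p(k-1) = p(k) * k / lam
        w[k - 1] = w[k] * k / lam
    for k in range(mode, k_hi):              # upward: p(k+1) = p(k) * lam / (k+1)
        w[k + 1] = w[k] * lam / (k + 1)
    total = sum(w.values())                  # ~ 1/p(mode) when the window holds the mass
    return {k: v / total for k, v in w.items()}

lam = 100.0
half = int(6 * math.sqrt(lam))               # window of a few sqrt(lam) around the mean
probs = poisson_window(lam, int(lam) - half, int(lam) + half)
exact_at_mode = math.exp(-lam + 100 * math.log(lam) - math.lgamma(101))
print(probs[100], exact_at_mode)             # the two values should agree closely
```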
