What is it about?

In this article, we present a novel method to obtain both improved estimates and reliable stopping rules for stochastic optimization algorithms such as the Monte Carlo EM (MCEM) algorithm. By characterizing a stationary point, θ*, of the algorithm as the solution to a fixed point equation, we provide a parameter estimation procedure by solving for the fixed point of the update mapping. We investigate various ways to model the update mapping, including the use of a local linear (regression) smoother. This simple approach allows increased stability in estimating the value of θ* as well as providing a natural quantification of the estimation uncertainty. These uncertainty measures can then also be used to construct convergence criteria that reflect the inherent randomness in the algorithm. We establish convergence properties of our modified estimator. In contrast to existing literature, our convergence results do not require the Monte Carlo sample size to go to infinity. Simulation studies are provided to illustrate the improved stability and reliability of our estimator.
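The fixed-point idea can be illustrated with a small sketch. Below, a toy stochastic algorithm produces noisy iterates θ_{t+1} = M(θ_t) + noise (a stand-in for MCEM updates; the mapping M, its fixed point of 2.0, and the noise level are all hypothetical choices for illustration). A kernel-weighted (local linear) regression of θ_{t+1} on θ_t models the update mapping, and the fixed point of the fitted line a + bθ = θ is solved as θ* = a/(1 − b), re-centering the smoother at the current estimate. This is a minimal sketch of the general idea, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical update mapping with true fixed point theta* = 2.0,
# observed with Monte Carlo noise (a stand-in for MCEM iterates).
def noisy_update(theta):
    return 0.5 * theta + 1.0 + rng.normal(scale=0.05)

# Run the stochastic algorithm and record (theta_t, theta_{t+1}) pairs.
thetas = [0.0]
for _ in range(200):
    thetas.append(noisy_update(thetas[-1]))
x = np.array(thetas[:-1])  # theta_t
y = np.array(thetas[1:])   # theta_{t+1}

def local_linear_fixed_point(x, y, center, bandwidth=0.5):
    """Fit a kernel-weighted linear model y ~ a + b*x around `center`
    and solve a + b*theta = theta for the fixed point theta*."""
    w = np.exp(-0.5 * ((x - center) / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(x), x])
    XtW = X.T * w  # apply kernel weights
    a, b = np.linalg.solve(XtW @ X, XtW @ y)
    return a / (1.0 - b)

# Re-center the smoother at the current estimate until it stabilizes.
est = x[-1]
for _ in range(20):
    est = local_linear_fixed_point(x, y, est)

print(est)  # close to the true fixed point 2.0
```

The residual scatter of the fitted line around the iterates is what would drive the uncertainty quantification and a randomness-aware stopping rule in the article's approach; the sketch above only recovers the point estimate.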

Why is it important?

It provides a new guideline for developing convergence criteria for stochastic optimization algorithms, such as the Monte Carlo EM algorithm, that does not require the Monte Carlo sample size to go to infinity. It also provides a measure of estimation uncertainty.

Read the Original

This page is a summary of: Improved Estimation and Uncertainty Quantification Using Monte Carlo-Based Optimization Algorithms, Journal of Computational and Graphical Statistics, July 2015, Taylor & Francis, DOI: 10.1080/10618600.2014.927361.
