Department of Computer Science, University of Toronto

9/25/1997

3:45pm-4:45pm

When fitting a stochastic generative model to data it is normal to compute the posterior probability distribution that is induced over configurations of the hidden variables by each observation. For simple models such as mixtures of Gaussians and factor analysis this posterior distribution can be computed exactly. For more interesting generative models composed of multiple layers of non-linear units it is intractable to compute the posterior distribution. I shall describe various ways of approximating the posterior distribution and show that simple learning rules can improve the generative model even when the approximations are poor.
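As the abstract notes, the exact posterior is tractable for simple models such as mixtures of Gaussians. A minimal sketch of that exact computation (hypothetical parameters, not from the talk) applies Bayes' rule to get the responsibility of each mixture component for an observation:

```python
# Exact posterior over the hidden component z for a 1-D Gaussian mixture:
#   p(z = k | x) ∝ pi_k * N(x | mu_k, sigma_k^2)
# All parameters below are illustrative, not from the talk.
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior(x, weights, means, sigmas):
    """Exact posterior p(z = k | x) over mixture components for one observation."""
    joint = [w * gaussian_pdf(x, m, s) for w, m, s in zip(weights, means, sigmas)]
    total = sum(joint)
    return [j / total for j in joint]

# Two-component mixture: the responsibilities always sum to 1.
resp = posterior(x=0.9, weights=[0.5, 0.5], means=[0.0, 1.0], sigmas=[1.0, 1.0])
print(resp)
```

For multi-layer non-linear generative models this normalizing sum runs over exponentially many hidden configurations, which is why the talk turns to approximations of the posterior instead.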

*Refreshments will be served immediately before the talk at 3:30pm. Hosted by Michael Mozer.*

Department of Computer Science

University of Colorado Boulder

Boulder, CO 80309-0430 USA

webmaster@cs.colorado.edu