MetaPPL: Inference Algorithms as First-Class Generative Models
Probabilistic programming languages typically distinguish between code that represents generative models and code that implements inference algorithms. However, this distinction is often conceptually unnecessary: many inference algorithms (e.g., running a Markov chain or a particle filter) are naturally understood as probabilistic processes that themselves define generative models. In fact, several state-of-the-art modeling and inference techniques use meta-inference, i.e., inference about the internal choices made by these inference algorithms, to improve the quality of proposals and variational approximations.
In this talk, we present MetaPPL, a probabilistic programming language in which inference algorithms are first-class generative models. This enables users to condition inference algorithms on their observed results, estimate marginal densities of inference algorithms, and invoke inference algorithms from within bespoke Monte Carlo proposals and variational families, while automating the calculations necessary for inference. MetaPPL is based on three technical ideas:
(1) a technique for automatically deriving importance weights, MH acceptance probabilities, and ELBO gradient estimates when proposals and variational families contain auxiliary variables;
(2) a suite of inference algorithm building blocks implemented in the same language as models; and
(3) custom "meta-inference" logic for each inference algorithm, which proposes plausible executions of the algorithm that could have produced an observed inference result.
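The first of these ideas can be illustrated outside MetaPPL. When a proposal samples an auxiliary variable u alongside its output x, the standard auxiliary-variable importance weight is p(x) r(u | x) / q(x, u), where r is a meta-inference distribution over the proposal's internal choices. The following plain-Python sketch (a toy Gaussian example; all names here are our own, not MetaPPL's API) uses the exact meta-inference posterior, so the weights recover p(x) / q(x) and self-normalized importance sampling estimates E_p[x^2] = 1 for a standard-normal target:

```python
import math
import random

def logpdf_normal(x, mu, sigma):
    # Log density of a normal distribution (helper for this toy example).
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def propose():
    # Proposal with internal randomness: sample auxiliary u, then output x.
    u = random.gauss(0, 1)   # auxiliary variable (the proposal's internal choice)
    x = random.gauss(u, 1)   # proposed value; marginally x ~ N(0, sqrt(2))
    return x, u

def log_weight(x, u):
    # Auxiliary-variable importance weight: log p(x) + log r(u|x) - log q(x, u).
    log_p = logpdf_normal(x, 0, 1)                                # target N(0, 1)
    log_q = logpdf_normal(u, 0, 1) + logpdf_normal(x, u, 1)       # joint proposal q(x, u)
    log_r = logpdf_normal(u, x / 2, math.sqrt(0.5))               # exact meta-inference r(u | x)
    return log_p + log_r - log_q

random.seed(0)
samples = [propose() for _ in range(100_000)]
lws = [log_weight(x, u) for x, u in samples]
m = max(lws)                                  # stabilize before exponentiating
ws = [math.exp(lw - m) for lw in lws]
est = sum(w * x * x for w, (x, _) in zip(ws, samples)) / sum(ws)
print(est)  # self-normalized estimate of E_p[x^2]; close to 1.0
```

With approximate (rather than exact) meta-inference, the same weight formula remains a valid importance weight in an extended space; this is the calculation that MetaPPL automates.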
Although our implementation of these new features incurs some runtime overhead, we demonstrate that they make it possible to express within a PPL (and with automation) several state-of-the-art modeling and inference techniques.
Tue 21 Jan, 14:00 - 14:30