Importance sampling offers a highly general method for performing inference over arbitrary distributions. Its validity on $\mathbb{R}^n$ is intuitive, but when a model makes discrete structural choices, each carrying its own set of continuous parameters, we may question whether the importance-weighted approximation across these different measures remains valid.
Let's define a generative model under which we sample a discrete structure $k \sim p(\cdot)$ and a corresponding set of continuous parameters $\theta \sim p(\cdot \mid k)$ of length $N_k$ (so $\theta$ lives in a space $\Theta_k$ whose dimension depends on $k$), under which we generate an observation $D \sim p(\cdot \mid k, \theta)$. The full joint distribution is then
$$
p(k, \theta, D) = p(k)\, p(\theta \mid k)\, p(D \mid k, \theta)
$$
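To make the setup concrete, here is a minimal sketch of one such model in Python. The specific distributional choices (a shifted-Poisson prior over $k$, $N_k = k$ Gaussian parameters, and a Gaussian likelihood) are illustrative assumptions, not part of the argument; the point is only that the length of $\theta$ depends on $k$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_generative_model():
    """Draw (k, theta, D) from the joint p(k) p(theta | k) p(D | k, theta).

    All distributional choices here are illustrative assumptions.
    """
    k = 1 + rng.poisson(2.0)              # discrete structure, k ~ p(.)
    theta = rng.normal(0.0, 1.0, size=k)  # N_k = k continuous parameters, theta ~ p(. | k)
    D = rng.normal(theta.sum(), 1.0)      # observation, D ~ p(. | k, theta)
    return k, theta, D
```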
We may wish to recover the posterior $p(k, \theta \mid D)$, or to compute expectations of test functions of the latent parameters, but because of the intractability of the marginal
$$
p(D) = \sum_k p(k) \int_{\Theta_k} p(\theta \mid k)\, p(D \mid k, \theta)\, d\theta
$$
it's not feasible to do so directly. We might then use importance sampling to form an estimate of the marginal. We sample a set of $N$ latent parameters $\{(k_i, \theta_i)\}_{i=1}^N \sim q(\cdot, \cdot)$ from some proposal distribution $q$, and assign each a corresponding weight
$$
w(k, \theta) = \frac{p(k)\, p(\theta \mid k)\, p(D \mid k, \theta)}{q(k, \theta)}
$$
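Continuing the hypothetical sketch above, the weighting step might look as follows, with a proposal that keeps the prior over $k$ but broadens the distribution over $\theta$ (again an illustrative choice). Working in log space avoids underflow when the densities are small.

```python
from scipy.stats import norm, poisson

def sample_proposal():
    """Draw (k, theta) ~ q(., .); same prior over k, broader over theta."""
    k = 1 + rng.poisson(2.0)
    theta = rng.normal(0.0, 2.0, size=k)
    return k, theta

def log_weight(k, theta, D):
    """log w(k, theta) = log[p(k) p(theta | k) p(D | k, theta)] - log q(k, theta)."""
    log_p = (poisson.logpmf(k - 1, 2.0)
             + norm.logpdf(theta, 0.0, 1.0).sum()
             + norm.logpdf(D, theta.sum(), 1.0))
    log_q = (poisson.logpmf(k - 1, 2.0)
             + norm.logpdf(theta, 0.0, 2.0).sum())
    return log_p - log_q
```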
Theorem 1. The expected value of the importance weights is the marginal likelihood.
Proof. It suffices to show
$$
\mathbb{E}_q[w(k, \theta)] = p(D)
$$
$$
\begin{aligned}
\sum_k \int_{\Theta_k} w(k, \theta)\, q(k, \theta)\, d\theta
&= \sum_k \int_{\Theta_k} \frac{p(k)\, p(\theta \mid k)\, p(D \mid k, \theta)}{q(k, \theta)}\, q(k, \theta)\, d\theta \\
&= \sum_k p(k) \int_{\Theta_k} p(\theta \mid k)\, p(D \mid k, \theta)\, d\theta \\
&= p(D)
\end{aligned}
$$
□
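Theorem 1 licenses the familiar Monte Carlo recipe: draw from the proposal and average the weights. A sketch continuing the hypothetical model and proposal above (the shift by the maximum is just the usual log-sum-exp trick for numerical stability):

```python
def estimate_marginal_likelihood(D, n_samples=10_000):
    """p(D) ~= (1/N) sum_i w(k_i, theta_i), with (k_i, theta_i) ~ q."""
    log_ws = np.array([log_weight(*sample_proposal(), D) for _ in range(n_samples)])
    m = log_ws.max()
    return np.exp(m) * np.exp(log_ws - m).mean()  # mean of weights, computed stably
```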
Given a consistent estimator of the marginal likelihood, the weights also allow us to form a consistent estimator of the expectation of any test function $f$ under the posterior. To do this, we introduce a quantity called the self-normalized importance sampling estimator,
$$
\tilde{\mu}_f = \frac{\mathbb{E}_q[w(k, \theta)\, f(k, \theta)]}{\mathbb{E}_q[w(k, \theta)]}
$$
In practice each expectation is replaced by its Monte Carlo average over the $N$ proposal samples; by the law of large numbers both averages converge, so consistency of the ratio follows once we establish the identity below.
Theorem 2. The self-normalized importance sampling estimator is a consistent estimator of the expected value of the test function under the posterior.
Proof. By Theorem 1 the denominator is $p(D)$, so it suffices to show
$$
\mathbb{E}_q[w(k, \theta)\, f(k, \theta)] = p(D) \cdot \mathbb{E}_{(k, \theta) \sim p(\cdot, \cdot \mid D)}[f(k, \theta)]
$$
$$
\begin{aligned}
\sum_k \int_{\Theta_k} w(k, \theta)\, q(k, \theta)\, f(k, \theta)\, d\theta
&= \sum_k \int_{\Theta_k} p(k)\, p(\theta \mid k)\, p(D \mid k, \theta)\, f(k, \theta)\, d\theta \\
&= \sum_k \int_{\Theta_k} p(k, \theta, D)\, f(k, \theta)\, d\theta \\
&= p(D) \sum_k \int_{\Theta_k} p(k, \theta \mid D)\, f(k, \theta)\, d\theta \\
&= p(D) \cdot \mathbb{E}_{(k, \theta) \sim p(\cdot, \cdot \mid D)}[f(k, \theta)]
\end{aligned}
$$
□
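In code, the estimator is just a weighted average, with the same proposal draws serving both the numerator and the denominator; any common scale in the weights cancels in the ratio. A sketch, again continuing the hypothetical example, which could be used for instance to estimate the posterior mean of the structure size $k$:

```python
def self_normalized_estimate(f, D, n_samples=10_000):
    """E[f(k, theta) | D] ~= sum_i w_i f(k_i, theta_i) / sum_i w_i."""
    draws = [sample_proposal() for _ in range(n_samples)]
    log_ws = np.array([log_weight(k, theta, D) for k, theta in draws])
    fs = np.array([f(k, theta) for k, theta in draws])
    ws = np.exp(log_ws - log_ws.max())  # common factor cancels in the ratio
    return (ws * fs).sum() / ws.sum()

# e.g. posterior mean of the structure size:
# self_normalized_estimate(lambda k, theta: k, D=1.3)
```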
We were able to avoid a measure-theoretic analysis by exploiting the factorization of the distribution: conditioned on a discrete structure choice, the continuous parameters live in a fixed space $\Theta_k$, so every expectation decomposes into a sum over $k$ of ordinary integrals, and all we need to consider is the expectations of the relevant distributions. We can be confident that, given enough samples, importance sampling will recover expectations under the true posterior.