GSoC #6: Less Math, More Code

1 minute read

Published:

This week was all about trying to wrangle PyTensor into correctly marginalising out the latent field \(x\), so that pm.sample only has to deal with the hyperparameters and leaves \(x\) untouched. In contrast to prior weeks, there was much less focus on theory; this time it was a matter of getting PyMC and PyTensor to do what I wanted.

Currently, pymc-extras handles marginalisation with the marginalize function. To the best of my understanding, it converts a pm.Model into a graph, collects the appropriate RVs from that graph, manually applies a marginalisation scheme to the selected parameters, and then converts the rewritten graph back into a model. At present it only works for finite discrete RVs, but for our purposes this doesn't make a difference, as we'd need to implement a Laplace marginalisation regardless.
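For context, here's a rough sketch of what that workflow looks like for a finite discrete RV. The toy model and variable names are my own illustration, and the exact import path and signature of marginalize may differ between pymc-extras versions:

```python
import pymc as pm
from pymc_extras import marginalize  # import path may vary by version

with pm.Model() as m:
    # A hyperparameter we want pm.sample to handle...
    p = pm.Beta("p", 2, 2)
    # ...and a discrete indicator we'd like to integrate out
    idx = pm.Bernoulli("idx", p=p)
    mu = pm.math.switch(idx, 2.0, -2.0)
    y = pm.Normal("y", mu=mu, sigma=1.0, observed=[1.9, 2.1, 2.3])

# Rewrites the graph so the logp sums over idx's support;
# pm.sample then only sees the remaining free variables (here, p).
marginal_m = marginalize(m, [idx])

with marginal_m:
    idata = pm.sample()
```

Our Laplace version would slot into the same rewrite machinery, just with a Gaussian approximation in place of the exhaustive sum.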

Essentially, the current plan is to implement a LaplaceMarginalRV class and register a custom logp for it - this log probability would just be the Laplace approximation \(p(y \mid \theta) = \frac{p(y \mid x, \theta)\,p(x \mid \theta)}{p_G(x \mid y, \theta)}\) which we've derived previously, evaluated at the mode \(\hat{x}\) of the Gaussian approximation \(p_G\).
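To make that concrete, here is a very rough sketch of what the registration could look like, mirroring the pattern pymc-extras uses for its finite discrete MarginalRV. The class name is from the plan above, but the body, import paths, and placeholder logp are my own guesses and will depend on the PyMC version:

```python
import pytensor.tensor as pt
from pytensor.compile.builders import OpFromGraph
from pymc.logprob.abstract import MeasurableOp, _logprob


class LaplaceMarginalRV(OpFromGraph, MeasurableOp):
    """Would wrap the subgraph of the latent field x, so that pm.sample
    only ever sees the hyperparameters; x lives inside the Op."""


@_logprob.register(LaplaceMarginalRV)
def laplace_marginal_logprob(op, values, *inputs, **kwargs):
    # The real implementation would return
    #   log p(y | x_hat, theta) + log p(x_hat | theta) - log p_G(x_hat | y, theta),
    # with x_hat the mode of p(x | y, theta) found by an inner optimisation.
    # Placeholder so the sketch runs:
    return pt.constant(0.0)
```

The appeal of this route is that once the logp is registered, the existing marginalize machinery and pm.sample should need little to no modification.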

Right now I’m just bogged down in PyTensor and PyMC’s internals. I’ve run into a bug which may or may not be an issue on the PyTensor end, so I’ll have to raise it with Ricardo.