GSoC #4: Spoilt for Choice

This week I was finally able to get my first INLA-related PR merged in!

Now we turn our attention towards marginalisation. That is, we need some kind of integration or sampling scheme to obtain \(p(\theta \mid y)\), and to marginalise out \(\theta\) for \(p(x \mid y)\). A few different approaches exist, and implementations vary: R-INLA uses adaptive quadrature schemes, while Stan relies on sampling.
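
Whichever scheme we pick, the target is the same marginalisation integral, discretised over a set of hyperparameter points \(\theta_k\) (quadrature nodes, or QMC/MC draws) with weights \(\Delta_k\):

\[
p(x \mid y) = \int p(x \mid \theta, y)\, p(\theta \mid y)\, \mathrm{d}\theta \;\approx\; \sum_k p(x \mid \theta_k, y)\, p(\theta_k \mid y)\, \Delta_k.
\]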

At this stage, however, Rob and I are thinking that it may be best to start by building a higher-level function or method, like `pmx.fit(method='INLA')`, and abstract away the details for now (e.g. just call functions like `sample_prior` and `marginalise_posterior` and avoid the details of the implementation). While we're at it, we will also implement the functionality behind those functions, but will commit to only one method for now. Depending on where the Quasi-Monte Carlo PR sits on the path to a potential merge, we might employ QMC, or instead go down a quadrature route, or simply use some other step-sampling approach similar to QMC which already exists within PyMC. We'll see.
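
To make the shape of that abstraction concrete, here is a minimal sketch of how such a dispatching `fit` could hang together. The helper names `sample_prior` and `marginalise_posterior` come from the plan above, but their signatures, the grid of \(\theta_k\) values, and the toy Gaussian densities are all invented for illustration; this is a plain standalone function, not the actual pymc-extras API.

```python
import numpy as np

def sample_prior(n_points):
    # Hypothetical helper (name from the plan above, behaviour assumed):
    # propose hyperparameter values theta_k. A plain grid stands in for
    # a QMC or quadrature rule.
    return np.linspace(-4.0, 4.0, n_points)

def marginalise_posterior(thetas, x_grid):
    # Hypothetical helper: approximate
    #   p(x | y) ~ sum_k p(x | theta_k, y) p(theta_k | y) Delta_k
    # using toy Gaussian stand-ins for both densities.
    log_w = -0.5 * thetas**2                 # stand-in for log p(theta_k | y)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                             # normalised weights absorb Delta_k
    cond = np.exp(-0.5 * (x_grid[:, None] - thetas[None, :]) ** 2)
    cond /= cond.sum(axis=0, keepdims=True)  # each column: p(x | theta_k, y)
    return cond @ w                          # weighted mixture over theta_k

def fit(method="INLA"):
    # Sketch of the higher-level entry point: dispatch on `method` and
    # hide the integration scheme behind the two helpers above.
    if method != "INLA":
        raise NotImplementedError(f"method={method!r}")
    x_grid = np.linspace(-6.0, 6.0, 201)
    return x_grid, marginalise_posterior(sample_prior(41), x_grid)

x, p_x = fit(method="INLA")  # p_x approximates the marginal p(x | y)
```

The appeal of this layering is that switching from a grid to QMC draws or quadrature nodes would only touch `sample_prior`; the `fit` signature stays the same.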

In any case, the plan is to have a PR which addresses two issues at once: implement marginalisation and implement an INLA method for `pmx.fit`.