GSoC #10: All Good Things…

This will be my final blog post in my GSoC series, which can be found here.

The INLA roadmap, which outlines the current state of the project, can be found here. The goal of this project was to work towards implementing integrated nested Laplace approximations (INLA) as a feature in the PyMC library, as outlined in the roadmap. Sparse features were considered out of scope.

I primarily focused on building up the main INLA framework, in particular the Laplace approximation step (rootfinding, custom logp, etc.), and then hooking it up to the pmx.fit function.
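For readers unfamiliar with the idea, the Laplace approximation step can be sketched as follows. This is a minimal illustration using SciPy on a toy density, not the actual PyMC/PyTensor implementation (which builds the minimisation into the model graph); the function name `laplace_approx` is hypothetical.

```python
# A minimal sketch of a Laplace approximation: find the posterior mode
# (the "rootfinding" step), then fit a Gaussian using the local curvature.
import numpy as np
from scipy.optimize import minimize


def laplace_approx(neg_logp, x0):
    """Approximate a density by a Gaussian centred at its mode.

    Returns the mode and a covariance estimate (the inverse Hessian of
    the negative log-density at the mode, as estimated by BFGS).
    """
    res = minimize(neg_logp, x0, method="BFGS")  # mode-finding step
    return res.x, res.hess_inv


# Toy check: a standard normal has neg-logp 0.5*x^2 (up to a constant),
# so the mode should be ~0 and the covariance ~1.
mode, cov = laplace_approx(lambda x: 0.5 * x[0] ** 2, np.array([1.0]))
```

In the actual project this inner minimisation has to run inside the sampler, which is where the cost discussed below comes from.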

There were a few features that I wasn't able to implement, in particular inference over the latent field, as this turned out to require a nontrivial bespoke method that seems to rely on sparse optimisations. Additionally, running a minimisation at every step of sampling proved to carry a significant computational expense, which I was not able to resolve in the time available. One option would have been to forgo the PyTensor-native optimize.minimize function in favour of a bespoke method; however, that would have limited flexibility, which is why I opted to keep the former.

In the end, most of my work was grouped into a few large PRs rather than many smaller ones, simply based on how the scope of the issues turned out. Additionally, much of my day-to-day work involved experimenting in Jupyter notebooks rather than writing large amounts of new code, so the final result is a crystallisation of much of that experimentation.

In its current state, INLA works end-to-end for producing posteriors over the hyperparameters; however, it is too slow to use in practice, as it bottlenecks at the minimisation step. To reflect this, we have updated the INLA roadmap, closing out the issues I solved and opening new ones to address the next steps. A full list of my contributions and the resources I used is available below.
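For context, the hyperparameter posterior produced here is the standard INLA construction (a sketch in the usual notation, with latent field $x$, hyperparameters $\theta$, and data $y$, not a transcription of the project's code):

```latex
% \tilde p_G(x \mid \theta, y) is the Gaussian (Laplace) approximation to
% p(x \mid \theta, y), matched to the mode and curvature found by the
% minimisation step.
\tilde p(\theta \mid y)
  \propto \left.
    \frac{p(x, \theta, y)}{\tilde p_G(x \mid \theta, y)}
  \right|_{x = \hat{x}(\theta)}
```

Here $\hat{x}(\theta)$ is the mode of the latent field for a given $\theta$, which is why a fresh minimisation must be run for every value of $\theta$ the sampler visits, hence the bottleneck.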

Overall, this project was an excellent opportunity to learn the theory behind the Laplace approximation (which helped me understand its extension, Singular Learning Theory, an area I'm interested in pursuing for my Honours thesis and potentially even a PhD), as well as to network and receive great career advice from my mentors. My single biggest challenge during this project was time management, as I was juggling GSoC with university and an internship, alongside two conferences in the US that each involved a week of travel.

Contributions

Issues Closed

Issues Raised

PRs

Resources Used

Blog posts

Papers

(Non-PyMC) Codebases