
Psychoco 2026 @ Università di Padova
February 5, 2026

INLA provides a fast, deterministic alternative to MCMC for performing accurate Bayesian inference on a wide class of latent Gaussian models.
Rue et al. (2009)

For \(s=1\!:\!n\), the MVN-SEM equations are \[ \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator*{\argmin}{argmin} \newcommand{\N}{\mathrm{N}} \newcommand{\SN}{\mathrm{SN}} \begin{gathered} y_s = {\nu +\,} {\Lambda} \eta_s + \epsilon_s \\ \eta_s = {\alpha +\,} { B} \eta_s + \zeta_s \end{gathered} \] with assumptions \(\epsilon_s \sim \N(0,\Theta)\), \(\zeta_s \sim \N(0,\Psi)\), and \(\operatorname{Cov}(\epsilon_s,\zeta_s)=0\).
Additionally, the model is Bayesian: the \(m\) free parameters are collected in \(\vartheta \in \mathbb R^m\) and given a prior, \(\vartheta \sim \pi(\vartheta)\).
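As an illustration, the data-generating process above can be simulated directly, using the reduced form \(\eta_s = (I-B)^{-1}(\alpha+\zeta_s)\). All dimensions and parameter values below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 75                                   # sample size (as in the example fit)

# Illustrative dimensions: 3 indicators, 2 latent variables (made up).
Lambda = np.array([[1.0, 0.0],
                   [0.8, 0.0],
                   [0.0, 1.0]])          # factor loadings
B = np.array([[0.0, 0.0],
              [0.5, 0.0]])               # eta_2 regressed on eta_1
nu = np.zeros(3)                         # measurement intercepts
alpha = np.zeros(2)                      # structural intercepts
Theta = 0.2 * np.eye(3)                  # measurement error covariance
Psi = np.diag([1.0, 0.5])                # disturbance covariance

# Solve the structural equation eta = alpha + B eta + zeta for eta.
I_B_inv = np.linalg.inv(np.eye(2) - B)
zeta = rng.multivariate_normal(np.zeros(2), Psi, size=n)
eta = (alpha + zeta) @ I_B_inv.T
eps = rng.multivariate_normal(np.zeros(3), Theta, size=n)
y = nu + eta @ Lambda.T + eps            # observed indicators, n x 3

print(y.shape)   # (75, 3)
```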
R-INLA fits this class of models out of the box. However…
R-INLA fits in 6.3s for \(n=75\); 11.1s for \(n=750\); 125s for \(n=7500\).
✔ Finding posterior mode. [41ms]
✔ Computing the Hessian. [96ms]
✔ VB correction; mean |δ| = 0.053σ. [101ms]
✔ Fitting skew normal to 30/30 marginals. [673ms]
✔ Sampling covariances and defined parameters. [56ms]
✔ Computing ppp and DIC. [172ms]
Model-fitting functions asem(), acfa(), and agrowth(), with priors set via the dp = blavaan::dpriors() option.

Use the Laplace approximation again, this time to approximate an integral: \[ \pi(\vartheta_j \mid y) = \int e^{\log \pi(\vartheta_j, \vartheta_{-j} \mid y)} \, d\vartheta_{-j} \approx K \times \overbrace{\pi(\vartheta_j, \hat\vartheta_{-j} \mid y)}^{\text{height}} \times \overbrace{\left| -H_{-j} \right|^{-1/2 }}^{\text{width}} \] where \(\hat\vartheta_{-j}\) maximises the integrand over \(\vartheta_{-j}\) and \(H_{-j}\) is the Hessian of the log posterior in \(\vartheta_{-j}\) at that conditional mode.
This is expensive to compute: every evaluation at a new \(\vartheta_j\) requires reoptimising over \(\vartheta_{-j}\) (for the height) and factorising the dense \((m-1)\times(m-1)\) Hessian \(H_{-j}\) (for the width).
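A minimal sketch of this construction on a toy two-parameter Gaussian "posterior" (all values invented), where the Laplace approximation is exact; note the inner optimisation that must run for every grid point of \(\vartheta_j\):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Toy 2-parameter "posterior": a correlated bivariate Gaussian.
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
P = np.linalg.inv(Sigma)                  # precision matrix

def log_post(t1, t2):
    v = np.array([t1, t2])
    return -0.5 * v @ P @ v               # unnormalised log posterior

def laplace_marginal(t1):
    # "height": reoptimise over the remaining parameter t2 ...
    opt = minimize_scalar(lambda t2: -log_post(t1, t2))
    # ... "width": curvature of -log_post in t2 at the conditional mode.
    neg_H22 = P[1, 1]
    return np.exp(log_post(t1, opt.x)) * neg_H22 ** -0.5

grid = np.linspace(-3, 3, 61)
approx = np.array([laplace_marginal(t) for t in grid])
approx /= approx.sum() * (grid[1] - grid[0])   # normalise numerically
exact = norm.pdf(grid, scale=np.sqrt(Sigma[0, 0]))
print(np.max(np.abs(approx - exact)))     # small: exact up to quadrature error
```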

Lemma 1 (Conditional Mean Path): no need for reoptimisation
The set \(\mathcal{C}_j = \big\{ \vartheta \in \mathbb{R}^m \mid \vartheta_{-j} = \operatorname{E}_{{\pi_G}}[\vartheta_{-j} \mid \vartheta_j] \big\}\) is a sufficient integration path for the marginal, i.e. \(\pi(\vartheta_j \mid y) \propto \pi(\vartheta \mid y)\big|_{\vartheta\in\mathcal C_j}\). Under Gaussianity, the path is linear: \[\vartheta(t)=\vartheta^*+\Sigma_{\cdot j}\Sigma_{jj}^{-1}t.\]
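A quick numerical check of Lemma 1 in the Gaussian case (a toy 4-parameter Gaussian with a made-up dense covariance): evaluating the joint log density along the conditional-mean path reproduces the exact log marginal up to a constant, with no reoptimisation.

```python
import numpy as np

# 4-parameter Gaussian "posterior" with a dense covariance (invented).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + 4 * np.eye(4)
P = np.linalg.inv(Sigma)
j = 0                                     # component of interest

def log_joint(v):
    return -0.5 * v @ P @ v               # unnormalised, mode at 0

# Conditional-mean path: theta(t) = theta* + Sigma[:, j] / Sigma[j, j] * t,
# with theta* = 0 here.
ts = np.linspace(-3, 3, 7)
path_vals = np.array([log_joint(Sigma[:, j] / Sigma[j, j] * t) for t in ts])

# Exact log marginal of component j: N(0, Sigma[j, j]), up to a constant.
marg_vals = -0.5 * ts ** 2 / Sigma[j, j]
print(np.max(np.abs(path_vals - marg_vals)))  # ~0: path recovers the marginal
```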
Lemma 2 (Efficient Volume Correction): no need to factor the dense Hessian
Let \(L\) be a whitening matrix for \(-H\). For the \(j\)th component along the conditional mean path, \(\log |-H_{-j}(\vartheta(t))| \approx \text{const.} + \gamma_j t\), where
\[ \small \gamma_j = \sum_{k\neq j} L_{\cdot k}^\top \, \frac{d}{dt} \left( \frac{d}{dL_{\cdot k}} \Big[ \nabla_{\vartheta} \log \pi(\vartheta(t) \mid y) \Big] \right) \Bigg|_{t=0} \ . \]
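One way to unpack \(\gamma_j\) (a reading supplied here, not stated on the slide): by Jacobi's formula, the slope of the log-determinant along the path is a trace, and since \(L\) whitens \(-H\) (i.e. \(L^\top(-H)L = I\), so \((-H)^{-1} = LL^\top\)), dropping the \(j\)th whitened direction gives

\[ \frac{d}{dt}\log\left|-H_{-j}(\vartheta(t))\right| = \operatorname{tr}\!\left[(-H_{-j})^{-1}\,\frac{d}{dt}\bigl(-H_{-j}\bigr)\right] \approx \sum_{k\neq j} L_{\cdot k}^\top\, \frac{d}{dt}\bigl(-H(\vartheta(t))\bigr)\, L_{\cdot k}. \]

Because \(\frac{d}{dL_{\cdot k}}\big[\nabla_{\vartheta}\log\pi\big] = H L_{\cdot k}\), each summand matches (up to sign convention) a term in the definition of \(\gamma_j\): a \(t\)-derivative of a Hessian-vector product, computable from a few extra gradient evaluations, so the dense \(H_{-j}\) is never formed or factorised.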
Reconstruct joint posterior samples \(h(\vartheta^{(b)})\) using a Gaussian copula (Nelsen, 2006): \[ \underbrace{z \sim N(0, R)}_{\text{Gaussian correlation}} \xrightarrow{\ \Phi(\cdot) \ } \underbrace{u}_{\ \text{Uniform} \ } \xrightarrow{ \ F_{\text{SN}}^{-1}(\cdot) \ } \underbrace{\vartheta^{(b)}}_{\text{SN marginals}} \xrightarrow{ \ h(\cdot) \ } \text{transformed parameters} \]
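A minimal sketch of this chain, assuming scipy, with a made-up correlation matrix R and made-up skew-normal (shape, location, scale) triples standing in for the fitted ones:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm, skewnorm

rng = np.random.default_rng(42)
B = 5000                                  # number of posterior draws

# Illustrative posterior correlation and skew-normal marginal parameters
# (in practice these come from the Gaussian approximation and the
#  per-marginal skew-normal fits; values here are invented).
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
a, loc, scale = [4.0, -2.0], [0.1, 1.5], [0.3, 0.8]

# Gaussian copula: correlated normals -> uniforms -> skew-normal marginals.
z = multivariate_normal(cov=R).rvs(size=B, random_state=rng)
u = norm.cdf(z)
theta = np.column_stack([skewnorm.ppf(u[:, k], a[k], loc[k], scale[k])
                         for k in range(2)])

# theta now has skew-normal marginals with (approximately) the Gaussian
# dependence structure; transformed parameters h(theta) are computed
# draw by draw from these joint samples.
print(theta.shape)        # (5000, 2)
```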
INLAvaan 0.2.3.9001 ended normally after 82 iterations

  Estimator                                      BAYES
  Optimization method                           NLMINB
  Number of model parameters                        30

  Number of observations                            75

Model Test (User Model):

  Marginal log-likelihood                    -1688.569
  PPP (Chi-square)                               0.000
ind60=~x2 ind60=~x3 dem60=~y2 dem60=~y3
2.219 1.851 0.656 1.021
dem65=~y6 dem65=~y7 dem65=~y8 dem60~ind60
1.163 1.281 1.086 1.310
dem65~ind60 dem65~dem60 y1~~y5 y2~~y4
0.823 0.768 0.224 0.444
y2~~y6 y3~~y7 y8~~y4 y6~~y8
0.311 0.207 0.253 0.305
x1~~x1 x2~~x2 x3~~x3 y1~~y1
0.088 0.123 0.498 1.395
y2~~y2 y3~~y3 y5~~y5 y6~~y6
9.914 5.334 2.298 5.303
y7~~y7 y8~~y8 y4~~y4 ind60~~ind60
3.712 4.009 10.887 0.451
dem60~~dem60 dem65~~dem65
4.680 0.434
Also available: summary(), fitmeasures(), predict(), standardisedsolution(), vcov(), plot().

