Log Probability Differences and Posterior Discrepancies in SNLE/SNPE with Various Sampling Techniques #1295
Replies: 1 comment
-
Hi there! Thanks for opening this!
Hope this helps!
-
Hi,
I’m currently training an SNLE and a (T)SNPE, aiming to obtain samples from the posterior together with their corresponding log probabilities. My main goal is to follow the approach outlined by Spurio Mancini et al. in Bayesian Model Comparison for Simulation-Based Inference (2023), using a learnt harmonic mean estimator to compute the evidence.
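For context, the naive harmonic mean estimator that the learnt variant stabilises can be sketched on a conjugate-Gaussian toy problem where the true evidence is known in closed form. Everything below (the toy model, sample counts) is illustrative and not from the original post:

```python
import math
import random

# Toy conjugate model: theta ~ N(0, 1), x | theta ~ N(theta, 1), observed x_o = 1.
# Posterior is N(0.5, sqrt(0.5)); true evidence Z = N(x_o; 0, sqrt(2)).
x_o = 1.0

def log_norm(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

true_log_z = log_norm(x_o, 0.0, math.sqrt(2.0))

# Naive harmonic mean: Z ~= 1 / E_posterior[1 / L(theta)].
random.seed(0)
post_samples = [random.gauss(0.5, math.sqrt(0.5)) for _ in range(50_000)]
inv_lik = [math.exp(-log_norm(x_o, t, 1.0)) for t in post_samples]
z_hat = 1.0 / (sum(inv_lik) / len(inv_lik))

print(f"true Z = {math.exp(true_log_z):.4f}")
print(f"HM   Z = {z_hat:.4f}")
# Even in this toy, 1/L is heavy-tailed under the posterior and the naive
# estimator has infinite variance -- which is exactly the instability that the
# learnt harmonic mean (fitting an importance density) is designed to fix.
```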
I've noticed that it's possible to calculate the log probability from the trained NPE (using samples generated by the NPE) and from the trained NLE (using samples from MCMC, variational inference (VI), or the NPE-generated samples).
Example Code:
My two questions are:
Log Probability Differences: What is the fundamental difference between the log probabilities calculated from the SNPE and the SNLE for the same set of samples? Is there a specific reason for potential discrepancies? In particular, do the NPE and NLE have to converge to the "same" posterior distribution in order for my log-prob values, computed from SNPE samples under the NLE density estimator, to be considered reliable?
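One way to see the distinction: a converged NPE represents the normalized posterior density directly, while the NLE route gives likelihood times prior, which is off by the (unknown) evidence constant. A pure-Python toy with a conjugate Gaussian model (chosen only because everything is available in closed form) makes the constant offset explicit:

```python
import math

# Toy model: theta ~ N(0, 1), x | theta ~ N(theta, 1), observed x_o = 1.
# Exact posterior: N(0.5, sqrt(0.5)); exact evidence: Z = N(x_o; 0, sqrt(2)).
x_o = 1.0

def log_norm(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

log_z = log_norm(x_o, 0.0, math.sqrt(2.0))  # log evidence

for theta in [-1.0, 0.0, 0.5, 1.0, 2.0]:
    # What an ideal NPE would report: the normalized posterior log-density.
    logp_npe_like = log_norm(theta, 0.5, math.sqrt(0.5))
    # What the NLE route reports: log-likelihood + log-prior (unnormalized).
    logp_nle_like = log_norm(x_o, theta, 1.0) + log_norm(theta, 0.0, 1.0)
    # The two differ by exactly log Z, for every theta.
    print(theta, round(logp_nle_like - logp_npe_like, 6), round(log_z, 6))
```

So even when both estimators have converged to the same posterior, a constant offset of log Z between the two log-prob sets is expected, not a sign of error; theta-dependent discrepancies, on the other hand, do indicate that at least one estimator is off.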
Sampler Variations: When I use different sampling techniques (rejection, MCMC, VI), I observe varying posterior distributions, with VI producing a notably narrower posterior. Could you explain why this discrepancy occurs, especially for VI, and how it might impact the posterior distribution or the evidence calculation in Bayesian model comparison?
Any insights or explanations regarding these observations would be greatly appreciated!
Thanks in advance for your help!
Posterior using VI sampling
Posterior using Rejection sampling