This code provides a reference implementation of the Smoothing Variational Objectives (SVO) algorithms described in the following publications:
- Variational Objectives for Markovian Dynamics with Backwards Simulation.
  Moretti, A.\*, Wang, Z.\*, Wu, L.\*, Drori, I., Pe'er, I.
  European Conference on Artificial Intelligence, 2020.
- Particle Smoothing Variational Objectives.
  Moretti, A.\*, Wang, Z.\*, Wu, L.\*, Drori, I., Pe'er, I.
  arXiv preprint arXiv:1909.09734, 2019.
- Smoothing Nonlinear Variational Objectives with Sequential Monte Carlo.
  Moretti, A.\*, Wang, Z.\*, Wu, L., Pe'er, I.
  ICLR Workshop on Deep Generative Models for Highly Structured Data, 2019.
SVO is implemented as an abstract class that reduces to two related variational inference methods for time series. For reference, the AESMC and IWAE algorithms from the following publications are also implemented:

- Auto-Encoding Sequential Monte Carlo.
  Le, T., Igl, M., Rainforth, T., Jin, T., Wood, F.
  International Conference on Learning Representations, 2018.
- Importance Weighted Autoencoders.
  Burda, Y., Grosse, R., Salakhutdinov, R.
  International Conference on Learning Representations, 2016.
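Conceptually, the shared structure can be sketched as an abstract objective whose subclasses differ only in how per-sample importance weights are combined into a bound. The class and function names below are illustrative assumptions, not the repository's actual API:

```python
import numpy as np

def logmeanexp(x, axis=-1):
    # numerically stable log of the mean of exp(x) along an axis
    m = np.max(x, axis=axis, keepdims=True)
    return np.squeeze(m, axis) + np.log(np.mean(np.exp(x - m), axis=axis))

class VariationalObjective:
    """Hypothetical shared interface for importance-weighted bounds."""
    def bound(self, log_weights):
        raise NotImplementedError

class IWAE(VariationalObjective):
    # IWAE: average K importance weights inside the log (Burda et al., 2016)
    def bound(self, log_weights):
        # log_weights: (batch, K)
        return np.mean(logmeanexp(log_weights, axis=-1))

class AESMC(VariationalObjective):
    # AESMC: sum the per-step log-mean incremental weights of an SMC sweep
    def bound(self, log_weights):
        # log_weights: (batch, T, K) incremental weights per time step
        return np.mean(np.sum(logmeanexp(log_weights, axis=-1), axis=-1))
```

Both bounds collapse to the standard ELBO when the number of particles K is 1, and tighten as K grows.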
The code is written in Python 3.6. The following dependencies are required:
- TensorFlow
- seaborn
- numpy
- scipy
- matplotlib
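The dependencies can be installed with pip, for example (package names assumed to match the PyPI defaults; pin versions as needed for Python 3.6 compatibility):

```shell
pip install tensorflow seaborn numpy scipy matplotlib
```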
To check out the repository, run `git clone git@github.com:amoretti86/psvo.git`.
Running `python runner_flags.py` will learn a two-dimensional representation of the FitzHugh-Nagumo dynamical system from one-dimensional observations. The figure below shows the original dynamical system and trajectories alongside the dynamics and trajectories inferred by SVO.
| Original | Inferred |
| --- | --- |
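Synthetic data of the kind used in this experiment can be generated by simulating the FitzHugh-Nagumo system and observing a noisy one-dimensional projection. The sketch below uses standard textbook parameter values; the step size, emission model, and function names are illustrative assumptions, not the repository's data-generation code:

```python
import numpy as np

def fitzhugh_nagumo(T=2000, dt=0.02, a=0.7, b=0.8, tau=12.5, I_ext=0.7):
    """Euler-integrate the 2-D FitzHugh-Nagumo system:
        dv/dt = v - v^3/3 - w + I_ext
        dw/dt = (v + a - b*w) / tau
    Returns the latent trajectory as an array of shape (T, 2)."""
    z = np.zeros((T, 2))
    v, w = -1.0, 1.0
    for t in range(T):
        v += dt * (v - v**3 / 3.0 - w + I_ext)
        w += dt * (v + a - b * w) / tau
        z[t] = v, w
    return z

def observe(z, rng=None):
    # hypothetical 1-D emission: noisy projection of the voltage variable
    rng = np.random.default_rng(0) if rng is None else rng
    return z[:, 0] + 0.1 * rng.standard_normal(len(z))
```

The inference task is then to recover the two-dimensional latent trajectory from the one-dimensional observations produced by `observe`.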