Implementation of "An Approximate Memory based Defense against Model Inversion Attacks to Neural Networks" and "MIDAS: Model Inversion Defenses Using an Approximate Memory System"
-
Updated
Jun 25, 2022 - Jupyter Notebook
Implementation of "An Approximate Memory based Defense against Model Inversion Attacks to Neural Networks" and "MIDAS: Model Inversion Defenses Using an Approximate Memory System"
A gradient-based optimisation routine for highly parameterised, non-linear dynamical models
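As a toy illustration of that idea, a sketch that fits one parameter of a non-linear dynamical model (a hypothetical discrete logistic map, x_{t+1} = r·x_t·(1−x_t)) to an observed trajectory by gradient descent; all names and hyperparameters are illustrative:

```python
import tensorflow as tf

def fit_logistic_rate(observed, steps=200, lr=0.05):
    """Fit the growth rate r of x_{t+1} = r * x_t * (1 - x_t) to an
    observed float32 trajectory by minimising mean squared error."""
    r = tf.Variable(2.0)                              # initial parameter guess
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            x = tf.constant(observed[0])
            preds = []
            for _ in range(len(observed) - 1):
                x = r * x * (1.0 - x)                 # roll the dynamics forward
                preds.append(x)
            loss = tf.reduce_mean((tf.stack(preds) - observed[1:]) ** 2)
        opt.apply_gradients(zip(tape.gradient(loss, [r]), [r]))
    return r.numpy()
```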
My attempt to recreate the attack described in "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures" (Fredrikson et al., 2015) using TensorFlow 2.9.1
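For orientation, a minimal sketch of the MI-FACE-style attack that paper describes: gradient descent on a candidate input to maximise the model's confidence for a chosen class. The paper's post-processing (e.g. denoising) is omitted, and the hyperparameters below are illustrative:

```python
import tensorflow as tf

def mi_face(model, target_class, input_shape, steps=1000, lr=0.1):
    """Reconstruct a class-representative input by minimising
    cost(x) = 1 - f(x)[target_class], assuming `model` returns
    softmax confidence scores."""
    x = tf.Variable(tf.zeros([1, *input_shape]))
    for _ in range(steps):
        with tf.GradientTape() as tape:
            cost = 1.0 - model(x)[0, target_class]
        x.assign_sub(lr * tape.gradient(cost, x))     # gradient step on the input
        x.assign(tf.clip_by_value(x, 0.0, 1.0))       # keep a valid image
    return x.numpy()
```

The attack only needs the confidence scores the model returns, which is why the paper's countermeasures revolve around coarsening or withholding them.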
📄 [Talk] OFFZONE 2022 / ODS Data Halloween 2022: Black-box attacks on ML models using open-source tools
Research into model inversion on SplitNN
Code for "Variational Model Inversion Attacks" Wang et al., NeurIPS2021
Reveals the vulnerabilities of SplitNN
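A minimal sketch of one such SplitNN inversion, assuming white-box access to the client-side network: recover the input whose cut-layer ("smashed") activations match those intercepted on the wire. All names here are illustrative:

```python
import tensorflow as tf

def invert_split_activations(client_net, smashed, input_shape,
                             steps=500, lr=0.1):
    """Reconstruct a client input from its intercepted cut-layer
    activations by gradient descent on a candidate input."""
    x = tf.Variable(tf.random.uniform([1, *input_shape]))
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            # match the activations the client sent over the split
            loss = tf.reduce_mean((client_net(x) - smashed) ** 2)
        opt.apply_gradients([(tape.gradient(loss, x), x)])
        x.assign(tf.clip_by_value(x, 0.0, 1.0))       # keep pixels in range
    return x.numpy()
```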
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be Careful What You Smooth For".
A comprehensive toolbox of model inversion attacks and defenses that is easy to get started with.
Privacy Testing for Deep Learning