R.I.P. 👻 Ghosted
Direct Amortized Likelihood Ratio Estimation
November 17, 2023 · Entered Twilight · AAAI Conference on Artificial Intelligence
Repo contents: .gitignore, LICENSE, README.md, data, models, notebooks, requirements.txt, setup.py, src
Authors
Adam D. Cobb, Brian Matejek, Daniel Elenius, Anirban Roy, Susmit Jha
arXiv ID
2311.10571
Category
stat.ML: Machine Learning (Stat)
Cross-listed
cs.LG, stat.CO
Citations
5
Venue
AAAI Conference on Artificial Intelligence
Repository
https://github.com/SRI-CSL/dnre
⭐ 1
Last Checked
2 months ago
Abstract
We introduce a new amortized likelihood ratio estimator for likelihood-free simulation-based inference (SBI). Our estimator is simple to train and estimates the likelihood ratio with a single forward pass of the neural estimator. Our approach directly computes the likelihood ratio between two competing parameter sets, which differs from the previous approach of comparing two neural network output values. We refer to our model as the direct neural ratio estimator (DNRE). As part of introducing the DNRE, we derive a corresponding Monte Carlo estimate of the posterior. We benchmark our new ratio estimator and compare it to previous ratio estimators in the literature. We show that our new ratio estimator often outperforms these previous approaches. As a further contribution, we introduce a new derivative estimator for likelihood ratio estimators that enables us to compare likelihood-free Hamiltonian Monte Carlo (HMC) with random-walk Metropolis-Hastings (MH). We show that HMC is equally competitive, which has not been previously shown. Finally, we include a novel real-world application of SBI by using our neural ratio estimator to design a quadcopter. Code is available at https://github.com/SRI-CSL/dnre.
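The core idea in the abstract — a single estimator that takes an observation together with two competing parameter sets and returns their log likelihood ratio in one forward pass — can be illustrated on a toy problem. The sketch below is NOT the paper's implementation: the Gaussian simulator, the hand-chosen features standing in for a neural network, and the training loop are all assumptions made here purely to show the classifier-to-ratio trick in a dependency-light way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the "simulator" draws x ~ N(theta, 1); we only ever sample it.
# Sketch of the direct-ratio idea: train a classifier on triples
# (x, theta_a, theta_b) to predict whether x was simulated from theta_a
# (label 1) or theta_b (label 0). At the optimum its logit equals
# log p(x|theta_a) - log p(x|theta_b), so one forward pass yields the
# ratio directly. A neural network would learn its own features; a
# logistic model on hand-chosen features keeps this sketch minimal.

def features(x, ta, tb):
    # Sufficient statistics for the Gaussian toy (a real DNRE learns these).
    return np.stack([x * (ta - tb), ta**2 - tb**2, np.ones_like(x)], axis=1)

# Training triples: random parameter pairs, x simulated from one of them.
N = 20000
ta = rng.uniform(-3, 3, N)
tb = rng.uniform(-3, 3, N)
y = rng.integers(0, 2, N)                  # which parameter generated x
x = np.where(y == 1, ta, tb) + rng.standard_normal(N)
X = features(x, ta, tb)

# Fit by full-batch gradient descent on the logistic (cross-entropy) loss.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / N

def log_ratio(x, ta, tb):
    """Single forward pass -> estimate of log p(x|ta) - log p(x|tb)."""
    return features(np.atleast_1d(np.float64(x)),
                    np.atleast_1d(np.float64(ta)),
                    np.atleast_1d(np.float64(tb))) @ w

# For N(theta, 1) the exact log ratio is x*(ta - tb) - (ta**2 - tb**2)/2.
est = log_ratio(1.0, 0.5, -0.5)[0]
true = 1.0 * (0.5 - (-0.5)) - (0.5**2 - (-0.5)**2) / 2
```

Because the classifier conditions on both parameter sets at once, the comparison happens inside the network rather than by subtracting two separately evaluated network outputs, which is the contrast the abstract draws with earlier ratio estimators.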
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Machine Learning (Stat)
Distilling the Knowledge in a Neural Network
Layer Normalization
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Domain-Adversarial Training of Neural Networks