image loss function? #15

Open

dribnet opened this issue Jan 25, 2016 · 8 comments

dribnet (Contributor) commented Jan 25, 2016

The VAE and VAEGAN code is currently using mean squared error as the reconstruction loss function. In most papers and implementations I'm more used to seeing binary cross entropy, with numbers reported in nats.

Curious what we think would be best here. I took a quick look through the Chainer docs but didn't see binary cross entropy listed among the built-in loss functions.
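
For concreteness, here is a rough numpy sketch of the two candidate reconstruction terms being compared; the array names are made up, and a real implementation would use the framework's fused ops for numerical stability:

```python
import numpy as np

def mse_loss(x, x_hat):
    # Mean squared error over all pixels (what the code currently uses).
    return np.mean((x - x_hat) ** 2)

def bce_loss_nats(x, x_hat, eps=1e-7):
    # Binary cross entropy, summed over pixels and averaged over the batch.
    # Natural log, so the per-image numbers come out in nats.
    x_hat = np.clip(x_hat, eps, 1 - eps)  # avoid log(0)
    per_image = -np.sum(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat), axis=1)
    return np.mean(per_image)

# Hypothetical batch: 8 images flattened to 784 pixels, values in [0, 1].
x = np.random.rand(8, 784)
x_hat = np.random.rand(8, 784)
print(mse_loss(x, x_hat), bce_loss_nats(x, x_hat))
```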

tjtorres (Contributor) commented:

Any chance you might point me to sources? I've seen BCE used to more accurately reflect the distribution of the data when it is binary (for instance, when training on MNIST), but I'm not sure I see the benefit for continuous pixel values, as in most images. I'm definitely willing to change this if there is compelling evidence that it would be a good idea, so please post the papers/implementations and I'll take a look.

dribnet (Contributor, Author) commented Jan 25, 2016

I'm most familiar with DRAW, which says (section 4):

[image: the cross-entropy reconstruction loss equation from DRAW, section 4]

Will try to track down something more recent to see if this is best practice more broadly.

cemoody (Contributor) commented Jan 25, 2016

There's a sigmoid cross entropy available, which might be of use here.
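
If I'm reading the Chainer docs right, `F.sigmoid_cross_entropy` takes pre-sigmoid logits plus integer 0/1 targets, so it would only fit a binarized dataset. A rough sketch with made-up shapes:

```python
import numpy as np
import chainer.functions as F

# Hypothetical decoder output: pre-sigmoid logits for 8 MNIST-sized images.
logits = np.random.randn(8, 784).astype(np.float32)

# Targets must be integer 0/1 for sigmoid_cross_entropy, so the images
# would have to be binarized first.
targets = (np.random.rand(8, 784) > 0.5).astype(np.int32)

loss = F.sigmoid_cross_entropy(logits, targets)
print(loss.data)
```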

umguec commented Jan 29, 2016

This is from Kingma and Welling (2013):

We let pθ(x|z) be a multivariate Gaussian (in case of real-valued data) or Bernoulli (in case of binary data) whose distribution parameters are computed from z with a MLP (a fully-connected neural network with a single hidden layer, see appendix C).

Here is a more recent paper in which a similar formulation is used.

Chainer has `gaussian_nll` and `bernoulli_nll` loss functions for VAEs.
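
For reference, Chainer's own VAE example wires these up roughly as below; `encoder` and `decoder` here are hypothetical callables returning `(mean, ln_var)` and pre-sigmoid logits respectively:

```python
import chainer.functions as F

def vae_loss(x, encoder, decoder, C=1.0, k=1):
    # encoder(x) -> (mean, ln_var) of q(z|x); decoder(z) -> logits shaped like x.
    mean, ln_var = encoder(x)
    batchsize = len(mean.data)
    rec_loss = 0
    for _ in range(k):  # k Monte Carlo samples of z
        z = F.gaussian(mean, ln_var)  # reparameterization trick
        # bernoulli_nll takes the target x and *unnormalized* logits,
        # so no sigmoid is applied to the decoder output here.
        rec_loss += F.bernoulli_nll(x, decoder(z)) / (k * batchsize)
    # Analytic KL between q(z|x) and the unit Gaussian prior.
    kl = F.gaussian_kl_divergence(mean, ln_var) / batchsize
    return rec_loss + C * kl
```

For real-valued data, the `F.bernoulli_nll` term would be swapped for `F.gaussian_nll(x, dec_mean, dec_ln_var)`.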

tjtorres (Contributor) commented Feb 1, 2016

It definitely makes sense to add the Bernoulli negative log-likelihood for Bernoulli-distributed data such as MNIST, though I hadn't envisioned that being a big use case initially. However, after recently trying to use the package to train over a font dataset, and realizing that performance suffered unless I artificially induced continuity with a slight Gaussian filtering, I think it's a good idea to include this as a loss option.

The Gaussian NLL is quite similar to MSE assuming unit covariance, but they do differ somewhat, and I'd be willing to adopt that as an additional option too, since implementing both is easy (as you point out, they both already exist in Chainer). I will assign myself to this unless there are volunteers.
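
To make "quite similar" precise: with the log-variance fixed at zero, the Gaussian NLL is exactly half the summed squared error plus the constant (D/2)·log 2π, so as a training objective it differs from summed MSE only by scale and an additive constant. A quick numeric check against Chainer (shapes here are made up):

```python
import numpy as np
import chainer.functions as F

x = np.random.rand(1, 784).astype(np.float32)
mean = np.random.rand(1, 784).astype(np.float32)
ln_var = np.zeros_like(mean)  # unit covariance

nll = F.gaussian_nll(x, mean, ln_var).data   # summed over elements
sse = 0.5 * np.sum((x - mean) ** 2)
const = 0.5 * x.size * np.log(2 * np.pi)
assert np.allclose(nll, sse + const, rtol=1e-4)
```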

tjtorres self-assigned this Feb 1, 2016
dribnet (Contributor, Author) commented Feb 1, 2016

I'm hoping to use binarized MNIST (with validation data) as a sanity check to compare the NLL test score fauxtograph can achieve against other generative implementations.
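
In case it's useful, one common binarization recipe treats each grayscale pixel in [0, 1] as a Bernoulli probability and samples from it; a minimal sketch, with a fixed seed so the validation split stays stable:

```python
import numpy as np

def binarize(images, rng):
    # Sample a binary image, pixel-wise Bernoulli with the grayscale
    # value as its probability (one common binarization recipe).
    return (rng.uniform(size=images.shape) < images).astype(np.float32)

rng = np.random.RandomState(0)  # fixed seed for a reproducible split
x = np.random.rand(8, 784)      # hypothetical grayscale batch
x_bin = binarize(x, rng)
```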

tjtorres (Contributor) commented Feb 2, 2016

Sounds great! It should be quite fast to validate over MNIST, though the set is too small for the convolution architecture currently available: MNIST images are 28x28 and fauxtograph supports 32x32 at the smallest. A simple workaround would be to preprocess the set, adding a 2-pixel black border on all sides. I've also been thinking of adding a conditional semi-supervised option or an adversarial autoencoder class at some point. It would be good to benchmark them all.
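
The padding workaround is a one-liner with numpy, assuming the images arrive as an (N, 28, 28) array in [0, 1]:

```python
import numpy as np

# Hypothetical batch of MNIST images, shape (N, 28, 28).
images = np.random.rand(16, 28, 28).astype(np.float32)

# Add a 2-pixel black border on every side: (N, 28, 28) -> (N, 32, 32).
padded = np.pad(images, ((0, 0), (2, 2), (2, 2)), mode='constant')
assert padded.shape == (16, 32, 32)
```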

abhinav3 commented Sep 5, 2019

I've tried both BCELoss and MSELoss for CIFAR-10 reconstructions with an autoencoder. MSELoss gives better-looking reconstructed images than BCELoss.
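
For anyone wanting to reproduce the comparison, the two losses drop in like this in PyTorch; `decoder_out` is a hypothetical sigmoid-activated reconstruction, which keeps the inputs in the [0, 1] range that BCELoss requires:

```python
import torch
import torch.nn as nn

x = torch.rand(8, 3 * 32 * 32)            # hypothetical CIFAR-10 batch, flattened
decoder_out = torch.rand(8, 3 * 32 * 32)  # stand-in for sigmoid(decoder(z))

bce = nn.BCELoss()(decoder_out, x)  # treats pixels as Bernoulli probabilities
mse = nn.MSELoss()(decoder_out, x)  # treats pixels as Gaussian means
print(bce.item(), mse.item())
```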
