image loss function? #15
Comments
Any chance you might point me to sources? I have seen BCE used to more accurately reflect the distribution of the data when it is binary (for instance when training on the MNIST set), but I am not sure I see the benefit of using it for continuous pixel values as in most images. I am definitely willing to change this if there is compelling evidence that it would be a good idea, so please post the papers/implementations and I will take a look.
I'm most familiar with DRAW (see section 4). Will try to track down something more recent to see if this is best practice more broadly.
There's a sigmoid cross entropy function available in Chainer (F.sigmoid_cross_entropy), which might be of use here.
This is from Kingma and Welling (2013): “We let pθ(x|z) be a multivariate Gaussian (in case of real-valued data) or Bernoulli (in case of binary data) whose distribution parameters are computed from z with a MLP (a fully-connected neural network with a single hidden layer, see appendix C).” Here is a more recent paper in which a similar formulation is used. Chainer has gaussian_nll and bernoulli_nll loss functions for VAE.
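For concreteness, here is a minimal sketch (not fauxtograph code) of how those two Chainer functions could provide the reconstruction term; x, dec_logit, and dec_mean are illustrative stand-ins for a flattened image batch and hypothetical decoder outputs:

```python
import numpy as np
import chainer.functions as F

# Illustrative stand-ins: a batch of 8 flattened 28x28 "images" in [0, 1] and
# random decoder outputs (dec_logit is pre-sigmoid, dec_mean is a real-valued mean).
x = np.random.rand(8, 784).astype(np.float32)
dec_logit = np.random.randn(8, 784).astype(np.float32)
dec_mean = np.random.randn(8, 784).astype(np.float32)

# Bernoulli reconstruction term in nats per image (binary-ish data such as MNIST).
recon_bernoulli = F.bernoulli_nll(x, dec_logit) / x.shape[0]

# Gaussian reconstruction term with unit variance (ln_var fixed at zero).
recon_gaussian = F.gaussian_nll(x, dec_mean, np.zeros_like(dec_mean)) / x.shape[0]
```

Both functions sum over all elements by default, so dividing by the batch size gives a per-image value in nats.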
It definitely makes sense to add the Bernoulli negative log likelihood if one wishes to model Bernoulli-distributed data, as in, say, MNIST, though I hadn't envisioned that being a big use case initially. However, after recently trying to use the package to train over a font dataset, and realizing performance was somewhat hindered unless I artificially induced continuity with a slight Gaussian filtering, I think it's probably a good idea to include this as a loss option. The Gaussian NLL is quite similar to MSE assuming unit covariance, but they do differ somewhat, and I'd be willing to adopt that as an additional option too, since implementing both is rather easy (as you point out, both already exist in Chainer). I will assign myself to this unless there are volunteers.
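To make the “quite similar to MSE” point concrete, a quick numerical check (a sketch, not repo code): with ln_var fixed at zero, Chainer's gaussian_nll is just half the summed squared error plus a constant that does not depend on the model output.

```python
import numpy as np
import chainer.functions as F

x = np.random.rand(4, 10).astype(np.float32)
mu = np.random.rand(4, 10).astype(np.float32)

# Gaussian NLL with unit variance (ln_var = 0), summed over all elements by default.
nll = F.gaussian_nll(x, mu, np.zeros_like(mu)).data

# The same quantity written out: half the sum of squared errors plus a constant.
expected = 0.5 * np.sum((x - mu) ** 2) + 0.5 * x.size * np.log(2 * np.pi)
assert np.allclose(nll, expected, rtol=1e-5)
```

So with unit covariance the two objectives give the same gradients up to a factor of one half (plus a rescaling by the number of elements if the MSE is averaged rather than summed).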
I'm hoping to use binarized MNIST (with validation data) as a sanity check to compare the NLL test score fauxtograph can achieve against other generative implementations. |
Sounds great! Should be quite fast to validate over MNIST, though I think the MNIST images will be too small to use with the convolution architecture currently available: MNIST images are 28x28 and fauxtograph supports 32x32 at the smallest. A simple workaround would be to preprocess the set and add a 2-pixel black border to all sides (see the sketch below). I have also been thinking of adding a conditional semi-supervised option or an adversarial autoencoder class at some point as well. Would be good to benchmark all of them.
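For the border workaround, something along these lines should do it (the array here is a random stand-in for the real MNIST images):

```python
import numpy as np

# Stand-in for a batch of 28x28 MNIST digits with pixel values in [0, 1].
mnist = np.random.rand(100, 28, 28).astype(np.float32)

# Add a 2-pixel black border on every side so each image becomes 32x32.
padded = np.pad(mnist, ((0, 0), (2, 2), (2, 2)), mode='constant', constant_values=0.0)
assert padded.shape == (100, 32, 32)
```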
I've tried both BCELoss and MSELoss for CIFAR10 reconstructions using an autoencoder; MSELoss gives better-looking reconstructed images than BCELoss.
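For anyone wanting to reproduce that comparison, a minimal PyTorch sketch (the toy model and random batch below are placeholders, not an actual CIFAR10 autoencoder); note that BCELoss expects the reconstruction to already lie in [0, 1], e.g. via a final sigmoid:

```python
import torch
import torch.nn as nn

# Toy stand-ins: a one-layer "autoencoder" ending in a sigmoid, and a random
# CIFAR10-shaped batch of 8 images with pixel values in [0, 1].
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 3 * 32 * 32),
    nn.Sigmoid(),
    nn.Unflatten(1, (3, 32, 32)),
)
images = torch.rand(8, 3, 32, 32)

recon = model(images)
loss_bce = nn.BCELoss()(recon, images)  # treats pixels as Bernoulli parameters
loss_mse = nn.MSELoss()(recon, images)  # treats pixels as Gaussian means
```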
The VAE and VAEGAN code is currently using mean squared error as the reconstruction loss function. In most papers/implementations I'm more used to seeing binary cross entropy, with numbers reported in nats.
Curious what we think would be best here. I did take a quick look for this in the Chainer docs but didn't see binary cross entropy listed as one of the built-in loss functions.
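A quick sketch of the quantity I have in mind (binary cross entropy summed over pixels, averaged over the batch, using natural logs so the result is in nats); the names here are just illustrative:

```python
import numpy as np

def bce_nats_per_image(x, logits):
    # Binary cross entropy against pre-sigmoid logits: -x*log(p) - (1-x)*log(1-p),
    # summed over pixels and averaged over the batch (natural logs, so units are nats).
    per_pixel = np.logaddexp(0.0, logits) - x * logits
    return per_pixel.reshape(len(x), -1).sum(axis=1).mean()

# Illustrative stand-ins: a batch of 8 binarized 28x28 images and random decoder logits.
x = (np.random.rand(8, 784) > 0.5).astype(np.float32)
logits = np.random.randn(8, 784).astype(np.float32)
print(bce_nats_per_image(x, logits))
```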