Yes, it should be differentiable w.r.t. both the GP's parameters and the inputs.

Yes, I believe they use exact inference, from a skim of their paper.
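A quick way to sanity-check the differentiability is a toy run like the sketch below (everything here is illustrative — a throwaway `TinyGP`, not code from the SSDKL paper). Note the `detach_test_caches(False)` context: by default GPyTorch detaches the prediction caches, so without it gradients may not reach all hyperparameters through the predictive variance.

```python
import torch
import gpytorch

# Minimal exact GP purely for the gradient check.
class TinyGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

train_x = torch.linspace(0, 1, 20).unsqueeze(-1)
train_y = torch.sin(6.0 * train_x).squeeze(-1)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = TinyGP(train_x, train_y, likelihood)

model.eval()
likelihood.eval()
test_x = torch.rand(5, 1, requires_grad=True)  # inputs we differentiate w.r.t.

# Keep the prediction caches in the autograd graph so gradients also flow
# back to the kernel/likelihood hyperparameters, not only to test_x.
with gpytorch.settings.detach_test_caches(False):
    pred = likelihood(model(test_x))
    pred.variance.sum().backward()

print(test_x.grad)                                          # grad w.r.t. the inputs
print(model.covar_module.base_kernel.raw_lengthscale.grad)  # grad w.r.t. a hyperparameter
```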
Hi,
I am trying to implement the paper "Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance" in GPyTorch.
The objective function to be minimized in the paper (equation 3) is

l_ssl = (1/n) * l_likelihood + (alpha/m) * l_variance

where n and m are the numbers of labeled and unlabeled points, alpha is a weight, and l_variance corresponds to the predictive variance on the unlabeled points, i.e. l_variance = sum(Variance(y_hat)).
Can I confirm that for Variance(y_hat), we can directly use the predicted variance values from the distribution classes?
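I.e., roughly something like the sketch below. (This is just to show what I mean, not working code: `model` is an exact DKL GP, `likelihood` a GaussianLikelihood, and `alpha`, `m`, `optimizer`, and the labeled/unlabeled batches are all placeholders.)

```python
import gpytorch

# `model` / `likelihood` / alpha / m / optimizer and the data tensors
# below are placeholders for my actual setup.
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

optimizer.zero_grad()

model.train()
likelihood.train()
output = model(x_labeled)               # x_labeled must be the registered training inputs
l_likelihood = -mll(output, y_labeled)  # the (1/n) * l_likelihood term (mll averages over n)

model.eval()
likelihood.eval()
with gpytorch.settings.detach_test_caches(False):  # keep prediction caches in the graph
    pred = likelihood(model(x_unlabeled))          # predictive distribution, unlabeled batch
    l_variance = pred.variance.sum()               # sum(Variance(y_hat))

loss = l_likelihood + (alpha / m) * l_variance
loss.backward()
optimizer.step()
```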
Is the .variance attribute differentiable, and will it backpropagate through the loss and optimizer step?

Also, if possible, can I confirm that the GP layer type to use for this DKL should be exact GP and not an approximate / stochastic variational GP? I'm fairly certain it is exact DKL, since I'm aware of the difference from stochastic variational DKL, but there seem to be quite a few variants in GPyTorch, so I just want to be sure.
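For context, the model I have in mind is a standard exact DKL setup, roughly like this (only a sketch; `FeatureExtractor` and the layer sizes are made up):

```python
import torch
import gpytorch

class FeatureExtractor(torch.nn.Sequential):
    # Small MLP whose outputs feed the GP kernel (sizes are arbitrary).
    def __init__(self, data_dim):
        super().__init__(
            torch.nn.Linear(data_dim, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 2),
        )

class DKLModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.feature_extractor = FeatureExtractor(train_x.size(-1))
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        z = self.feature_extractor(x)  # deep features, then an exact GP on top
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z)
        )
```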
Thanks a lot!