# [Bug] Unexpected results in `inv_quad_logdet()` after `.add_low_rank()` with dense inputs #102

## Comments
For some more info, it seems like some things work and some things don't. With all the variables defined as above, we have:

**Things that work:**

```python
dense_root_solve = dense_root_cov.solve(y.T).T
full_solve = torch.linalg.solve(cov, y.T).T
np.isclose(full_solve, dense_root_solve, atol=1e-5).all(), (full_solve - dense_root_solve).abs().median(), (full_solve - dense_root_solve).abs().max()
# => (True, tensor(1.1921e-07), tensor(1.9073e-06))

np.isclose(
    linear_operator.inv_quad(cov, y.T, reduce_inv_quad=False),
    linear_operator.inv_quad(dense_root_cov, y.T, reduce_inv_quad=False),
).all()
# => True
```

**Things that don't:**

```python
dense_root_cov.logdet(), torch.linalg.det(cov).log()
# => (tensor(-13.8775), tensor(3.5932))

torch.abs(
    linear_operator.inv_quad_logdet(cov, y.T, reduce_inv_quad=False, logdet=True)[0]
    - linear_operator.inv_quad_logdet(dense_root_cov, y.T, reduce_inv_quad=False, logdet=True)[0]
).median()
# => tensor(51524.3984)
```

So, although regular solves and standalone `inv_quad` agree, both the log determinant and the inverse-quadratic term returned by `inv_quad_logdet(..., logdet=True)` are badly off.
Adding a low rank matrix to a dense diagonally-dominant matrix was unfortunately not the intended use case for `.add_low_rank()`. Can you describe your intended use case in a bit more detail? cc @saitcakmak
Thanks for your message Geoff! That makes sense. Yes, my use case is in Gaussian mixture modeling, where the $k$th component's covariance is a dense matrix plus a low-rank update. What I'm doing now is to use a workaround class which just adapts `LowRankRootAddedDiagLinearOperator`.
---

First of all, thanks for this incredibly great library, it has been a lifesaver! (and hi @gpleiss from jlg :)
## 🐛 Bug
When adding `A` (a `DenseLinearOperator`) to `v` (a `LowRankRootLinearOperator`), the result of `inv_quad_logdet()` is not what it should be. (Other cases work -- for instance, if `A` is a `DiagLinearOperator`, things are fine.)

Below, I've included a test case based on computing normal likelihoods so that I can produce some SciPy numbers for ground truth.
## To reproduce
I've broken up the code into a few cases, showing ways to get the right answer and the `.add_low_rank()` case which breaks.
Imports...
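The imports block is collapsed in the original page; a minimal sketch of what it presumably contains (the exact set of imports is an assumption):

```python
# Assumed imports; the original collapsed block isn't visible in the page.
import numpy as np
import scipy.stats
import torch

import linear_operator
from linear_operator.operators import (
    DenseLinearOperator,
    DiagLinearOperator,
    LowRankRootLinearOperator,
)
```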
Simulate some test data, and get SciPy log liks:
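This block is also collapsed; here is a hedged reconstruction of the kind of setup involved. The names (`A`, `v`, `cov`, `y`) follow the rest of the issue, but the dimensions, seed, and values are placeholders, not the original ones:

```python
# Hypothetical setup; sizes and seed are placeholders.
torch.manual_seed(0)
n, d, rank = 10, 20, 3

# Dense, diagonally dominant base matrix A and a low-rank root v
G = torch.randn(d, d, dtype=torch.double)
A = G @ G.T + d * torch.eye(d, dtype=torch.double)
v = torch.randn(d, rank, dtype=torch.double)

# Full covariance and some test points
cov = A + v @ v.T
y = torch.randn(n, d, dtype=torch.double)

# SciPy ground-truth log-likelihoods for a zero-mean normal
scipy_ll = scipy.stats.multivariate_normal(
    mean=np.zeros(d), cov=cov.numpy()
).logpdf(y.numpy())
```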
## Cases which behave as expected

The `isclose`s here are True.
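The block itself is collapsed in the page; a sketch of what the passing cases plausibly look like, reusing the names from the setup above (the dispatch comment below is my assumption, not something stated in the issue):

```python
# Sketch of the passing cases; the exact code is collapsed in the original issue.
log_2pi = d * np.log(2 * np.pi)

# Case 1: a plain dense operator over the full covariance
dense_cov = DenseLinearOperator(cov)
iq, ld = linear_operator.inv_quad_logdet(dense_cov, y.T, reduce_inv_quad=False, logdet=True)
ll_dense = -0.5 * (iq + ld + log_2pi)
print(np.isclose(ll_dense.numpy(), scipy_ll, atol=1e-4).all())  # => True

# Case 2: diagonal base matrix plus low-rank root
# (I believe this dispatches to LowRankRootAddedDiagLinearOperator)
diag = torch.full((d,), float(d), dtype=torch.double)
diag_cov = LowRankRootLinearOperator(v) + DiagLinearOperator(diag)
scipy_diag_ll = scipy.stats.multivariate_normal(
    mean=np.zeros(d), cov=(torch.diag(diag) + v @ v.T).numpy()
).logpdf(y.numpy())
iq, ld = linear_operator.inv_quad_logdet(diag_cov, y.T, reduce_inv_quad=False, logdet=True)
ll_diag = -0.5 * (iq + ld + log_2pi)
print(np.isclose(ll_diag.numpy(), scipy_diag_ll, atol=1e-4).all())  # => True
```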
## Failing case

If we use a dense linear operator and land in `.add_low_rank()`, the `isclose()` is False here:
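Again the block is collapsed; a hedged sketch of the failing path, reusing the names above (note `dense_root_cov` is the name the later comment refers to):

```python
# Sketch of the failing case: dense base matrix plus low-rank root via .add_low_rank()
dense_root_cov = DenseLinearOperator(A).add_low_rank(v)
iq, ld = linear_operator.inv_quad_logdet(dense_root_cov, y.T, reduce_inv_quad=False, logdet=True)
ll_root = -0.5 * (iq + ld + d * np.log(2 * np.pi))
print(np.isclose(ll_root.numpy(), scipy_ll, atol=1e-4).all())  # => False per the report
```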
The differences are substantial -- the max difference was 876303.6523659548 in this case, and the median absolute difference from SciPy was 25754.86743231282.
## Workaround
I wanted a way to use Woodbury with a dense operator, so I wrote a quick implementation of a `LowRankRootSumLinearOperator`, which is basically identical to `LowRankRootAddedDiagLinearOperator` -- it makes a Cholesky of the capacitance matrix. My code is here in case it is helpful at all: https://github.com/cwindolf/dartsort/blob/main/src/dartsort/util/more_operators.py
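For context, the core idea behind such a class is the Woodbury identity plus the matrix determinant lemma. A minimal standalone sketch of that computation (not the actual code from the linked file, and assuming `A` is symmetric positive definite):

```python
import torch

def woodbury_solve_logdet(A, U, rhs):
    """Solve (A + U U^T) x = rhs and compute logdet(A + U U^T).

    Minimal sketch: A is a dense SPD matrix, U a tall low-rank root.
    Not the actual dartsort implementation.
    """
    L = torch.linalg.cholesky(A)                 # A = L L^T
    Ainv_rhs = torch.cholesky_solve(rhs, L)      # A^{-1} rhs
    Ainv_U = torch.cholesky_solve(U, L)          # A^{-1} U

    # Capacitance matrix C = I + U^T A^{-1} U, and its Cholesky factor
    k = U.shape[-1]
    C = torch.eye(k, dtype=A.dtype, device=A.device) + U.mT @ Ainv_U
    Lc = torch.linalg.cholesky(C)

    # Woodbury: (A + U U^T)^{-1} b = A^{-1} b - A^{-1} U C^{-1} U^T A^{-1} b
    solve = Ainv_rhs - Ainv_U @ torch.cholesky_solve(U.mT @ Ainv_rhs, Lc)

    # Matrix determinant lemma: logdet(A + U U^T) = logdet(A) + logdet(C)
    logdet = 2.0 * (L.diagonal().log().sum() + Lc.diagonal().log().sum())
    return solve, logdet

# Example usage with the names from the sketches above:
# solve, logdet = woodbury_solve_logdet(A, v, y.T)
```

Because both factorizations are exact Choleskys, this route avoids any stochastic log-determinant estimation, which is what makes it attractive when the base matrix is dense but well-conditioned.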
## Expected behavior

`inv_quad_logdet()` should lead to results such that things match SciPy in all cases.

## System information
Please complete the following information:

- LinearOperator version: 0.5.3
- PyTorch version: 2.5.1.post3
- OS: Mac