
argument name inconsistency in setting jitter value with gpytorch #46

Closed
LuhuanWu opened this issue Jan 6, 2023 · 2 comments


LuhuanWu commented Jan 6, 2023

  • Change the __init__ arguments of the class _dtype_value_context.

    • Currently the __init__ signature of _dtype_value_context in linear_operator/settings.py is:
      def __init__(self, float=None, double=None, half=None):

    • This is inconsistent with the __init__ signature of _dtype_value_context in gpytorch/settings.py, which is:
      def __init__(self, float_value=None, double_value=None, half_value=None):

    • I would suggest unifying the argument names, for example by giving both __init__ methods the same signature (see the sketch after this list):
      def __init__(self, float_value=None, double_value=None, half_value=None):
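
To make the suggestion concrete, below is a minimal, self-contained sketch of a dtype-keyed settings context manager using the proposed float_value/double_value/half_value argument names. The class internals, default values, and the value() helper here are illustrative assumptions for this sketch only, not the actual linear_operator or gpytorch implementation.

```python
import torch


class _dtype_value_context:
    # Illustrative class-level defaults, one per floating-point dtype
    # (assumed values, not the library's actual defaults).
    _float_value = 1e-6
    _double_value = 1e-8
    _half_value = 1e-3

    def __init__(self, float_value=None, double_value=None, half_value=None):
        # Remember the current defaults so they can be restored on exit.
        self._orig = (
            type(self)._float_value,
            type(self)._double_value,
            type(self)._half_value,
        )
        # Only override the defaults that the caller explicitly sets.
        if float_value is not None:
            type(self)._float_value = float_value
        if double_value is not None:
            type(self)._double_value = double_value
        if half_value is not None:
            type(self)._half_value = half_value

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # Restore the previous defaults when leaving the context.
        (
            type(self)._float_value,
            type(self)._double_value,
            type(self)._half_value,
        ) = self._orig
        return False

    @classmethod
    def value(cls, dtype):
        # Return the setting for a given floating-point dtype.
        if dtype == torch.float32:
            return cls._float_value
        if dtype == torch.float64:
            return cls._double_value
        if dtype == torch.float16:
            return cls._half_value
        raise ValueError(f"Unsupported dtype: {dtype}")


# Usage: temporarily override the float32 value inside a `with` block.
with _dtype_value_context(float_value=1e-4):
    assert _dtype_value_context.value(torch.float32) == 1e-4
assert _dtype_value_context.value(torch.float32) == 1e-6
```

With a single signature like this in both packages, callers can pass the same keyword arguments regardless of which settings module they import.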

Balandat (Collaborator) commented:

That makes sense to me. Will put up a PR.

Balandat added a commit to Balandat/linear_operator that referenced this issue Jan 11, 2023
gpleiss pushed a commit that referenced this issue Jan 17, 2023
gpleiss (Member) commented Jan 17, 2023

Should be closed by #47

gpleiss closed this as completed Jan 17, 2023