Currently, DaggerChain communicates to Dagger that the wrapped model is located on a CUDA GPU, which is not necessarily true (and shouldn't be a requirement anyway). We should provide functions which can move the model to the GPU and communicate the correct location to Dagger, and/or auto-detect where a model currently resides.
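As a rough illustration of what the auto-detection half of this could look like: a model's current device can be inferred by checking whether any of its trainable parameters are `CuArray`s. This is only a sketch; `model_device` and `move_to_gpu` are hypothetical helper names, not existing DaggerFlux functions:

```julia
using Flux, CUDA

# Hypothetical helpers (names are illustrative, not part of DaggerFlux):
# detect where the wrapped model's parameters currently live, and move
# the model to the GPU before wrapping it in a DaggerChain.

# Returns :gpu if any trainable parameter is a CuArray, :cpu otherwise.
function model_device(model)
    any(p -> p isa CUDA.CuArray, Flux.params(model)) ? :gpu : :cpu
end

# Move the model to the GPU using Flux's existing `gpu` helper; the
# resulting device could then be reported to Dagger instead of always
# assuming CUDA.
move_to_gpu(model) = Flux.gpu(model)
```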
If we simply remove that assumption, we won't get automatic GPU execution (GPU execution is disabled by default). We probably need a dispatch-based API in Dagger to enable GPU execution for certain functions (like DaggerChain).
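One shape such a dispatch-based opt-in could take is a trait-style function that the scheduler queries per callable. Everything below is hypothetical (neither `should_use_gpu` nor this mechanism exists in Dagger today); it is just a sketch of the idea:

```julia
# Hypothetical trait: default to CPU-only scheduling, but let specific
# callable types (such as DaggerChain) opt in to GPU execution.
should_use_gpu(f) = false
should_use_gpu(::DaggerChain) = true

# A scheduler could then consult the trait instead of a global default:
# if should_use_gpu(f) is true, place the task on a GPU processor,
# otherwise keep it on the CPU.
```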