The template below is mostly useful for bug reports and support questions. Feel free to remove anything which doesn't apply to you and add more information where it makes sense.
This is because an incompatible feature was introduced in the previous version after community discussion: Project-HAMi/HAMi-core#4
We intentionally did not handle this panic, to make it easier for users to identify the problem.
Restarting all Pods that use vgpu resolves the issue.
I understand, but I think we could provide a solution that is compatible with older versions instead of letting the entire monitoring component panic, which may confuse users who are not familiar with this project. If similar problems arise in the future, we shouldn't handle them this way, right?
1. Issue or feature description
vgpu-monitor panics with a message like:
2. Steps to reproduce the issue
I'm not sure why one of the cache files has an incorrect size; maybe it's a version mismatch?
3. Information to attach (optional if deemed irrelevant)
Common error checking:
- The output of `nvidia-smi -a` on your host
- Your docker configuration file (e.g. `/etc/docker/daemon.json`)
- The kubelet logs on the node (e.g. `sudo journalctl -r -u kubelet`)

Additional information that might help better understand your environment and reproduce the bug:
- Docker version from `docker version`
- Kernel version from `uname -a`
- Any relevant kernel output lines from `dmesg`