Kubernetes 1.19.3 - k8s-device-plugin fails and restarts at least once a day (#210, opened Nov 26, 2020 by svetlana41)
How are the helm-related files in deployment and gpu-operator generated? (#208, opened Nov 25, 2020 by ccnankai)
Question: How to set LD_LIBRARY_PATH on the nvidia-device-plugin pod (#204, opened Nov 5, 2020 by joedborg)
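For questions like #204, one common way to set an environment variable on an already-deployed pod's container is `kubectl set env` against its controller. The sketch below assumes the plugin runs as a DaemonSet named `nvidia-device-plugin-daemonset` in the `kube-system` namespace, and uses an illustrative library path; both are assumptions, not values taken from the issue.

```shell
# Sketch only: add LD_LIBRARY_PATH to the plugin container via kubectl.
# Assumptions: DaemonSet name "nvidia-device-plugin-daemonset" and
# namespace "kube-system"; the path below is illustrative.
kubectl -n kube-system set env daemonset/nvidia-device-plugin-daemonset \
  LD_LIBRARY_PATH=/usr/local/nvidia/lib64
```

`kubectl set env` triggers a rolling update of the DaemonSet; the variable could equally be added under `spec.template.spec.containers[].env` in the manifest.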
With the volume-mounts strategy, the pod shouldn't fail when it lacks permission to read NVIDIA_VISIBLE_DEVICES (#203, opened Nov 5, 2020 by zhsj)
Setting "failOnInitError" unexpectedly "works" with a small 2-node cluster (#199, opened Oct 13, 2020 by supertetelman)
What is the most recent stable beta, and what do your tags mean? (#193, opened Aug 26, 2020 by dwschulze)
Container fails to initialize NVML even after setting default Docker runtime=nvidia (#182, opened Jul 1, 2020 by limwenyao)
Error: failed to start container "nvidia-device-plugin-ctr": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: signal: segmentation fault (core dumped), stdout: , stderr: \\\"\"": unknown (#171, opened May 19, 2020 by wxitzxg)
docker: Error response from daemon: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v1.linux/moby/761bd05e8ceb95e1459db860b160e9dda095254a969ebd9a0b777524f73f9263/log.json: no such file or directory): exec: "nvidia-container-runtime": executable file not found in $PATH: unknown. (#166, opened May 5, 2020 by wjimenez5271)
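The `executable file not found in $PATH` error in #166 indicates Docker tried to exec `nvidia-container-runtime` by name and could not resolve it. A minimal first-pass check (output depends on the host, so no particular result is assumed):

```shell
# Check whether the binary Docker tries to exec actually resolves.
# Note: "command -v" searches the current shell's PATH, which may differ
# from the PATH dockerd was started with, so this is only a first pass.
if command -v nvidia-container-runtime >/dev/null 2>&1; then
  echo "runtime found"
else
  echo "runtime missing"
fi
```

If the binary is missing, installing the NVIDIA container runtime package (and restarting Docker) is the usual remedy; if it is present but Docker still fails, the daemon's own PATH or runtime configuration is the next thing to inspect.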
The first GPU is reused when there are multiple GPU training tasks (#163, opened Apr 14, 2020 by alwaysdark)
When the nvidia-device-plugin pod exits, it doesn't clean up the nvidia-device-plugin process on the host (#162, opened Apr 9, 2020 by gaopengw)