
Lately, my GCP VM with multiple GPUs throws the following error when I try to run my container:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #1:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: timed out: unknown

I also noticed that executing nvidia-smi takes more than 30 seconds most of the time.

Specs:

  • base image: nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04
  • nvidia driver: 450.102.04
  • zone: europe-west1-b

I've been using this setup for months and never noticed anything like this before.
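
For reference, the container is started roughly along these lines; this is a simplified sketch (the real image and entrypoint are different, and `--gpus all` is the representative GPU flag), just to illustrate the setup:

```
docker run --rm --gpus all \
  nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04 \
  nvidia-smi
```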

asked by ben0it8
  • Have you tried updating the NVIDIA drivers? What is the output of `nvidia-container-cli -k -d /dev/tty info`? I saw something similar [here](https://github.com/NVIDIA/nvidia-docker/issues/1133), but the output of the command would be a good starting point. – Judith Guzman Apr 21 '21 at 03:05
  • It might be related to this [post on StackOverflow](https://stackoverflow.com/questions/48074282/docker-container-not-starting-giving-oci-runtime-create-failed). – Pit Jul 19 '21 at 09:52
  • First check the service status: `systemctl status docker`. Then try a restart: `systemctl restart docker`. Let us know your results, and also share the Docker version if the issue persists. – Elba Lazo Jan 20 '21 at 16:44
  • Are you (@ben0it8) still facing the problem? Was it resolved? If yes, can you mention the steps you took to solve it? – Bakul Mitra Jan 15 '22 at 12:18

0 Answers