
RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: all CUDA-capable devices are busy or unavailable

Problem: when I run the following command:

python -c "import tensorflow as tf; tf.test.is_gpu_available(); print('version: ' + tf.__version__)"

I get the error: RuntimeError: CUDA runtime implicit initialization on GPU:0 failed. Status: all CUDA-capable devices are busy or unavailable
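Note that on TensorFlow 2.x, tf.test.is_gpu_available() is deprecated; a minimal sketch of an equivalent check using the TensorFlow 2.x API (tf.config.list_physical_devices) is:

import tensorflow as tf

# List the GPUs visible to TensorFlow; an empty list means no usable CUDA device.
gpus = tf.config.list_physical_devices('GPU')
print('version:', tf.__version__)
print('GPUs found:', gpus)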

Solution 1:

I can confirm the case mentioned in a comment.

I hit the problem while working with an Ubuntu VM running on a VMware ESXi host, using a vGPU partition of an NVIDIA V100 GPU.

I got the same error, and I had already tried changing CUDA versions and installing (via pip) packages built for those specific CUDA versions; none of that solved the issue. The error:

tensorflow.python.framework.errors_impl.InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: all CUDA-capable devices are busy or unavailable
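As an aside, if you do suspect a CUDA version mismatch, newer TensorFlow releases expose the CUDA/cuDNN versions the wheel was built against, which is a quick way to rule that out. A minimal sketch (tf.sysconfig.get_build_info is only available in TF 2.3 and later):

import tensorflow as tf

# Print the CUDA/cuDNN versions this TensorFlow build expects
# (TF 2.3+; older releases do not have get_build_info).
info = tf.sysconfig.get_build_info()
print('built for CUDA:', info.get('cuda_version'))
print('built for cuDNN:', info.get('cudnn_version'))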

In my case I had forgotten to set the license server in /etc/nvidia/grid.conf and got exactly the same error, so it was a GRID licensing issue: fixing the grid config file and rebooting solved the problem.
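Before rebooting, you can sanity-check that a license server is actually configured. The sketch below only assumes the config path mentioned above and that the server is set via a key such as ServerAddress (the key name is an assumption based on common NVIDIA GRID licensing configs, not taken from this post):

from pathlib import Path

# Minimal sketch: verify a license server entry exists in the GRID config.
conf = Path('/etc/nvidia/grid.conf')
if not conf.exists():
    print('GRID config not found at', conf)
else:
    entries = [line.strip() for line in conf.read_text().splitlines()
               if line.strip().startswith('ServerAddress')]
    if entries:
        print('License server configured:', entries[0])
    else:
        print('No ServerAddress entry found - the GRID license server is not set.')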
