I have a problem running nvidia-docker containers on a Slurm cluster. Inside the container, all GPUs are visible, so the container basically ignores the CUDA_VISIBLE_DEVICES environment variable set by Slurm. Outside the container, the visible GPUs are correct.
Is there a way to restrict the container, e.g. with -e NVIDIA_VISIBLE_DEVICES? Or is there a way to set NVIDIA_VISIBLE_DEVICES to the value of CUDA_VISIBLE_DEVICES?
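
Something like the following is what I have in mind — a minimal sketch, assuming the Slurm job step exports CUDA_VISIBLE_DEVICES as plain device indices (e.g. "0,1"); `my-image` is just a placeholder:

```bash
# Forward the GPUs Slurm allocated to this job step into the container,
# so the nvidia runtime only exposes those devices.
docker run --rm --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES="${CUDA_VISIBLE_DEVICES}" \
    my-image nvidia-smi
```

Would this be the right approach, or is there a better way to make the container respect Slurm's GPU allocation?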