If you’re working in deep learning using NVIDIA GPUs, you may have heard about the NVIDIA GPU Cloud, or NGC for short. NGC provides pre-integrated, GPU-accelerated containers that you can use to power your own AI projects.
We know that many of you are using Bright clusters with NVIDIA GPUs, and some of you have asked about running your deep learning workflows inside containers. To help you get started using NGC in your Bright cluster, we’ve written a technical paper that shows how to run three popular deep learning frameworks (PyTorch, TensorFlow, and MXNet) in Docker containers scheduled by Kubernetes, and in Singularity containers scheduled through an HPC workload manager.
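As a taste of what the paper covers, the commands below sketch how NGC framework images are typically pulled for each runtime. The NGC registry lives at nvcr.io; the `<tag>` placeholders stand in for a specific release tag you would choose from the NGC catalog, and the commands assume Docker and Singularity are already installed on the node.

```shell
# Pull an NGC framework image for use with Docker/Kubernetes.
# Replace <tag> with a release tag from the NGC catalog (e.g. a YY.MM tag).
docker pull nvcr.io/nvidia/pytorch:<tag>

# Pull the same kind of image as a Singularity image file for
# HPC workload-manager jobs; Singularity converts the Docker layers.
singularity pull pytorch.sif docker://nvcr.io/nvidia/tensorflow:<tag>
```

The exact tags, scheduler configuration, and GPU device plumbing are what the white paper walks through step by step.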
Check out the step-by-step instructions and examples in this new technical white paper, and let us know what you think.