PyTorch with Cudo Compute

PyTorch is an open-source machine learning framework. With Cudo Compute you can deploy PyTorch Docker containers to the latest NVIDIA Ampere architecture GPUs, accelerating training and reducing training costs. The Cudo Compute GPU cloud provides VM images with NVIDIA drivers and Docker preinstalled.

Common uses for PyTorch:

  • Deep Neural Networks (DNN)
  • Convolutional Neural Networks (CNN)
  • Conversational AI
  • Recurrent Neural Networks (RNN)
  • Reinforcement Learning
  • Natural Language Processing (NLP)

Prerequisites

  • Create a project and add an SSH key
  • Optionally, download the CLI tool
  • Choose and configure a VM with an NVIDIA GPU
  • Use the Ubuntu 22.04 + NVIDIA drivers + Docker image (with the CLI tool, pass -image ubuntu-nvidia-docker)

Deploy PyTorch to Cudo Compute

SSH into your VM and run the following command to pull and start the official PyTorch container:

docker run --gpus all -it --rm pytorch/pytorch:latest
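
To make your training scripts and data available inside the container, you can also bind-mount a host directory. A minimal sketch, assuming your code lives in the current directory (the /workspace path is an illustrative choice, not something the image requires):

docker run --gpus all -it --rm -v "$PWD":/workspace -w /workspace pytorch/pytorch:latest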

Or, for the NVIDIA-optimised PyTorch container from NGC:

docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:22.08-py3
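
PyTorch data loaders use shared memory between worker processes, and NVIDIA's container documentation recommends raising Docker's default shared-memory limit. One way to do that, sketched here for the same image, is to share the host IPC namespace:

docker run --gpus all -it --rm --ipc=host nvcr.io/nvidia/pytorch:22.08-py3

Alternatively, the --shm-size flag can set an explicit shared-memory limit instead.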

Available NGC tags are listed in the NVIDIA NGC catalog.

At the container prompt, verify that PyTorch can see the GPU (this should print True):

$ python
>>> import torch
>>> print(torch.cuda.is_available())
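
Continuing in the same Python session, a minimal sketch of running work on the GPU, assuming the availability check printed True (the tensor size here is arbitrary):

>>> device = torch.device("cuda")
>>> x = torch.randn(4096, 4096, device=device)  # allocate a tensor directly on the GPU
>>> y = x @ x                                    # matrix multiply runs on the GPU
>>> print(torch.cuda.get_device_name(0))         # show which GPU was used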

Learn more about PyTorch on Cudo Compute, or get started with Cudo Compute.