
How to Request Gear GPU Compute

The standard configuration is to request a dynamic engine with a particular NVIDIA GPU and to use tag-based engine routing: users add the tag 'gpu' to a job to route it to the GPU engine.

Info

NVIDIA GPU compute is currently available only on AWS production sites. GCP and Azure support is under long-term consideration.

Details of this compute option

  • Allows gears to leverage GPU compute for AI use cases such as inference, image preprocessing, and training/fine-tuning with popular frameworks such as PyTorch or TensorFlow.
  • The latest NVIDIA driver will be installed on the gear compute, ensuring compatibility with gears that contain the latest NVIDIA CUDA runtimes. See Determining GPU Driver.
  • Once configured, users can add a gear tag (e.g. 'gpu') when they launch the gear, and the job will be routed to a compute node with a GPU.
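The tag-based routing above can be sketched with the Flywheel Python SDK. This is a hypothetical example: the gear path, destination path, and the `tags` keyword on `gear.run` are assumptions to confirm against your site's SDK version.

```python
"""Sketch: launch a gear job routed to GPU compute via a job tag."""

GPU_TAG = "gpu"  # default routing tag; your site may configure a different one


def job_tags(extra_tags=None, gpu_tag=GPU_TAG):
    """Build the tag list that routes a job to the GPU engine,
    keeping any additional user tags and avoiding duplicates."""
    tags = [gpu_tag]
    if extra_tags:
        tags.extend(t for t in extra_tags if t != gpu_tag)
    return tags


def launch_gpu_job(api_key, gear_path, destination_path):
    """Look up a gear and run it with the GPU routing tag.

    Assumes the Flywheel Python SDK (`pip install flywheel-sdk`); passing
    `tags` to `gear.run` is an assumption -- check the SDK docs for your
    release before relying on it.
    """
    import flywheel  # imported lazily so the pure helper above stays importable

    fw = flywheel.Client(api_key)
    gear = fw.lookup(f"gears/{gear_path}")      # hypothetical gear path
    dest = fw.lookup(destination_path)          # e.g. "group/project/subject/session"
    return gear.run(destination=dest, tags=job_tags())
```

Alternatively, the same tag can be added in the web UI's gear-run dialog; either way, the engine routes any job carrying the configured tag to the GPU compute.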

Request info

Site Admins should submit a Support Request and provide the items below.

  • The type of GPU you would like to add. Please provide a primary and an alternative option, since the first choice may not be available from the cloud vendor. Only one GPU type is supported.
  • The amount of GPU RAM you need; this will help us find an appropriate GPU for your use case.
  • The CPU and number of cores (this can be a SKU or family provided by the cloud vendor).
  • The amount of system RAM desired.
  • The name of the gear tag, if you want something other than "gpu".

The Flywheel support team will then reach out with the available options for you to confirm, and will then make the change.

Contact Support

Site Admins should contact Flywheel support by submitting a ticket or emailing us at support@flywheel.io.