
GPU Access

Enable NVIDIA GPU access for CUDA, ML inference, and accelerated vision workloads

GPU access is most common on NVIDIA Jetson devices for CUDA, ML inference, image processing, DeepStream, PyTorch, TensorRT, and MLX workloads. WendyOS does not expose GPU devices to apps unless the app declares the gpu entitlement.

Add the entitlement from your project directory:

wendy project entitlements add gpu

Inspect GPU Capabilities

Check the target device version and hardware summary:

wendy device version

For a focused hardware capability view:

wendy device hardware list --category gpu

The output reports GPU-related device paths and properties when the target exposes them.

Common Pairings

GPU apps often also need:

  • camera for computer vision and live inference
  • network for inference APIs or dashboards
  • audio for voice AI pipelines
  • persist for model caches and local output
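A manifest that combines these pairings follows the same schema shown under App Configuration below. An illustrative sketch, assuming the camera, network, audio, and persist entitlements use the same { "type": ... } entry shape as gpu (this page only confirms that shape for gpu):

```json
{
  "appId": "com.example.vision",
  "version": "1.0.0",
  "entitlements": [
    { "type": "gpu" },
    { "type": "camera" },
    { "type": "network" },
    { "type": "persist" }
  ]
}
```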

For camera inference:

wendy project entitlements add camera
wendy project entitlements add gpu

For a networked inference server:

wendy project entitlements add gpu
wendy project entitlements add network --mode host
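Declared directly in the manifest, the --mode flag presumably becomes a property on the network entitlement entry. A sketch under that assumption (the "mode" field name is not confirmed by this page):

```json
{
  "entitlements": [
    { "type": "gpu" },
    { "type": "network", "mode": "host" }
  ]
}
```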

App Configuration

A minimal GPU app manifest looks like this:

{
  "appId": "com.example.gpu",
  "version": "1.0.0",
  "entitlements": [
    { "type": "gpu" }
  ]
}

Then deploy:

wendy run
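Once deployed, the app itself can sanity-check that GPU devices are actually visible. A minimal Python sketch, generic rather than wendy-specific, assuming the gpu entitlement exposes NVIDIA device nodes under /dev (as is typical on Jetson-class hardware):

```python
"""Startup check: confirm NVIDIA device nodes are visible to the app.

Generic sketch; assumes the gpu entitlement surfaces /dev/nvidia* nodes.
"""
import glob
import os


def gpu_device_nodes(dev_dir: str = "/dev") -> list[str]:
    """Return the NVIDIA device nodes (e.g. /dev/nvidia0) found in dev_dir."""
    return sorted(glob.glob(os.path.join(dev_dir, "nvidia*")))


if __name__ == "__main__":
    nodes = gpu_device_nodes()
    if nodes:
        print("GPU devices visible:", ", ".join(nodes))
    else:
        print("No GPU devices found; is the gpu entitlement declared?")
```

If the list comes back empty, re-check that the entitlement was added before the last deploy.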