GPU Access
Enable NVIDIA GPU access for CUDA, ML inference, and accelerated vision workloads
GPU access is most common on NVIDIA Jetson devices, where apps use CUDA for ML inference and image processing with frameworks such as DeepStream, PyTorch, TensorRT, and MLX. WendyOS does not expose GPU devices to apps unless the app declares the gpu entitlement.
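One way for an app to confirm whether the GPU was actually exposed to it is to look for NVIDIA device nodes. This is a hedged sketch, assuming WendyOS surfaces the GPU through standard Linux device nodes such as /dev/nvidia0 (the path convention is an assumption, not documented wendy behavior):

```python
from pathlib import Path

def gpu_nodes(dev_dir: str = "/dev") -> list[str]:
    """List NVIDIA device nodes visible to this process (e.g. /dev/nvidia0).

    An app running without the gpu entitlement should see an empty list,
    assuming the platform exposes the GPU as standard /dev/nvidia* nodes.
    """
    d = Path(dev_dir)
    if not d.is_dir():
        return []
    return sorted(str(p) for p in d.glob("nvidia*"))
```

If this returns an empty list inside your app but the device clearly has a GPU, the usual cause is a missing gpu entitlement rather than a driver problem.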
Add the entitlement from your project directory:
wendy project entitlements add gpu

Inspect GPU Capabilities
Check the target device version and hardware summary:
wendy device version

For a focused hardware capability view:
wendy device hardware list --category gpu

The output reports GPU-related device paths and properties when the target exposes them.
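Beyond the device-level view, an app can sanity-check the CUDA driver from inside its own process. A minimal sketch, assuming a standard NVIDIA driver install that ships libcuda.so.1 (the library name and the cuInit call are standard CUDA Driver API, not wendy-specific):

```python
import ctypes

def cuda_driver_present() -> bool:
    """Return True if the CUDA driver library loads and initializes.

    Assumes the standard NVIDIA driver library name libcuda.so.1;
    cuInit(0) returns CUDA_SUCCESS (0) when the driver and a GPU are usable.
    """
    try:
        libcuda = ctypes.CDLL("libcuda.so.1")
    except OSError:
        # No driver library visible to this process (no GPU, or no entitlement).
        return False
    return libcuda.cuInit(0) == 0
```

This complements the CLI check: the CLI reports what the device has, while this reports what your sandboxed app can actually reach.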
Common Pairings
GPU apps often also need:
- camera for computer vision and live inference
- network for inference APIs or dashboards
- audio for voice AI pipelines
- persist for model caches and local output
For camera inference:
wendy project entitlements add camera
wendy project entitlements add gpu

For a networked inference server:
wendy project entitlements add gpu
wendy project entitlements add network --mode host

App Configuration
A minimal GPU app uses:
{
"appId": "com.example.gpu",
"version": "1.0.0",
"entitlements": [
{ "type": "gpu" }
]
}

Then deploy:
wendy run
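As a concrete sketch of the networked inference server mentioned above, the outline below shows why such an app needs both the gpu and network entitlements: it accepts HTTP requests and would run model inference on the device. Only the HTTP layer is real (Python stdlib); the infer() stub is a hypothetical stand-in for an actual TensorRT or PyTorch call.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def infer(payload: dict) -> dict:
    # Hypothetical placeholder for real GPU inference (TensorRT, PyTorch, ...);
    # here it just reports the input length so the sketch stays self-contained.
    return {"result": len(payload.get("data", []))}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(infer(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep request logging quiet in this sketch.
        pass

def serve(port: int = 8080) -> HTTPServer:
    # With `network --mode host`, the bound port is reachable directly
    # on the device's address; call serve().serve_forever() to run.
    return HTTPServer(("0.0.0.0", port), InferenceHandler)
```

The port number and request shape are illustrative; any HTTP framework works the same way once the network entitlement is in place.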