# LLM Integration

Configure AI assistants like Claude, ChatGPT, and Copilot to understand and use the Wendy CLI.
## Using LLMs with the Wendy CLI
Large Language Models (LLMs) like Claude, ChatGPT, GitHub Copilot, and Cursor are transforming how developers interact with command-line tools. However, these AI assistants may not have been trained on Wendy CLI commands, especially newer features.
The `--experimental-dump-help` flag solves this by providing a machine-readable JSON dump of all Wendy CLI commands, parameters, and documentation that LLMs can parse and understand.
## The Problem
When you ask an AI assistant to help you with Wendy CLI commands, it might:
- Not know about Wendy at all
- Hallucinate commands that don't exist
- Miss important flags or options
- Use outdated syntax
## The Solution

Run `wendy --experimental-dump-help` to generate a complete, structured reference of all CLI commands:

```bash
wendy --experimental-dump-help
```

This outputs a JSON structure containing:
- All available commands and subcommands
- Command descriptions and documentation
- All flags, options, and positional arguments
- Parameter types and default values
- Whether parameters are required or optional
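This dump is meant to be consumed programmatically. As a rough sketch, the following Python snippet walks a dump and lists each subcommand with its long-form flags. The embedded JSON is a hypothetical miniature of the real output; in practice you would load the actual output of `wendy --experimental-dump-help` instead.

```python
import json

# Hypothetical miniature of a dump; a real one would come from running
# `wendy --experimental-dump-help` and capturing stdout.
dump = json.loads("""
{
  "command": {
    "commandName": "wendy",
    "subcommands": [
      {
        "commandName": "run",
        "abstract": "Run Wendy projects.",
        "arguments": [
          {"kind": "flag",
           "abstract": "Attach a debugger to the container",
           "names": [{"kind": "long", "name": "debug"}]}
        ]
      }
    ]
  }
}
""")

# List every subcommand and its long-form flags.
for sub in dump["command"]["subcommands"]:
    flags = [
        "--" + name["name"]
        for arg in sub.get("arguments", [])
        if arg.get("kind") == "flag"
        for name in arg.get("names", [])
        if name.get("kind") == "long"
    ]
    print(f"{sub['commandName']}: {', '.join(flags)}")
```

An LLM can apply the same traversal mentally once it has seen the dump, which is why the structured format works better than free-form `--help` text.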
## Example Output

```json
{
  "command": {
    "abstract": "Wendy CLI",
    "commandName": "wendy",
    "subcommands": [
      {
        "abstract": "Run Wendy projects.",
        "commandName": "run",
        "arguments": [
          {
            "abstract": "Attach a debugger to the container",
            "kind": "flag",
            "names": [{"kind": "long", "name": "debug"}]
          },
          {
            "abstract": "Run the container in the background",
            "kind": "flag",
            "names": [{"kind": "long", "name": "detach"}]
          }
        ]
      }
    ]
  }
}
```

## Setting Up Your Project for LLMs
To help AI assistants understand and use Wendy effectively in your project, create an `AGENTS.md` or `CLAUDE.md` file in your project root.
### Creating CLAUDE.md
Create a CLAUDE.md file that instructs Claude Code (and other AI assistants) to learn about Wendy:
````markdown
# Wendy CLI Project

This project uses the Wendy CLI to deploy applications to WendyOS devices.

## Learning About Wendy

Before helping with Wendy commands, run this to learn all available commands:

```bash
wendy --experimental-dump-help
```

This outputs a JSON structure with all commands, flags, and documentation.

## Common Tasks

- Run an app: `wendy run`
- Discover devices: `wendy discover`
- Check device status: `wendy device version`
- Update agent: `wendy device update`
- Configure WiFi: `wendy wifi connect`
````

### Creating AGENTS.md
For broader AI assistant compatibility, create an AGENTS.md file:
````markdown
# AI Assistant Instructions

## Wendy CLI

This project deploys to WendyOS devices using the Wendy CLI.

To understand all available Wendy commands and their options, run:

```bash
wendy --experimental-dump-help
```

Parse the JSON output to learn:

- All subcommands (run, discover, device, wifi, apps, etc.)
- Available flags and options for each command
- Required vs optional parameters
- Parameter descriptions and defaults

## Device Information

- Target device hostname: wendyos-<device-name>.local
- Default agent port: 50051
- Debug ports: 4242 (Swift/LLDB), 5678 (Python/debugpy)
````

## Best Practices
### 1. Include Device-Specific Context
Add your device hostname and any custom configuration:
```markdown
## Device Configuration

- Device hostname: `wendyos-humble-pepper.local`
- Connection: USB or LAN
- Target platform: NVIDIA Jetson Orin Nano
```

### 2. Document Your Project Type
Help the LLM understand what kind of project this is:
````markdown
## Project Type

This is a Swift application targeting WendyOS.

Build and run with:

```bash
wendy run
```

Debug with:

```bash
wendy run --debug
```
````

### 3. Reference the Dump Help Command
Always remind the LLM it can refresh its knowledge:
````markdown
## CLI Reference

If you're unsure about a Wendy command or its options, run:

```bash
wendy --experimental-dump-help | jq '.command.subcommands[] | select(.commandName == "COMMAND_NAME")'
```
````

Replace `COMMAND_NAME` with the command you want to learn about.

## Using with Different AI Tools
### Claude Code

Claude Code automatically reads `CLAUDE.md` files. Place the file in:

- Your project root for project-specific instructions
- `~/.claude/CLAUDE.md` for global instructions
### GitHub Copilot
Copilot reads context from open files. Keep your `AGENTS.md` open or reference it in comments.
### Cursor

Cursor supports `.cursorrules` files. You can include Wendy instructions there:

```text
When working with Wendy CLI commands, first run:

wendy --experimental-dump-help

Use this output to understand available commands and their parameters.
```

### ChatGPT / Other LLMs
When starting a conversation, paste the output of `wendy --experimental-dump-help` or instruct the LLM to request it.
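Because the full dump can be too large to paste into a chat, one option is to paste a condensed summary instead. The following Python sketch reduces a dump to one `command: abstract` line per command; the `summarize` helper and the sample input are illustrative, not part of the CLI.

```python
import json

def summarize(dump: dict) -> str:
    """Reduce a --experimental-dump-help dump to 'command: abstract' lines."""
    lines = []
    def walk(cmd, prefix=""):
        name = (prefix + " " + cmd["commandName"]).strip()
        lines.append(f"{name}: {cmd.get('abstract', '')}")
        for sub in cmd.get("subcommands", []):
            walk(sub, name)
        return lines
    return "\n".join(walk(dump["command"]))

# Illustrative miniature input; in practice, load the JSON printed by
# `wendy --experimental-dump-help`.
sample = {"command": {"commandName": "wendy", "abstract": "Wendy CLI",
                      "subcommands": [{"commandName": "run",
                                       "abstract": "Run Wendy projects."}]}}
print(summarize(sample))
```

A summary like this fits comfortably in a chat context, and the LLM can ask for the full entry of a specific command when it needs flag-level detail.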
## Filtering the Output

The full dump can be large (~4700 lines). Use `jq` to filter for specific commands:

```bash
# Get all top-level subcommands
wendy --experimental-dump-help | jq '.command.subcommands[].commandName'

# Get details for the "run" command
wendy --experimental-dump-help | jq '.command.subcommands[] | select(.commandName == "run")'

# Get all flags for the "device" command
wendy --experimental-dump-help | jq '.command.subcommands[] | select(.commandName == "device") | .subcommands'
```

## Example: Complete CLAUDE.md
Here's a complete example for a WendyOS project:
````markdown
# WendyOS Voice Assistant Project

## About This Project

A voice assistant application running on NVIDIA Jetson Orin Nano using WendyOS.

## Wendy CLI

To learn all available Wendy commands, run:

```bash
wendy --experimental-dump-help
```

### Quick Reference

| Task | Command |
|------|---------|
| Run app | `wendy run` |
| Run with debug | `wendy run --debug` |
| Run detached | `wendy run --detach` |
| Discover devices | `wendy discover` |
| List apps | `wendy device apps list` |
| Stop app | `wendy device apps stop <name>` |
| Update agent | `wendy device update` |
| Connect WiFi | `wendy wifi connect` |

## Device Configuration

- Hostname: `wendyos-humble-pepper.local`
- Platform: NVIDIA Jetson Orin Nano
- Connection: USB-C

## Development Workflow

1. Connect device via USB
2. Run `wendy discover` to verify connection
3. Run `wendy run` to build and deploy
4. View logs in terminal
5. Use `wendy run --debug` for breakpoint debugging
````

## Next Steps
- Discover your devices
- Set up the VSCode extension for integrated debugging
- Manage apps on your device