What are ML Models?
ML Models in Cyberwave are AI models registered in your workspace that can process various inputs: video, images, audio, text, or robot actions. They integrate with workflows and can run in the cloud or on edge devices.
An ML Model defines what the model can do (input types) and where it runs (provider). The actual inference happens through the model provider’s API or on your edge device.
Model Capabilities
Each ML Model specifies what inputs it can process:

| Capability | Description | Example Use Cases |
|---|---|---|
| `can_take_video_as_input` | Process video streams | Surveillance, teleoperation |
| `can_take_image_as_input` | Process single images | Quality inspection, object detection |
| `can_take_audio_as_input` | Process audio data | Voice commands, anomaly detection |
| `can_take_text_as_input` | Process text prompts | Natural language commands |
| `can_take_action_as_input` | Process robot actions | Behavior cloning, RL policies |
Registering a Model
Via the SDK
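A sketch of registering a model from code. The `client.ml_models.create` call, field names, and client shape are assumptions, not the documented SDK API; the capability flags mirror the table above.

```python
# Hedged sketch: build a registration payload for a new ML Model.
# NOTE: the client call at the bottom is hypothetical -- check the SDK
# reference for the real method and field names.

CAPABILITY_FLAGS = (
    "can_take_video_as_input",
    "can_take_image_as_input",
    "can_take_audio_as_input",
    "can_take_text_as_input",
    "can_take_action_as_input",
)

def build_model_registration(name, external_id, provider,
                             capabilities, description=""):
    """Assemble the fields a model registration needs."""
    payload = {
        "name": name,
        "description": description,
        "external_id": external_id,  # model identifier at the provider
        "provider": provider,        # e.g. "openai", "local", "huggingface"
    }
    # Capability flags default to False unless explicitly enabled.
    for flag in CAPABILITY_FLAGS:
        payload[flag] = flag in capabilities
    return payload

payload = build_model_registration(
    name="vision-inspector",
    external_id="gpt-4o",
    provider="openai",
    capabilities={"can_take_image_as_input", "can_take_text_as_input"},
)
# client.ml_models.create(**payload)   # hypothetical client call
```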
Via the Dashboard
- Navigate to ML Models in your workspace
- Click Add Model
- Fill in the model details:
  - Name and description
  - External ID (model identifier for the provider)
  - Provider name (e.g., “openai”, “local”, “huggingface”)
  - Input capabilities
- Click Create
Model Providers
Models can run through different providers:
Local / Edge
Run on your edge devices using ONNX, TensorRT, or custom inference
Cloud APIs
Use OpenAI, Anthropic, or other cloud AI services
Hugging Face
Deploy models from Hugging Face Hub
Custom
Your own inference servers and endpoints
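The provider string determines where inference runs. The routing helper below is purely illustrative (it is not part of the SDK); it restates the provider categories above, with anything unrecognized treated as a custom endpoint.

```python
# Illustrative routing: map a model's provider string to where its
# inference requests go. Provider names mirror the examples in this page.

EDGE_PROVIDERS = {"local"}
CLOUD_PROVIDERS = {"openai", "anthropic", "huggingface"}

def inference_target(provider: str) -> str:
    """Classify where a model's inference requests should be routed."""
    if provider in EDGE_PROVIDERS:
        return "edge-device"       # runs on your edge hardware
    if provider in CLOUD_PROVIDERS:
        return "cloud-api"         # routed to the provider's API
    return "custom-endpoint"       # your own inference servers
```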
Using Models in Workflows
ML Models integrate with workflow nodes for automated processing:
Example: Vision-Language Model
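A hedged sketch of registering a vision-language model (VLM) that accepts both camera images and text prompts, so workflow nodes can send it natural language commands. The field names mirror the capability flags above; the client call itself is an assumption.

```python
# Hypothetical VLM registration record for natural language robot control.
# The capability flags are the documented ones; the client call is not.

vlm = {
    "name": "robot-vlm",
    "description": "Vision-language model for natural language robot control",
    "external_id": "gpt-4o",          # provider-side model identifier
    "provider": "openai",
    "can_take_image_as_input": True,  # camera frames from the robot
    "can_take_text_as_input": True,   # operator's natural language command
}
# model = client.ml_models.create(**vlm)   # hypothetical client call
```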
Register a VLM for natural language robot control.
Listing Models
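A sketch of listing models and filtering them by capability. The records below mimic what a hypothetical `client.ml_models.list()` might return; the filter itself is plain Python.

```python
# Filter model records by a capability flag. `client.ml_models.list()`
# is an assumed call; here we work on a plain list of records.

def models_with_capability(models, capability):
    """Return the models whose record enables the given capability flag."""
    return [m for m in models if m.get(capability)]

# models = client.ml_models.list()   # hypothetical client call
models = [
    {"name": "robot-vlm", "can_take_image_as_input": True},
    {"name": "audio-anomaly", "can_take_audio_as_input": True},
]
image_models = models_with_capability(models, "can_take_image_as_input")
```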
Model Visibility
| Visibility | Who Can Access |
|---|---|
| `private` | Only your workspace members |
| `workspace` | All workspace members |
| `public` | Anyone (admin-only to create) |
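The creation rule from the table, restated as a check. This mirrors the table only and is not an SDK function.

```python
# Anyone may create private/workspace models; public models are
# admin-only to create (per the visibility table above).

def creation_allowed(visibility: str, is_admin: bool) -> bool:
    """Return True if a user with the given role may create the model."""
    return is_admin or visibility != "public"
```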
Running Inference
Cloud Models
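As a sketch of what a cloud inference request might contain: the payload is built from whichever inputs the model accepts, and the `client.ml_models.infer` call shown is an assumption, not the documented API.

```python
# Hedged sketch of a cloud inference request. Cyberwave forwards the
# request to the provider's API (e.g. OpenAI); the client call below
# is hypothetical.

def build_inference_request(model_external_id, text=None, image_bytes=None):
    """Assemble an inference request from the inputs the model accepts."""
    request = {"model": model_external_id, "inputs": {}}
    if text is not None:
        request["inputs"]["text"] = text
    if image_bytes is not None:
        request["inputs"]["image"] = image_bytes  # e.g. a JPEG frame
    return request

req = build_inference_request("gpt-4o", text="Pick up the red block")
# result = client.ml_models.infer(**req)   # hypothetical client call
```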
For cloud-based models, Cyberwave routes requests to the provider.
Edge Models
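A local inference sketch using ONNX Runtime, one of the supported local runtimes. The model path, input layout, and preprocessing are placeholders: the sketch assumes an image model that takes a normalized `1xCxHxW` float32 batch.

```python
# Edge inference sketch with ONNX Runtime. Adjust the preprocessing to
# match your own model's expected input layout.

def preprocess(frame):
    """Normalize an HxWxC uint8 frame to a 1xCxHxW float32 batch."""
    import numpy as np
    x = np.asarray(frame, dtype="float32") / 255.0  # scale to [0, 1]
    x = np.transpose(x, (2, 0, 1))                  # HWC -> CHW
    return x[None, ...]                             # add batch dimension

def run_local_inference(model_path, frame):
    """Run one frame through a local ONNX model."""
    import onnxruntime as ort
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: preprocess(frame)})

# outputs = run_local_inference("model.onnx", camera_frame)
```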
For local models, run inference on your edge device.
Next Steps
Workflows
Use ML Models in automated workflows
Edge Devices
Run models on edge hardware