
# What are ML Models?

ML Models in Cyberwave are AI models registered in your workspace that can process various inputs — video, images, audio, text, or robot actions. They integrate with workflows and can run in the cloud or on edge devices.
An ML Model record defines what the model can do (its input capabilities) and where it runs (its provider). The actual inference happens through the provider's API or on your edge device.

## Model Capabilities

Each ML Model specifies what inputs it can process:
| Capability | Description | Example Use Cases |
| --- | --- | --- |
| `can_take_video_as_input` | Process video streams | Surveillance, teleoperation |
| `can_take_image_as_input` | Process single images | Quality inspection, object detection |
| `can_take_audio_as_input` | Process audio data | Voice commands, anomaly detection |
| `can_take_text_as_input` | Process text prompts | Natural language commands |
| `can_take_action_as_input` | Process robot actions | Behavior cloning, RL policies |
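These boolean capability flags can drive simple model selection, for example finding every registered model that covers a set of required input types. A minimal sketch in plain Python (the dict-based registry and the `models_matching` helper are illustrative, not part of the Cyberwave SDK; only the flag names come from the table above):

```python
def models_matching(models, required):
    """Return models whose capability flags cover every required input type."""
    return [
        m for m in models
        if all(m.get(f"can_take_{r}_as_input", False) for r in required)
    ]

# Hypothetical registry entries using the capability flags from the table
registry = [
    {"name": "yolo-edge", "can_take_image_as_input": True,
     "can_take_video_as_input": True},
    {"name": "gpt-4o", "can_take_image_as_input": True,
     "can_take_text_as_input": True},
]

# Which models can handle both image and text inputs?
print([m["name"] for m in models_matching(registry, ["image", "text"])])
# ['gpt-4o']
```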

## Model Providers

Models can run through different providers:

- **Local / Edge**: Run on your edge devices using ONNX, TensorRT, or custom inference
- **Cloud APIs**: Use OpenAI, Anthropic, or other cloud AI services
- **Hugging Face**: Deploy models from Hugging Face Hub
- **Custom**: Your own inference servers and endpoints
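The provider name decides where a request is routed. A hypothetical dispatch sketch (the provider names follow the list above; the routing function itself is illustrative and not the Cyberwave implementation):

```python
EDGE_PROVIDERS = {"local"}
CLOUD_PROVIDERS = {"openai", "anthropic", "huggingface"}

def route(provider_name: str) -> str:
    """Decide whether inference runs on-device or via a remote API."""
    name = provider_name.lower()
    if name in EDGE_PROVIDERS:
        return "edge"    # ONNX / TensorRT / custom runtime on the device
    if name in CLOUD_PROVIDERS:
        return "cloud"   # request forwarded to the provider's API
    return "custom"      # your own inference server or endpoint

print(route("openai"))  # cloud
```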

## Registering a Model

1. **Navigate**: Go to **ML Models** in your workspace.
2. **Add model**: Click **Add Model**.
3. **Configure**: Fill in the model details: name, description, external ID (the model identifier used by the provider), provider name (e.g. `openai`, `local`, `huggingface`), and input capabilities.
4. **Create**: Click **Create**.
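The fields from step 3 map to a model record. A sketch of such a record with a lightweight validity check (field names mirror the listing example at the end of this page; the `is_valid` helper and the example values are illustrative, not part of the SDK):

```python
model_spec = {
    "name": "defect-detector",
    "description": "Detects surface defects on machined parts",
    "external_id": "yolov8n-defects",   # model identifier for the provider
    "model_provider_name": "local",     # e.g. openai, local, huggingface
    "can_take_image_as_input": True,
    "can_take_video_as_input": False,
    "can_take_text_as_input": False,
}

def is_valid(spec):
    """A model record needs a name, a provider, and at least one capability."""
    has_capability = any(v for k, v in spec.items() if k.startswith("can_take_"))
    return bool(spec.get("name")) and bool(spec.get("model_provider_name")) and has_capability

print(is_valid(model_spec))  # True
```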

## Model Visibility

| Visibility | Who Can Access |
| --- | --- |
| `private` | Only your workspace members |
| `workspace` | All workspace members |
| `public` | Anyone (admin-only to create) |
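The access rules above can be expressed as a small check. A sketch assuming a simple workspace-membership model (this is not the actual Cyberwave authorization code):

```python
def can_access(visibility: str, is_workspace_member: bool) -> bool:
    """Return True if a user may see a model with the given visibility."""
    if visibility == "public":
        return True                      # anyone can access
    if visibility in ("private", "workspace"):
        return is_workspace_member       # gated by workspace membership
    raise ValueError(f"unknown visibility: {visibility}")

print(can_access("public", is_workspace_member=False))  # True
```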

## Using Models in Workflows

ML Models integrate with workflow nodes for automated processing:
```
┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│ Capture     │───▶│ ML Model:   │───▶│ Condition:  │
│ Image       │    │ Detect      │    │ Found?      │
└─────────────┘    │ Objects     │    └─────────────┘
                   └─────────────┘           │
                                     ┌───────┴───────┐
                                     ▼               ▼
                              ┌───────────┐   ┌───────────┐
                              │ Pick      │   │ Alert     │
                              │ Object    │   │ Operator  │
                              └───────────┘   └───────────┘
```
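The branching workflow above can be sketched as plain control flow, with stub functions standing in for the real workflow nodes (all names here are illustrative):

```python
def capture_image():
    return "frame-001"                 # stub: would grab a camera frame

def detect_objects(image):
    return ["bolt", "washer"]          # stub: would call the ML Model

def pick_object(obj):
    return f"picking {obj}"            # stub: would command the robot

def alert_operator():
    return "no objects found, alerting operator"

image = capture_image()
detections = detect_objects(image)
if detections:                         # Condition: Found?
    result = pick_object(detections[0])
else:
    result = alert_operator()
print(result)  # picking bolt
```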

## Running Inference

For cloud-based models, Cyberwave routes requests to the provider:
```python
# `cw` is an authenticated Cyberwave client and `model` a registered ML Model
# (see "Listing Models" below).
response = cw.api.vlm_generation({
    "model_uuid": model.uuid,
    "prompt": "What objects do you see? How should the robot pick them up?",
    "image_url": "https://..."
})
```

## Listing Models

```python
from cyberwave import Cyberwave

cw = Cyberwave(api_key="your_api_key")

models = cw.api.list_mlmodels()

for model in models:
    print(f"{model.name} ({model.model_provider_name})")
    print(f"  Video: {model.can_take_video_as_input}")
    print(f"  Image: {model.can_take_image_as_input}")
    print(f"  Text: {model.can_take_text_as_input}")
```