## What are Workflows?

Workflows in Cyberwave let you create automated sequences of robot operations. Connect nodes visually to build complex behaviors without writing procedural code. Workflows run either in the cloud (as Celery tasks) or on the edge device, depending on the trigger type. Cloud triggers handle schedule, webhook, event, and manual execution. The `camera_frame` trigger runs ML inference directly on the edge — no video leaves the device.
## Workflow Components

### Nodes

Nodes are the building blocks of workflows. Each node performs a specific action:

- **Trigger Nodes**: Start the workflow via manual, schedule, webhook, event, MQTT, email, or `camera_frame` (edge) triggers
- **Call Model Nodes**: Run ML inference — cloud VLM/LLM or edge-local object detection
- **Twin Nodes**: Control digital twin position, rotation, and state
- **Joint Nodes**: Set individual joint positions or run trajectories
- **Condition Nodes**: Branch based on sensor data, twin state, or model results
- **Delay Nodes**: Add timing between operations
### Connections

Connections define the execution flow between nodes:

- Sequential: Execute nodes one after another
- Parallel: Execute multiple nodes simultaneously
- Conditional: Branch based on conditions

Connection validation prevents invalid graphs: trigger nodes cannot accept incoming connections, cycles are blocked, and `camera_frame` triggers can only connect to `call_model` nodes.

### Trigger Types
| Trigger | Where it runs | How it fires |
|---|---|---|
| Manual | Cloud (Celery) | User clicks “Run” in the UI or triggers via SDK/API |
| Schedule | Cloud (Celery) | Cron or interval timer |
| Webhook | Cloud (Celery) | HTTP POST to a webhook URL |
| Event | Cloud (Celery) | Business event matching conditions |
| MQTT | Cloud (Celery) | MQTT message on a topic |
| Email | Cloud (Celery) | Incoming email |
| Camera Frame | Edge device | Every camera frame, locally — never sends video to the cloud |
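
For instance, a webhook-triggered workflow can be fired from any external system with a plain HTTP POST. A minimal sketch, assuming a placeholder webhook URL (copy the real one from the workflow's webhook trigger node in the dashboard):

```python
import requests

# Hypothetical webhook URL: the real one comes from the workflow's
# webhook trigger node configuration.
WEBHOOK_URL = "https://api.cyberwave.com/webhooks/<workflow-webhook-id>"

# The POST body is passed to the workflow as trigger input.
response = requests.post(
    WEBHOOK_URL,
    json={"source": "warehouse-scanner", "item_id": "A-1042"},
    timeout=10,
)
response.raise_for_status()
print("Workflow triggered:", response.status_code)
```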
## Creating a Workflow

You can create workflows from the Dashboard, the CLI, or the Python SDK. In the dashboard:

1. **Create**: Click **Create Workflow**. Give it a name, an optional slug (unique within the workspace), and a visibility setting.
2. **Build**: Drag nodes from the palette onto the canvas. Connect nodes by dragging from output ports to input ports.
3. **Configure**: Set each node's parameters (twin UUID, model, confidence threshold, `emit_event` settings, etc.).
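
From the Python SDK, the same create/build/configure flow can be scripted. A minimal sketch; the client class and method names below (`Cyberwave`, `create_workflow`, `add_node`, `connect`) are illustrative assumptions, not confirmed SDK API, so check the SDK reference for the real signatures:

```python
# Illustrative sketch only: class and method names are assumed,
# not confirmed SDK API.
from cyberwave import Cyberwave

client = Cyberwave(api_key="...")

# Create: name, optional slug (unique within the workspace), visibility.
wf = client.create_workflow(
    name="Pick and place",
    slug="pick-and-place",
    visibility="private",
)

# Build: add nodes and connect output ports to input ports.
trigger = wf.add_node("trigger", trigger_type="manual")
move = wf.add_node("joint", joints={"shoulder": 0.5, "elbow": -1.2})
wf.connect(trigger, move)  # sequential connection

# Configure and activate.
wf.activate()
```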
## Edge Workflows (Camera Frame)

The `camera_frame` trigger is designed for on-device ML inference. The backend generates a Python worker file that runs directly on the edge — raw video frames never leave the device.
### How it works

1. You build and activate a `camera_frame` workflow in the cloud.
2. The backend generates a Python worker file from the workflow graph at codegen time.
3. The edge device pulls the worker file (on boot, periodically, or on a `sync_workflows` push) and runs it against every camera frame locally.
4. Detection events are published back to the cloud according to the node's `emit_event` settings; the video itself never leaves the device.
### `emit_event` configuration

The `call_model` node's `emit_event` parameter controls when detection events are published back to the cloud. All settings are baked into the generated worker at codegen time.
| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | bool | `true` | Set to `false` to run inference without publishing events |
| `event_type` | string | `"detection"` | Business event type published to the cloud |
| `severity` | string | `"INFO"` | Event severity (`INFO`, `WARNING`, `CRITICAL`) |
| `emit_mode` | string | `"always"` | Controls when events fire (see below) |
| `cooldown_seconds` | float | `5.0` | Minimum seconds between consecutive events |
- `always` — publishes for every detection, subject to cooldown. Skips frames with no results.
- `on_enter` — publishes only when new object classes appear that were not in the previous frame. Useful for “person entered the zone” alerts.
- `on_change` — publishes only when the detection count changes. Useful for occupancy tracking.
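
For example, an `emit_event` block for a low-noise intrusion alert could look like the sketch below. The `emit_event` fields match the table above; the surrounding node-config shape and the `model`/`confidence_threshold` keys are illustrative assumptions:

```python
# emit_event settings for a call_model node; baked into the generated
# worker at codegen time. The enclosing structure is illustrative.
call_model_config = {
    "model": "yolo",                      # edge-local object detection
    "confidence_threshold": 0.6,
    "emit_event": {
        "enabled": True,
        "event_type": "person_detected",  # business event type
        "severity": "WARNING",
        "emit_mode": "on_enter",          # fire once when a new class appears
        "cooldown_seconds": 10.0,         # at most one event per 10 s
    },
}
```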
### Syncing to the edge

After activating a workflow in the UI, push it to edge devices with the `sync_workflows` command, sent via MQTT. The edge core receives the command and immediately pulls the latest worker files from the backend — no need to wait for the periodic sync cycle.
The edge device also syncs automatically on boot and periodically (default ~5 min, configurable via `CYBERWAVE_WORKER_SYNC_INTERVAL_LOOPS`).
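
Under the hood the push is just an MQTT message. Purely as an illustration of that mechanism, here is a sketch using `paho-mqtt` with an assumed topic and payload shape; the real command schema is internal to Cyberwave, and in practice the UI sends this command for you:

```python
import json
import paho.mqtt.publish as publish

# Hypothetical topic and payload: illustrates the MQTT push mechanism
# only, not the actual Cyberwave command schema.
publish.single(
    topic="cyberwave/devices/<device-id>/commands",
    payload=json.dumps({"command": "sync_workflows"}),
    hostname="mqtt.example.com",
)
```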
## Execution Modes

Workflows can be triggered by:

| Trigger | Description |
|---|---|
| Manual | Run on demand from the dashboard or SDK |
| Schedule | Run at specific times (cron) |
| Events | Run when sensor data matches conditions |
| API | Trigger from external systems via REST or MCP |
| Camera Frame | Run on every camera frame at the edge device |
## Monitoring Executions

Track workflow execution status and results: each execution record includes `started_at`, `finished_at`, and `error_message` fields.
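
A sketch of inspecting recent executions from the Python SDK, reusing the hypothetical client from the creation sketch; `get_workflow`, `list_runs`, and the run attributes other than the documented `started_at`, `finished_at`, and `error_message` fields are assumed names:

```python
# Illustrative sketch: method names are assumed, not confirmed SDK API.
wf = client.get_workflow("pick-and-place")

for run in wf.list_runs(limit=10):
    # Documented active statuses are requested, waiting, and running;
    # terminal status names here are assumptions.
    print(run.id, run.status, run.started_at, run.finished_at)
    if run.error_message:
        print("  failed with:", run.error_message)
```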
### Check if a Workflow is Running

Use `is_running()` to quickly check whether a workflow has any active execution without manually querying runs. It returns `True` when any run has status `running`, `waiting`, or `requested`.
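
For example, guarding a manual trigger behind the check (only `is_running()` and the status values are documented; `get_workflow` and `trigger` are assumed names):

```python
wf = client.get_workflow("pick-and-place")

# is_running() is True while any run is running, waiting, or requested.
if wf.is_running():
    print("Workflow has an active execution; skipping new trigger.")
else:
    wf.trigger()  # assumed manual-trigger method name
```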
In the dashboard, a **Running** indicator appears next to the **Active** badge in the workflow editor header whenever the workflow has an active execution. It refreshes automatically every 2 seconds.
## Example: Edge Detection Workflow

A `camera_frame` workflow that runs YOLO on the edge and emits alerts:
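
The exact workflow schema is not shown on this page, so the graph below is a sketch built from the concepts documented above; the node types, `emit_event` fields, and the camera_frame-to-call_model connection rule are from this page, while the surrounding JSON-like structure is illustrative:

```python
# Illustrative workflow graph: a camera_frame trigger feeding a
# call_model node that runs YOLO locally and emits on_enter alerts.
workflow = {
    "name": "Alert on person in zone A",
    "nodes": [
        {
            "id": "frames",
            "type": "trigger",
            "trigger_type": "camera_frame",   # runs on the edge device
        },
        {
            "id": "detect",
            "type": "call_model",
            "model": "yolo",                  # edge-local object detection
            "confidence_threshold": 0.5,
            "emit_event": {
                "enabled": True,
                "event_type": "person_in_zone_a",
                "severity": "WARNING",
                "emit_mode": "on_enter",      # alert once per entry
                "cooldown_seconds": 5.0,
            },
        },
    ],
    # camera_frame triggers may only connect to call_model nodes.
    "connections": [{"from": "frames", "to": "detect"}],
}
```

## Best Practices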
- Keep workflows focused — create separate workflows for distinct operations rather than one large workflow. This makes debugging and maintenance easier.
- Add error handling — include condition nodes to handle failure cases gracefully. Consider what should happen if a joint can’t reach its target.
- Use meaningful names — name nodes and workflows descriptively. “Alert on person in zone A” is better than “Node 1”.
- Use `on_enter` emit mode for alerts — avoids flooding with repeated events while the same object stays in frame.
- Set appropriate cooldowns — balance between responsiveness and event volume. 5s is a safe default; lower it for time-critical use cases.
- Eject before customising — never edit `wf_*.py` files directly. Copy them to a custom name and deactivate the originating workflow.