Workspaces and Projects
Workspaces
A workspace is your team’s container for all robotics projects, environments, and resources. It provides:
- Centralized authentication and API access
- Team collaboration and permissions
- Resource sharing across projects
- Billing and usage tracking
Projects
Projects organize related robotics work within a workspace. Each project contains:
- Environments: 3D spaces where your robots operate
- Twins: Digital representations of your physical robots
- Simulations: Physics-based testing and validation scenarios
Environments
An environment is a 3D space where your digital twins exist and interact. It can represent:
- Physical facilities (warehouse, factory floor, outdoor space)
- Testing scenarios
- Simulation environments
Environment Editor
The Environment Editor allows you to:
- Build 3D scenes with a drag-and-drop interface
- Add digital twins from the catalog or custom assets
- Place sensors and cameras for data collection
- Export to simulation (e.g., MuJoCo) with one click
- Sync with real devices for live monitoring
Environments support both Edit Mode (for designing and configuring) and Live Mode (for real-time operation and monitoring).
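The editor is a visual tool, but the same steps can be pictured programmatically. The sketch below is hypothetical: the client class, method names, and parameters are assumptions for illustration, not documented SDK calls.

```python
from cyberwave import Cyberwave  # hypothetical SDK import

client = Cyberwave(api_key="...")  # see Authentication below

# Create an environment in a project and place a catalog twin in it
# (all method names here are illustrative assumptions).
env = client.create_environment(project="warehouse-demo", name="pick-cell")
arm = env.add_twin("the-robot-studio/so101", position=(0.0, 0.0, 0.0))

# Export the scene to a MuJoCo model for physics-based simulation.
env.export(format="mujoco", path="pick_cell.xml")
```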
Digital Twins
A digital twin is a virtual replica of a physical robot that mirrors its behavior, capabilities, and state in real time. It serves as a bridge between the physical and digital worlds.
Key Components
- 3D Model: Accurate geometric representation
- Physics Simulation: Realistic movement and dynamics
- Sensor Integration: Virtual sensors matching physical setup
- Real-time Sync: Bidirectional communication with physical robots
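Cyberwave exposes MQTT connections (see Authentication), so a state-sync subscriber for a twin might look like the sketch below. The broker address, topic layout, and payload fields are illustrative assumptions, and the snippet assumes paho-mqtt 2.x.

```python
import json

import paho.mqtt.client as mqtt  # assumes paho-mqtt >= 2.0

BROKER = "mqtt.example.com"          # placeholder broker address
STATE_TOPIC = "twins/so101/state"    # hypothetical topic layout

def on_message(client, userdata, message):
    # The physical robot publishes its state; the digital twin mirrors it.
    state = json.loads(message.payload)
    print("joint positions:", state["joint_positions"])

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(STATE_TOPIC)
client.loop_forever()
```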
Asset Catalog
The Asset Catalog is Cyberwave’s library of pre-configured robot assets. Each asset includes:
- Registry ID: Unique identifier in vendor/model format (e.g., the-robot-studio/so101)
- URDF File: Robot description with joints and links
- 3D Model: GLB mesh for visualization
- Metadata: Category, manufacturer, version, capabilities
- Capabilities: Sensors, grippers, flight control, etc.
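A hedged sketch of resolving a catalog entry by its registry ID. The vendor/model format follows the example above; the client and method names are assumptions.

```python
from cyberwave import Cyberwave  # hypothetical SDK import

client = Cyberwave(api_key="...")

# Registry IDs use the vendor/model format; `get_asset` is an
# illustrative method name, not confirmed API.
asset = client.get_asset("the-robot-studio/so101")

print(asset.category, asset.manufacturer)   # metadata
print(asset.urdf_url)                       # URDF with joints and links
print(asset.glb_url)                        # GLB mesh for visualization
```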
Asset Types
- Robot Arms: Joint control, inverse kinematics, gripper operations
- Mobile Robots: Position control, navigation
- Quadrupeds: Leg joint control, gait patterns, camera streaming
- Drones: Takeoff, landing, flight control
ML Models
ML Models in Cyberwave are AI models registered in your workspace that can process various inputs and integrate with robotics workflows.
Model Capabilities
Each model specifies what inputs it can process:

| Capability | Description | Use Cases |
|---|---|---|
| can_take_video_as_input | Process video streams | Surveillance, teleoperation |
| can_take_image_as_input | Process single images | Quality inspection, object detection |
| can_take_audio_as_input | Process audio data | Voice commands, anomaly detection |
| can_take_text_as_input | Process text prompts | Natural language commands (VLM) |
| can_take_action_as_input | Process robot actions | Behavior cloning, RL policies |
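As a concrete illustration, the flags could be expressed as a simple mapping when registering a model. The field names mirror the table above; how they are actually attached to a model is not shown here.

```python
# Capability flags for a vision-language model (VLM) that accepts
# images and text prompts, but not video, audio, or action inputs.
vlm_capabilities = {
    "can_take_video_as_input": False,
    "can_take_image_as_input": True,
    "can_take_audio_as_input": False,
    "can_take_text_as_input": True,
    "can_take_action_as_input": False,
}
```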
Training and Deployment
Cyberwave provides an end-to-end ML pipeline:
- Collect data through teleoperation and recording
- Create datasets from recorded episodes
- Train models using VLM or custom architectures
- Deploy models as controller policies
- Execute autonomously with natural language prompts
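Read end to end, the pipeline might look like the sketch below, assuming SDK methods named roughly after each step; every method name here is an assumption for illustration.

```python
from cyberwave import Cyberwave  # hypothetical SDK import

client = Cyberwave(api_key="...")

# Episodes recorded during teleoperation are grouped into a dataset.
dataset = client.create_dataset(name="pick-and-place",
                                episodes=["ep-001", "ep-002", "ep-003"])

# Train a VLM-based policy on the dataset, then deploy it so it can be
# assigned as a controller policy (method names are assumptions).
model = client.train_model(dataset=dataset, architecture="vlm")
policy = client.deploy_model(model, name="pick-and-place-policy")

# Execute autonomously from a natural language prompt.
policy.run(prompt="pick up the red cube and place it in the bin")
```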
Workflows
Workflows let you create automated sequences of robot operations through visual orchestration. Connect nodes to build complex behaviors without writing procedural code.
Workflow Components
Execution
Workflows can be triggered by:
- Manual: On-demand execution
- Schedule: Run at specific times (cron)
- Events: Run when sensor data matches conditions
- API: Trigger from external systems
Workflows run on Cyberwave’s cloud infrastructure, ensuring reliable execution even when your local machine is offline.
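To make the trigger types concrete, a workflow with a cron schedule might be described like the sketch below; the keys and node types are illustrative assumptions, not Cyberwave’s actual schema.

```python
# Hypothetical workflow definition: run an inspection sequence every
# weekday at 06:00 (node types and keys are illustrative).
inspection_workflow = {
    "name": "morning-inspection",
    "trigger": {"type": "schedule", "cron": "0 6 * * 1-5"},
    "nodes": [
        {"id": "move_to_station", "type": "robot_action"},
        {"id": "capture_images", "type": "camera_capture"},
        {"id": "inspect", "type": "ml_inference"},
    ],
    # Node connections define the execution order.
    "edges": [
        ("move_to_station", "capture_images"),
        ("capture_images", "inspect"),
    ],
}
```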
Controller Policies
A controller policy is a control mechanism that determines how a robot should act. Cyberwave supports multiple types of controllers.
Types of Controllers
- Manual Control: Direct control through the dashboard or SDK
- Teleoperation: A leader arm controls a follower arm in real time
- Remote Operation: Control from the Cyberwave dashboard without a leader arm
- AI Models (VLM): Trained models execute tasks from natural language prompts
Assigning Controller Policies
In Edit Mode:
- Click Assign Controller Policy
- Select your deployed model or controller type
- Save configuration
- Switch to Live Mode to execute
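The same steps could be scripted; the sketch below assumes hypothetical SDK methods that mirror the Edit Mode steps above.

```python
from cyberwave import Cyberwave  # hypothetical SDK import

client = Cyberwave(api_key="...")

# All method and identifier names below are illustrative assumptions.
twin = client.get_twin("so101-follower")
policy = client.get_policy("pick-and-place-policy")

twin.assign_controller_policy(policy)   # mirrors "Assign Controller Policy"
twin.save()                             # save configuration
twin.environment.set_mode("live")       # switch to Live Mode to execute
```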
Teleoperation vs Remote Operation
Teleoperation
Teleoperation uses a physical leader arm to control a follower arm:
- Requires both leader and follower arms
- Real-time joint mirroring
- Ideal for data collection and demonstrations
- Used for creating training datasets
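Conceptually, teleoperation is a fixed-rate mirroring loop: read the leader arm’s joints and command the follower to match. The sketch below assumes hypothetical read_joints/set_joints methods on the two arm objects.

```python
import time

def mirror(leader, follower, rate_hz=50):
    """Mirror leader-arm joint positions onto the follower arm.

    `leader` and `follower` are assumed to expose read_joints() and
    set_joints(); both method names are illustrative.
    """
    period = 1.0 / rate_hz
    while True:
        joints = leader.read_joints()   # e.g. [j1, ..., j6] in radians
        follower.set_joints(joints)     # command the follower to match
        time.sleep(period)              # hold the target update rate
```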
Remote Operation
Remote operation controls the follower arm directly from Cyberwave:
- Only requires the follower arm
- Control through dashboard interface
- Manual joint control or scripted movements
- Useful for testing and calibration
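Remote operation needs no leader arm; a scripted movement might look like the sketch below, again with hypothetical client and method names.

```python
from cyberwave import Cyberwave  # hypothetical SDK import

client = Cyberwave(api_key="...")
follower = client.get_twin("so101-follower")   # only the follower arm is needed

# Step through a short calibration sweep (6 joint angles in radians);
# set_joints is an illustrative method name.
home = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
probe = [0.3, -0.2, 0.4, 0.0, 0.1, 0.0]
for target in (home, probe, home):
    follower.set_joints(target)
```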
Datasets and Episodes
Datasets
A dataset is a collection of recorded robot operations used for training ML models. Each dataset contains:
- Multiple episodes
- Joint positions and movements over time
- Camera video feeds
- Timing and sequence data
Episodes
An episode is a single recording of the robot completing a task from start to finish. Episodes should:
- Demonstrate one complete task execution
- Be consistent in initial and final states
- Focus on specific behaviors
- Exclude setup time and failed attempts
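One way to picture what an episode holds is a small record of synchronized streams, as in the sketch below; the field layout is an assumption based on the dataset contents listed above, not the actual storage format.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """Illustrative shape of one recorded episode: a single complete
    task execution with synchronized joints, frames, and timestamps."""
    task: str
    timestamps: list[float] = field(default_factory=list)             # seconds since start
    joint_positions: list[list[float]] = field(default_factory=list)  # per-timestep joint angles
    camera_frames: list[bytes] = field(default_factory=list)          # encoded video frames
```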
Creating Datasets
- Record operations during teleoperation in Live Mode
- Trim recordings into discrete episodes
- Select quality episodes for the dataset
- Export as a structured dataset
- Train ML models using the dataset
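Scripted, the steps above might look like this sketch; trim_recording, create_dataset, and export are hypothetical method names used only to show the flow.

```python
from cyberwave import Cyberwave  # hypothetical SDK import

client = Cyberwave(api_key="...")

# Trim a raw Live Mode recording into a discrete episode, keeping only
# the task execution (setup time and failed attempts are dropped).
episode = client.trim_recording("rec-0042", start_s=12.0, end_s=34.5)

# Group quality episodes into a structured dataset and export it
# so it can be used for model training.
dataset = client.create_dataset(name="pick-and-place", episodes=[episode.id])
dataset.export()
```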
Authentication
Cyberwave uses API keys or tokens to authenticate requests. You’ll need credentials to:
- Connect the Python SDK
- Make REST API calls
- Establish MQTT connections
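A minimal connection sketch, assuming the API key lives in an environment variable and the Python SDK exposes a client class that accepts it; the class and parameter names are assumptions, not confirmed API.

```python
import os

from cyberwave import Cyberwave  # hypothetical SDK import

# Keep the API key out of source code; read it from the environment.
api_key = os.environ["CYBERWAVE_API_KEY"]

# The authenticated client is then used for SDK calls, and the same
# credentials back REST and MQTT access.
client = Cyberwave(api_key=api_key)
```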