
Workspaces and Projects

Workspaces

A workspace is your team’s container for all robotics projects, environments, and resources. It provides:
  • Centralized authentication and API access
  • Team collaboration and permissions
  • Resource sharing across projects
  • Billing and usage tracking

Projects

Projects organize related robotics work within a workspace. Each project contains:
  • Environments: 3D spaces where your robots operate
  • Twins: Digital representations of your physical robots
  • Simulations: Physics-based testing and validation scenarios
Use separate projects for different robot deployments or development stages (e.g., “Development”, “Testing”, “Production”).
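The containment hierarchy above can be sketched as plain data structures. This is illustrative only; these are not SDK classes:

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    """A project groups related robotics work inside a workspace."""
    name: str                                         # e.g. "Development", "Testing", "Production"
    environments: list = field(default_factory=list)  # 3D spaces where robots operate
    twins: list = field(default_factory=list)         # digital replicas of physical robots
    simulations: list = field(default_factory=list)   # physics-based test scenarios

@dataclass
class Workspace:
    """Top-level container: authentication, collaboration, and billing live here."""
    name: str
    projects: list = field(default_factory=list)

# One project per development stage, as recommended above.
ws = Workspace(name="acme-robotics")
for stage in ("Development", "Testing", "Production"):
    ws.projects.append(Project(name=stage))
```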

Environments

An environment is a 3D space where your digital twins exist and interact. It can represent:
  • Physical facilities (warehouse, factory floor, outdoor space)
  • Testing scenarios
  • Simulation environments

Environment Editor

The Environment Editor allows you to:
  • Build 3D scenes with a drag-and-drop interface
  • Add digital twins from the catalog or custom assets
  • Place sensors and cameras for data collection
  • Export to simulation (e.g., MuJoCo) with one click
  • Sync with real devices for live monitoring
Environments support both Edit Mode (for designing and configuring) and Live Mode (for real-time operation and monitoring).

Digital Twins

A digital twin is a virtual replica of a physical robot that mirrors its behavior, capabilities, and state in real time. It serves as a bridge between the physical and digital worlds.

Key Components

  • 3D Model: Accurate geometric representation
  • Physics Simulation: Realistic movement and dynamics
  • Sensor Integration: Virtual sensors matching physical setup
  • Real-time Sync: Bidirectional communication with physical robots

Asset Catalog

The Asset Catalog is Cyberwave’s library of pre-configured robot assets. Each asset includes:
  • Registry ID: Unique identifier in vendor/model format (e.g., the-robot-studio/so101)
  • URDF File: Robot description with joints and links
  • 3D Model: GLB mesh for visualization
  • Metadata: Category, manufacturer, version, capabilities
  • Capabilities: Sensors, grippers, flight control, etc.
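The vendor/model convention of a Registry ID can be split with a small helper. This is a hypothetical sketch, not part of the Cyberwave SDK:

```python
def parse_registry_id(registry_id: str) -> tuple:
    """Split a catalog registry ID of the form 'vendor/model'."""
    vendor, sep, model = registry_id.partition("/")
    if not sep or not vendor or not model:
        raise ValueError(f"expected 'vendor/model', got {registry_id!r}")
    return vendor, model

# Example from the Asset Catalog above.
vendor, model = parse_registry_id("the-robot-studio/so101")
```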

Asset Types

  • Robot Arms: Joint control, inverse kinematics, gripper operations
  • Mobile Robots: Position control, navigation
  • Quadrupeds: Leg joint control, gait patterns, camera streaming
  • Drones: Takeoff, landing, flight control

ML Models

ML Models in Cyberwave are AI models registered in your workspace that can process various inputs and integrate with robotics workflows.

Model Capabilities

Each model specifies what inputs it can process:
  • can_take_video_as_input: Process video streams (use cases: surveillance, teleoperation)
  • can_take_image_as_input: Process single images (use cases: quality inspection, object detection)
  • can_take_audio_as_input: Process audio data (use cases: voice commands, anomaly detection)
  • can_take_text_as_input: Process text prompts (use cases: natural language commands, VLM)
  • can_take_action_as_input: Process robot actions (use cases: behavior cloning, RL policies)
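Before routing an input to a model, its capability flags can be checked. This is an illustrative sketch assuming the flags arrive as a plain dict, not SDK code:

```python
# Capability flags as listed above, keyed by input kind.
CAPABILITY_FOR_INPUT = {
    "video": "can_take_video_as_input",
    "image": "can_take_image_as_input",
    "audio": "can_take_audio_as_input",
    "text": "can_take_text_as_input",
    "action": "can_take_action_as_input",
}

def accepted_inputs(model_capabilities: dict) -> list:
    """Return the input kinds a model can process, given its flag dict."""
    return [kind for kind, flag in CAPABILITY_FOR_INPUT.items()
            if model_capabilities.get(flag, False)]

# Example: a VLM that accepts images and text prompts.
vlm = {"can_take_image_as_input": True, "can_take_text_as_input": True}
```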

Training and Deployment

Cyberwave provides an end-to-end ML pipeline:
  1. Collect data through teleoperation and recording
  2. Create datasets from recorded episodes
  3. Train models using VLM or custom architectures
  4. Deploy models as controller policies
  5. Execute autonomously with natural language prompts
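The five stages compose into a simple pipeline. The functions below are stubs standing in for steps that run on the platform, not real API calls:

```python
# Each stub mirrors one numbered stage of the ML pipeline above.
def collect(recordings):                  # 1. teleoperate and record
    return recordings

def make_dataset(episodes):               # 2. curate episodes into a dataset
    return {"episodes": episodes}

def train(dataset):                       # 3. train a VLM or custom model
    return {"policy": "vlm", "trained_on": len(dataset["episodes"])}

def deploy(model):                        # 4. register as a controller policy
    return {**model, "deployed": True}

def execute(model, prompt):               # 5. run from a natural language prompt
    return f"executing {prompt!r} with {model['policy']}"

model = deploy(train(make_dataset(collect(["ep1", "ep2", "ep3"]))))
result = execute(model, "pick up the red cube")
```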

Workflows

Workflows let you create automated sequences of robot operations through visual orchestration. Connect nodes to build complex behaviors without writing procedural code.

Workflow Components

Execution

Workflows can be triggered by:
  • Manual: On-demand execution
  • Schedule: Run at specific times (cron)
  • Events: Run when sensor data matches conditions
  • API: Trigger from external systems
Workflows run on Cyberwave’s cloud infrastructure, ensuring reliable execution even when your local machine is offline.
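A trigger dispatcher for the four mechanisms above might look like the following. The trigger and context shapes are assumptions for illustration, not the platform's actual schema:

```python
def should_run(trigger: dict, context: dict) -> bool:
    """Decide whether a workflow fires for the current context."""
    kind = trigger["type"]
    if kind == "manual":                  # on-demand execution
        return context.get("requested", False)
    if kind == "schedule":                # cron-style: here simplified to an hour match
        return context.get("hour") == trigger.get("hour")
    if kind == "event":                   # sensor data matches a condition
        return context.get("sensor_value", 0) > trigger.get("threshold", float("inf"))
    if kind == "api":                     # triggered from an external system
        return context.get("api_call", False)
    return False
```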

Controller Policies

A controller policy is a control mechanism that determines how a robot should act. Cyberwave supports multiple types of controllers:

Types of Controllers

  • Manual Control: Direct control through the dashboard or SDK
  • Teleoperation: A physical leader arm controls a follower arm in real time
  • Remote Operation: Control from the Cyberwave dashboard without a leader arm
  • AI Models (VLM): Trained models execute tasks from natural language prompts

Assigning Controller Policies

In Edit Mode:
  1. Click Assign Controller Policy
  2. Select your deployed model or controller type
  3. Save configuration
  4. Switch to Live Mode to execute
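The configuration saved in step 3 can be pictured as a small validated payload. The field names and controller identifiers here are hypothetical, for illustration only:

```python
# Hypothetical controller identifiers matching the types listed above.
VALID_CONTROLLERS = {"manual", "teleoperation", "remote_operation", "vlm"}

def controller_policy_config(twin_uuid: str, controller: str, model_id: str = None) -> dict:
    """Build a policy-assignment payload (assumed shape, not the real API schema)."""
    if controller not in VALID_CONTROLLERS:
        raise ValueError(f"unknown controller type: {controller}")
    if controller == "vlm" and model_id is None:
        raise ValueError("a VLM controller needs a deployed model id")
    config = {"twin_uuid": twin_uuid, "controller": controller}
    if model_id:
        config["model_id"] = model_id
    return config
```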

Teleoperation vs Remote Operation

Teleoperation

Teleoperation uses a physical leader arm to control a follower arm:
  • Requires both leader and follower arms
  • Real-time joint mirroring
  • Ideal for data collection and demonstrations
  • Used for creating training datasets
so101-teleoperate \
    --twin-uuid YOUR_SO101_TWIN_UUID \
    --leader-port /dev/tty.usbmodem123 \
    --follower-port /dev/tty.usbmodem456 \
    --camera-twin-uuid YOUR_CAMERA_TWIN_UUID \
    --fps 30
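At its core, the real-time joint mirroring loop reads the leader arm's joint angles each frame and writes them to the follower, clamped to the follower's safe range. A minimal sketch with hypothetical joint limits (not actual so101 values):

```python
def mirror_joints(leader_positions, limits):
    """Clamp each leader joint angle into the follower's allowed range."""
    return [max(lo, min(hi, q)) for q, (lo, hi) in zip(leader_positions, limits)]

# Hypothetical limits in radians for a 3-joint example.
LIMITS = [(-1.57, 1.57), (-1.0, 1.0), (0.0, 3.14)]

# One frame at 30 fps: read leader angles, clamp, send to follower.
follower_cmd = mirror_joints([0.5, -2.0, 1.0], LIMITS)
```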

Remote Operation

Remote operation controls the follower arm directly from Cyberwave:
  • Only requires the follower arm
  • Control through dashboard interface
  • Manual joint control or scripted movements
  • Useful for testing and calibration
so101-remoteoperate \
    --follower-port /dev/tty.usbmodem456 \
    --twin-uuid YOUR_SO101_TWIN_UUID \
    --token YOUR_CYBERWAVE_TOKEN

Datasets and Episodes

Datasets

A dataset is a collection of recorded robot operations used for training ML models. Each dataset contains:
  • Multiple episodes
  • Joint positions and movements over time
  • Camera video feeds
  • Timing and sequence data

Episodes

An episode is a single recording of the robot completing a task from start to finish. Episodes should:
  • Demonstrate one complete task execution
  • Be consistent in initial and final states
  • Focus on specific behaviors
  • Remove setup time and failed attempts
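Trimming setup time and rejecting failed attempts can be expressed as a filter over recorded frames. This is an illustrative sketch; the frame format and predicates are assumptions:

```python
def trim_episode(frames, task_started, task_done):
    """Keep only the frames between task start and completion (inclusive)."""
    start = next((i for i, f in enumerate(frames) if task_started(f)), None)
    if start is None:
        return []  # robot never began the task: reject the recording
    end = next((i for i in range(len(frames) - 1, start - 1, -1) if task_done(frames[i])), None)
    if end is None:
        return []  # task never completed: a failed attempt, reject it
    return frames[start:end + 1]

# Toy recording: gripper engages at t=2, object is placed from t=5 onward.
frames = [{"t": i, "gripper": i >= 2, "placed": i >= 5} for i in range(8)]
episode = trim_episode(frames, lambda f: f["gripper"], lambda f: f["placed"])
```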

Creating Datasets

  1. Record operations during teleoperation in Live Mode
  2. Trim recordings into discrete episodes
  3. Select quality episodes for the dataset
  4. Export as a structured dataset
  5. Train ML models using the dataset
Record 10-15 demonstrations of the same task with slight variations; this produces a robust dataset and helps the model generalize.

Authentication

Cyberwave uses API keys or tokens to authenticate requests. You’ll need credentials to:
  • Connect the Python SDK
  • Make REST API calls
  • Establish MQTT connections

Environment Variables

CYBERWAVE_API_KEY=cw_live_xxxxxxxxxxxxxxxxxxxx
CYBERWAVE_TOKEN=your_bearer_token
CYBERWAVE_BASE_URL=https://api.cyberwave.com
CYBERWAVE_ENVIRONMENT_ID=env_xxxxxxxxxxxxxxxxxxxx
CYBERWAVE_WORKSPACE_ID=workspace_xxxxxxxxxxxxxxxxxxxx
Keep your API keys secure. Never commit them to version control or share them publicly.
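Loading these variables in Python might look like the following, using only the standard library and the variable names listed above:

```python
import os

def load_cyberwave_config() -> dict:
    """Read Cyberwave credentials from the environment; fail fast if none are set."""
    api_key = os.environ.get("CYBERWAVE_API_KEY")
    token = os.environ.get("CYBERWAVE_TOKEN")
    if not api_key and not token:
        raise RuntimeError("set CYBERWAVE_API_KEY or CYBERWAVE_TOKEN")
    return {
        "api_key": api_key,
        "token": token,
        "base_url": os.environ.get("CYBERWAVE_BASE_URL", "https://api.cyberwave.com"),
        "environment_id": os.environ.get("CYBERWAVE_ENVIRONMENT_ID"),
        "workspace_id": os.environ.get("CYBERWAVE_WORKSPACE_ID"),
    }
```

Reading credentials from the environment, rather than hard-coding them, keeps keys out of version control as the note above advises.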

Next Steps

Now that you understand the key concepts, you’re ready to start building with Cyberwave.