What is a Digital Twin?
A digital twin is a virtual replica of a physical robot that mirrors its behavior, capabilities, and environment in real time. It serves as a bridge between the physical and digital worlds, enabling you to simulate, test, control, and monitor your robots from anywhere. A digital twin includes a 3D model, physics simulation, sensor integration, and real-time bidirectional sync with the physical robot via Edge Core.
Why Use Digital Twins?
Risk-Free Testing
Test dangerous or complex scenarios without risking physical hardware
Faster Development
Iterate and optimize algorithms in simulation before deployment
Remote Monitoring
Monitor and control robots from anywhere in the world
Scalable Solutions
Test fleet behaviors and multi-robot coordination
Capabilities Map
Twin behavior in Cyberwave is capability-driven. The platform computes capabilities from each twin's `universal_schema`, and those values control which UI panels, controls, and SDK behaviors are available.
Core Capabilities
| Capability | Type | Description |
|---|---|---|
| `can_locomote` | boolean | Twin can move through the environment (navigation/locomotion) |
| `can_fly` | boolean | Twin supports aerial movement |
| `can_grip` | boolean | Twin has a gripper/end-effector for grasping |
| `can_actuate` | boolean | Twin has actuators that can be commanded (e.g. joints) |
| `has_joints` | boolean | Twin has controllable joints |
| `has_wheels` | boolean | Twin has wheel-based locomotion hardware |
| `has_legs` | boolean | Twin has legged locomotion hardware |
| `manipulator_count` | number | Number of manipulators/end-effectors |
| `payload_capacity_kg` | number | Max payload supported by the manipulation system (kg) |
| `power_source` | enum | `battery`, `tethered`, `rails`, `fuel`, `solar`, `hybrid` |
| `power_capacity_wh` | number | Power capacity in Wh (`-1` = unknown/not applicable) |
| `navigation_autonomy_level` | enum | `manual`, `waypoint`, `path_following`, `semi_autonomous`, `fully_autonomous`, `none` |
| `navigation_obstacle_avoidance` | boolean | Whether built-in obstacle avoidance is available |
| `locomotion_mode` | enum | `stationary`, `wheeled`, `tracked`, `legged`, `aerial`, `surface`, `subsea`, `hybrid` |
| `locomotion_config` | object | Locomotion limits/config (max linear/angular velocity, DOF) |
| `sensors` | array | Sensor definitions attached to the twin |
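To make the table concrete, here is a sketch of a capabilities map for a hypothetical wheeled mobile manipulator. Field names follow the Core Capabilities table above; the specific values (and the robot itself) are illustrative, not taken from a real twin:

```python
# Illustrative capabilities map for a hypothetical wheeled mobile manipulator.
# Field names follow the Core Capabilities table; the values are invented.
capabilities = {
    "can_locomote": True,
    "can_fly": False,
    "can_grip": True,
    "can_actuate": True,
    "has_joints": True,
    "has_wheels": True,
    "has_legs": False,
    "manipulator_count": 1,
    "payload_capacity_kg": 5.0,
    "power_source": "battery",          # one of the power_source enum values
    "power_capacity_wh": 480,           # -1 would mean unknown/not applicable
    "navigation_autonomy_level": "waypoint",
    "navigation_obstacle_avoidance": True,
    "locomotion_mode": "wheeled",
    "locomotion_config": {
        "max_linear_velocity": 1.5,     # m/s, illustrative limit
        "max_angular_velocity": 1.0,    # rad/s, illustrative limit
        "dof": 2,
    },
    "sensors": [],                      # sensor entries are described below
}
```

A map like this is what the platform derives from the `universal_schema`; UI panels and SDK behaviors are then switched on or off by reading these fields.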
Sensor Capabilities
`sensors` is an array of sensor entries inside `capabilities`. If a twin has one or more sensor entries, sensor-driven features become available in both UI and SDK flows.
| Field | Type | Description |
|---|---|---|
| `id` | string | Stable sensor identifier (e.g. `wrist_camera`) |
| `type` | enum | `rgb`, `depth`, `lidar_2d`, `lidar_3d`, `lidar_4d`, `map` |
| `model` | string | Optional hardware/model label |
| `offset.position` | object | Position offset from twin origin (`x`, `y`, `z`) |
| `offset.rotation` | object | Rotation offset quaternion (`x`, `y`, `z`, `w`) |
| `fov_degrees` | number | Optional camera FOV override |
| `width`, `height` | number | Optional image resolution (camera sensors) |
| `min_range`, `max_range` | number | Optional range values (depth/lidar sensors) |
| `points_per_second` | number | Optional lidar density/throughput hint |
| `velocity_sensing` | boolean | Optional 4D lidar velocity support flag |
| `update_rate` | number | Optional update frequency (Hz) |
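As a sketch, a single sensor entry for a wrist-mounted RGB camera might look like the following. Field names follow the table above; the `model` string and all numeric values are invented for illustration:

```python
# Illustrative sensor entry: a wrist-mounted RGB camera.
# Field names follow the Sensor Capabilities table; values are invented.
wrist_camera = {
    "id": "wrist_camera",          # stable identifier
    "type": "rgb",                 # one of the sensor type enum values
    "model": "GenericCam-1080",    # hypothetical hardware label
    "offset": {
        # Pose relative to the twin origin.
        "position": {"x": 0.0, "y": 0.0, "z": 0.05},
        # Identity quaternion: no rotation relative to the mount.
        "rotation": {"x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0},
    },
    "fov_degrees": 90,             # optional FOV override
    "width": 1920,                 # optional resolution (camera sensors)
    "height": 1080,
    "update_rate": 30,             # Hz
}
```

Optional fields that don't apply to a sensor type (e.g. `min_range`/`max_range` for an RGB camera) can simply be omitted.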
Feature-to-Capability Matrix
| Feature | Capability condition |
|---|---|
| Joint movement and editing | `can_actuate = true` (typically with `has_joints = true`) |
| Re-calibrate driver action (Live mode) | `can_actuate = true` |
| Missions editor/simulation | `can_locomote = true` |
| Sensor windows (Live + Simulate) | `sensors.length > 0` |
| Point cloud rendering | Sensor type includes `depth` or `lidar_*` |
| Controller policy UI | `has_joints` OR `can_locomote` OR `can_actuate` OR `can_grip` |
| Edge connection status (Live mode) | `sensors.length > 0` OR controllable twin |
| SDK twin class selection | Combination of `can_locomote`, `can_fly`, `can_grip`, and `sensors` |
| SDK camera streaming | `sensors` includes camera sensors (`rgb`/`depth`) |
`canHaveMissions` currently maps to `can_locomote`, and "controllable twin" maps to `has_joints` OR `can_locomote` OR `can_actuate` OR `can_grip`.
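The conditions in the matrix are simple predicates over the capabilities map. The helpers below are a hypothetical client-side sketch (not the platform's actual implementation) of how a few of those gates could be evaluated; the function names are invented:

```python
def is_controllable(caps: dict) -> bool:
    """Mirrors the "controllable twin" definition: any of the four control capabilities."""
    return any(caps.get(k, False) for k in ("has_joints", "can_locomote", "can_actuate", "can_grip"))

def can_have_missions(caps: dict) -> bool:
    """Missions editor/simulation is gated on can_locomote."""
    return bool(caps.get("can_locomote", False))

def shows_point_cloud(caps: dict) -> bool:
    """Point cloud rendering: any sensor of type depth or lidar_*."""
    return any(
        s.get("type") == "depth" or str(s.get("type", "")).startswith("lidar_")
        for s in caps.get("sensors", [])
    )

# Example: a locomoting twin with a 3D lidar unlocks missions,
# control, and point cloud rendering.
caps = {"can_locomote": True, "sensors": [{"id": "front_lidar", "type": "lidar_3d"}]}
```

Treating each feature gate as a pure function of the capabilities map keeps UI and SDK behavior consistent: both sides read the same computed values rather than duplicating per-robot logic.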