What is a Digital Twin?

A digital twin is a virtual replica of a physical robot that mirrors its behavior, capabilities, and environment in real time. It serves as a bridge between the physical and digital worlds, enabling you to simulate, test, control, and monitor your robots from anywhere.
A digital twin includes a 3D model, physics simulation, sensor integration, and real-time bidirectional sync with the physical robot via Edge Core.

Why Use Digital Twins?

Risk-Free Testing

Test dangerous or complex scenarios without risking physical hardware

Faster Development

Iterate and optimize algorithms in simulation before deployment

Remote Monitoring

Monitor and control robots from anywhere in the world

Scalable Solutions

Test fleet behaviors and multi-robot coordination

Capabilities Map

Twin behavior in Cyberwave is capability-driven. The platform computes capabilities from each twin's `universal_schema`, and those values control which UI panels, controls, and SDK behaviors are available.

Core Capabilities

| Capability | Type | Description |
| --- | --- | --- |
| `can_locomote` | boolean | Twin can move through the environment (navigation/locomotion) |
| `can_fly` | boolean | Twin supports aerial movement |
| `can_grip` | boolean | Twin has a gripper/end-effector for grasping |
| `can_actuate` | boolean | Twin has actuators that can be commanded (e.g. joints) |
| `has_joints` | boolean | Twin has controllable joints |
| `has_wheels` | boolean | Twin has wheel-based locomotion hardware |
| `has_legs` | boolean | Twin has legged locomotion hardware |
| `manipulator_count` | number | Number of manipulators/end-effectors |
| `payload_capacity_kg` | number | Max payload supported by the manipulation system (kg) |
| `power_source` | enum | `battery`, `tethered`, `rails`, `fuel`, `solar`, `hybrid` |
| `power_capacity_wh` | number | Power capacity in Wh (`-1` = unknown/not applicable) |
| `navigation_autonomy_level` | enum | `manual`, `waypoint`, `path_following`, `semi_autonomous`, `fully_autonomous`, `none` |
| `navigation_obstacle_avoidance` | boolean | Whether built-in obstacle avoidance is available |
| `locomotion_mode` | enum | `stationary`, `wheeled`, `tracked`, `legged`, `aerial`, `surface`, `subsea`, `hybrid` |
| `locomotion_config` | object | Locomotion limits/config (max linear/angular velocity, DOF) |
| `sensors` | array | Sensor definitions attached to the twin |
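As a concrete illustration, a computed capabilities map for a wheeled mobile manipulator might look like the following. This is a hedged sketch, not an official payload: the field names come from the table above, but the overall structure and values are assumptions.

```python
# Hypothetical capabilities map for a wheeled mobile manipulator.
# Field names follow the capabilities table; the structure is illustrative.
capabilities = {
    "can_locomote": True,
    "can_fly": False,
    "can_grip": True,
    "can_actuate": True,
    "has_joints": True,
    "has_wheels": True,
    "has_legs": False,
    "manipulator_count": 1,
    "payload_capacity_kg": 5.0,
    "power_source": "battery",
    "power_capacity_wh": 500,
    "navigation_autonomy_level": "waypoint",
    "navigation_obstacle_avoidance": True,
    "locomotion_mode": "wheeled",
    "locomotion_config": {"max_linear_velocity": 1.5, "max_angular_velocity": 1.0},
    "sensors": [],
}

def power_capacity_known(caps: dict) -> bool:
    """-1 marks unknown/not-applicable power capacity, per the table above."""
    return caps.get("power_capacity_wh", -1) >= 0

print(power_capacity_known(capabilities))  # prints True
```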

Sensor Capabilities

`sensors` is an array of sensor entries inside `capabilities`. If a twin has one or more sensor entries, sensor-driven features become available in both UI and SDK flows.
| Field | Type | Description |
| --- | --- | --- |
| `id` | string | Stable sensor identifier (e.g. `wrist_camera`) |
| `type` | enum | `rgb`, `depth`, `lidar_2d`, `lidar_3d`, `lidar_4d`, `map` |
| `model` | string | Optional hardware/model label |
| `offset.position` | object | Position offset from the twin origin (`x`, `y`, `z`) |
| `offset.rotation` | object | Rotation offset quaternion (`x`, `y`, `z`, `w`) |
| `fov_degrees` | number | Optional camera FOV override |
| `width`, `height` | number | Optional image resolution (camera sensors) |
| `min_range`, `max_range` | number | Optional range values (depth/lidar sensors) |
| `points_per_second` | number | Optional lidar density/throughput hint |
| `velocity_sensing` | boolean | Optional 4D lidar velocity-support flag |
| `update_rate` | number | Optional update frequency (Hz) |
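A sensor entry for a wrist-mounted RGB camera could be shaped as follows. Again, this is a sketch: the field names match the table above, while the concrete values (model label, resolution, rates) are hypothetical.

```python
# Hypothetical sensor entry for a wrist-mounted RGB camera.
# Field names follow the sensor table; values are illustrative.
wrist_camera = {
    "id": "wrist_camera",
    "type": "rgb",
    "model": "Example Cam v2",  # optional hardware/model label
    "offset": {
        "position": {"x": 0.0, "y": 0.0, "z": 0.12},
        "rotation": {"x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0},  # identity quaternion
    },
    "fov_degrees": 90,
    "width": 1280,
    "height": 720,
    "update_rate": 30,  # Hz
}

def supports_point_cloud(sensor: dict) -> bool:
    """Depth and lidar_* sensor types are the ones that yield point clouds."""
    return sensor["type"] == "depth" or sensor["type"].startswith("lidar_")

print(supports_point_cloud(wrist_camera))  # prints False (an RGB camera)
```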

Feature-to-Capability Matrix

| Feature | Capability condition |
| --- | --- |
| Joint movement and editing | `can_actuate = true` (typically with `has_joints = true`) |
| Re-calibrate driver action (Live mode) | `can_actuate = true` |
| Missions editor/simulation | `can_locomote = true` |
| Sensor windows (Live + Simulate) | `sensors.length > 0` |
| Point cloud rendering | Sensor `type` includes `depth` or `lidar_*` |
| Controller policy UI | `has_joints` OR `can_locomote` OR `can_actuate` OR `can_grip` |
| Edge connection status (Live mode) | `sensors.length > 0` OR controllable twin |
| SDK twin class selection | Combination of `can_locomote`, `can_fly`, `can_grip`, and `sensors` |
| SDK camera streaming | `sensors` includes camera sensors (`rgb`/`depth`) |
`canHaveMissions` currently maps to `can_locomote`, and "controllable twin" maps to `has_joints` OR `can_locomote` OR `can_actuate` OR `can_grip`.
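The conditions above can be sketched as simple predicates over a capabilities map. This is an illustration of the gating logic as documented, not the platform's actual implementation; the function names are hypothetical.

```python
# Hypothetical predicates mirroring the feature-to-capability matrix.
def is_controllable(caps: dict) -> bool:
    """'Controllable twin': joints, locomotion, actuation, or grip."""
    return any(caps.get(k) for k in ("has_joints", "can_locomote", "can_actuate", "can_grip"))

def can_have_missions(caps: dict) -> bool:
    """Missions editor/simulation currently maps to can_locomote."""
    return bool(caps.get("can_locomote"))

def shows_sensor_windows(caps: dict) -> bool:
    """Sensor windows appear when the twin has at least one sensor entry."""
    return len(caps.get("sensors", [])) > 0

def shows_edge_connection_status(caps: dict) -> bool:
    """Live-mode edge status: twin has sensors OR is controllable."""
    return shows_sensor_windows(caps) or is_controllable(caps)

caps = {"can_locomote": True, "sensors": []}
print(is_controllable(caps))       # prints True
print(can_have_missions(caps))     # prints True
print(shows_sensor_windows(caps))  # prints False
```

Because every gate is a pure function of the capabilities map, a UI or SDK client can evaluate them locally without extra round trips once the twin's capabilities are known.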