What is Manipulation?

Manipulation covers everything a robot does with its end-effector: picking, placing, pushing, assembling, handing over, and dexterous in-hand manipulation. On Cyberwave, you build manipulation systems by pairing a robot arm with a digital twin, controlling it through teleoperation or AI policies, and orchestrating multi-step tasks with workflows.
Most manipulation projects on Cyberwave follow the same loop: simulate the task → teleoperate to collect demonstrations → train a policy → deploy to hardware.

Hardware That Supports It

SO101 Robot Arms

Open-source 6-DOF arms for desk-based manipulation, teleoperation, and imitation learning.

Universal Robot UR7e

Industrial collaborative arm for production-grade pick-and-place and assembly tasks.

Boston Dynamics Spot Arm

Mobile manipulation: combine the Spot quadruped with its arm for inspection and retrieval.

Custom Arms

Bring any URDF-described arm and connect it through a custom driver.
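A URDF file describes an arm's links, joints, and joint limits in XML, which is what a custom driver maps onto. As a rough illustration, the snippet below extracts revolute joint names and limits using only the Python standard library; the URDF content is a made-up two-joint arm, not a Cyberwave artifact:

```python
import xml.etree.ElementTree as ET

# A hypothetical two-joint arm description, for illustration only.
URDF = """
<robot name="demo_arm">
  <joint name="shoulder_pan" type="revolute">
    <limit lower="-1.57" upper="1.57" effort="5" velocity="1"/>
  </joint>
  <joint name="elbow_flex" type="revolute">
    <limit lower="-2.0" upper="2.0" effort="5" velocity="1"/>
  </joint>
</robot>
"""

def joint_limits(urdf_xml: str) -> dict:
    """Map each revolute joint name to its (lower, upper) limit in radians."""
    limits = {}
    for joint in ET.fromstring(urdf_xml).iter("joint"):
        if joint.get("type") != "revolute":
            continue
        limit = joint.find("limit")
        limits[joint.get("name")] = (float(limit.get("lower")), float(limit.get("upper")))
    return limits

print(joint_limits(URDF))
```

Any arm whose URDF parses like this can, in principle, be wired into the platform through a custom driver.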

How You Build It

1. Test in simulation first

Spin up a digital twin in the Environment Editor, drop in target objects, and verify your motion plans against the MuJoCo physics simulation before any hardware moves.
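Part of that verification can be as simple as checking planned waypoints against joint limits before anything reaches hardware. A minimal sketch of such a pre-flight check (the limit values below are illustrative, not SO101 specifications):

```python
# Hypothetical joint limits in degrees, for illustration only.
JOINT_LIMITS = {
    "shoulder_pan": (-110, 110),
    "shoulder_lift": (-100, 100),
    "elbow_flex": (-100, 90),
}

def violations(waypoint: dict) -> list:
    """Return the joints in a planned waypoint that fall outside their limits."""
    bad = []
    for joint, angle in waypoint.items():
        lo, hi = JOINT_LIMITS[joint]
        if not lo <= angle <= hi:
            bad.append(joint)
    return bad

plan = [
    {"shoulder_pan": 10, "shoulder_lift": -30, "elbow_flex": 60},
    {"shoulder_pan": 130, "shoulder_lift": 0, "elbow_flex": 0},  # out of range
]
for i, waypoint in enumerate(plan):
    print(i, violations(waypoint))
```

The physics simulation catches far more than limit violations (collisions, dynamics), but a cheap static check like this fails fast in CI.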

2. Drive the arm

| Mode | Use it for | Reference |
| --- | --- | --- |
| Dashboard controllers | Quick manual moves, demos | Live Teleoperation |
| Leader-follower teleoperation | Collecting demonstration data | Live Teleoperation |
| Python SDK | Scripted sequences, tests, CI | Python SDK |
| ML policy | Autonomous execution | ML Models |
For example, posing each SO101 joint with the Python SDK:

```python
from cyberwave import Cyberwave

cw = Cyberwave()
arm = cw.twin("so101-main")  # handle to the arm's digital twin

# "live" mode sends the commands through to the physical arm
with cw.affect("live"):
    # Set each joint to a target angle, in degrees
    arm.joints.set("shoulder_pan", 10, degrees=True)
    arm.joints.set("shoulder_lift", -30, degrees=True)
    arm.joints.set("elbow_flex", 60, degrees=True)
    arm.joints.set("wrist_flex", 45, degrees=True)
    arm.joints.set("wrist_roll", 0, degrees=True)
    arm.joints.set("gripper", 0, degrees=True)
```

3. Train a policy from demonstrations

Record 10–15 episodes of teleoperated demonstrations, then train a Vision-Language-Action model in the cloud and deploy it back to the arm as a controller policy. See ML Models for the training pipeline.
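Conceptually, each recorded episode is a sequence of (observation, action) pairs. A minimal sketch of that data layout (the field names are illustrative, not the Cyberwave recording format):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    image: bytes                # camera frame at this timestep
    joint_angles: list          # follower arm's proprioceptive state, degrees
    action: list                # joint targets commanded by the leader arm

@dataclass
class Episode:
    task: str                   # natural-language task description
    steps: list = field(default_factory=list)

# During teleoperation, each control tick appends a Step;
# 10-15 such episodes form the training set for the VLA model.
ep = Episode(task="pick up the red cube")
ep.steps.append(Step(image=b"", joint_angles=[10, -30, 60], action=[12, -28, 61]))
print(ep.task, len(ep.steps))
```

The VLA model then learns to map the image, joint state, and task text to the next action.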

4. Orchestrate with workflows

Chain perception, planning, and motion into repeatable sequences with Workflows. A pick-and-place workflow typically looks like camera frame → object detection model → IK solver → motion node → gripper node.
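The dataflow of that pick-and-place chain can be sketched as plain functions, one per workflow node (every stage below is a stub standing in for a real node, with made-up return values):

```python
def camera_frame():
    # Stub: would grab a frame from the twin's camera node
    return {"pixels": None}

def detect_object(frame):
    # Stub: an object-detection model node returning a target position
    return {"xyz": (0.30, 0.10, 0.05)}

def solve_ik(target):
    # Stub: an IK solver node mapping target["xyz"] to joint angles
    return [10, -30, 60, 45, 0]

def move(joints):
    # Stub: motion node driving the arm to the given joint angles
    print("moving to", joints)

def grip(close=True):
    # Stub: gripper node
    print("gripper", "close" if close else "open")

# camera frame -> object detection -> IK solver -> motion -> gripper
move(solve_ik(detect_object(camera_frame())))
grip(close=True)
```

In the visual Workflows editor, each of these functions corresponds to a node, and the function-call nesting corresponds to the edges between them.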

Where to Go Next

SO101 Quickstart

Stand up your first manipulation arm.

Train a VLA Model

End-to-end tutorial: collect data, train, deploy.

Workflows

Compose multi-step manipulation tasks visually.