
Get Started with SO101

Goals

This guide helps you:
  • Set up an SO101 arm and an external camera in a real environment and replicate the same setup in Cyberwave.
  • Configure teleoperation and remote operation to control the follower arm using a leader arm and Cyberwave data.
  • Create datasets for specific tasks and use them to train and deploy ML models.
  • Use deployed ML models as controller policies to control the follower arm directly from Cyberwave.

Prerequisites

Before you begin this quick start guide, ensure you have the following:
  • SO101 robot arm set (leader and follower arms) (Contact us if you want access to this hardware)
  • External camera (USB or IP camera) to record video feeds for datasets
  • Computer or single-board computer (SBC, e.g., Raspberry Pi with 64-bit OS)
  • USB or serial connection to the SO101 devices
The Cyberwave CLI and Edge Core require a 64-bit architecture (arm64/aarch64) on Raspberry Pi. If you are using a 32-bit OS or architecture, please wait for an updated version.
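You can verify the architecture before installing anything. A minimal Python sketch using only the standard library (`uname -m` in a shell gives the same answer):

```python
import platform

def is_64bit_arm(machine=None):
    """Return True if the reported machine architecture is 64-bit ARM."""
    machine = machine or platform.machine()
    return machine.lower() in {"aarch64", "arm64"}

# On a 64-bit Raspberry Pi OS this prints "aarch64 -> supported".
print(platform.machine(), "->", "supported" if is_64bit_arm() else "not supported")
```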

Set Up Teleoperation

Step 1: Set Up the Cyberwave Environment

An environment is a 3D virtual space that mirrors your real-world robot setup. It’s where your digital twins live, sensors stream data, and controllers send commands, all in real time. You’ll create one environment that contains both the SO101 arm and camera twins. Create the environment:
  1. Go to the Cyberwave dashboard and click on New Environment.
  2. Give your environment a name (e.g., “SO101 Teleoperation Setup”) and description.
Add the SO101 digital twin: A digital twin is a virtual replica of your physical robot; it mirrors the robot’s structure, joints, sensors, and behavior in real time within your environment.
  1. Inside your environment, click Add from Catalog in the left panel.
  2. Search for and select SO101.
  3. Add it to your environment and position it to match your physical setup.
Add the camera twin:
  1. Click Add from Catalog again.
  2. Search for and select Standard Camera.
  3. Add it to your environment.
The camera connected to the follower arm captures the workspace from the arm’s perspective during teleoperation and dataset recording.
Some cameras may use different frameworks and support lower resolutions, which can limit video streaming quality.
Dock the camera to the SO101 twin:
  1. Click on the Standard Camera twin and switch to Edit Mode.
  2. In the Dock to Twin option, select the SO101 twin.
  3. In the Parent Root dropdown, select wrist.
The camera twin should now appear nested under the SO101 twin in the hierarchy.
Docking the camera to the wrist means the camera physically follows the arm’s end-effector. This is essential for manipulation tasks where the camera needs to see what the gripper is doing; the resulting video feed stays aligned with the arm’s workspace during teleoperation and dataset recording.
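The effect of docking can be sketched as a parent-child transform: the camera’s world pose is the wrist pose composed with a fixed mounting offset, so the camera moves whenever the wrist does. This uses planar poses and illustrative numbers only, not the platform’s internal representation:

```python
import math

def compose(parent, child):
    """Compose two planar poses (x, y, theta): child expressed in parent's frame."""
    px, py, pt = parent
    cx, cy, ct = child
    wx = px + cx * math.cos(pt) - cy * math.sin(pt)
    wy = py + cx * math.sin(pt) + cy * math.cos(pt)
    return (wx, wy, pt + ct)

# Illustrative numbers: a wrist pose in the world frame and a fixed
# camera offset expressed in the wrist frame.
wrist_world = (0.30, 0.10, math.pi / 2)  # wrist pose from forward kinematics
camera_in_wrist = (0.05, 0.0, 0.0)       # camera mounted 5 cm ahead of the wrist

camera_world = compose(wrist_world, camera_in_wrist)
print(camera_world)  # the camera pose follows wherever the wrist goes
```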

Step 2: Install the Cyberwave Edge

The Cyberwave CLI is the command-line tool used to authenticate, pair, and manage your physical hardware with the Cyberwave platform. The Edge Core acts as the bridge between the SO101 hardware and the Cyberwave cloud backend.
SSH into your edge device: Connect to the device that is physically connected to the SO101 arms and camera (e.g., Raspberry Pi, Jetson, or your local computer):
ssh <edge-device-username>@<edge-device-ip>
Install the CLI:
curl -fsSL https://cyberwave.com/install.sh | bash
Install the Edge Core:
sudo cyberwave edge install
The CLI will prompt you to log in with your Cyberwave credentials and then ask you to select the environment you created in Step 1. Once complete, the edge runtime is installed and your device is linked to the cloud platform. Pair the hardware: Follow the terminal prompts to pair the SO101 arms and camera with their digital twins:
  1. Select the environment you created.
  2. Select the SO101 digital twin.
  3. The appropriate driver will be automatically installed and configured.
  4. Repeat for the camera twin.

Step 3: Calibrate the Arms

Calibration is a required step before using an SO101 arm for teleoperation or control. It teaches the software where each joint’s zero (reference) position is, what its valid movement range is, and how the physical arm maps to the software model. Without calibration, the robot won’t know where its joints actually are and commands won’t translate correctly to hardware movements.
When you first connect a robot, it will have no calibration; you must complete calibration before the arm can be used. If you ever need to recalibrate (e.g., after reassembly or mechanical adjustments), you can delete the existing calibration from the platform and redo it.
You must calibrate each arm individually. If you’re using a dual-arm setup (leader + follower), complete calibration for both.
Calibrate via the Cyberwave Platform:
  1. Open the Cyberwave dashboard and navigate to your environment.
  2. Select the SO101 twin; you’ll see an option to Calibrate both arms (leader and follower).
  3. Click Calibrate and follow the on-screen prompts.
  4. Manually move every joint of the leader arm through its full range when prompted.
  5. Repeat for the follower arm.
  6. Once both arms are calibrated, the platform will confirm the calibration is complete.
Take your time during calibration: move each joint slowly and through its full range. Accurate calibration directly improves control precision during teleoperation.
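Conceptually, calibration records each joint’s travel limits and midpoint so raw readings can be mapped into a normalized range — which is why sweeping every joint through its full range matters. The sketch below is illustrative, not the actual SO101 calibration format:

```python
def calibrate_joint(raw_samples):
    """Derive a joint's calibration from raw encoder readings captured
    while the joint is moved through its full range of travel."""
    lo, hi = min(raw_samples), max(raw_samples)
    return {"min": lo, "mid": (lo + hi) / 2, "max": hi}

def raw_to_normalized(raw, cal):
    """Map a raw reading to [-1, 1], with 0 at the middle of the travel."""
    half_span = (cal["max"] - cal["min"]) / 2
    value = (raw - cal["mid"]) / half_span
    return max(-1.0, min(1.0, value))  # clamp readings just outside the range

cal = calibrate_joint([102, 480, 897, 310])
print(raw_to_normalized(897, cal))  # at the recorded limit -> 1.0
```

If a joint is not swept all the way during calibration, the recorded range is too narrow and commands near the real limits map incorrectly.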

Step 4: Set Up Teleoperation

Teleoperation lets you control the follower arm using the leader arm in real time. When you physically move the leader arm, the follower arm mirrors those movements instantly, giving you an intuitive, hands-on way to operate the robot. This is the primary method for performing tasks, collecting training data, and demonstrating behaviors to the system. How Teleoperation Works: The teleoperation system creates a synchronized connection between:
  • Physical leader arm → captures human-guided joint movements
  • Physical follower arm → executes the movements in real-time
  • Digital twin → receives telemetry data from both arms for monitoring and recording
Both the leader and follower arms send real-time joint data to their corresponding SO101 digital twin in Cyberwave (identified by the twin UUID). The camera also streams data to its digital twin for visual feedback and dataset recording. Start Teleoperation:
  1. Open your environment in the Cyberwave dashboard.
  2. Click the Assign Controller button.
  3. Select Local Teleop from the list of available controllers.
  4. The teleoperation session will start; the follower arm is now linked to the leader arm.
Teleoperation is active. Try moving the leader arm — the follower arm will replicate your movements in real time.
Controller types and data quality:
  • Local Teleop — actively generates high-frequency control data as you move the leader arm. This produces smooth, consistent datasets suitable for training ML models.
  • Keyboard — still generates data, but at a much lower control frequency. Keyboard-generated data is not recommended for datasets as the low frequency results in jerky, inconsistent demonstrations that don’t train well.
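The leader-to-follower mirroring and the importance of control frequency can be sketched as a fixed-rate loop. The driver callables `read_leader_joints` and `write_follower_joints` are hypothetical placeholders, not Cyberwave APIs:

```python
import time

CONTROL_HZ = 50  # high-frequency loop; keyboard input is far coarser than this

def teleop_loop(read_leader_joints, write_follower_joints, steps=100):
    """Mirror leader joint positions onto the follower at a fixed rate.

    read_leader_joints / write_follower_joints stand in for whatever
    hardware interface your drivers expose.
    """
    period = 1.0 / CONTROL_HZ
    log = []
    for _ in range(steps):
        t0 = time.monotonic()
        joints = read_leader_joints()    # capture human-guided motion
        write_follower_joints(joints)    # follower replicates it
        log.append(joints)               # the same stream feeds the digital twin
        time.sleep(max(0.0, period - (time.monotonic() - t0)))
    return log
```

Sampling at a low rate (as with keyboard control) produces the same loop with far fewer entries per second, which is what makes keyboard-generated demonstrations jerky.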

Set Up Remote Operation

Remote operation lets you control the follower arm without a physical leader arm. Instead, you assign an external controller to the robot’s digital twin and send commands directly from the Cyberwave platform. This is useful when you want to control the arm from a distance, test different control strategies, or run autonomous policies. How Remote Operation Works: In teleoperation, the leader arm drives the follower. In remote operation, a controller takes the place of the leader arm. The controller can be anything: a keyboard, a gamepad, a scripted sequence, or an AI model (like a VLA). You assign the controller to the SO101 digital twin, and it sends commands to the physical follower arm in real time via the edge runtime. Set Up Remote Operation:
  1. Open your environment in the Cyberwave dashboard.
  2. Click the Assign Controller button on the SO101 twin.
  3. Select a controller from the list, for example:
    • Keyboard — control individual joints using keyboard keys
    • VLA Model — a trained vision-language-action model that executes tasks from prompts
    • Custom Controller — any controller you’ve registered in the platform
  4. Once assigned, the follower arm will respond to commands from the selected controller.
Remote operation is active. The follower arm is now being controlled by the assigned controller; no leader arm needed.
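The idea that any controller can stand in for the leader arm can be sketched as a common interface. The `Controller` protocol, keymap, and joint deltas below are illustrative assumptions, not platform APIs:

```python
from typing import Protocol

class Controller(Protocol):
    """Anything that can drive the follower: keyboard, gamepad, script, model."""
    def next_command(self, observation: dict) -> list: ...

class KeyboardController:
    """Turns key presses into joint deltas. Keys are pre-queued here for
    illustration; a real controller would read live input events."""
    KEYMAP = {"q": (0, +0.05), "a": (0, -0.05),
              "w": (1, +0.05), "s": (1, -0.05)}

    def __init__(self, keys):
        self.keys = list(keys)

    def next_command(self, observation):
        joints = list(observation["joints"])
        if self.keys:
            joint_index, delta = self.KEYMAP[self.keys.pop(0)]
            joints[joint_index] += delta
        return joints
```

A VLA model or scripted sequence would satisfy the same interface: take the current observation, return the next joint command.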

Create and Export Datasets

Once teleoperation is set up and working, you can create datasets by recording episodes of the robot performing specific tasks. These datasets can later be used to train machine learning models for autonomous operation.
For recording datasets with the SO101, we recommend using teleoperation rather than remote operation. Controlling the follower arm with a physical leader arm gives you finer, more intuitive control, resulting in smoother demonstrations and higher-quality training data.
A dataset consists of multiple episodes: individual recordings of the robot completing a task. Each episode captures:
  • Joint positions and movements over time
  • Camera video feed showing the task execution
  • Timing and sequence data
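A minimal sketch of that episode structure (field names are illustrative, not Cyberwave’s actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    t: float            # seconds since episode start (timing data)
    joints: list        # joint positions at time t
    image: bytes = b""  # encoded camera frame for this instant

@dataclass
class Episode:
    task: str
    frames: list = field(default_factory=list)

    def add(self, t, joints, image=b""):
        self.frames.append(Frame(t, joints, image))

    @property
    def duration(self):
        return self.frames[-1].t - self.frames[0].t if self.frames else 0.0
```

Each frame pairs a joint snapshot with the camera image taken at the same moment, which is what lets a model learn to map what it sees to how it should move.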

Step 1: Record Episodes

Recording episodes captures the manual operations performed through teleoperation. Each recording can contain multiple task demonstrations that you’ll later trim into episodes. Start Recording in Live Mode:
  1. Navigate to your Cyberwave environment in the dashboard
  2. Switch to Live Mode in the environment viewer
  3. Turn on the camera:
    • Locate the camera icon in the upper-right corner
    • Click the Turn On icon to activate the camera feed
  4. Click Start Recording to begin capturing data
Make sure teleoperation is running (see Set Up Teleoperation above) before you start recording. The recording captures both the arm movements and camera feed simultaneously.
Perform Task Demonstrations: With recording active, use the leader arm to guide the follower arm through the task you want to teach:
  1. Position the robot at the starting configuration
  2. Execute the task smoothly using the leader arm
  3. Complete the task fully (e.g., pick up object → move → place in box)
  4. Repeat the same task multiple times to create variety in the dataset
Goal: Train the SO101 to pick up an object and drop it inside a box. Recording process:
  1. Start with the gripper open near the object
  2. Move the leader arm to position the follower over the object
  3. Close the gripper to pick up the object
  4. Move to the box location
  5. Open the gripper to release the object
  6. Return to starting position
  7. Repeat 10-15 times with slight variations
This creates a robust dataset with multiple examples of the same behavior.
Record multiple demonstrations of the same task with slight variations (different speeds, slightly different positions). This helps the model generalize better during training.
Stop Recording: When you’ve captured enough demonstrations:
  1. Click Stop Recording in the Cyberwave interface
  2. The recording will be saved and ready for processing

Step 2: Export Dataset

After recording, you’ll trim the raw recording into discrete episodes and export them as a structured dataset. Create Episodes from Recording:
  1. Open the recorded session in your Cyberwave environment.
  2. Review the timeline: You’ll see the full recording with video and telemetry data.
  3. Trim episodes:
    • Identify the start and end of each successful task demonstration
    • Use the trim tool to isolate each episode
    • Remove any failed attempts, pauses, or unwanted sections
  4. Label episodes (optional): add descriptive names for organization.
Each episode should contain one complete task execution from start to finish. Keep episodes focused and remove any unnecessary setup or reset time between demonstrations.
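The trimming step amounts to cutting time spans out of one long recording; the dashboard does this visually, but a simplified sketch of the operation looks like this:

```python
def trim_episodes(recording, spans):
    """Cut a raw recording into episodes.

    recording: list of (timestamp, sample) pairs in time order.
    spans: (start, end) times of each successful demonstration; anything
    outside the spans (failed attempts, pauses, resets) is dropped.
    """
    return [[(t, s) for t, s in recording if start <= t <= end]
            for start, end in spans]
```

Each span should cover exactly one complete task execution, so every resulting episode is one clean demonstration.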
Create a Dataset: Once you’ve created episodes:
  1. Review all episodes to ensure quality.
  2. Select the episodes you want to include in the final dataset.
    • Check the box next to each desired episode
    • Deselect any that have errors or poor quality
  3. Click Create Dataset.
Aim for consistency in your episodes: they should all demonstrate the same task in similar conditions. Remove outliers or failed attempts to improve training data quality.
Manage Your Datasets:
  1. Navigate to the Manage Datasets tab in Cyberwave
  2. View all your created datasets.
  3. Access dataset details:
    • Number of episodes
    • Duration
  4. Download datasets for local training or use them directly in Cyberwave for model training.
Export Your Datasets:
  1. Go to File → Export → Export Datasets.
  2. Select the specific dataset you want to export.
  3. Click on Export.
Dataset Created: Your dataset is now ready for training machine learning models. Each episode contains synchronized robot movements and camera footage that can teach autonomous behaviors.

Train and Deploy an ML Model

With your dataset created, you can now train a machine learning model to autonomously replicate the behaviors you demonstrated. Once trained, the model can be deployed as a controller policy that directly controls the SO101 robot.

Step 1: Train a Model

Training transforms your recorded demonstrations into a model that can predict and execute similar actions autonomously. Configure training parameters:
  1. Workspace: Select your workspace from the dropdown.
  2. ML Model: Choose the appropriate ML model.
  3. Dataset: Select the dataset you created earlier.
  4. Advanced Settings: Data Augmentation:
    • Use the slider to select augmentation level:
      • 0 — No augmentation
      • 1 — Low augmentation (recommended for most cases)
      • 2 — Medium augmentation (for more robust generalization)
    Data augmentation adds variations to your training data (like slight position changes or lighting differences) to help the model generalize better to new situations.
    Training Stop Policy: Choose one of two stopping strategies:
    • Save best model until iterations (recommended for beginners)
      • Set the number of iterations (max: 5000)
      • Training continues until reaching the specified iterations
      • The best-performing model checkpoint is saved
    • Stop when validation loss is under threshold (for faster training)
      • Set the validation loss threshold (default: 0.01)
      • Set max iterations (max: 5000)
      • Training stops early when validation loss reaches the threshold
      • May be faster since training stops when a valid model is found
    If you’re unsure, use the default settings: “Save best model until iterations” with 5000 iterations. This ensures complete training without premature stopping.
  5. Click Start Training to begin.
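The two stop policies can be sketched as one loop with an optional early exit. Here `loss_fn` is a stand-in for one training iteration returning a validation loss; this is illustrative, not the platform’s actual training loop:

```python
def train(loss_fn, max_iters=5000, val_threshold=None):
    """Run training iterations under one of two stop policies.

    - val_threshold=None: run all iterations, keep the best checkpoint
      ("save best model until iterations").
    - val_threshold set: stop early once validation loss drops below it
      ("stop when validation loss is under threshold").
    """
    best_loss, best_iter = float("inf"), None
    for i in range(1, max_iters + 1):
        loss = loss_fn(i)
        if loss < best_loss:
            best_loss, best_iter = loss, i  # the "save best model" checkpoint
        if val_threshold is not None and loss < val_threshold:
            return {"stopped_at": i, "best_loss": best_loss, "best_iter": best_iter}
    return {"stopped_at": max_iters, "best_loss": best_loss, "best_iter": best_iter}
```

With a threshold, training can finish well before `max_iters`; without one, you always pay for the full run but are guaranteed the best checkpoint over all iterations.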

Step 2: Deploy a Model

Once training completes successfully, deploy the model to make it available as a controller policy. Create a Deployment:
  1. Navigate to AI → Deployments.
  2. Click Start New Deployment.
  3. Select your trained model from the list of completed trainings.
  4. Select the target twins to deploy the model to.
  5. Click Deploy.
Model Deployed: Your trained model is now available as a controller policy and ready to control the robot autonomously.

Step 3: Use the Model as a Controller Policy

Now use your trained model to autonomously control the physical SO101 robot. Assign the Controller Policy:
  1. In your environment, switch to Edit Mode.
  2. Click Assign Controller Policy from the right side view.
  3. Select your deployed model from the dropdown.
  4. Click Save Configuration.
  5. The model now appears as a controller policy in the right side view.
Execute with Natural Language Prompts:
  1. Switch to Live View.
  2. You’ll see an option to enter a prompt.
  3. Type your instruction (e.g., “Pick up the object and place it in the box”).
  4. The model deploys the action to the SO101 in your real environment setup.
Ensure the workspace is clear and the robot has safe operating space before executing autonomous actions.
Autonomous Control Active: Your SO101 is now controlled by AI using natural language prompts!

Manual Calibration (CLI)

If you prefer to calibrate the arms via the command line instead of the Cyberwave platform, you can use the so101-calibrate command directly. Calibrate the Leader Arm:
so101-calibrate --type leader --port /dev/tty.usbmodem123 --id leader1
Replace /dev/tty.usbmodem123 with your actual leader arm port. This registers the device as a leader arm, stores its calibration under the ID leader1, and prepares it to capture your manual movements. Calibrate the Follower Arm:
so101-calibrate --type follower --port /dev/tty.usbmodem456 --id follower1
Replace /dev/tty.usbmodem456 with your actual follower arm port. This registers the device as a follower arm, stores its calibration under the ID follower1, and prepares it to receive and execute control commands. The calibration process is interactive — follow the on-screen prompts to move joints to specific positions and confirm alignment.
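To find the actual port names, you can list likely USB-serial devices. On Linux the arms typically appear as /dev/ttyACM* or /dev/ttyUSB*, and on macOS as /dev/tty.usbmodem*; the patterns below are common defaults, not guaranteed for every setup:

```python
import fnmatch
import glob

# Common USB-serial device-name patterns (assumption: adjust for your OS).
PATTERNS = ("/dev/ttyACM*", "/dev/ttyUSB*", "/dev/tty.usbmodem*")

def candidate_ports(paths=None):
    """Return device paths that look like USB-serial ports.

    With no argument, scan /dev on the local machine; pass an explicit
    list of paths to filter that list instead (handy for testing).
    """
    if paths is None:
        return sorted(p for pattern in PATTERNS for p in glob.glob(pattern))
    return sorted(p for p in paths
                  if any(fnmatch.fnmatch(p, pattern) for pattern in PATTERNS))
```

Unplug and replug one arm at a time and compare the lists to tell the leader port from the follower port.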