Documentation Index
Fetch the complete documentation index at: https://docs.cyberwave.com/llms.txt
Use this file to discover all available pages before exploring further.
Phase 2: Train and Deploy Models
With your robot connected, record demonstrations (via Local Teleop), build datasets, and train an OpenVLA policy in the cloud, then deploy it for autonomous control.
stub Training stack: Cloud training currently supports VLA with OpenVLA only. We plan to add more policy types, including additional VLA options and RL, and, longer term, non-robotic models as well.
stub Data collection: Dataset recording for this workflow assumes your robot twin can use the local teleop controller. Robots that do not expose Local Teleop are not supported for this style of training data capture today.
Step 6: Record and Create Datasets
Collect training data by recording your robot operations. These datasets are used for OpenVLA training. Your twin must support the local teleop controller (see the note above).
- Switch your environment to Live Mode
- Check that cameras are streaming correctly
- Assign Local Teleop to a supported robot twin, then record (teleop-driven data is what OpenVLA training expects)
- Perform the task you want the robot to learn multiple times
- Stop recording when you have enough demonstrations
stub: When collecting data to train an AI model, favor clear, deliberate demonstrations over rushing. Better demonstrations usually improve VLA predictions more than marginal gains from noisy data.
How much data? There is no single dataset size for every task; it depends on the VLA, the scene, and task difficulty. About 30 demonstrations is a reasonable starting point; more diverse data often makes the policy more robust. Mixing related tasks in one dataset can sometimes help generalization.
Cameras: Plan for at least a wrist camera and an overhead (scene) view. A practical rule: if you could perform the task using only what those cameras show, the VLA has a fair shot at learning it. Match camera placement and lighting between recording for training and running inference as closely as you can.
Episodes: Pause between episodes. Vary start configurations so the robot learns to recover from different poses. Mark ends at crisp task boundaries, for example gripper closed after a stable grasp on a pick, or gripper open after release on a place. Short pauses around those moments make trimming and labeling easier.
Iteration: If quality is still lacking after training, add more demonstrations and repeat. Target the behavior that is weakest (picking is often the hardest) by adding more high-quality pick examples rather than only scaling unrelated footage.
API Reference:
PUT /api/v1/twins/{uuid} - Assign Local Teleop controller for recording
GET /api/v1/twins/{uuid}/recordings - Get twin recordings
cyberwave/twin/{uuid}/telemetry - Recording lifecycle:
telemetry_start - Recording begins
telemetry_end - Recording ends (triggers cloud processing)
cyberwave/joint/{uuid}/+ - Joint state updates during recording
cyberwave/twin/{uuid}/command - Controller assignment changes
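The recording endpoints and MQTT topics above can be sketched as simple builders. This is a hedged sketch: the paths and topic patterns come from the reference above, but the base URL is an assumption and no payload schema is implied.

```python
# Sketch of the Step 6 recording endpoints and MQTT topics.
# Paths and topics are taken from the API reference above; BASE_URL is an
# assumption -- substitute your deployment's host.

BASE_URL = "https://api.cyberwave.com"  # assumed base URL

def twin_update_url(twin_uuid: str) -> str:
    """PUT target for assigning the Local Teleop controller to a twin."""
    return f"{BASE_URL}/api/v1/twins/{twin_uuid}"

def twin_recordings_url(twin_uuid: str) -> str:
    """GET target for listing a twin's recordings."""
    return f"{BASE_URL}/api/v1/twins/{twin_uuid}/recordings"

def recording_topics(uuid: str) -> dict:
    """MQTT topics active during a recording session (uuid per the reference)."""
    return {
        "telemetry": f"cyberwave/twin/{uuid}/telemetry",  # telemetry_start / telemetry_end
        "joints": f"cyberwave/joint/{uuid}/+",            # joint state updates
        "command": f"cyberwave/twin/{uuid}/command",      # controller assignment changes
    }
```

Subscribing to the telemetry topic lets you detect `telemetry_end`, which is when cloud processing of the recording begins.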
Step 7: Mark episodes in Replay (stub)
After recording, define episodes (one time-bounded segment per demonstration) in Replay before you assemble a dataset. Episodes carry a task name that groups semantically similar clips; keep that naming consistent for training.
- Switch the environment to Replay mode
- Select the session (and overall time range) where you collected data; zoom or filter the timeline if needed so markers land cleanly in a less cluttered view
- Enter episode creation (Create Episode in the replay controls). Move the playhead and press M to drop a marker at the current time; the second marker completes a span and opens the episode save dialog (two markers → one episode)
- In the dialog, set the task name with care—use the same name for the same task across runs. The UI defaults to the last task name you used and lets you pick from names already used in the environment; avoid near-duplicate spellings
- With the dialog open, playback stays paused. When the dialog is dismissed and focus is on the viewer, Space toggles play/pause like elsewhere in Replay (shortcuts may not apply while typing in the dialog)
stub: Implementation sketch: Episode marking and M are wired through replay keyboard shortcuts; confirmation UI lives in the episode create flow. Exact button labels, marker affordances, and whether time-filtering is session-picker-only vs timeline-zoom should be checked in-product. Link internal sources: cyberwave-frontend/components/environment/replay/replay-panel.tsx, episode-create-dialog.tsx, hooks/useReplayShortcuts.ts.
API Reference:
POST /api/v1/episodes - Create episode from marked timeline segment
GET /api/v1/episodes - List episodes (filter by environment)
GET /api/v1/episodes/task-names - Get available task names for autocomplete
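The two-markers-to-one-episode flow can be sketched as a request-body builder for POST /api/v1/episodes. The endpoint is from the reference above, but the JSON field names here (task_name, start_time, end_time, environment_uuid) are assumptions, not confirmed schema.

```python
# Hypothetical sketch: body for POST /api/v1/episodes. Field names are
# assumptions; only the endpoint itself is documented above.

def episode_payload(task_name: str, start_s: float, end_s: float,
                    environment_uuid: str) -> dict:
    """Build a request body for one episode (two markers -> one span)."""
    if end_s <= start_s:
        # The second marker must come after the first to complete a span.
        raise ValueError("episode end must be after episode start")
    return {
        "task_name": task_name,          # reuse the exact same name across runs
        "start_time": start_s,
        "end_time": end_s,
        "environment_uuid": environment_uuid,
    }
```

Pulling task names from GET /api/v1/episodes/task-names before building the payload is one way to avoid the near-duplicate spellings the step above warns about.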
Recordings become available for this step after their telemetry_end events are processed.
Step 8: Prepare Your Dataset (export optional)
Turn recordings and episodes (from Step 7 or equivalent trimming) into a dataset Cyberwave can use for training. You can do this entirely in the platform; exporting a dataset file is optional and only needed if you want a copy outside Cyberwave.
- Review your recorded sessions and the episodes tied to them
- If needed, adjust segments (e.g. trim) so each episode is one clean task completion
- Choose the episodes you want in your dataset
- Open Manage Datasets to create or review the dataset you will use in AI → Training
- (Optional) Export the dataset if you need an offline artifact; otherwise continue in-product for training
API Reference:
POST /api/v1/datasets - Create dataset from episodes
GET /api/v1/datasets - List available datasets
GET /api/v1/datasets/{uuid}/zip?format=openvla|lerobot - Get signed URL for processed dataset zip (requires format query parameter)
PUT /api/v1/datasets/{uuid} - Update dataset episodes
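The zip endpoint's required format parameter is easy to get wrong, so a small URL builder that validates it up front can help. A hedged sketch: the path and the two accepted formats are from the reference above; BASE_URL is an assumption.

```python
# Sketch of the dataset zip request (GET /api/v1/datasets/{uuid}/zip).
# The required "format" query parameter accepts openvla or lerobot, per the
# reference above. BASE_URL is an assumption.

BASE_URL = "https://api.cyberwave.com"  # assumed base URL

VALID_FORMATS = {"openvla", "lerobot"}

def dataset_zip_url(dataset_uuid: str, fmt: str = "openvla") -> str:
    """GET target returning a signed URL for the processed dataset zip."""
    if fmt not in VALID_FORMATS:
        raise ValueError(f"format must be one of {sorted(VALID_FORMATS)}")
    return f"{BASE_URL}/api/v1/datasets/{dataset_uuid}/zip?format={fmt}"
```

This only matters if you take the optional export path; for in-platform training you never need the zip.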
Step 9: Train an AI Model
Use your dataset to train a VLA (OpenVLA) policy that can control your robot autonomously. Other training backends are not available yet (see the notes at the top of this page). You can start training in either of these ways:
- AI → Training → New Training: a single dialog to pick workspace, dataset, and model (when you already have a processed dataset).
- AI → Finetune VLA model: opens Finetune OpenVLA to your tasks, a guided flow for dataset prep and training in one place.
stub: Finetune VLA wizard (implementation reference): The guided dialog is implemented in cyberwave-frontend/components/environment/ai/fine-tune-vla-wizard-dialog.tsx. User-facing steps are Prepare dataset → Start training → Done: create or attach an existing dataset and wait for processing, then submit training via the embedded New Training form (new-training-form.tsx), with optional polling/logs until the run completes. Menu labels, timeouts, and step copy may change; reconcile with the live app before removing this stub.
If you use New Training only:
- Navigate to AI → Training in your environment
- Click New Training
- Select the dataset you prepared (in-platform; no export required unless you chose to download one)
- Choose the OpenVLA architecture (currently the supported option)
- Configure training parameters or use defaults
- Click Start Training
- Training can take several hours; use AI → Training → View training history to check status
API Reference:
GET /api/v1/mlmodels - List available ML models (OpenVLA, etc.)
POST /api/v1/mltrainings - Start a new training job
GET /api/v1/mltrainings/{uuid} - Get training status and metrics
PUT /api/v1/mltrainings/{uuid} - Update training (used by training scripts)
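Starting and polling a training run can be sketched against the endpoints above. This is a hypothetical sketch: POST /api/v1/mltrainings and the status GET are documented, but the body field names (dataset_uuid, model) and BASE_URL are assumptions.

```python
# Hypothetical sketch of a Step 9 training request. Endpoints are from the
# reference above; body field names and BASE_URL are assumptions.

BASE_URL = "https://api.cyberwave.com"  # assumed base URL

def training_request(dataset_uuid: str, model: str = "openvla") -> dict:
    """Assumed body for POST /api/v1/mltrainings.

    OpenVLA is currently the only supported architecture, so "openvla" is
    the only value that makes sense here today.
    """
    return {"dataset_uuid": dataset_uuid, "model": model}

def training_status_url(training_uuid: str) -> str:
    """GET target to poll training status and metrics."""
    return f"{BASE_URL}/api/v1/mltrainings/{training_uuid}"
```

Since training can take several hours, poll the status URL (or use AI → Training → View training history) rather than waiting on a single request.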
Step 10: Deploy Your Model
Deploy your trained OpenVLA policy as a controller to enable autonomous robot control.
- Go to AI → Deployments in your environment
- Click Deploy Model
- Select your trained model
- Give your deployment a name
- Configure deployment settings
- Click Deploy
API Reference:
POST /api/v1/mltrainings/{uuid}/deploy - Deploy trained model to twins
GET /api/v1/mlmodels/{uuid}/weights - Download model checkpoint weights
GET /api/v1/mltrainings/deployed - List deployed models
cyberwave/twin/{uuid}/command - Sends controller-changed event on deployment
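The deploy endpoint and the MQTT topic it notifies can be sketched together. A hedged sketch: the deploy path and command topic are from the reference above; BASE_URL and the shape of any controller-changed payload are assumptions.

```python
# Sketch of Step 10 deployment targets. Paths and topic are from the
# reference above; BASE_URL is an assumption.

BASE_URL = "https://api.cyberwave.com"  # assumed base URL

def deploy_url(training_uuid: str) -> str:
    """POST target that deploys a trained model to twins."""
    return f"{BASE_URL}/api/v1/mltrainings/{training_uuid}/deploy"

def command_topic(twin_uuid: str) -> str:
    """Topic where the controller-changed event is published on deployment."""
    return f"cyberwave/twin/{twin_uuid}/command"
```

Subscribing to the command topic before deploying lets an edge client confirm the controller switch without polling.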
Step 11: Use AI to Control Your Robot
Control your robot with natural language prompts using your deployed VLA policy.
- Switch to Edit Mode in your environment
- Click Assign Controller Policy for your robot
- Select your deployed model
- Save the configuration
- Switch to Live Mode
- Enter a natural language prompt (e.g., “Pick up the object”)
- Watch your robot execute the task autonomously!
API Reference:
PUT /api/v1/twins/{uuid} - Assign controller policy to twin
POST /api/v1/twins/{uuid}/actions - Execute actions (motion, animation, etc.)
GET /api/v1/twins/{uuid}/actions/{action_id} - Get action execution status
cyberwave/twin/{uuid}/command - Controller policy updates
cyberwave/joint/{uuid}/+ - Joint commands from AI controller to edge
cyberwave/twin/{uuid}/position - Position commands from AI
cyberwave/twin/{uuid}/rotation - Rotation commands from AI
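The prompt-to-action flow in Step 11 can be sketched as builders over the endpoints above. A hypothetical sketch: the action endpoints are documented in the reference, but BASE_URL and the action body field name ("prompt") are assumptions, not confirmed schema.

```python
# Hypothetical sketch of Step 11: submit a natural-language prompt as an
# action and check its status. Endpoints are from the reference above;
# BASE_URL and the "prompt" field name are assumptions.

BASE_URL = "https://api.cyberwave.com"  # assumed base URL

def action_url(twin_uuid: str) -> str:
    """POST target for executing an action on a twin."""
    return f"{BASE_URL}/api/v1/twins/{twin_uuid}/actions"

def action_status_url(twin_uuid: str, action_id: str) -> str:
    """GET target for one action's execution status."""
    return f"{BASE_URL}/api/v1/twins/{twin_uuid}/actions/{action_id}"

def prompt_action(prompt: str) -> dict:
    """Assumed body shape for a natural-language action request."""
    return {"prompt": prompt}
```

While the action runs, the joint, position, and rotation topics listed above carry the AI controller's commands to the edge, which is where you would watch the policy's output stream.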