What is Perception?
Perception is how your robot understands its environment: what’s in the camera frame, where obstacles are, what objects to interact with, and what a human operator is asking for. On Cyberwave, perception data flows from sensors on the edge through digital twins into AI models, and back into your control loop in real time.
Perception models on Cyberwave can run in the cloud (good for heavy VLMs and batch inference) or on the edge through Edge Workers (good for low-latency, video-never-leaves-the-device use cases).
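The sensor-to-model-to-control loop described above can be sketched in a few lines. This is a minimal illustration of the pattern, not the Cyberwave SDK; every name here (Detection, run_model, control_step) is a hypothetical stand-in.

```python
# Sketch of the perception loop: frames flow into a model, and the
# model's output feeds back into the control loop. All names are
# illustrative assumptions, not the Cyberwave SDK.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    confidence: float


def run_model(frame: bytes) -> list[Detection]:
    # Stand-in for cloud or edge inference on one camera frame.
    return [Detection("obstacle", 0.92)] if frame else []


def control_step(detections: list[Detection]) -> str:
    # Feed perception output back into the control loop.
    hazard = any(d.label == "obstacle" and d.confidence > 0.5 for d in detections)
    return "stop" if hazard else "continue"


frame = b"\xff\xd8..."  # pretend JPEG bytes from a camera
command = control_step(run_model(frame))
```

The point of the sketch is the shape of the loop: perception output is just another input to control, whether inference happened in the cloud or on the device.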
Hardware That Supports It
Cameras
USB webcams, laptop cameras, and IP cameras for live streaming, vision workflows, and dataset recording.
LiDAR & Depth Sensors
Connect 3D sensors via ROS, drivers, or custom integrations.
Onboard Robot Sensors
Wrist-mounted cameras, depth sensors, and IMUs that stream alongside the robot’s joint state.
Edge Compute
Run inference on Raspberry Pi, Jetson, or any Linux box co-located with your hardware.
How You Build It
1. Stream sensor data into a twin
Cameras and sensors paired through the Edge Core appear as live streams on their digital twin. View them in the dashboard, record them as datasets, or pipe them into a model. The wire format is consistent across cloud and edge.
2. Pick the right model
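The consistent wire format mentioned in step 1 could look something like the sketch below. The FramePacket class and its field names are assumptions for illustration, not the documented Cyberwave schema.

```python
# Hypothetical sketch of a frame packet whose header is identical
# whether the frame is consumed in the cloud or on the edge.
# Field names are assumptions, not the real Cyberwave wire format.
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class FramePacket:
    twin_id: str      # digital twin the stream is attached to
    sensor_id: str    # which camera or sensor produced the frame
    timestamp: float  # capture time, seconds since epoch
    encoding: str     # e.g. "jpeg"
    size: int         # payload size in bytes

    def header(self) -> str:
        # Serialize metadata; the pixel payload would travel alongside it.
        return json.dumps(asdict(self))


packet = FramePacket("twin-arm-01", "wrist-cam", time.time(), "jpeg", 48_213)
header = json.loads(packet.header())
```

Because the same header travels with every frame, downstream consumers (a dashboard, a dataset recorder, or a model) don't need to care where the frame originated.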
3. Run inference where it makes sense
For low-latency loops (closed-loop visual servoing, safety-critical detection), deploy your model as an Edge Worker so frames never leave the device. For heavier reasoning (planning, multi-modal VLMs), call the cloud-hosted model directly.
4. Wire perception into workflows
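The placement rule above reduces to a simple decision: latency-critical or privacy-sensitive inference stays on the edge, heavy reasoning goes to the cloud. A minimal sketch, where the function name and the 50 ms threshold are illustrative assumptions:

```python
# Sketch of the edge-vs-cloud placement decision described above.
# The threshold and names are assumptions, not Cyberwave policy.
def choose_placement(max_latency_ms: float, frames_may_leave_device: bool) -> str:
    if not frames_may_leave_device or max_latency_ms < 50:
        # Closed-loop servoing, safety-critical detection, private video.
        return "edge-worker"
    # Planning, multi-modal VLMs, batch inference.
    return "cloud"


servoing = choose_placement(max_latency_ms=20, frames_may_leave_device=False)
planning = choose_placement(max_latency_ms=2000, frames_may_leave_device=True)
```

In practice you might also weigh model size and device memory, but latency budget and data locality are usually the deciding factors.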
The camera_frame trigger in Workflows runs your model on every new frame and routes the output into downstream nodes: perfect for event-driven detection, anomaly alerts, or visual triggers for manipulation tasks.
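The camera_frame trigger pattern (run a handler on every new frame, route its output downstream) can be sketched locally as a tiny dispatcher. This is not the Workflows API; the class and method names are assumptions for illustration.

```python
# Local sketch of the camera_frame trigger pattern: handlers subscribe
# to new frames and route results downstream. Names are illustrative,
# not the Cyberwave Workflows API.
from typing import Callable


class FrameTrigger:
    def __init__(self) -> None:
        self._handlers: list[Callable[[bytes], None]] = []

    def on_camera_frame(self, handler: Callable[[bytes], None]) -> None:
        # Register a downstream node to run on every new frame.
        self._handlers.append(handler)

    def emit(self, frame: bytes) -> None:
        # Called once per new frame arriving from the stream.
        for handler in self._handlers:
            handler(frame)


results: list[str] = []
trigger = FrameTrigger()
# A downstream node: flag empty frames as anomalies, pass the rest.
trigger.on_camera_frame(lambda f: results.append("anomaly" if len(f) == 0 else "ok"))
trigger.emit(b"\xff\xd8frame")
```

The real trigger adds routing, retries, and fan-out to multiple nodes, but the core contract is the same: one handler invocation per frame.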
Where to Go Next
Camera Quickstart
Stream your first camera into a digital twin.
Edge to Cloud VLM
Tutorial: run a VLM on edge frames and trigger cloud workflows.
Edge Workers
Deploy models alongside your hardware for low-latency inference.