
What is a compatible driver?

A compatible driver creates the connection between a hardware device’s own API and the Cyberwave platform. It is responsible for translating the device’s native protocol into the twin model that the rest of the platform understands. Drivers can run anywhere — on a dedicated edge device, directly on the robot hardware, in the cloud, or on a developer laptop. In production, they typically run on edge hardware co-located with the device they control. Each driver is packaged as a Docker container image. Edge Core pulls and runs that image, injecting the environment variables described below. This means you can develop and test your driver locally using the same image that will run in production.
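Because the packaging contract is just "a container image that reads the injected environment", a minimal project can be a single script plus a Dockerfile. The sketch below is illustrative: the base image, file names, and entrypoint are assumptions, not platform requirements.

```dockerfile
# Hypothetical minimal driver image — names and base are illustrative
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY driver.py .
# Edge Core injects the CYBERWAVE_* variables at run time; nothing is baked in
CMD ["python", "driver.py"]
```

The same image runs unchanged on a laptop (with the variables set by hand) and under Edge Core in production.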

Quickstart: scaffold with the Claude skill

The fastest way to get started is the Cyberwave Driver skill for Claude Code. It asks you a few questions about your hardware and scaffolds a complete, production-ready driver project — including the Dockerfile, local dev setup, and a working twin connection. Install the skill:
git clone https://github.com/cyberwave-os/driver-skill ~/.claude/skills/cyberwave-driver
Then in any Claude Code session:
/cyberwave-driver
Claude will generate the full project tree and walk you through connecting it to a real twin locally. The skill source is open source at cyberwave-os/driver-skill.

Quickstart: use the SDK

The fastest way to write a driver by hand is to build on one of the official SDKs. The SDKs handle twin synchronization, file I/O, reconnection logic, and more, so you can focus on the hardware integration.

Environment variables

When Edge Core starts a driver container it injects the following environment variables. You can develop your driver assuming these are always set to valid values — no need to handle the case where they are absent.
  • CYBERWAVE_TWIN_UUID — UUID of the twin instance this driver manages
  • CYBERWAVE_API_KEY — API key scoped to this driver for authenticating platform calls
  • CYBERWAVE_TWIN_JSON_FILE — absolute path to a writable JSON file containing the twin’s current state (see Twin JSON file)
  • CYBERWAVE_CHILD_TWIN_UUIDS — (optional) comma-separated UUIDs of child twins (e.g. cameras) attached to this driver
CYBERWAVE_CHILD_TWIN_UUIDS is set when child twins are attached to the driver twin. Drivers can use this to coordinate child devices (for example, multiple cameras) without additional configuration.
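A driver can therefore read its environment without defensive fallbacks for the required variables. A minimal parsing sketch in Python (the `DriverEnv` type and helper name are illustrative, not part of the SDK):

```python
from dataclasses import dataclass, field


@dataclass
class DriverEnv:
    twin_uuid: str
    api_key: str
    twin_json_file: str
    child_twin_uuids: list = field(default_factory=list)


def read_driver_env(env):
    # The first three variables are guaranteed by Edge Core, so direct
    # indexing is safe; only the child-twin list is optional.
    children = env.get("CYBERWAVE_CHILD_TWIN_UUIDS", "")
    return DriverEnv(
        twin_uuid=env["CYBERWAVE_TWIN_UUID"],
        api_key=env["CYBERWAVE_API_KEY"],
        twin_json_file=env["CYBERWAVE_TWIN_JSON_FILE"],
        child_twin_uuids=[u for u in children.split(",") if u],
    )
```

Pass `os.environ` to `read_driver_env` at startup; in tests, pass a plain dict.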

Restart behavior tuning

The following optional variables let you override Edge Core’s restart defaults:
  • CYBERWAVE_DRIVER_RESTART_LOOP_THRESHOLD (default: 4) — number of restarts before the driver is marked as flapping
  • CYBERWAVE_DRIVER_RESTART_LOOP_WINDOW_SECONDS (default: 60) — time window (seconds) used to count restarts
  • CYBERWAVE_DRIVER_TROUBLESHOOTING_URL (default: https://docs.cyberwave.com) — URL surfaced in platform alerts for operator guidance

Driver failure handling

Drivers must exit with a non-zero code when they cannot access required hardware (for example, a missing /dev/video* device or a disconnected peripheral). This allows Edge Core to detect startup failures and trigger restart logic. Edge Core raises the following alerts:
  • driver_start_failure — raised when a driver container cannot reach a stable running state.
  • driver_restart_loop — raised when a driver exceeds the restart threshold within the window. The container is stopped and marked as flapping.
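A fail-fast startup check can be as simple as the sketch below (the device path and helper name are illustrative):

```python
import os
import sys


def require_device(path):
    # Exit non-zero so Edge Core can raise driver_start_failure and
    # apply its restart logic instead of leaving a zombie container.
    if not os.path.exists(path):
        print(f"fatal: required device {path} not found", file=sys.stderr)
        sys.exit(1)


# At startup, before connecting to the platform:
# require_device("/dev/video0")
```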

Twin JSON file

CYBERWAVE_TWIN_JSON_FILE points to a JSON file on disk that contains the digital twin instance (including its metadata) and the associated catalog twin data, matching the TwinSchema and AssetSchema API schemas. Drivers may read and modify this file. Edge Core syncs any changes back to the backend when connectivity is available.

Runtime configuration

Drivers should treat metadata["edge_configs"] as the source of truth for per-device runtime configuration, and metadata["edge_fingerprint"] as the edge identity (not duplicated inside edge_configs). Read edge_configs from CYBERWAVE_TWIN_JSON_FILE at startup to obtain per-device settings without hardcoding them in the image.
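A minimal read of those settings might look like the following sketch. It assumes only the nesting described above (edge_configs and edge_fingerprint under metadata) and leaves everything else in the file untouched:

```python
import json


def load_edge_configs(twin_json_path):
    with open(twin_json_path) as f:
        twin = json.load(f)
    metadata = twin.get("metadata", {})
    edge_configs = metadata.get("edge_configs", {})
    # Edge identity lives beside edge_configs, not inside it
    edge_fingerprint = metadata.get("edge_fingerprint")
    return edge_configs, edge_fingerprint
```

Call it once at startup with `os.environ["CYBERWAVE_TWIN_JSON_FILE"]` to obtain per-device settings without baking them into the image.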

Sensor data output

If your driver produces sensor data (video frames, depth maps, audio, joint states, etc.), publish it to the edge data bus so worker containers and ML models can consume it locally with zero network overhead. There are two options: the Zenoh data bus (recommended) and the filesystem convention (fallback for constrained environments). Both use the same channel names — a driver can switch between them by changing one env var. The Zenoh data bus provides zero-copy shared memory between driver and worker containers. Data is consumed directly by worker hooks and cw.data.latest().

Key expression convention

cw/{twin_uuid}/data/{channel}
  • cw — fixed prefix
  • {twin_uuid} — UUID of the twin (e.g. a1b2c3d4-...)
  • data — fixed namespace
  • {channel} — canonical channel name (e.g. frames/default)
The DataBus handles key composition automatically via CYBERWAVE_TWIN_UUID.

Canonical channels

  • frames/default — numpy/ndarray, stream: SDK header + raw BGR/RGB uint8
  • depth/default — numpy/ndarray, stream: SDK header + raw uint16 depth (mm)
  • joint_states — application/json, latest value: {ts, names, positions, velocities?, efforts?, source_type}
  • position — application/json, latest value: {ts, x, y, z, qx?, qy?, qz?, qw?}
  • audio/default — numpy/ndarray, stream: SDK header + float32 PCM
  • pointcloud/default — numpy/ndarray, stream: SDK header + Nx3 float32
  • imu — application/json, stream: {ts, accel: {x,y,z}, gyro: {x,y,z}}
  • battery — application/json, latest value: {ts, voltage_v, current_a, charge_pct}
  • telemetry — application/json, latest value: free-form {ts, ...}
You can define custom channels by picking any channel name.

Python SDK example

from cyberwave import Cyberwave
import numpy as np
import os, time

cw = Cyberwave(api_key=os.environ["CYBERWAVE_API_KEY"], source_type="edge")

# Binary stream: numpy array published with SDK header
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cw.data.publish("frames/default", frame)

# JSON latest-value: dict published as application/json
cw.data.publish("joint_states", {
    "ts": time.time(),
    "names": ["shoulder_pan", "elbow_flex"],
    "positions": [0.1, -0.5],
})
CYBERWAVE_TWIN_UUID is read automatically from the environment. CYBERWAVE_DATA_BACKEND selects the transport (zenoh or filesystem).

Wire format reference (for native language publishers)

For C++, Rust, or any language that needs to publish without the Python SDK:
┌──────────────────┬──────────┬──────────┬─────────────────────┬─────────────────┐
│ header_len (u32) │ ts (f64) │ seq (i64)│ header JSON (UTF-8) │ payload (bytes) │
│   4 bytes, LE    │ 8 bytes  │ 8 bytes  │ variable length     │ variable length │
└──────────────────┴──────────┴──────────┴─────────────────────┴─────────────────┘
Required JSON fields:
  • content_type: "numpy/ndarray" | "application/json" | "application/octet-stream"
  • shape: [H, W, C] (for ndarray; omit for JSON/bytes)
  • dtype: "uint8" | "uint16" | "float32" etc. (for ndarray; omit for JSON/bytes)
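The same layout can be cross-checked with a few lines of stdlib Python. This is a reference sketch of the format above, not the SDK's own codec; all fields are little-endian, matching the header_len field:

```python
import json
import struct


def pack_message(header: dict, payload: bytes, ts: float, seq: int) -> bytes:
    header_json = json.dumps(header).encode("utf-8")
    header_len = 16 + len(header_json)  # ts (8) + seq (8) + JSON
    return struct.pack("<Idq", header_len, ts, seq) + header_json + payload


def unpack_message(wire: bytes):
    # <Idq = u32 header_len, f64 ts, i64 seq, all little-endian, 20 bytes
    header_len, ts, seq = struct.unpack_from("<Idq", wire, 0)
    json_len = header_len - 16
    header = json.loads(wire[20:20 + json_len].decode("utf-8"))
    payload = wire[20 + json_len:]
    return header, payload, ts, seq
```

Reconstructing an ndarray from `payload` is then a matter of applying the `shape` and `dtype` fields from the decoded header.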

C++ native publish example

Minimal zenoh-cpp snippet that publishes frames with the correct header:
#include <zenoh.hxx>
#include <nlohmann/json.hpp>
#include <chrono>
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <string>
#include <vector>

std::vector<uint8_t> pack_frame(
    const uint8_t* pixels, size_t pixel_bytes,
    int height, int width, int channels,
    double ts, int64_t seq
) {
    nlohmann::json meta;
    meta["content_type"] = "numpy/ndarray";
    meta["shape"] = {height, width, channels};
    meta["dtype"] = "uint8";
    std::string json_str = meta.dump();

    uint32_t header_len = 16 + static_cast<uint32_t>(json_str.size());
    std::vector<uint8_t> buf(4 + header_len + pixel_bytes);

    size_t off = 0;
    memcpy(buf.data() + off, &header_len, 4); off += 4;
    memcpy(buf.data() + off, &ts, 8); off += 8;
    memcpy(buf.data() + off, &seq, 8); off += 8;
    memcpy(buf.data() + off, json_str.data(), json_str.size());
    off += json_str.size();
    memcpy(buf.data() + off, pixels, pixel_bytes);
    return buf;
}

int main() {
    auto config = zenoh::Config::create_default();
    auto session = zenoh::Session::open(std::move(config));

    // Injected by Edge Core; guaranteed to be set for managed drivers
    std::string twin_uuid = std::getenv("CYBERWAVE_TWIN_UUID");
    std::string key = "cw/" + twin_uuid + "/data/frames/default";
    auto pub = session.declare_publisher(zenoh::KeyExpr(key));

    std::vector<uint8_t> pixels(480 * 640 * 3);  // placeholder frame buffer
    int64_t seq = 0;
    while (true) {
        // ... capture frame into pixels ...
        double ts = std::chrono::duration<double>(
            std::chrono::system_clock::now().time_since_epoch()).count();
        auto wire = pack_frame(pixels.data(), pixels.size(), 480, 640, 3,
                               ts, seq++);
        pub.put(zenoh::Bytes(std::move(wire)));
    }
}
The Python SDK’s DataBus.subscribe() automatically decodes this payload — no adapter code needed.

Option B: Filesystem convention (fallback)

The filesystem convention is the fallback for environments where eclipse-zenoh cannot be installed. For most drivers, use cw.data.publish() (Zenoh data bus) instead — it provides zero-copy shared memory and is consumed directly by worker hooks. Both conventions use the same channel names.
Write sensor data to a subfolder of the config directory that Edge Core mounts into your container:
$CYBERWAVE_EDGE_CONFIG_DIR/data/{twin_uuid}/{channel}/{sensor_name}/
CYBERWAVE_EDGE_CONFIG_DIR is always set by Edge Core (defaults to /app/.cyberwave).

Ring buffer (for stream data)

data/{twin_uuid}/frames/default/
├── ring/
│   ├── 000000.npy
│   ├── 000001.npy
│   └── ...         # numbered slots, wraps around
└── meta.json       # write pointer + format info
Rules:
  • Write .npy files to numbered slots: {slot:06d}.npy
  • Slot index = write_count % buffer_size (default: 120)
  • Atomic writes: write to {slot}.npy.tmp, then rename() to {slot}.npy
  • Update meta.json after each write
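The rules above can be sketched in Python as follows. The meta.json fields beyond the write pointer and buffer size are assumptions; adjust them to whatever your consumers expect:

```python
import json
import os

import numpy as np


def write_slot(channel_dir, array, write_count, buffer_size=120):
    """Write one array into the ring atomically and advance the pointer."""
    ring = os.path.join(channel_dir, "ring")
    os.makedirs(ring, exist_ok=True)
    slot = write_count % buffer_size
    path = os.path.join(ring, f"{slot:06d}.npy")
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        np.save(f, array)    # save via file object so no ".npy" is appended
    os.replace(tmp, path)    # atomic rename into the numbered slot
    meta = {
        "write_count": write_count + 1,  # next write pointer
        "buffer_size": buffer_size,
        "dtype": str(array.dtype),       # format info (assumed fields)
        "shape": list(array.shape),
    }
    meta_tmp = os.path.join(channel_dir, "meta.json.tmp")
    with open(meta_tmp, "w") as f:
        json.dump(meta, f)
    os.replace(meta_tmp, os.path.join(channel_dir, "meta.json"))
    return path
```

`os.replace` gives the atomic rename the rules require, so readers never observe a half-written slot.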

Latest value (for state data)

data/{twin_uuid}/joint_states/
└── latest.json     # overwritten each update
Rules:
  • Write a single JSON file: latest.json
  • Atomic writes: write to latest.json.tmp, then rename()
  • Include a timestamp field
This is a filesystem convention, not a Python API. C++, Rust, or any other language can write .npy files and JSON to the same paths.
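In Python, the latest-value rules reduce to one atomically replaced file (a sketch; the helper name is illustrative):

```python
import json
import os
import time


def write_latest(channel_dir, state: dict):
    os.makedirs(channel_dir, exist_ok=True)
    state = dict(state)
    state.setdefault("ts", time.time())  # rules require a timestamp field
    tmp = os.path.join(channel_dir, "latest.json.tmp")
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, os.path.join(channel_dir, "latest.json"))  # atomic
```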

MQTT topics and payloads

If you publish data over MQTT directly (rather than through the SDK’s cw.data.publish), see the MQTT API Reference for the complete list of topics and payload schemas supported by the platform. That page covers:
  • Twin transform: position, rotation, scale
  • Joint state updates (single-joint, flat multi-joint, and aggregated formats)
  • Navigation commands and status reporting
  • Locomotion commands (move_forward, turn_left, etc.)
  • Telemetry lifecycle events (connected, telemetry_start, telemetry_end)
  • Sensor data: depth frames, point clouds, metrics
  • Edge health reporting
  • WebRTC signalling
  • Health check ping/pong

Migrating from MQTT-only drivers

If your driver currently publishes sensor data over MQTT, you can add Zenoh publishing without removing the MQTT path. The two paths serve different consumers:
  • MQTT → cloud backend (telemetry, frontend, workflows)
  • Zenoh → local worker containers (zero-copy inference, fusion)

Step 1: Set CYBERWAVE_DATA_BACKEND

Ensure CYBERWAVE_DATA_BACKEND=zenoh is set in the driver container. Edge Core sets this automatically for managed drivers. For manual testing:
docker run -e CYBERWAVE_DATA_BACKEND=zenoh ...

Step 2: Add cw.data.publish alongside the MQTT call

# Before (MQTT only)
twin.client.mqtt.update_joints_state(twin_uuid=twin_uuid, ...)

# After (dual-publish)
twin.client.mqtt.update_joints_state(twin_uuid=twin_uuid, ...)   # unchanged
cw.data.publish("joint_states", {"ts": ts, "names": [...], "positions": [...]})
Zenoh publish errors are caught and logged — they do not affect the MQTT path.

Step 3: Verify with a subscriber

sub = cw.data.subscribe("joint_states", lambda data: print(data))
# Run your driver; you should see joint dicts printed

Controlling which paths are active

Set CYBERWAVE_PUBLISH_MODE to choose:
  • dual — both MQTT and Zenoh publish (default)
  • zenoh_only — only Zenoh (local-only drivers)
  • mqtt_only — only MQTT (legacy mode)
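A driver can gate its two publish calls on this variable with a small helper like the hypothetical sketch below:

```python
import os


def active_paths(env=os.environ):
    # Mode values match the table above; "dual" is the documented default
    mode = env.get("CYBERWAVE_PUBLISH_MODE", "dual")
    return {
        "mqtt": mode in ("dual", "mqtt_only"),
        "zenoh": mode in ("dual", "zenoh_only"),
    }
```

Check `active_paths()["zenoh"]` before calling `cw.data.publish`, and `["mqtt"]` before the MQTT call.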

Licensing your driver

You own your driver code. There are two common paths:
  • Open source — publish your driver as a public repository on GitHub under the Apache 2.0 license. This is our recommended default and makes it easier for the community to contribute and reuse your work.
  • Closed source — keep your driver proprietary. In this case, we recommend obfuscating your code before distributing the image and including a clear license file that reflects your distribution terms. Interested in writing a closed-source driver? Reach out to us.

Example drivers

The following open-source drivers are good starting points and reference implementations:

Advanced topics

Once you have a working driver, these guides cover the platform features your driver can leverage:

Edge Workers

Hook-based worker modules for on-device ML inference and event-driven processing.

Data Wire Format

SDK header encoding, key expressions, and the on-wire contract for edge data channels.

Data Fusion Primitives

Time-aware sensor fusion: interpolated point reads and time-window queries.

Synchronized Multi-Channel Hooks

Approximate time synchronizer that fires when samples from all listed channels arrive within tolerance.

Record & Replay

Capture live edge data to disk and replay it for deterministic debugging.

MQTT API Reference

Complete list of MQTT topics and payload schemas: telemetry, commands, navigation, joint states, and more.