Documentation Index

Fetch the complete documentation index at: https://docs.cyberwave.com/llms.txt

Use this file to discover all available pages before exploring further.

What is the Model Playground?

The Model Playground is the interactive detail page that lives at /{workspace-slug}/models/{model-slug} (or /models/{uuid} for models without a slug). It lets you explore, test, and integrate any ML Model registered in your workspace or visible in the public catalog.
Open the playground from any model's detail page. Every card on /models also ships a Playground button that deep-links straight into the ?tab=playground view.

What you can do

The playground auto-detects the model’s capabilities (deployment, provider, tags, input modalities, output format) and renders the right UI for each kind:
  • vlm-spatial-reasoner — Typical models: Gemini Robotics-ER 1.5 / 1.6, Molmo pointing, PaliGemma grounding, GPT-5 spatial, any VLM with metadata.point_format or a pointing/grounding/visual-grounding/spatial-reasoning tag. What you can do: upload an image, pick a structured task (detect points, detect boxes, caption, segment), run it, and visualize the JSON output as overlays on the input image.
  • vlm — Typical models: GPT-5, Gemini 2.5 Flash, any text/vision LLM. What you can do: send a text prompt (and optionally an image) and see the text response.
  • im2mesh — Typical models: Hunyuan3D, TripoSR, single-image-to-3D. What you can do: upload an image, submit, and preview the generated GLB inline.
  • vla — Typical models: OpenVLA, Pi 0.5. What you can do: read the SDK/CLI recipes for running the model against a live twin.
  • edge — Typical models: YOLOv8, SAM2, any edge-only model. What you can do: read the CLI recipe for binding the model to a workspace edge node.
The resolver behind this lives at cyberwave-frontend/components/models/resolve-playground-kind.ts and is covered by Vitest fixtures at cyberwave-frontend/__tests__/resolve-playground-kind.test.ts.
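The real resolver is TypeScript; as a rough illustration of the priority logic described above (deployment first, then tags, then metadata, with vlm as the fallback), here is a hedged Python sketch. The field names and precedence order are assumptions, not the actual implementation in resolve-playground-kind.ts:

```python
# Illustrative sketch only -- the real resolver lives in
# cyberwave-frontend/components/models/resolve-playground-kind.ts.
SPATIAL_TAGS = {"pointing", "grounding", "visual-grounding", "spatial-reasoning"}

def resolve_playground_kind(model: dict) -> str:
    """Map a model record to one of the five playground kinds (assumed precedence)."""
    tags = set(model.get("tags", []))
    metadata = model.get("metadata", {})
    if model.get("deployment") == "edge":
        return "edge"                      # edge-only models never run in the browser
    if "vla" in tags:
        return "vla"
    if "im2mesh" in tags:
        return "im2mesh"
    if "point_format" in metadata or tags & SPATIAL_TAGS:
        return "vlm-spatial-reasoner"      # structured pointing/grounding UI
    return "vlm"                           # generic text/vision fallback
```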

How inference is plumbed

  • Synchronous cloud models (Google GenAI, OpenAI, or models with an endpoint_url deployment) go through POST /api/v1/mlmodels/{uuid}/run and return 200 OK with the output payload (MLModelRunResultSchema).
  • Asynchronous cloud-node workloads (e.g. im2mesh) return 202 Accepted with a workload_uuid and a poll_url pointing at /api/v1/cloud-node-workloads/{uuid}. The UI polls the workload and renders the artifact (GLB, image, etc.) once the workload completes.
  • Edge models are not invoked from the browser. The playground shows the exact CLI / SDK commands needed to run the model locally through a Cyberwave edge worker.
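The client-side branching for the first two cases can be sketched as follows. This is a minimal illustration of the 200-vs-202 handling, with the HTTP calls passed in as plain callables so it stays transport-agnostic; the function names and workload status values are assumptions, not SDK API:

```python
import time

def run_model(post, get, base_url, model_uuid, payload, poll_interval=1.0):
    """POST .../run, then branch: 200 -> synchronous result, 202 -> poll the workload.

    `post(url, payload)` and `get(url)` are caller-supplied callables returning
    (status_code, json_body); status values below are illustrative assumptions.
    """
    status, body = post(f"{base_url}/api/v1/mlmodels/{model_uuid}/run", payload)
    if status == 200:
        return body  # synchronous output (MLModelRunResultSchema)
    if status == 202:
        poll_url = body["poll_url"]  # points at /api/v1/cloud-node-workloads/{uuid}
        while True:
            _, workload = get(poll_url)
            if workload.get("status") in ("completed", "failed"):
                return workload  # artifact (GLB, image, etc.) is ready, or the run failed
            time.sleep(poll_interval)
    raise RuntimeError(f"unexpected status {status}")
```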

Using a model from your code

Every playground tab also surfaces copy-paste-ready snippets for the Python SDK, the cyberwave CLI, and curl. These snippets are generated by components/models/build-code-snippets.ts and are covered by a “grounded” Vitest test that greps the monorepo to ensure every command and method referenced in the snippets actually exists in the SDK or CLI. That guarantees the examples do not drift out of sync with the real surface.
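To make the "grounded test" idea concrete, here is a hedged sketch of one way such a check could work: extract every CLI subcommand referenced in a snippet and diff it against a set of commands known to exist. The real Vitest test greps the monorepo; these Python helpers and their names are purely illustrative:

```python
import re

def extract_cli_subcommands(snippet: str) -> set:
    """Collect the first subcommand after each `cyberwave` invocation in a snippet."""
    return set(re.findall(r"\bcyberwave\s+(\w+)", snippet))

def ungrounded_commands(snippet: str, known: set) -> set:
    """Return referenced subcommands that do not exist in the known-commands set."""
    return extract_cli_subcommands(snippet) - known
```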

Cloud model (Python SDK)

from cyberwave import Cyberwave

client = Cyberwave()
# Load the model by its workspace-scoped slug.
loaded = client.models.load("<workspace>/models/<model-slug>")
# Run inference with a text prompt and a local image.
result = loaded.predict(text="Describe this image", image="./photo.png")
print(result)

Edge model (CLI)

# Discover models available to your workspace
cyberwave model list
# Bind the model to a twin's camera stream on an edge worker
cyberwave model bind <model-slug> --twin <twin-slug> --camera front

Direct HTTP (cURL)

curl -X POST "$CYBERWAVE_API_URL/api/v1/mlmodels/<model-uuid>/run" \
  -H "Authorization: Bearer $CYBERWAVE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "hello"}'

Deterministic mocks for local dev & tests

Set MLMODEL_PROVIDER_MOCK=1 in the backend to bypass real provider calls and return deterministic fixtures. This keeps the playground, Vitest suites, and the Playwright flows in cyberwave-frontend/e2e/model-playground.spec.ts fully hermetic.

See also