
Streaming from Firmware

Open the ingest WebSocket, send binary PPG frames, handle errors.

This guide walks through the device side of a Raeh integration: the thing that lives in your wristband, patch, or phone and pushes raw samples at the sensor's native rate.

The full wire-level spec lives in Reference → /stream/ingest and Binary frame format. This page is the narrative version.

The connection

Open a WebSocket to:

wss://api.raeh.io/stream/ingest
  ?api_key=<raeh_...>
  &device_id=<your per-unit id>
  &device_model_id=<uuid from the dashboard>

Three query parameters:

  Param              Value
  api_key            Your raeh_* API key.
  device_id          Opaque per-unit string: MAC, serial, whatever's stable.
  device_model_id    UUID from the dashboard, baked into your firmware.
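Building the URL is just query-string assembly. A minimal sketch in Python (the values here are placeholders; any WebSocket client library can take the resulting string):

```python
from urllib.parse import urlencode

def ingest_url(api_key: str, device_id: str, device_model_id: str) -> str:
    """Build the ingest WebSocket URL from the three required query parameters."""
    base = "wss://api.raeh.io/stream/ingest"
    query = urlencode({
        "api_key": api_key,
        "device_id": device_id,
        "device_model_id": device_model_id,
    })
    return f"{base}?{query}"
```

Note that urlencode percent-escapes characters like `:` in a MAC-address device_id, which keeps the URL valid.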

The handshake

Immediately after the WebSocket opens, the server sends one JSON message:

{
  "session_id": "ad225407-36de-48b1-bcf2-14d5c057aeca",
  "slots": [
    { "slot_id": 0, "modality": "ppg_green", "sample_rate_hz": 100, "bit_depth": 16, "num_channels": 1 },
    { "slot_id": 1, "modality": "acc_3axis", "sample_rate_hz": 50,  "bit_depth": 16, "num_channels": 3 }
  ]
}

Store the slot_id for each modality you're going to send. You need the right ID in every frame header.
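In practice that means parsing the one JSON text message and keeping a modality → slot lookup around for the rest of the session. A sketch, using the handshake shape shown above:

```python
import json

def slot_map(handshake_text: str) -> dict:
    """Parse the server's handshake and map modality name -> slot descriptor."""
    msg = json.loads(handshake_text)
    return {s["modality"]: s for s in msg["slots"]}
```

Then `slot_map(text)["ppg_green"]["slot_id"]` is the value to put in each PPG frame header.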

Sending frames

All subsequent messages are binary. Each frame is a 12-byte header followed by raw sample bytes:

┌────────┬──────────────────┬──────────────┬───────┬────────────────────┐
│ slot   │ t0_epoch_ms      │ sample_count │ flags │ payload            │
│ (u8)   │ (int64, BE)      │ (u16, BE)    │ (u8)  │ raw samples        │
└────────┴──────────────────┴──────────────┴───────┴────────────────────┘
 1 byte   8 bytes            2 bytes        1 byte  sample_count × bytes_per_sample × num_channels
  • slot: from the handshake.
  • t0_epoch_ms: timestamp of the first sample in the frame, in ms since epoch. Use the device clock; we only care that it's monotonic within a session.
  • sample_count: how many samples are in the payload.
  • flags: reserved, send 0.
  • payload: interleaved samples, little-endian (native on x86/ARM, so usually a direct memcpy from your sensor buffer). For 3-axis accelerometer it's [x0, y0, z0, x1, y1, z1, …].

Full details including common encoding mistakes: Binary frame format.
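The layout above maps directly onto a fixed-format pack: big-endian header fields, little-endian int16 payload. A sketch, assuming 16-bit signed samples (`pack_frame` is an illustrative helper, not an SDK function):

```python
import struct

def pack_frame(slot_id: int, t0_epoch_ms: int, samples: list, num_channels: int = 1) -> bytes:
    """Pack one binary frame: 12-byte big-endian header + little-endian int16 payload.

    `samples` is the interleaved sample list, e.g. [x0, y0, z0, x1, y1, z1, ...]
    for a 3-axis accelerometer.
    """
    assert len(samples) % num_channels == 0
    sample_count = len(samples) // num_channels
    # > = big-endian, no padding: u8 slot, int64 t0, u16 count, u8 flags = 12 bytes
    header = struct.pack(">BqHB", slot_id, t0_epoch_ms, sample_count, 0)
    # < = little-endian int16 payload, matching the spec above
    payload = struct.pack("<%dh" % len(samples), *samples)
    return header + payload
```

On-device you would typically memcpy the sensor buffer straight into the payload region instead of re-serializing sample by sample; the point here is the byte layout.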

How big should each frame be?

Aim for ~1 second of samples per frame. Smaller frames waste bandwidth on headers; larger frames increase latency.

At 100 Hz PPG: 100 samples × 2 bytes = 200-byte payload + 12-byte header = 212 bytes per frame, once per second. Trivial.
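That arithmetic generalizes to any slot. A small helper (illustrative, not part of any SDK) that reproduces it from the slot descriptors the handshake gives you:

```python
def frame_size_bytes(sample_rate_hz: int, bit_depth: int, num_channels: int, seconds: float = 1.0) -> int:
    """Wire size of one frame covering `seconds` of data: 12-byte header + payload."""
    samples = int(sample_rate_hz * seconds)
    return 12 + samples * (bit_depth // 8) * num_channels
```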

Pace yourself

Send one frame per second (or whatever your sensor's natural buffer interval is). Do not burst. The pipeline runs warmup gates that wait for ~15 seconds of real-time data before emitting insights. Sending 15 seconds worth in 100ms will not give you insights any faster and wastes your device's radio budget.

If your sensor buffers internally and you occasionally catch up, that's fine. But the steady state should track wall-clock time.
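A drift-free way to pace is to schedule against absolute deadlines on a monotonic clock rather than sleeping a fixed interval after each send. A sketch (`FramePacer` is a hypothetical helper; the injectable `clock` parameter just makes it testable):

```python
import time

class FramePacer:
    """One frame per `interval_s`, scheduled against absolute monotonic deadlines."""

    def __init__(self, interval_s: float = 1.0, clock=time.monotonic):
        self.interval = interval_s
        self.clock = clock
        self.next_deadline = clock() + interval_s

    def sleep_needed(self) -> float:
        """Seconds to wait before the next send; 0 if we fell behind and are catching up."""
        wait = self.next_deadline - self.clock()
        self.next_deadline += self.interval
        return max(0.0, wait)
```

Because deadlines accumulate from the start time rather than from each send, jitter in one iteration doesn't compound, and a backlog (buffered sensor catching up) naturally drains at zero sleep.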

Multiple modalities

If your device has PPG + accelerometer, send frames on both slots. The pipeline uses the accelerometer as an optional input to reject motion artifacts. You don't have to send it; it just improves HR quality during movement.

  slot 0 (PPG green, 100 Hz)   →  1 frame/sec,  100 samples each
  slot 1 (ACC 3-axis, 50 Hz)   →  1 frame/sec,  50 samples × 3 channels

Pace both streams together. See "Sample pairing" below for why asymmetric cadences degrade motion-rejected HR.

Sample pairing across modalities

Motion-rejection algorithms correlate PPG with accelerometer sample-by-sample in time. The quality of that correlation is bounded by the tightness of your clock alignment between the two sensors. Three tiers, in order of quality:

SHOULD: shared hardware clock at source (best)

Drive all sensor ADCs from a single hardware oscillator on the same MCU, and timestamp each sample (or each paired group of samples) at the ISR, not after buffering. This is how well-engineered wearables solve alignment: the PPG sample at time T and the ACC sample at time T are genuinely measured at the same physical instant. There's nothing the server can do to recover this if it's lost on-device.

Reference: the open-source BioGAP-Ultra platform distributes a 2.048 MHz master clock to all its ADCs via board-to-board connector for exactly this reason.

MUST: single monotonic clock for all t0 values

Even if you can't share an oscillator across sensors, you must timestamp every frame's t0_epoch_ms from one monotonic hardware clock, typically the MCU's RTC or a clock_gettime(CLOCK_MONOTONIC) equivalent. Never use:

  • Per-sensor buffer clocks that drift independently of each other.
  • Server arrival time (Date.now() at send) as a proxy for sample time.
  • Wall-clock time that can jump backward on NTP sync.

Raeh's pipeline treats t0 as the only ground truth for sample timing. If your two modalities' t0 values disagree about "when did this sample happen", the server can't reconcile them. The best it can do is drop stale optional inputs (our optional_max_lag_ms gate) rather than fuse mistimed data.
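One common pattern for producing epoch-style t0 values that can never jump backward: read wall time exactly once at boot, then advance only by the monotonic clock. A sketch under that assumption (firmware would use the MCU's RTC or `clock_gettime(CLOCK_MONOTONIC)` instead of Python's `time` module):

```python
import time

class MonotonicEpochClock:
    """t0_epoch_ms source: epoch-anchored once at startup, advanced only monotonically.

    NTP syncs after startup can no longer move timestamps backward, so t0 stays
    monotonic within a session, which is all the pipeline requires.
    """

    def __init__(self):
        self._epoch_ms_at_start = int(time.time() * 1000)   # wall clock, read once
        self._mono_start_ns = time.monotonic_ns()           # monotonic reference

    def now_ms(self) -> int:
        elapsed_ms = (time.monotonic_ns() - self._mono_start_ns) // 1_000_000
        return self._epoch_ms_at_start + elapsed_ms
```

Stamp every frame on every slot from this one clock and the two modalities' t0 values stay mutually consistent by construction.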

Asymmetric batching degrades quality

Sending PPG every 1s but ACC every 4s (to save battery, for example) means that for 75% of each 4-second window, the pipeline has no fresh ACC. Our motion-artifact node will drop the ACC reference when it's more than 500ms stale, degrading the algorithm's quality to PPG-only. Keep cadences symmetric across modalities you want fused.
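The staleness math the gate applies can be sketched in a few lines (500 ms is the threshold named above; the helper itself is illustrative, not the server's actual code):

```python
OPTIONAL_MAX_LAG_MS = 500  # the motion-artifact node's staleness threshold, per the text above

def acc_usable(ppg_t0_ms: int, latest_acc_t0_ms: int, acc_span_ms: int) -> bool:
    """True if the freshest ACC frame still covers (or nearly covers) this PPG frame."""
    acc_end_ms = latest_acc_t0_ms + acc_span_ms   # timestamp of the last ACC sample
    return ppg_t0_ms - acc_end_ms <= OPTIONAL_MAX_LAG_MS
```

With 1 s PPG frames and 4 s ACC frames, three out of every four PPG frames fail this check, which is exactly the 75% PPG-only fallback described above.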

Distributed sensors (chestband + wrist, etc.)

If your hardware architecture puts PPG and ACC on physically separate modules with independent clocks (e.g. a chestband streaming to a wrist unit over BLE), sub-sample alignment is not achievable without a wired interconnect. Expect motion-rejected HR quality to degrade. If you want to build your own sync layer over BLE / Wi-Fi, Lab Streaming Layer (LSL) is the canonical open-source reference: per-sample timestamping with an NTP-inspired network clock sync.

Disconnect and reconnect

If the network drops, re-open the same URL. You'll get a new session_id and new slot IDs; sessions are bound to a TCP connection. There's no "resume" protocol; the pipeline treats the new session as a fresh stream (same device, new window).

For user-facing apps, wait a backoff (start at 1s, double up to 30s) between reconnect attempts.
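That backoff schedule is a one-line generator. A sketch of the 1s-doubling-to-30s policy described above:

```python
def backoff_delays(base_s: float = 1.0, cap_s: float = 30.0):
    """Yield reconnect delays: base, 2x, 4x, ... capped at cap_s forever."""
    delay = base_s
    while True:
        yield delay
        delay = min(delay * 2, cap_s)
```

Reset the generator (start a fresh one) after a connection survives long enough to be considered healthy, so a flaky-but-working link doesn't stay pinned at the 30 s cap.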

Errors during a session

If the server closes with an error code:

  Code   Meaning
  4001   Authentication failed: key missing, invalid, revoked, or from a different account
  4004   Resource not found, e.g. device_model_id missing or has no modality channels

The close-frame reason string is plain text and safe to log for debugging. On a 4004, don't retry the same request; fix the device_model_id and reconnect.
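A close handler can branch on those two codes and treat everything else as transient. A sketch (the action tags are hypothetical names for your firmware's own states):

```python
def on_close(code: int) -> str:
    """Map a server close code to a recovery action for the reconnect loop."""
    if code == 4001:
        return "fix_credentials"        # bad/revoked key: retrying the same key won't help
    if code == 4004:
        return "fix_device_model"       # wrong device_model_id: fix config, then reconnect
    return "reconnect_with_backoff"     # network drop or other transient: normal backoff path
```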

Health check

GET https://api.raeh.io/v1/stream/health returns {"ok": true} when the ingest path is up. Use it from your firmware's self-test routine if you want to surface a "Raeh reachable" indicator to the end user.

Reference implementations
