A tiny logger + fire‑hose relay for shipping “everything logs” — fast, diskless, and filterable at the edge.
Think of it as DEBUG meets a market data feed: producers emit freely; ops curates later.
Ratatouille treats logs like a live market feed, not a database. Producers speak freely and emit anything (strings, blobs, JSON, haiku). There’s no sacred schema or level system to satisfy. Topics are just routing hints. The slow path pretty-prints to the local console for on‑machine debugging. The fast path is a network fire hose that ships everything out, in memory, with backpressure and drop policies — nothing touches disk by default. Persistence only happens when ops decides to materialize views.
- Logs are noise until curated. Don’t force structure on write. Schema lives in ops (schema‑on‑read).
- One control surface. Filters, transforms, sampling and routing live in one place (ops), not sprinkled across apps and agents.
- Topics, not taxonomies. A topic names a stream; it doesn’t define its shape. Mixed content is allowed.
- Print is optional. Console output is for humans; the real business is the network stream.
- Diskless by default. Bounded in‑memory queues, counters, and explicit drop metrics instead of silent blocking or surprise files.
- Forced schemas make heterogeneous sources (e.g., HTTPD vs. app transactions) brittle and spawn endless per‑source configs. Ratatouille rejects that: deliver logs as they are, then let ops carve meaning once, centrally.
- Late binding means you can change your mind about what matters without redeploying code.
- Minimal Topic hot path (no parsing, no timers on the emission path) keeps overhead tiny and works in Node, SSR and workers. (Sinks like Relay may use timers for batching.)
- Smart filtering with DEBUG‑style allow/deny and wildcards; optional sampling and drop policies.
- Selectable output: pretty text for humans; JSON lines for pipes; or forward raw via Relay (HTTP/TCP/Workers) to whatever stack you run (Grafana/Loki, Influx, OTLP, S3, …).
- Observability of the logger itself: per‑topic sequence counters and drop counts so you know what was kept, what was shed, and why.
Ratatouille is not a traditional structured logging framework (log4j/log4js/winston/pino-style).
- No log levels as a first-class concept ("error" can be a topic, not a stream).
- No promise of durability, ordering, or delivery. This is telemetry, not an audit trail.
- No enforced schema. Producers can emit anything; consumers decide what to keep.
If something is critical, do not put it in logs. Use a separate audit/event system.
When you have hundreds of log producers (autoscaled microservices across Workers/Lambda/K8s/Nomad), "just print to stdout" turns into:
- too much volume
- too much cost
- too much noise
- too little traceability
Ratatouille is built for the fire-hose reality: producers emit freely, a sink ships the stream to a concentrator/queue, and consumers mine it later.
Two keywords:
- Insane speed
- Never unencrypted at rest
A typical deployment looks like:
producer → triager (TLS/auth, optional) → queue → consumer
- Producers (apps/workers) emit a fire-hose stream.
- Triager is an HTTPS tunnel/edge that terminates TLS, authenticates, and forwards bytes. (It does not parse.)
- Queue (e.g., SegQ or similar) durably buffers the stream for fan-out. If it writes to disk, it must be encrypted at rest.
- Consumers read from the queue and decide what to keep, index, sample, or drop.
The design goal is that logs are data in transit. If they ever touch disk in the pipeline, they do so encrypted, which helps with GDPR-style risk: you avoid piles of plaintext log files sitting around on nodes.
Logs are best-effort telemetry. They can drop.
Do not use logs to represent important state like:
- "X made a purchase"
- "payment succeeded"
- "user permissions changed"
Those belong in an audit/event system with durability, idempotency, and query guarantees.
A tiny, flexible debug logger for Node and SSR that’s easy to read in dev and easy to pipe in prod.
- Callable topics: const log = Topic("api"); log("hello").
- Inline colors: Topic("api#ff00aa"), Topic("db#red"), or Topic("auth#random").
- Structured logs: RATATOUILLE=json emits one JSON object per line.
- Smart filtering: DEBUG-style allow/deny with wildcards; supports multiple env vars.
- Per-topic sequence: each topic counts calls (#000001, #000002, …) and includes the count in output.
- Zero-dep & SSR-safe: works in Node; falls back cleanly in browsers/workers.
# if published
npm i @frogfish/ratatouille
# or with pnpm
pnpm add @frogfish/ratatouille

Working locally? Import from source:
import Topic, { setDebug } from "./src/topic"
If you want a diskless local collector + tailer, pair this with Ringtail (NDJSON sink + tail).
// ESM default import
import Topic from "@frogfish/ratatouille"; // or "./src/topic"
// Or named import
// import { Topic } from "@frogfish/ratatouille";
// Plain topic
const log = Topic("debug", { svc: "api" });
log("hello world", { user: "alice" }, { requestId: 123 }, "extra arg");
// Colored topic (24‑bit)
const pink = Topic("debug#ff00aa", { svc: "api" });
pink("this prints the topic in #ff00aa");
// Named color
const red = Topic("auth#red");
red("login failed");
// Stable random color from a readable palette
const api = Topic("api#random");
api("picked a deterministic 256‑color for 'api'");Output (text mode):
[2025-09-05T01:23:45.678Z #000001] debug — hello world {"user":"alice"} {"requestId":123} extra arg
[2025-09-05T01:23:45.790Z #000002] debug — …
Ratatouille is the producer + optional relay. To actually watch a firehose, you need a sink.
Ringtail is the companion sink/tailer: an in-memory, diskless NDJSON collector you can run locally (or in Nomad) and tail like a live feed.
By default, Relay.send(payload) emits one NDJSON line per call. If the payload is already an envelope (a plain object with a topic), Relay will pass it through and ensure:
- ts exists (adds Date.now() if missing)
- src exists (injects/merges your configured identity)
If the payload is not an envelope, Relay wraps it into a minimal envelope:
{"ts":1730000000000,"topic":"raw","args":["hello"],"src":{"app":"edge-auth","where":"cf-worker","instance":"prod:abc"}}This keeps the producer API flexible ("log anything") while keeping the transport format predictable.
If you configure Relay with an HTTP(S) host-only endpoint (no path, or /), it will default to:
http(s)://host:port/sink
So these are equivalent:
endpoint: "http://127.0.0.1:8080"endpoint: "http://127.0.0.1:8080/sink"
These are the variables you'll use most often when shipping to Ringtail:
- RINGTAIL_URL — base URL for Ringtail (path optional). Examples: http://127.0.0.1:8080, http://127.0.0.1:8080/sink
- RINGTAIL_TOKEN — optional Bearer token (sent as Authorization: Bearer …)
- RATATOUILLE_APP — service/app name (e.g. payments, edge-auth)
- RATATOUILLE_WHERE — runtime label (e.g. nomad, node, cf-worker)
- RATATOUILLE_INSTANCE — instance label (alloc id / isolate id / hostname-ish)
- RATATOUILLE_DEFAULT_TOPIC — used when you call Relay.send("a string") (defaults to raw)
Tip: identity is what makes server-side filtering practical:
src.app, src.where, src.instance.
Set RINGTAIL_URL to enable shipping; if unset, presets typically only print locally unless you wire a transport explicitly.
- run Ringtail (sink)
- run your services with Ratatouille → Relay
- tail Ringtail
# terminal A: start sink
ringtail sink --listen 127.0.0.1:8080
# terminal B: watch the stream
ringtail tail http://127.0.0.1:8080
# terminal C: run your app with env pointing at the sink
export RINGTAIL_URL=http://127.0.0.1:8080
export RATATOUILLE_APP=api
export RATATOUILLE_WHERE=dev
node app.js

Presets give you a tiny, reusable log factory that:
- computes a sensible src identity for the environment
- optionally wires topics to Ringtail automatically
- keeps the hot path non-blocking (drops are OK)
import { createNomadFactory } from "@frogfish/ratatouille/presets/nomad";
// create once (singleton)
export const log = createNomadFactory({
alsoPrint: true, // print locally while forwarding
});
// use anywhere
const api = log.topic("api", { svc: "api" });
api("hello", { user: "alice" });Enable shipping by setting RINGTAIL_URL (and optionally RINGTAIL_TOKEN).
The Nomad preset derives src from Nomad env vars when present (job/group/task/alloc), but you can override with RATATOUILLE_*.
Optional eager connect:
await log.initLogging();

import { createWorkersFactory } from "@frogfish/ratatouille/presets/workers";
export default {
async fetch(req: Request, env: any, ctx: ExecutionContext) {
const log = createWorkersFactory({
env,
app: "edge-auth",
where: "cf-worker",
alsoPrint: true,
});
log.topic("api")("hit", { url: req.url, method: req.method });
// best-effort: ensures at least one connect/flush path runs before isolate goes idle
ctx.waitUntil(log.initLogging());
return new Response("ok");
},
};

Notes:
- Workers don't guarantee timers (setInterval) will fire before an isolate is suspended.
- If you care about getting some logs out, call ctx.waitUntil(relay.flushNow()) in your Worker, or ctx.waitUntil(log.initLogging()) if you're using the preset.
If you want full control, wire a transport directly:
import Topic from "@frogfish/ratatouille";
import { createRingtailTransport } from "@frogfish/ratatouille/transports/ringtail";
const rt = createRingtailTransport({
url: process.env.RINGTAIL_URL || "http://127.0.0.1:8080",
token: process.env.RINGTAIL_TOKEN,
includeEnv: true,
src: {
app: process.env.RATATOUILLE_APP || "api",
where: process.env.RATATOUILLE_WHERE || "node",
instance: process.env.RATATOUILLE_INSTANCE || "local",
},
});
await rt.connect();
const log = Topic("api").extend((e) => rt.send(e), true);
log("hello");Ratatouille has two ideas:
- Emission filter: which topics exist (and therefore reach sinks).
- Local printing: a developer convenience so you can see what's happening while coding.
Today the filter syntax is DEBUG-style (wildcards, allow/deny). DEBUG=... is supported mainly as a
compatibility shim for local dev workflows.
The canonical control surface is RATATOUILLE (especially RATATOUILLE.filter).
- Patterns are separated by commas or whitespace.
- * is a wildcard; - negates a pattern.
- Allow + deny evaluation:
- If both allow and deny lists are empty → logging is disabled.
- If allow is empty and deny is non-empty → allow everything except deny matches.
- Otherwise → enabled if topic matches any allow and no deny.
Examples:
# Enable everything
DEBUG=* node app.js
# Enable API only
DEBUG=api* node app.js
# Enable all except chat
DEBUG=-chat* node app.js
# Mix allow/deny
DEBUG="api*,auth*,-auth:noise" node app.jsQuote values when using
*to avoid shell globbing.
You can merge more variables (e.g., XYZ) without changing your code by using RATATOUILLE config (see below):
# Use DEBUG and XYZ together
RATATOUILLE='{"debugVars":["DEBUG","XYZ"]}' \
XYZ=auth* DEBUG=-db* node app.js

Add a color by suffixing the topic with #…:
- Hex: #ff00aa, #faf (shorthand)
- Named (subset): red, green, blue, cyan, magenta, yellow, orange, purple, pink, teal, gray/grey, black, white
- Random: #random → assigns a deterministic, readable 256‑color based on the topic name
Color only affects the topic label. Messages remain uncolored for readability.
Color output toggles:
- Auto‑enabled on TTY; disabled if NO_COLOR or FORCE_COLOR=0.
- Force on/off via RATATOUILLE (below).
[ISO‑8601 #SEQ] <topic> <meta> — <args…>
- #SEQ is a zero‑padded per‑topic sequence (#000001).
- meta and each argument are pretty‑printed:
  - Uses a safe JSON replacer (handles circular refs, Error objects).
  - Error instances print .stack if present (else name: message).
Enable with RATATOUILLE=json or a JSON config ({"format":"json"}). One JSON object per line:
{"ts":"2025-09-05T01:23:45.678Z","seq":1,"topic":"debug","meta":{"svc":"api"},"args":["hello",{"user":"alice"}]}- Handles circulars via a safe replacer (
"[Circular]"). - Serializes
Erroras{name,message,stack}.
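For instance, logging an Error would come out roughly like this (a sketch; the stack text depends on the runtime):

const db = Topic("db");
db("query failed", new Error("timeout"));
// → {"ts":"…","seq":1,"topic":"db","args":["query failed",{"name":"Error","message":"timeout","stack":"Error: timeout\n    at …"}]}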
A single env var that’s either quick flags or a full JSON.
RATATOUILLE=nocolor # force disable colors
RATATOUILLE=json     # structured JSON output

Examples:
# Set filter in RATATOUILLE (preferred) and disable colors; do not print (default)
RATATOUILLE='{"filter":"api*,auth*,-auth:noise","color":"off"}' node app.js
# Back-compat: merge DEBUG + XYZ from env if no RATATOUILLE.filter is set
RATATOUILLE='{"debugVars":["DEBUG","XYZ"]}' XYZ=auth* DEBUG=-db* node app.js- If
RATATOUILLE.printis set, it takes precedence. - Printing is a developer convenience (a local PrintSink). Sinks (e.g.,
Relay) are the core fire-hose. - If any sink is attached, printing is suppressed by default to avoid double-output.
A sink can opt-in to printing via
alsoPrint=true. - If
RATATOUILLE.filteris set andprintis not specified, printing defaults to false (opt-in). - If no
filteris set and filters are derived from env vars likeDEBUG, printing defaults to true for drop-in compatibility.
Examples:
# Print JSON logs to console (explicit)
RATATOUILLE='{"format":"json","filter":"*","print":true}' node app.js
# Use DEBUG from env and print by default
DEBUG=* node app.js

In Workers, RATATOUILLE defined in wrangler.toml [vars] is available as env.RATATOUILLE at runtime, not process.env. Call configureRatatouille once per isolate to apply it:
// worker.ts
import Topic, { configureRatatouille, setDebug, setPrint } from '@frogfish/ratatouille';
let configured = false;
export default {
async fetch(req: Request, env: any, ctx: ExecutionContext) {
if (!configured) {
// Apply TOML var, e.g., '{"format":"json","filter":"*","print":true}'
if (env.RATATOUILLE) configureRatatouille(env.RATATOUILLE);
// Or configure explicitly:
// setDebug('*'); setPrint(true);
configured = true;
}
const log = Topic('worker');
log('hello', { url: req.url });
return new Response('ok');
}
}

Force JSON logs regardless of TTY:

RATATOUILLE='{"format":"json"}' DEBUG=api* node app.js

import Topic, { setDebug, configureRatatouille, setPrint } from "@frogfish/ratatouille"; // or from "./src/topic"

Creates a callable logger bound to a topic. Second argument is a config object or legacy meta.

- name: may include an inline color suffix: "topic#ff00aa", "topic#red", "topic#random".
- config.meta: optional object printed once per line after the topic.
- config.env: optional environment snapshot added to each line (JSON: env field; text: appended after meta).
- config.print: per-topic print override; forces/suppresses console output for this topic.
- Back‑compat: passing a plain object as the second arg is treated as meta.
Returns a function (...args: unknown[]) => void with properties:
- .topic: string — the base topic name (color suffix stripped)
- .meta: Record<string, unknown> | undefined — the meta object
- .enabled: boolean — whether the topic is currently enabled by filters
- .seq: number — current per‑topic sequence (starts at 0; first call prints #000001)
Usage:
const debug = Topic("debug#random", { meta: { svc: "api" }, env: { region: "iad" }, print: true });
if (debug.enabled) {
debug("starting", { port: 8080 });
}
// Extend: attach a non-blocking handler that receives JSON envelopes
debug.extend((e) => {
// e: { ts, seq, topic, meta, args, env }
// Forward to legacy/bespoke loggers without blocking the request path
console.log(`[legacy] ${e.topic}#${e.seq} ${JSON.stringify(e)}`);
});

Use extend(handler, alsoPrint?) to plug in bespoke or legacy logging without changing call sites.

- Signature: extend((envelope) => void, alsoPrint?: boolean)
- Envelope: { ts, seq, topic, meta, args, env } (what JSON mode would emit)
- Non-blocking: handlers run on a timer/microtask to keep the hot path fast.
- Gating: handlers represent sinks. They run whenever the topic is enabled by filters, independent of local printing.
Behavior rules:
- No sinks attached → normal printing (subject to filters and print gate).
- If any sink is attached with alsoPrint=false (default) → printing is suppressed; only sinks run.
- If any sink sets alsoPrint=true → sinks run and local printing also happens (subject to the print gate).
Why this design?
- Lets you “take over” output and route it elsewhere (e.g., log4js, winston, analytics) without double-printing.
- Gives an opt-in to keep local printing for dev while still forwarding to your sinks.
Examples
- Replace printing with a custom sink
const log = Topic('api', { meta: { svc: 'api' } })
.extend((e) => myLegacySink(e)); // no console printing
log('user login', { id: 42 });

- Print locally and forward
const log = Topic('api', { meta: { svc: 'api' } })
.extend((e) => myLegacySink(e), true); // print + handler
log('started');

- Force extensions to run even when global print is disabled
// Global printing off (e.g., RATATOUILLE.filter set, print omitted)
// Make this topic eligible by forcing per-topic print, but suppress local printing via extend default
const log = Topic('api', { print: true }).extend((e) => mySink(e));
log('event'); // handler runs, console stays quiet

- Use env/meta to implement levels or routing in the handler
const warn = Topic('app', { meta: { level: 'warn' }, env: { region: 'iad' } })
.extend((e) => {
if ((e.meta as any)?.level === 'warn') legacy.warn(e);
else legacy.info(e);
});
warn('cpu high', { usage: 0.92 });

// Named Topic
import { Topic } from "@frogfish/ratatouille";
const log = Topic("api");
// Access Relay and setDebug
import Topic, { setDebug, Relay } from "@frogfish/ratatouille";

Recompile filter patterns at runtime.

- setDebug("api*,auth*") — override from a string.
- setDebug() — rebuild from env using configured debugVars (e.g., DEBUG, XYZ).
Useful in tests or REPLs that toggle logging on the fly.
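A small sketch of toggling filters from a test (topic names are illustrative):

import Topic, { setDebug } from "@frogfish/ratatouille";

const db = Topic("db");
setDebug("api*");        // "db" no longer matches: printing and sinks are skipped
db("ignored");
setDebug("api*,db*");    // widen the allow list at runtime
db("now visible");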
Use Relay to batch and forward logs to a collector. It supports two runtimes:
- Node: TCP (tcp://host:port) and HTTP(S) with keep‑alive.
- Cloudflare Workers/Browser: HTTP(S) via fetch (no TCP; keep‑alive not user‑controlled).

Importing:
- Node (first‑class): import { Relay } from "@frogfish/ratatouille" (the root export includes Relay in Node), or import Relay from "@frogfish/ratatouille/relay"
- Cloudflare Worker / Browser: import Relay from "@frogfish/ratatouille/relay" (resolves to the Worker variant)
type RelayConfig = {
endpoint: string; // "tcp://host:port" (Node) or "https://…" (Workers)
batchMs?: number; // flush interval (default 100)
batchBytes?: number; // max bytes per batch (default 262_144)
// Bounded memory (best-effort telemetry)
maxQueueBytes?: number; // max buffered bytes (default 5MB)
maxQueue?: number; // max buffered lines (default 10_000)
dropPolicy?: "drop_oldest" | "drop_newest"; // default "drop_oldest"
headers?: Record<string,string>;// extra headers for HTTP(S)
keepAlive?: boolean; // Node HTTP(S) keep-alive agent (default true). Ignored in Workers.
sampleRate?: number; // 0..1 probability to keep a line (default 1)
// Transport identity injected into every envelope (as `src`).
// Example: { app: "payments", where: "node", instance: "prod-eu1" }
// Defaults can come from env (below); explicit config wins.
source?: Record<string, unknown>;
// Default topic used when payload is not already an envelope with a `topic`.
// Can be overridden via env.
defaultTopic?: string; // default "raw"
// Optional encoder override for `send(payload)`.
// If you provide this, you control the wire format.
encode?: (payload: unknown) => string;
}

Relay POSTs to whatever endpoint you give it. For convenience, if you pass a host-only HTTP(S) URL (no path or just /), Relay will default the path to:
http(s)://host:port/sink
This matches Ringtail’s ingestion endpoint naming.
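If you need a different wire format, the encode option shown in the config above lets you own serialization for send(payload); a minimal sketch (the wrapper shape here is arbitrary):

const relay = new Relay({
  endpoint: "http://127.0.0.1:8080",
  // encode replaces the default JSON serialization used by send(payload)
  encode: (payload) => JSON.stringify({ v: 1, payload }),
});
relay.send({ topic: "api", msg: "custom-encoded" });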
In a fire-hose system, “what” happened is only half the story — you also need where it came from.
Relay can inject a small, static source identity into every emitted envelope as src.
Presets (Nomad/Workers) compute src for you automatically; you can still override it with RATATOUILLE_APP, RATATOUILLE_WHERE, and RATATOUILLE_INSTANCE.
You can set it in code:
const relay = new Relay({
endpoint: "http://127.0.0.1:8080",
headers: { Authorization: `Bearer ${process.env.RINGTAIL_TOKEN}` },
source: {
app: "edge-auth", // app/service identifier
where: "cf-worker", // runtime / environment label
instance: "prod-eu1:abc", // deployment slice / isolate id / hostname-ish
},
});

Or set defaults via environment variables (useful in Nomad/K8s/etc.):

- RATATOUILLE_APP
- RATATOUILLE_WHERE
- RATATOUILLE_INSTANCE
- RATATOUILLE_DEFAULT_TOPIC
Explicit config wins over env. (Relay merges {...env, ...config.source}.)
By default, relay.send(payload) emits one NDJSON line containing a minimal envelope.
If payload is already a plain object with a topic, Relay treats it as an envelope and will ensure:
- ts exists (adds Date.now() if missing)
- src exists (injects/merges configured source)
Otherwise Relay wraps the payload:
{"ts":1730000000000,"topic":"raw","args":["haiku"],"src":{"app":"edge-auth","where":"cf-worker","instance":"prod-eu1:abc"}}This keeps Ratatouille’s “emit anything” philosophy while keeping the network transport predictable.
- Server-side filters (e.g. Ringtail admission filters) can match topic reliably.
- You can filter by src.app, src.where, src.instance without parsing arbitrary payloads.
- The wire format is stable even when producers log strings/blobs.
Sometimes you just want to ship bytes with zero ceremony.
- sendLine(line) enqueues a pre-formatted NDJSON line exactly as you give it (Relay will only add a trailing \n if missing).
- sendChunk(chunk) enqueues an arbitrary NDJSON chunk (may contain many lines). Relay does no parsing/validation.
Use this mode when you don’t want envelopes — but note:
- Server-side topic filtering won’t work unless your raw lines include a topic field.
- If you want filtering + identity, prefer send() with envelopes.
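A small sketch of the passthrough calls (the payload contents are arbitrary):

// Ship a pre-formatted NDJSON line as-is (trailing \n is added only if missing)
relay.sendLine('{"topic":"api","msg":"already formatted"}');

// Ship a chunk containing several NDJSON lines in one call; Relay does not parse it
relay.sendChunk('{"topic":"api","msg":"one"}\n{"topic":"api","msg":"two"}\n');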
import { Relay } from "@frogfish/ratatouille";
import crypto from "crypto";
// Host-only endpoint is fine; Relay defaults to /sink
const relay = new Relay({
endpoint: process.env.RINGTAIL_URL || "http://127.0.0.1:8080",
keepAlive: true, // enables Node http(s).Agent keep-alive
batchMs: 100, // send every ~100ms
headers: process.env.RINGTAIL_TOKEN
? { Authorization: `Bearer ${process.env.RINGTAIL_TOKEN}` }
: {},
source: {
app: process.env.RATATOUILLE_APP || "api",
where: process.env.RATATOUILLE_WHERE || "node",
instance: process.env.RATATOUILLE_INSTANCE || `local:${crypto.randomUUID()}`,
},
});
await relay.connect();
// emit logs
relay.send({ topic: "api", msg: "service started" });
// flush at checkpoints
await relay.flushNow();
// on shutdown
process.on("SIGINT", async () => {
await relay.flushNow();
relay.close();
process.exit(0);
});

import { Relay } from "@frogfish/ratatouille";
const relay = new Relay("tcp://collector.internal:5001");
await relay.connect();
relay.send({ level: "warn", msg: "hot path" });// worker.ts
import Relay from "@frogfish/ratatouille/relay"; // Worker variant (fetch-based)
let relay: any; // lazily initialize with env-bound headers
export default {
async fetch(req: Request, env: any, ctx: ExecutionContext) {
if (!relay) {
const isolateId = crypto.randomUUID();
relay = new Relay({
// Host-only endpoint is fine; Relay defaults to /sink
endpoint: env.RINGTAIL_URL || "http://127.0.0.1:8080",
batchMs: 100,
sampleRate: 1, // set <1 to reduce volume (e.g., 0.1)
headers: env.RINGTAIL_TOKEN
? { Authorization: `Bearer ${env.RINGTAIL_TOKEN}` }
: undefined,
source: {
app: env.RATATOUILLE_APP || "edge-auth",
where: "cf-worker",
instance: `${env.ENVIRONMENT || "dev"}:${isolateId}`,
},
});
await relay.connect();
}
// enqueue structured log lines (non-blocking)
relay.send({ ts: Date.now(), url: req.url, method: req.method });
// ensure at least one batch is pushed even if the isolate goes idle soon
ctx.waitUntil(relay.flushNow());
return new Response("ok");
}
};

setInterval timers are not a delivery guarantee in Workers (isolates can go idle). ctx.waitUntil(relay.flushNow()) is the best-effort way to push at least one batch.
Notes for Workers:
- Only http(s):// endpoints are supported (no raw TCP sockets).
- The platform may reuse connections under the hood (HTTP/1.1 persistent or HTTP/2), but keep‑alive is not configurable.
- Create a singleton Relay at module scope; avoid per‑request construction.
- Tune batchMs/batchBytes for your delivery/overhead trade‑off.
- send(payload) enqueues one NDJSON line (object → JSON + \n, or via encode).
- sendLine(line) enqueues a pre-formatted NDJSON line (adds trailing \n if missing).
- sendChunk(chunk) enqueues a pre-formatted NDJSON chunk (may contain multiple lines). No parsing/validation.
- Batches are limited by batchBytes; oversized single lines/chunks are dropped early.
- The queue is bounded by maxQueueBytes (primary) and maxQueue (secondary). When full, items are dropped per dropPolicy.
- Periodic flush runs every batchMs. Call flushNow() to push one batch immediately.
- status() exposes lightweight counters (queued bytes/items, dropped bytes/items, sent bytes/batches, failures).
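For example, you could poll status() to watch for shedding; the exact counter field names are not spelled out here, so treat this as a sketch:

// Surface the relay's own health periodically (queued/dropped/sent counters)
const health = Topic("relay");
setInterval(() => {
  health("relay status", relay.status());
}, 10_000);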
For sub-second delivery with connection reuse, front Workers can forward logs to a Durable Object (DO) that batches and relays upstream.
// do-logger.ts
export class LogAggregator {
state: DurableObjectState;
env: any;
q: string[] = [];
timer: any;
constructor(state: DurableObjectState, env: any) {
this.state = state;
this.env = env;
this.timer = setInterval(() => this.flush().catch(() => {}), 100);
}
async fetch(req: Request): Promise<Response> {
const url = new URL(req.url);
if (req.method === 'POST' && url.pathname === '/log') {
const line = await req.text(); // expected to be a single NDJSON line
this.q.push(line.endsWith('\n') ? line : line + '\n');
return new Response('ok');
}
if (url.pathname === '/flush') {
await this.flush();
return new Response('flushed');
}
return new Response('not found', { status: 404 });
}
private drain(maxBytes = 262_144): string | undefined {
if (!this.q.length) return;
let bytes = 0;
const batch: string[] = [];
while (this.q.length && bytes + this.q[0].length <= maxBytes) {
const x = this.q.shift()!; batch.push(x); bytes += x.length;
}
return batch.length ? batch.join('') : undefined;
}
private async flush(): Promise<void> {
const data = this.drain();
if (!data) return;
await fetch(this.env.LOG_ENDPOINT, {
method: 'POST',
headers: { 'Content-Type': 'application/x-ndjson', 'Authorization': `Bearer ${this.env.LOG_TOKEN}` },
body: data,
}).catch(() => {});
}
}

// worker.ts
export default {
async fetch(req: Request, env: any, ctx: ExecutionContext) {
const id = env.LOG_AGGREGATOR.idFromName('logs');
const stub = env.LOG_AGGREGATOR.get(id);
// one NDJSON line per event
const line = JSON.stringify({ ts: Date.now(), url: req.url, method: req.method }) + '\n';
ctx.waitUntil(stub.fetch('https://do/log', { method: 'POST', body: line }));
return new Response('ok');
}
}

Bindings (wrangler.toml):
[[durable_objects.bindings]]
name = "LOG_AGGREGATOR"
class_name = "LogAggregator"
[vars]
LOG_ENDPOINT = "https://logs.example.com/ingest"
LOG_TOKEN = "..."If per-request latency should never touch logging, enqueue entries and drain them in a consumer Worker.
export default {
async fetch(req: Request, env: any, ctx: ExecutionContext) {
const entry = { ts: Date.now(), url: req.url, method: req.method };
// Do not await; let the platform handle retries/backpressure
ctx.waitUntil(env.LOG_QUEUE.send(entry));
return new Response('ok');
}
}

import Relay from '@frogfish/ratatouille/relay';
let relay: Relay | undefined;
export default {
async queue(batch: MessageBatch<any>, env: any, ctx: ExecutionContext) {
if (!relay) {
relay = new Relay({ endpoint: env.LOG_ENDPOINT, batchMs: 100, headers: { Authorization: `Bearer ${env.LOG_TOKEN}` } });
await relay.connect();
}
for (const msg of batch.messages) relay.send(msg.body);
await relay.flushNow();
}
}

Bindings (wrangler.toml):
[[queues.producers]]
queue = "LOG_QUEUE"
binding = "LOG_QUEUE"
[[queues.consumers]]
queue = "LOG_QUEUE"
script_name = "log-consumer"- Tokens split by commas or whitespace:
"api*,-db*","api* -db*". *matches any substring.- A leading
-negates a token. - Semantics: enabled iff (allowed or implied‑allow‑all) and not denied.
Edge cases:
DEBUG=""→ disabled.DEBUG="*"→ all topics.DEBUG="-chat*"→ all exceptchat…(deny‑only ⇒ allow everything else).
Quote values containing *:
# bash/zsh
DEBUG='api*,auth*,-auth:noise' node app.js

# PowerShell
$env:DEBUG = 'api*,auth*,-auth:noise'
node app.js

# cmd.exe
set DEBUG=api*,auth*,-auth:noise
node app.js

- Topic hot path is tiny — Topic emission does no I/O unless local printing is enabled.
- Printing — writes synchronously to stdout in Node (fast stream write) and uses console.log elsewhere (Workers/browsers).
- Sinks (Relay) — use a bounded in-memory queue plus a periodic flush timer (batchMs). Best-effort; drops are explicit.
- Error rendering — Error instances print .stack when available; otherwise name: message.
- Performance — precompiles allow/deny regexes; caches enabled decisions per topic; minimal stringification.
Q: What gets colored?
Only the topic label (e.g., debug). Arguments remain uncolored for readability.
Q: How do I ensure colors never print?
Set RATATOUILLE=nocolor or RATATOUILLE='{"color":"off"}'.
Q: Can I force colors even in non‑TTY environments?
Yes: RATATOUILLE='{"color":"on"}'.
Q: What’s #random vs no suffix?
No suffix → uncolored topic. #random → assign a stable 256‑color from a curated palette.
Q: How do I combine multiple env vars for patterns?
RATATOUILLE='{"debugVars":["DEBUG","XYZ"]}' then set DEBUG and XYZ as usual.
GPL-3.0-only
Commercial license for proprietary redistribution/hosted offerings. Please contact info@frogfish.io
{ "color": "auto" | "on" | "off", // default "auto" "format": "text" | "json", // default "text" "filter": "*,-noisy*", // primary DEBUG-style filter (preferred over env) "debugVars": ["DEBUG", "XYZ"], // env vars to merge for patterns "print": true | false, // controls console/stderr printing (see below) "extra": { /* reserved for future */ } }