A Go-based AMQP event processor for rendering Tidbyt Pixlet applications. This service receives render requests via AMQP, processes them using Pixlet, and returns results through AMQP queues.
- AMQP Integration: Consumes render requests and publishes results via RabbitMQ
- Pixlet Processing: Renders Tidbyt applications using the Pixlet engine
- Redis Caching: Distributed caching layer with app/device scoped keys
- 12-Factor App: Environment-based configuration following 12-factor principles
- Security: Non-root container user with read-only filesystem access
- Graceful Shutdown: Proper signal handling and cleanup
- Structured Logging: JSON-structured logging with Zap
- Health Checks: Container and service health monitoring
AMQP Producer → RabbitMQ → MATRX Renderer → Pixlet → RabbitMQ → AMQP Consumer
The service:

- Consumes render requests from the `matrx.renderer_requests` queue
- Validates and processes requests using Pixlet
- Publishes results to device-specific queues: `matrx.{DEVICE_ID}`
- Handles errors gracefully with proper logging
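A minimal sketch of this per-message flow, assuming the `github.com/rabbitmq/amqp091-go` client; `renderWithPixlet` is a hypothetical stand-in for the actual Pixlet render step:

```go
package renderer

import (
	"context"
	"encoding/json"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

// renderWithPixlet is a hypothetical stand-in for the Pixlet engine call.
func renderWithPixlet(appID string, request []byte) ([]byte, error) {
	// ... run Pixlet for appID and return the encoded result ...
	return nil, nil
}

// handleRequest implements the consume → render → publish flow for one message.
func handleRequest(ctx context.Context, ch *amqp.Channel, msg amqp.Delivery) {
	var req struct {
		AppID  string `json:"app_id"`
		Device struct {
			ID string `json:"id"`
		} `json:"device"`
	}
	if err := json.Unmarshal(msg.Body, &req); err != nil {
		log.Printf("bad request: %v", err) // errors are logged; no AMQP reply is sent
		msg.Nack(false, false)
		return
	}

	result, err := renderWithPixlet(req.AppID, msg.Body)
	if err != nil {
		log.Printf("render failed: %v", err)
		msg.Nack(false, true) // requeue so another instance can retry
		return
	}

	// Results go to the device-specific queue matrx.{DEVICE_ID}.
	err = ch.PublishWithContext(ctx, "matrx", req.Device.ID, false, false,
		amqp.Publishing{ContentType: "application/json", Body: result})
	if err != nil {
		log.Printf("publish failed: %v", err)
		msg.Nack(false, true)
		return
	}
	msg.Ack(false)
}
```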
All configuration is done via environment variables.

AMQP:

- `AMQP_URL`: RabbitMQ connection string (default: `amqp://guest:guest@localhost:5672/`)
- `AMQP_EXCHANGE`: Exchange name (default: `matrx`)
- `AMQP_QUEUE`: Input queue name (default: `matrx.renderer_requests`)
- `AMQP_ROUTING_KEY`: Routing key for the input queue (default: `renderer_requests`)
- `AMQP_RESULT_QUEUE`: Result queue template, dynamic per device (default: `matrx.{DEVICE_ID}`)
- `AMQP_PREFETCH_COUNT`: QoS prefetch count for load balancing (default: `1`)

Server:

- `SERVER_PORT`: HTTP port for health checks (default: `8080`)
- `SERVER_READ_TIMEOUT`: Read timeout in seconds (default: `10`)
- `SERVER_WRITE_TIMEOUT`: Write timeout in seconds (default: `10`)

Pixlet:

- `PIXLET_APPS_PATH`: Path to the Pixlet apps directory (default: `/opt/apps`)

App Directory Structure: Apps are organized in nested directories as `/opt/apps/{app_id}/{app_id}.star`. The Docker build automatically downloads apps from the matrx-apps repository.

Redis:

- `REDIS_ADDR`: Redis server address (default: `localhost:6379`)
- `REDIS_PASSWORD`: Redis password (optional)
- `REDIS_DB`: Redis database number (default: `0`)

Logging:

- `LOG_LEVEL`: Log level (default: `info`)
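A minimal sketch of this 12-factor loading pattern, using only the standard library; the `Config` struct and `getenv` helper are illustrative, not the repository's actual types:

```go
package config

import "os"

// Config holds the environment-driven settings (a subset shown here).
type Config struct {
	AMQPURL        string
	AMQPQueue      string
	PixletAppsPath string
	LogLevel       string
}

// getenv returns the named environment variable, or def when unset.
func getenv(key, def string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return def
}

// Load reads the configuration from the environment using the
// defaults documented above.
func Load() Config {
	return Config{
		AMQPURL:        getenv("AMQP_URL", "amqp://guest:guest@localhost:5672/"),
		AMQPQueue:      getenv("AMQP_QUEUE", "matrx.renderer_requests"),
		PixletAppsPath: getenv("PIXLET_APPS_PATH", "/opt/apps"),
		LogLevel:       getenv("LOG_LEVEL", "info"),
	}
}
```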
The renderer supports both in-memory and Redis-based caching:

- In-Memory Cache: Used by default when no Redis configuration is provided
- Redis Cache: Automatically enabled when `REDIS_ADDR` is configured
- Cache Scoping: Keys are scoped as `/{applet_id}/{device_id}/{key_name}`
- TTL Support: Configurable time-to-live for cached values
For detailed Redis cache configuration and usage, see REDIS_CACHE.md.
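A minimal sketch of the key scoping and TTL behavior, assuming the `github.com/redis/go-redis/v9` client; the repository's actual cache interface may differ:

```go
package cache

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// scopedKey builds a cache key following the
// /{applet_id}/{device_id}/{key_name} convention.
func scopedKey(appletID, deviceID, name string) string {
	return fmt.Sprintf("/%s/%s/%s", appletID, deviceID, name)
}

// Set stores a value under the scoped key with a time-to-live.
func Set(ctx context.Context, rdb *redis.Client,
	appletID, deviceID, name string, value []byte, ttl time.Duration) error {
	return rdb.Set(ctx, scopedKey(appletID, deviceID, name), value, ttl).Err()
}
```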
Pixlet apps are organized in a nested directory structure within the apps path:
```
/opt/apps/
├── clock/
│   └── clock.star
├── weather/
│   └── weather.star
└── news/
    └── news.star
```
Each app must:
- Be in its own directory named after the app ID
- Contain a `.star` file with the same name as the directory
- Follow the Pixlet app structure and conventions
The Docker build process automatically downloads apps from the koiosdigital/matrx-apps repository during image creation.
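A sketch of resolving an app ID to its `.star` file with the directory-traversal check mentioned under Security below; the `Resolve` function name is illustrative:

```go
package apps

import (
	"fmt"
	"path/filepath"
	"strings"
)

// Resolve maps an app ID to {appsPath}/{app_id}/{app_id}.star,
// rejecting IDs whose cleaned path escapes the apps directory.
func Resolve(appsPath, appID string) (string, error) {
	p := filepath.Clean(filepath.Join(appsPath, appID, appID+".star"))
	root := filepath.Clean(appsPath) + string(filepath.Separator)
	if !strings.HasPrefix(p, root) {
		return "", fmt.Errorf("invalid app id %q: path escapes apps directory", appID)
	}
	return p, nil
}
```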
Render request (consumed from `matrx.renderer_requests`):

```json
{
  "type": "render_request",
  "app_id": "clock",
  "device": {
    "id": "device-uuid-or-string",
    "width": 64,
    "height": 32
  },
  "params": {
    "timezone": "America/New_York",
    "format": "12h"
  }
}
```
Render result (published to `matrx.{DEVICE_ID}`):

```json
{
  "device_id": "device-uuid-or-string",
  "app_id": "clock",
  "render_output": "base64-encoded-webp-data",
  "processed_at": "2025-08-12T10:30:05Z"
}
```
Note: On error, the service logs the error to console but does not send an AMQP response message.
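A sketch of Go types matching these payloads; field names follow the JSON above, though the repository's actual types may differ:

```go
package messages

import "time"

// RenderRequest is the message consumed from matrx.renderer_requests.
type RenderRequest struct {
	Type   string            `json:"type"` // "render_request"
	AppID  string            `json:"app_id"`
	Device Device            `json:"device"`
	Params map[string]string `json:"params"`
}

// Device identifies the target display and its dimensions in pixels.
type Device struct {
	ID     string `json:"id"`
	Width  int    `json:"width"`
	Height int    `json:"height"`
}

// RenderResult is published to the device-specific queue matrx.{DEVICE_ID}.
type RenderResult struct {
	DeviceID     string    `json:"device_id"`
	AppID        string    `json:"app_id"`
	RenderOutput string    `json:"render_output"` // base64-encoded WebP
	ProcessedAt  time.Time `json:"processed_at"`
}
```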
The service uses a dynamic queue routing system.

Input queue:

- Queue: `matrx.renderer_requests`
- Routing Key: `renderer_requests`
- All render requests are sent to this single queue

Result queues:

- Queue: `matrx.{DEVICE_ID}` (e.g., `matrx.device-123`)
- Routing Key: `{DEVICE_ID}` (e.g., `device-123`)
- Each device gets its own result queue for isolation
- Queues are created automatically when the first result is published
This design allows:
- Multiple renderer instances to consume from the same input queue
- Device-specific result routing for proper message isolation
- Automatic scaling based on queue depth
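A sketch of the per-device declare-and-publish step described above, again assuming `github.com/rabbitmq/amqp091-go`; declaring the queue is idempotent, which is what makes it appear automatically on the first publish:

```go
package renderer

import (
	"context"

	amqp "github.com/rabbitmq/amqp091-go"
)

// publishResult lazily declares and binds the device-specific queue,
// then publishes the rendered result to it.
func publishResult(ctx context.Context, ch *amqp.Channel, deviceID string, body []byte) error {
	queue := "matrx." + deviceID

	// Idempotent declare: creates the queue on the first result.
	if _, err := ch.QueueDeclare(queue, true, false, false, false, nil); err != nil {
		return err
	}
	// Bind the queue to the exchange using the device ID as routing key.
	if err := ch.QueueBind(queue, deviceID, "matrx", false, nil); err != nil {
		return err
	}
	return ch.PublishWithContext(ctx, "matrx", deviceID, false, false, amqp.Publishing{
		ContentType: "application/json",
		Body:        body,
	})
}
```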
The MATRX renderer is designed for horizontal scaling with multiple instances:
- Fair Load Distribution: Each instance processes only one message at a time (configurable via `AMQP_PREFETCH_COUNT`)
- Message Safety: Manual acknowledgment ensures messages are only removed after successful processing
- Automatic Failover: Failed messages are requeued for other instances to process
- Instance Identification: Each consumer has a unique tag for monitoring and debugging
`AMQP_PREFETCH_COUNT`: Number of unacknowledged messages per consumer (default: `1`)

- `1`: Fair round-robin distribution (recommended for most cases)
- `>1`: Higher throughput but less fair distribution
- `0`: No limit (not recommended for scaling)
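A sketch of how the prefetch setting and manual acknowledgment work together, assuming `github.com/rabbitmq/amqp091-go`; `process` is a hypothetical render callback:

```go
package renderer

import amqp "github.com/rabbitmq/amqp091-go"

// consume applies the QoS prefetch before consuming, so RabbitMQ
// delivers at most prefetchCount unacknowledged messages at a time,
// giving fair round-robin distribution across instances.
func consume(ch *amqp.Channel, queue, consumerTag string,
	prefetchCount int, process func([]byte) error) error {
	if err := ch.Qos(prefetchCount, 0, false); err != nil {
		return err
	}
	// autoAck=false: messages are acknowledged manually below.
	msgs, err := ch.Consume(queue, consumerTag, false, false, false, false, nil)
	if err != nil {
		return err
	}
	for msg := range msgs {
		if err := process(msg.Body); err != nil {
			msg.Nack(false, true) // requeue for another instance
			continue
		}
		msg.Ack(false) // removed from the queue only after success
	}
	return nil
}
```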
- Start with 1-2 instances and monitor queue depth
- Scale up when average queue depth consistently exceeds desired latency
- Monitor CPU/memory usage per instance - rendering is CPU-intensive
- Use container orchestration (Kubernetes, Docker Swarm) for automatic scaling
An example Kubernetes Deployment with these settings:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: matrx-renderer
spec:
  replicas: 3 # Start with 3 instances
  selector:
    matchLabels:
      app: matrx-renderer
  template:
    metadata:
      labels:
        app: matrx-renderer
    spec:
      containers:
        - name: renderer
          image: matrx-renderer:latest
          env:
            - name: AMQP_URL
              value: "amqp://user:pass@rabbitmq:5672/"
            - name: AMQP_PREFETCH_COUNT
              value: "1" # Fair distribution
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: 1000m
              memory: 1Gi
```
Monitor these metrics for scaling decisions:
- Queue depth in `matrx.renderer_requests`
- Message processing rate per instance
- CPU/Memory usage per instance
- Error rates and failed message counts
- Go 1.21+
- Docker and Docker Compose
- Pixlet CLI tool
Local development:

- Clone the repository
- Copy the environment file: `cp .env.example .env`
- Install dependencies: `go mod download`
- Run RabbitMQ: `docker-compose up -d rabbitmq`
- Create the apps directory: `mkdir -p apps`
- Add your Pixlet `.star` files to the `apps` directory
- Run the service: `go run cmd/server/main.go`

Docker Compose:

- Build and run with Docker Compose: `docker-compose up --build`
- Access the RabbitMQ Management UI at http://localhost:15672 (guest/guest)
Run tests with:

```bash
go test ./...
```

Build the image:

```bash
docker build -t matrx-renderer .
```

Run with environment variables:

```bash
docker run -d \
  --name matrx-renderer \
  -e AMQP_URL=amqp://user:pass@rabbitmq:5672/ \
  -v /path/to/apps:/opt/apps:ro \
  matrx-renderer
```
The application is designed to work well in Kubernetes with:
- ConfigMaps for configuration
- Secrets for sensitive data
- ReadOnlyRootFilesystem security context
- Resource limits and requests
- Health check endpoints
- Non-root user: Container runs as user ID 1001
- Read-only filesystem: Apps directory mounted read-only
- Path validation: Prevents directory traversal attacks
- Input sanitization: Validates configuration parameters
- Minimal attack surface: Alpine-based minimal container
- No shell access: User has no shell (`/sbin/nologin`)
The service provides:
- Health check endpoint for container orchestration
- Structured JSON logging for log aggregation
- Error tracking with correlation IDs
- Performance metrics through logging
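A minimal sketch of a health endpoint plus the graceful shutdown behavior noted above, using only the standard library; the `/healthz` path is illustrative, and the port and timeouts correspond to the `SERVER_*` variables:

```go
package main

import (
	"context"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	// Liveness/readiness probe target for container orchestration.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ok"))
	})

	srv := &http.Server{
		Addr:         ":8080",          // SERVER_PORT
		Handler:      mux,
		ReadTimeout:  10 * time.Second, // SERVER_READ_TIMEOUT
		WriteTimeout: 10 * time.Second, // SERVER_WRITE_TIMEOUT
	}

	// Graceful shutdown: stop on SIGINT/SIGTERM and let in-flight
	// requests finish before exiting.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	go srv.ListenAndServe()
	<-ctx.Done()

	shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	srv.Shutdown(shutdownCtx)
}
```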
MIT License - see LICENSE file for details