A multi-service, cloud-native chat system prototype built to explore gRPC, WebSockets, multi-language service orchestration, and real-time messaging infrastructure.
This system includes:
- chat-service: Core backend (Go) using gRPC for message processing and broadcasting
- chat-gateway: WebSocket bridge (Node.js) between frontend clients and the gRPC backend
- chat-ai: AI responder (Python) powered by Hugging Face transformers, listening via gRPC stream
- chat-client: Lightweight HTML/Vue.js frontend for testing gateway connection
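All backend services communicate through a shared gRPC contract. A minimal sketch of what that contract might look like is below; the package, service, and message names are illustrative rather than taken from the repo's actual `.proto`:

```proto
syntax = "proto3";

package chat;

// Hypothetical contract shared by chat-service, chat-gateway, and chat-ai.
service ChatService {
  // Bi-directional stream: clients send ChatMessages and receive broadcasts.
  rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}

message ChatMessage {
  string user = 1;
  string text = 2;
  int64 timestamp = 3;
}
```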
Key features:
- Bi-directional chat flow using gRPC and WebSocket
- Modular, multi-language architecture (Go, Node.js, Python, Vue.js)
- AI-generated responses using a Hugging Face transformer model
- Isolated development environments using devcontainers and Dockerfiles
- Unified `Makefile` and `concurrently`-powered orchestration
- Docker Compose configuration for full-stack deployment
Each service has its own Makefile for local dev. You can run everything at once with:
```sh
make run-all
```
Or run them individually:
```sh
make -C chat-service run
make -C chat-gateway run
make -C chat-ai run
make -C chat-client run
```
Basic unit tests are scaffolded for each service. Run them using:
```sh
make -C chat-service test
make -C chat-ai test
```
To bring up all services:
```sh
docker-compose up --build
```
Services will be available at:
- chat-service: gRPC at `localhost:8080`
- chat-gateway: WebSocket server at `localhost:8081`
- chat-client: Vue.js test page served at `localhost:3000`
In more detail:
- chat-gateway: Node.js WebSocket bridge that proxies messages between frontend clients and `chat-service`
- chat-client: Simple static Vue.js test page served over Python HTTP; connects via WebSocket to the gateway
- chat-ai: Python gRPC stream listener that connects to `chat-service` and replies to user messages using a Hugging Face model
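To make the chat-ai flow concrete, here is a minimal sketch of what its stream loop could look like, assuming the bi-directional `Chat` RPC sketched earlier and generated stubs named `chat_pb2`/`chat_pb2_grpc` (both hypothetical; the repo's actual stub and field names may differ):

```python
# chat_ai.py - minimal sketch of the AI responder loop.
# Assumes stubs generated from the hypothetical chat.proto shown earlier.
import queue

import grpc
from transformers import pipeline

import chat_pb2
import chat_pb2_grpc

# DialoGPT is the model the roadmap mentions replacing; any causal LM works here.
generator = pipeline("text-generation", model="microsoft/DialoGPT-small")

def run(host: str = "localhost:8080") -> None:
    outgoing: queue.Queue = queue.Queue()

    def requests():
        # Yield replies as they are queued; gRPC sends each one on the stream.
        while True:
            yield outgoing.get()

    with grpc.insecure_channel(host) as channel:
        stub = chat_pb2_grpc.ChatServiceStub(channel)
        for msg in stub.Chat(requests()):
            if msg.user == "ai":
                continue  # never reply to our own broadcasts
            reply = generator(msg.text, max_new_tokens=40)[0]["generated_text"]
            outgoing.put(chat_pb2.ChatMessage(user="ai", text=reply))

if __name__ == "__main__":
    run()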
Run all services concurrently:
```sh
make run-all
```
This uses a custom script powered by `concurrently` to orchestrate startup of all services.
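For reference, such a script can boil down to a single `concurrently` invocation (the actual script in this repo may differ):

```sh
npx concurrently -n service,gateway,ai,client -k \
  "make -C chat-service run" \
  "make -C chat-gateway run" \
  "make -C chat-ai run" \
  "make -C chat-client run"
```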
Each service also has an individual Makefile to support:
- `make run`
- `make test`
- `make proto`
- `make lint` (where applicable)
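As an illustration, a per-service Makefile, chat-service's for instance, might look like the following; the actual recipes and tool choices in the repo may differ:

```make
# Hypothetical chat-service Makefile; actual recipes in the repo may differ.
run:
	go run ./cmd/chat-service

test:
	go test ./...

proto:
	protoc --go_out=. --go-grpc_out=. proto/chat.proto

lint:
	golangci-lint run
```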
To run the full system using Docker:
```sh
docker-compose up --build
```
The following ports will be exposed:
- `8080`: gRPC service (chat-service)
- `8081`: WebSocket gateway (chat-gateway)
- `3000`: Static client HTML page (chat-client)

All containers share a virtual network and reference `chat-service` by name.
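A `docker-compose.yml` along these lines would produce that layout (a sketch only; the repo's actual compose file may differ in build contexts and environment variables, and `CHAT_SERVICE_ADDR` is a hypothetical variable name):

```yaml
version: "3.8"
services:
  chat-service:
    build: ./chat-service
    ports:
      - "8080:8080"
  chat-gateway:
    build: ./chat-gateway
    ports:
      - "8081:8081"
    environment:
      # Other containers reach the gRPC backend by service name, not localhost.
      - CHAT_SERVICE_ADDR=chat-service:8080
  chat-ai:
    build: ./chat-ai
    environment:
      - CHAT_SERVICE_ADDR=chat-service:8080
  chat-client:
    build: ./chat-client
    ports:
      - "3000:3000"
```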
All services are scaffolded with basic unit tests and development Makefiles. The AI responder connects successfully to the gRPC stream. Logging, proto generation, and local dev tooling are implemented.
- Add multi-room support to enable isolated conversations per room (e.g., `general`, `support`, `random`)
- Externalize configuration for ports, model selection, and service hosts via environment variables or `.env` files
- Harden service loops with retries, timeouts, and graceful reconnection for all gRPC streaming clients (see the sketch after this list)
- Enhance AI response quality with context history, prompt tuning, or model upgrade (e.g., replace DialoGPT)
- Integrate logging frameworks with consistent formats, timestamps, and log levels across all services
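As a sketch of the hardening item above, a reconnection wrapper around the chat-ai loop shown earlier could look like this (names hypothetical):

```python
# Sketch of the "retries, timeouts, graceful reconnection" roadmap item,
# wrapping the run() stream loop from the chat-ai sketch earlier.
import time

import grpc

def run_forever(host: str = "localhost:8080") -> None:
    backoff = 1.0
    while True:
        try:
            run(host)          # the streaming loop shown earlier
            backoff = 1.0      # stream ended cleanly; reconnect immediately
        except grpc.RpcError as err:
            print(f"stream dropped ({err.code()}); retrying in {backoff:.0f}s")
            time.sleep(backoff)
            backoff = min(backoff * 2, 30.0)  # capped exponential backoff
```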
Licensed under the MIT License.