School project built during the 168h "codecamp": create a painless web tail for containers, based on logstash / fluentbit. See the subject for further description.
In order to bring the whole stack up:

```shell
docker-compose up -d
```
The `trail` ansible role will deploy a docker-enabled filebeat container with the required permissions (read access to the docker socket and container directories). An example playbook demonstrates basic usage of this role. The `LOGSTASH_TARGET` variable is required, as it is used in the filebeat configuration file.
The `trail` role will:

- Template the filebeat configuration into `/etc/filebeat.yml`
- Ensure `pip` is available on the remote host
- Ensure the python docker package is available on the remote host, installing it if necessary
- Run a container named `filebeat_agent` on the remote host
By running `docker-compose up -d`, you will have the API on your local port 3000 and the frontend on port 8080.
A Vagrantfile is provided to experiment with `trail`. It will create 5 VMs running multiple docker-based example projects (`wordpress`, `rails`, `django`, a failing `nginx` configuration and a `flog` container).
Install the required ansible role (docker installation and configuration) by running the following:

```shell
ansible-galaxy install -r requirements.yml
```
The `LOGSTASH_TARGET` variable in the provisioning playbook should be adjusted to suit your needs.
In order to use the provisioning part of the Vagrantfile, simply bring the VMs up; Vagrant will automatically run the corresponding playbook based on the node name.

```shell
vagrant up
```
| name | role |
|---|---|
| filebeat | log collection - fetches logs from containers, either by mapping the docker socket/directories into filebeat, or by deploying it as a kubernetes pod |
| logstash | centralizes, parses and enriches logs |
| rabbitmq | access point for logs, through the topic exchange named `logs` |
| flog | open source log generation tool |
| mongodb | short-term storage (6h) |
I highly advise bringing each service up one after the other, to ensure everything is running smoothly.
```shell
docker-compose up -d broker
docker-compose up -d logstash
docker-compose up -d filebeat
docker-compose up -d storage
docker-compose up log-generator  # no -d, in order to have it in the foreground
```
The `logs` exchange is automatically created by Logstash at startup time, based on its RabbitMQ output configuration. To consume the logs:
- Connect to RabbitMQ
- Create a queue (NOTE: decide whether the queue has to be exclusive and/or durable)
- Bind your queue to the exchange, specifying a routing key if needed
- Consume the messages from your queue
The Logstash pipeline will, in addition to forwarding the logs to RabbitMQ, store them in MongoDB. As seen in the `mongoscripts` configuration directory, an unoptimized TTL index based on the `@timestamp` field enables the deletion of each log after 6 hours.
Querying these logs through the frontend was not implemented; a helper script, `search.py`, was added instead as a lightweight replacement. Note: implementing this in the frontend should be quite quick.
```shell
# install pymongo
pip install --user -r requirements.txt

# the script uses the MONGODB_URI environment variable to connect to MongoDB
export MONGODB_URI=mongodb://localhost:27017/trail
python search.py term
# ...
# <@timestamp>, <message>
```
See the log schema.