Description
Bug Report
Describe the bug
We are encountering an issue with Fluent Bit when using filesystem storage. Our intention is that when Fluent Bit is overloaded, logs are buffered on disk to prevent data loss. However, during our tests, Fluent Bit does not appear to respect the configured memory limits and keeps consuming memory despite being configured to offload chunks to the filesystem.
To Reproduce
- Steps to reproduce the problem:
Here is the Fluent Bit configuration we used:
```
[SERVICE]
    Grace                        30
    Flush                        1
    Daemon                       Off
    HTTP_Server                  On
    HTTP_Listen                  0.0.0.0
    HTTP_PORT                    2020
    Parsers_File                 parsers.conf
    storage.path                 /var/log/flb-storage/
    storage.metrics              On
    storage.max_chunks_up        2
    storage.backlog.mem_limit    5M
    storage.buffer_chunk_size    2M
    storage.buffer_max_chunks_up 2
    storage.total_limit_size     50M
    log_level                    trace

[INPUT]
    Name          http
    Listen        0.0.0.0
    Port          24224
    storage.type  filesystem

[METRICS]
    Name             prometheus
    Host             0.0.0.0
    Port             2021
    Scrape_Interval  1

[OUTPUT]
    Name      file
    Format    template
    Template  {time} used={Mem.used} free={Mem.free} total={Mem.total}
```
- Run Fluent Bit with the above configuration.
- Send a high volume of logs using the following script:
```python
import argparse
import json
import time
from datetime import datetime

import requests


def send_log(url, log_data):
    headers = {'Content-Type': 'application/json'}
    response = requests.post(url, headers=headers, data=json.dumps(log_data))
    if response.status_code == 200:
        print("Logs sent successfully")


def generate_large_log_message(index, size_in_kb):
    large_log = "A" * (size_in_kb * 1024 - 200)  # adjust the size for the rest of the JSON structure
    return {
        "container_name": "/ecs-appId-dec89697b7bcbbaaed01",
        "source": "stdout",
        "log": f"This is log number {index} at {datetime.now().isoformat()} {large_log}",
        "container_id": "ef7f185cd674dd5969f8f1c2da2a953af355739329f9137ac"
    }


def send_logs(url, logs_per_second, log_size_kb):
    log_index = 0
    while True:
        start_time = time.time()
        for _ in range(logs_per_second):
            log_data = generate_large_log_message(log_index, log_size_kb)
            send_log(url, log_data)
            log_index += 1
        elapsed_time = time.time() - start_time
        time_to_sleep = max(0, 1 - elapsed_time)
        time.sleep(time_to_sleep)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Send logs to Fluent Bit.')
    parser.add_argument('--logs_per_second', type=int, required=True, help='Number of logs to send per second')
    parser.add_argument('--log_size_kb', type=int, required=True, help='Size of each log in KB')
    parser.add_argument('--url', type=str, default='http://localhost:24224', help='Fluent Bit HTTP input URL')
    args = parser.parse_args()
    send_logs(args.url, args.logs_per_second, args.log_size_kb)
```
- Execute the script with the following command:
```
python3 send_logs.py --logs_per_second 1000 --log_size_kb 1000 --url http://localhost:24224
```
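While the script runs, chunk placement can be watched through Fluent Bit's monitoring API, since HTTP_Server and storage.metrics are enabled above. A minimal sketch, assuming the /api/v1/storage endpoint from the Fluent Bit monitoring docs (the exact counter names may vary between versions):

```python
import time

import requests

# Poll the built-in monitoring API (HTTP_Server On, HTTP_PORT 2020 above).
# /api/v1/storage reports storage-layer chunk counters when storage.metrics is On;
# the field names below are an assumption based on the monitoring docs.
STORAGE_URL = "http://localhost:2020/api/v1/storage"

while True:
    chunks = requests.get(STORAGE_URL, timeout=5).json().get("storage_layer", {}).get("chunks", {})
    # With storage.max_chunks_up 2 we would expect the in-memory/up chunk counts
    # to stay capped while the down (on-disk) chunk count grows under load.
    print(
        f"total={chunks.get('total_chunks')} "
        f"mem={chunks.get('mem_chunks')} "
        f"fs={chunks.get('fs_chunks')} "
        f"fs_up={chunks.get('fs_chunks_up')} "
        f"fs_down={chunks.get('fs_chunks_down')}"
    )
    time.sleep(1)
```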
Observed behavior
The memory usage of the Fluent Bit container keeps growing despite storage.max_chunks_up and storage.backlog.mem_limit being set.
Chunks are written to the filesystem as .flb files in /var/log/flb-storage/, but the logs are also retained in memory, so memory usage keeps increasing.
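To make the growth concrete, one hypothetical way to sample it is to track the Fluent Bit process RSS next to the .flb file count; a sketch assuming psutil is installed and the pid is known (both are assumptions; the storage path matches the config above):

```python
import glob
import time

import psutil  # third-party; pip install psutil

FLB_PID = 1234  # assumption: replace with the actual fluent-bit pid
STORAGE_GLOB = "/var/log/flb-storage/**/*.flb"  # storage.path from the config

proc = psutil.Process(FLB_PID)
while True:
    # Resident memory keeps climbing even as chunks land on disk as .flb files.
    rss_mb = proc.memory_info().rss / (1024 * 1024)
    flb_files = len(glob.glob(STORAGE_GLOB, recursive=True))
    print(f"rss={rss_mb:.1f}MiB flb_files={flb_files}")
    time.sleep(5)
```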
Expected behavior
Fluent Bit should respect the storage.max_chunks_up and storage.backlog.mem_limit settings and offload logs to the filesystem once the memory limit is reached, preventing further memory growth.