This report details the steps taken to complete the tasks for the Linux Monitoring project.
Write a bash script to generate files and folders with specific naming conventions and size constraints. The script should stop if the free disk space drops below 1GB. A log file should be created with details of all created files and folders.
The 01/main.sh script was created to solve this task.
Script usage:
```
./01/main.sh [absolute_path] [num_subfolders] [folder_letters] [num_files] [file_letters.extension_letters] [file_size_kb]
```
Example:
```
./01/main.sh /opt/test 4 az 5 az.az 3kb
```
The script performs the following actions:
- Validates the input parameters.
- Generates folder names with the specified letters and the current date.
- Generates file names with the specified letters and the current date.
- Creates files with the specified size.
- Checks the available disk space before each write and stops if it drops below 1GB (see the sketch below).
- Logs all created files and folders in `part1.log`.
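A minimal sketch of such a free-space guard, assuming GNU coreutils `df`; the function name and threshold handling are illustrative, not the actual `01/main.sh` code:

```bash
# Stop when less than 1GB (1048576 KB) remains on the target filesystem.
check_free_space() {
    local free_kb
    free_kb=$(df -k --output=avail "$1" | tail -n 1)
    if (( free_kb < 1048576 )); then
        echo "Free space below 1GB, stopping." >&2
        exit 0
    fi
}

check_free_space /opt/test   # the target path from the usage example above
```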
[Screenshot of the script in action]

[Screenshot of the generated files and folders]

Write a bash script to create a large number of files and folders in random locations on the file system to simulate a disk-clogging scenario. The script should stop when the free disk space drops to 1GB.
The 02/main.sh script was created for this purpose.
Script usage:
```
./02/main.sh [folder_letters] [file_letters.extension_letters] [file_size_mb]
```
Example:
```
./02/main.sh az az.az 3Mb
```
The script performs the following actions:
- Generates folder names with the specified letters and the current date.
- Creates up to 100 subfolders in random locations (avoiding `/bin` and `/sbin`).
- Creates a random number of files in each folder.
- Checks for available disk space and stops if it's less than 1GB.
- Logs all created files and folders in `part2.log`.
- Records the start and end times of the script and calculates the total execution time (see the sketch below).
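A sketch of the random-location and timing logic, assuming GNU `find` and `shuf` are available; variable names and the exclusion patterns are illustrative:

```bash
start_time=$(date +%s)

# Pick up to 100 random writable directories, skipping /bin and /sbin.
mapfile -t targets < <(find / -type d -writable \
    -not -path "/bin*" -not -path "/sbin*" 2>/dev/null | shuf -n 100)

# ...create date-named subfolders and files inside each target here...

end_time=$(date +%s)
echo "Total execution time: $((end_time - start_time)) seconds" >> part2.log
```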
[Screenshot of the script in action]

[Screenshot of the log file with execution time]

Write a bash script to clean up the files and folders created in Part 2. The script should support three cleaning methods: by log file, by creation date and time, and by name mask.
The 03/main.sh script was created to clean the file system.
Script usage:
```
./03/main.sh [1|2|3]
```
The script supports the following cleaning methods:
- By log file: Deletes all files and folders listed in `part2.log` (see the sketch below).
- By creation date and time: Prompts the user to enter a start and end date/time and deletes all files created within that interval.
- By name mask: Deletes files and folders based on a name mask (e.g., `ddmmyy_...`).
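For illustration, the first two methods might be implemented along these lines (a sketch only; the prompts and exact `find` invocation are assumptions, not the actual `03/main.sh` code):

```bash
# Method 1: remove every path recorded in the log file.
while IFS= read -r path; do
    rm -rf -- "$path"
done < part2.log

# Method 2: remove files created within a user-supplied interval,
# using GNU find's -newermt tests (destructive; shown purely for illustration).
read -rp "Start (YYYY-MM-DD HH:MM): " start_ts
read -rp "End   (YYYY-MM-DD HH:MM): " end_ts
find / -type f -newermt "$start_ts" ! -newermt "$end_ts" -delete 2>/dev/null
```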
[Screenshot of the cleaning script in action - Method 1]

[Screenshot of the cleaning script in action - Method 2]

[Screenshot of the cleaning script in action - Method 3]

Write a bash script or a C program to generate 5 Nginx log files in the combined format. Each log should contain information for one day with a random number of entries (100-1000).
The 04/main.sh script was created to generate the Nginx logs.
The script generates 5 log files (`access_YYYY-MM-DD.log`) with the following randomized data for each entry (a sketch of one entry follows the list):
- IP address
- Response code (200, 201, 400, 401, 403, 404, 500, 501, 502, 503)
- HTTP method (GET, POST, PUT, PATCH, DELETE)
- Timestamp
- Request URL
- User-Agent
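Each line follows the Nginx combined format: `$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"`. A sketch of emitting one randomized entry (the helper arrays and URL/User-Agent values are illustrative, not the actual `04/main.sh` code):

```bash
codes=(200 201 400 401 403 404 500 501 502 503)
methods=(GET POST PUT PATCH DELETE)

ip="$((RANDOM % 223 + 1)).$((RANDOM % 256)).$((RANDOM % 256)).$((RANDOM % 256))"
code=${codes[RANDOM % ${#codes[@]}]}
method=${methods[RANDOM % ${#methods[@]}]}
ts=$(date '+%d/%b/%Y:%H:%M:%S %z')
url="/page$((RANDOM % 100))"
agent="Mozilla/5.0 (X11; Linux x86_64)"

echo "$ip - - [$ts] \"$method $url HTTP/1.1\" $code $((RANDOM % 5000)) \"-\" \"$agent\""
```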
[Screenshot of the generated log files]

Write a bash script to parse the Nginx logs generated in Part 4 using awk. The script should provide different views of the data based on a command-line parameter.
The 05/main.sh script was created to parse the logs.
Script usage:
```
./05/main.sh [1|2|3|4]
```
The script provides the following information (a sketch follows the list):
- All entries sorted by response code.
- All unique IPs.
- All requests with errors (4xx or 5xx response codes).
- All unique IPs from erroneous requests.
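The four views map naturally onto short awk/sort pipelines. A sketch assuming combined-format logs, where the status code is whitespace-separated field 9 (hypothetical, not the exact `05/main.sh` code):

```bash
case "$1" in
    1) awk '{print}' access_*.log | sort -k9,9n ;;             # entries sorted by response code
    2) awk '{print $1}' access_*.log | sort -u ;;              # unique IPs
    3) awk '$9 ~ /^[45]/' access_*.log ;;                      # requests with 4xx/5xx codes
    4) awk '$9 ~ /^[45]/ {print $1}' access_*.log | sort -u ;; # unique IPs of erroring requests
esac
```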
[Screenshot of the script output - Option 1]

[Screenshot of the script output - Option 2]

[Screenshot of the script output - Option 3]

[Screenshot of the script output - Option 4]

Use the GoAccess utility to analyze the generated Nginx logs and view the results in a web interface.
GoAccess was used to generate an HTML report from the log files.
Command:
```
goaccess access_*.log -o report.html --log-format=COMBINED
```
The generated `report.html` provides an interactive dashboard to explore the log data.
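For a continuously updating dashboard, GoAccess can also regenerate the report in real time via its `--real-time-html` flag (the output path here is an assumption):

```bash
goaccess access_*.log -o /var/www/html/report.html --log-format=COMBINED --real-time-html
```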
[Screenshot of the GoAccess HTML report]

Install and configure Prometheus and Grafana to monitor system metrics. Create a Grafana dashboard to display CPU, RAM, disk space, and I/O operations.
- Prometheus and Grafana were installed and configured.
- Node Exporter was installed to collect system metrics.
- A new Grafana dashboard was created to visualize the following metrics from Prometheus:
  - CPU Usage
  - Available RAM
  - Free Disk Space
  - Disk I/O Operations
- The script from Part 2 was executed to generate load on the system.
- The `stress` utility was used to generate CPU, memory, and I/O load (example invocation below).
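For reference, Node Exporter metrics are typically scraped with a minimal `prometheus.yml` entry like the following; the job name and the default port 9100 are conventions, not values confirmed by this report:

```yaml
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["localhost:9100"]  # Node Exporter's default port
```

Load was generated with invocations along the lines of `stress --cpu 4 --io 2 --vm 2 --vm-bytes 256M --timeout 300s`.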
[Screenshot of the Prometheus]
[Screenshot of the Grafana dashboard under load]
Import a pre-built Grafana dashboard and use it to monitor the system under load, including network load.
- The "Node Exporter Quickstart and Dashboard" was imported from the official Grafana Labs website.
- The same load tests from Part 7 were performed.
- `iperf3` was used to generate network load between two virtual machines.
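A typical `iperf3` session between the two machines (`<server_ip>` is a placeholder for the first VM's address):

```bash
# On the first VM (server):
iperf3 -s

# On the second VM (client), streaming to the server for 60 seconds:
iperf3 -c <server_ip> -t 60
```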
[Screenshot of the iperf3 testing environment]
[Screenshot of the dashboard under network load]
Create a custom script to collect basic system metrics (CPU, RAM, disk) and expose them in a Prometheus-compatible format on an HTML page served by Nginx.
- The `09/main.sh` script was created to collect CPU, RAM, and disk usage.
- The script generates an `index.html` file in the Prometheus metrics format.
- Nginx was configured to serve this `index.html` file.
- A cron job was set up to run the script every 3 seconds to refresh the metrics (see the note on cron granularity below).
- Prometheus was configured to scrape the metrics from the custom endpoint.
- The same load tests from Part 7 were performed.
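Two details worth noting. First, the Prometheus text exposition format is plain text, so the collector might look like the following sketch (metric names, the `awk` field positions, and the output path are assumptions, not the actual `09/main.sh`):

```bash
#!/bin/bash
# Collect three gauges and write them in Prometheus exposition format.
cpu_idle=$(top -bn1 | awk '/%Cpu/ {print $8}')   # idle CPU %; field position may vary by locale
mem_free=$(free -m | awk '/^Mem:/ {print $4}')   # free RAM, MB
disk_free=$(df -m / | awk 'NR==2 {print $4}')    # free disk on /, MB

cat > /var/www/html/index.html <<EOF
# HELP custom_cpu_idle_percent Idle CPU percentage.
# TYPE custom_cpu_idle_percent gauge
custom_cpu_idle_percent $cpu_idle
# HELP custom_mem_free_mb Free RAM in megabytes.
# TYPE custom_mem_free_mb gauge
custom_mem_free_mb $mem_free
# HELP custom_disk_free_mb Free disk space in megabytes.
# TYPE custom_disk_free_mb gauge
custom_disk_free_mb $disk_free
EOF
```

Second, cron's minimum scheduling granularity is one minute, so a 3-second refresh is usually approximated with a per-minute entry that loops inside the minute, e.g. (script path is a placeholder):

```
* * * * * for i in $(seq 1 20); do /path/to/09/main.sh; sleep 3; done
```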

