Welcome to Aether - a Python-powered solution for efficient OpenStack resource management and monitoring. Named after the fifth element in ancient philosophy that fills the "space" above us, Aether brings clarity to your cloud environment. This web application simplifies complex resource allocation tasks with an intuitive interface, powerful automation features, and comprehensive data visualization, with data refreshed automatically by scheduled cron jobs.
- Key Features
- Technology Stack
- System Architecture
- Project Structure
- Getting Started
- Development Guide
- Deployment
- Data Collection
- Troubleshooting
- Contributing
- License
## Key Features

| Feature | Description |
|---|---|
| 🖥️ Instance Management | Comprehensive instance listing with advanced search, filtering, and migration tools |
| 💽 Volume Management | Volume listing with usage statistics and a volume usage prediction calculator |
| 📦 Flavor Catalog | Detailed listing of available instance flavors with resource specifications |
| 🎯 Resource Allocation | Placement analysis, compute node monitoring, and resource reservation system |
| 🔄 Data Synchronization | Automated data collection and updates every 2 hours via cron jobs |
| 🔍 Advanced Filtering | Powerful search capabilities with regex support and DataTables integration |
| 📤 Data Export | Easy export of instance and allocation data to various formats |
| 🔐 User Authentication | Secure login system with persistent session management |
| 📊 Data Visualization | Interactive charts and visual representation of resource allocation |
| 📱 Responsive Design | Mobile-friendly interface with adaptive layouts |
| 🌙 Dark Mode | Elegant dark theme for reduced eye strain during night operations |
| ⚡ Performance Optimized | Efficient file-based storage with no database dependencies |
## Technology Stack

- **Frontend:**
  - HTML5, CSS3, JavaScript (ES6+)
  - DataTables.js for interactive tables with advanced features
  - Tailwind CSS for modern, responsive styling
  - Custom CSS for theming and dark mode support
  - Chart.js for interactive data visualization

- **Backend:**
  - Python 3.x
  - Flask web framework for lightweight, efficient serving
  - Flask-Login for secure authentication and session management
  - Pandas for powerful data processing and transformation
  - Matplotlib for server-side chart generation

- **Data Collection:**
  - Bash scripts for OpenStack CLI interaction
  - OpenStack API for resource data retrieval
  - SSH for secure compute node data collection
  - Ceph integration for storage metrics

- **Data Storage:**
  - Lightweight file-based storage (CSV, TXT, JSON)
  - No database required, for simplicity and portability

- **Deployment:**
  - Docker containerization for consistent environments and easy deployment
  - Systemd service for application management and auto-restart
  - Cron jobs for automated data collection and synchronization
  - SSH for secure data transfer between collection and web servers
## System Architecture

The application follows a modular, layered architecture designed for efficiency and maintainability:
- **Data Collection Layer:**
  - Bash scripts interact with the OpenStack CLI to collect comprehensive resource data
  - SSH connections to compute nodes gather allocation ratios and configuration details
  - Ceph integration provides storage metrics and utilization data

- **Data Processing Layer:**
  - Python modules process and transform the raw OpenStack data
  - Pandas handles data manipulation, filtering, and preparation
  - Custom utilities handle data formatting and conversion

- **Web Application Layer:**
  - Flask application with blueprint-based modular organization (see the sketch below)
  - RESTful API endpoints for dynamic data retrieval
  - Authentication and session management for security

- **Presentation Layer:**
  - Responsive HTML templates with Jinja2 templating
  - JavaScript for interactive features and dynamic content updates
  - DataTables for advanced table functionality
  - Chart.js for interactive data visualization
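To make the Web Application Layer concrete, here is a minimal, self-contained sketch of the blueprint pattern the app is organized around. The blueprint and route names are illustrative, not Aether's actual ones; the real wiring lives in `app.py` and `routes/__init__.py`.

```python
# Minimal illustration of the blueprint-based layout (names are illustrative;
# Aether's real wiring lives in app.py and routes/__init__.py).
from flask import Blueprint, Flask, jsonify
from flask_login import LoginManager

# One blueprint per feature area, as in routes/instance.py, routes/volume.py, ...
instance_bp = Blueprint("instance", __name__, url_prefix="/instances")

@instance_bp.route("/api/list")
def list_instances():
    # In Aether this would serve rows parsed from the pipe-delimited data/aio.csv
    return jsonify([{"Name": "example-instance", "Host": "compute-01"}])

app = Flask(__name__)
app.secret_key = "change-me"         # fixed SECRET_KEY, as in config.py
login_manager = LoginManager(app)    # session management layer

app.register_blueprint(instance_bp)  # repeated for each feature blueprint

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5005)  # HOST/PORT defaults from config.py
```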
The end-to-end data flow:

- **Data Collection:** Scheduled execution of the `get-data-aio.sh` script collects data from the OpenStack environment every 2 hours
- **Data Verification:** `check-placement.sh` and `check-instance-ids.sh` verify data consistency
- **Data Transfer:** Collected data is securely transferred to the web server
- **Application Processing:** The Flask application processes and transforms the data
- **User Interface:** The web interface presents the data with interactive visualizations
- **User Interaction:** Users can filter, search, and analyze the resource data
Data Update Schedule: The application updates data via automated cron jobs, ensuring fresh information while maintaining system performance.
```bash
# Cron job configuration example - runs every 2 hours
11 */2 * * * /bin/bash /home/ubuntu/workdir/scripts/aether/get-data-aio.sh >> /home/ubuntu/workdir/scripts/aether/get-data-aio.log 2>&1
```

## Project Structure

```
aether/
├── app.py                      # Main application entry point
├── config.py                   # Application configuration
├── requirements.txt            # Python dependencies
├── get-data-aio.sh             # Main data collection script
├── check-placement.sh          # Placement verification script
├── check-instance-ids.sh       # Instance ID verification script
├── aether.service              # Systemd service file
├── data/                       # Data directory (created at runtime)
│   ├── aio.csv                 # Instance data with project, flavor, and host info
│   ├── allocation.txt          # Resource allocation data from hypervisors
│   ├── cephdf.txt              # Ceph storage metrics and utilization
│   ├── flavors.csv             # Flavor definitions with resource specifications
│   ├── ratio.txt               # CPU/RAM allocation ratios from compute nodes
│   ├── volumes.json            # Volume data with size and attachment info
│   ├── users.json              # User credentials for authentication
│   ├── reserved.json           # Reserved resources data for capacity planning
│   ├── placement_diff.json     # Placement allocation verification results
│   └── instance_ids_check.json # Instance ID verification results
├── models/                     # Data models
│   ├── __init__.py             # User model and model imports
│   └── data_host.py            # Host data model for compute resources
├── routes/                     # Route handlers organized by feature
│   ├── __init__.py             # Blueprint registration
│   ├── auth.py                 # Authentication routes (login/logout)
│   ├── allocation.py           # Resource allocation routes
│   ├── compute.py              # Compute node management routes
│   ├── flavor.py               # Flavor listing routes
│   ├── instance.py             # Instance management routes
│   └── volume.py               # Volume management routes
├── static/                     # Static assets
│   ├── DataTables/             # DataTables library for interactive tables
│   ├── chartjs/                # Chart.js library for data visualization
│   ├── tailwind.min.css        # Tailwind CSS framework
│   ├── modern-theme.css        # Custom theme styles with dark mode
│   ├── index.css               # Home page styles
│   ├── list-instances.css      # Instance page styles
│   ├── volumes.css             # Volume page styles
│   ├── allocation.css          # Allocation page styles
│   └── results/                # Generated plots and visualization results
├── templates/                  # HTML templates with Jinja2
│   ├── allocation.html         # Resource allocation page
│   ├── index.html              # Home page with migration tools
│   ├── list_all_flavors.html   # Flavor catalog page
│   ├── list_all_instances.html # Instance listing page
│   ├── login.html              # Authentication page
│   ├── navbar.html             # Navigation component
│   └── volumes.html            # Volume management page
└── utils/                      # Utility functions
    ├── __init__.py             # Utility imports
    ├── data_utils.py           # Data processing utilities
    ├── file_utils.py           # File handling utilities
    └── format_utils.py         # Data formatting and conversion utilities
```
## Getting Started

### Prerequisites

- Python 3.x (3.8+ recommended)
- pip package manager
- Access to an OpenStack environment with admin privileges (for data collection)
- OpenStack CLI tools installed and configured
- SSH access to compute nodes (for ratio collection)
- Sudo privileges for systemd service setup (production deployment)
### Installation

- Clone the repository:

  ```bash
  git clone https://github.com/Pepryan/openstack-resource.git aether
  cd aether
  ```

- Create and activate a virtual environment:

  ```bash
  python3 -m venv venv-opre
  source venv-opre/bin/activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Create the required directory structure:

  ```bash
  mkdir -p data static/results
  chmod 750 data  # Secure the data directory
  ```

- Create a `users.json` file for authentication:

  ```bash
  cat > data/users.json << EOF
  {
      "admin": "your-secure-password",
      "user1": "another-password"
  }
  EOF
  chmod 640 data/users.json  # Restrict access to the credentials file
  ```

- Configure application settings in `config.py`:
  The application uses a direct configuration approach in `config.py` for better session persistence and simplified deployment. Key settings include:

  ```python
  # Security settings
  SECRET_KEY = 'your-secure-secret-key'  # Fixed key for session persistence
  DEBUG = False                          # Set to True for development
  HOST = "0.0.0.0"                       # Listen on all interfaces
  PORT = 5005                            # Application port

  # Session configuration
  SESSION_PERMANENT = True
  PERMANENT_SESSION_LIFETIME_DAYS = 30   # Adjust session duration as needed

  # File paths (automatically configured)
  DATA_DIR = 'data'
  AIO_CSV_PATH = os.path.join(DATA_DIR, 'aio.csv')
  USERS_FILE_PATH = os.path.join(DATA_DIR, 'users.json')
  # ... other file paths

  # Constants
  CORE_COMPUTE = 48            # Number of cores per compute node (adjust to your environment)
  CEPH_ERASURE_CODE = 1.5      # Adjust based on your Ceph configuration
  CEPH_TOTAL_SIZE_TB = 6246.4  # Update with your Ceph total size
  CSV_DELIMITER = '|'          # Delimiter for CSV files
  ```

  Configuration benefits:

  - No environment file dependencies
  - Consistent session management
  - Simplified deployment process
  - All settings in one location
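To see why the constants above matter, here is a back-of-the-envelope sketch of the capacity math they plausibly feed. The exact formulas live in the allocation routes, so treat this as an assumption rather than the app's verbatim logic:

```python
# Rough capacity arithmetic using the config.py constants. Whether Aether
# computes capacity exactly this way is an assumption; the authoritative
# formulas live in the allocation/compute route handlers.
CORE_COMPUTE = 48            # physical cores per compute node (config.py)
CPU_ALLOCATION_RATIO = 4.0   # example overcommit ratio, as found in data/ratio.txt
CEPH_TOTAL_SIZE_TB = 6246.4  # raw Ceph size (config.py)
CEPH_ERASURE_CODE = 1.5      # erasure-code overhead factor (config.py)

# Schedulable vCPUs per node = physical cores x overcommit ratio
vcpus_per_node = CORE_COMPUTE * CPU_ALLOCATION_RATIO     # 192.0

# Usable Ceph capacity = raw size / erasure-code overhead
usable_ceph_tb = CEPH_TOTAL_SIZE_TB / CEPH_ERASURE_CODE  # ~4164.3

print(f"{vcpus_per_node:.0f} vCPUs per node, {usable_ceph_tb:.1f} TB usable Ceph")
```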
- Set up OpenStack credentials for data collection:

  ```bash
  # Create or ensure you have a valid OpenStack RC file
  cat > ~/admin-openrc << EOF
  #!/bin/bash
  export OS_AUTH_URL=https://your-openstack-auth-url:5000/v3
  export OS_PROJECT_NAME="admin"
  export OS_USER_DOMAIN_NAME="Default"
  export OS_PROJECT_DOMAIN_NAME="Default"
  export OS_USERNAME="admin"
  export OS_PASSWORD="your-openstack-admin-password"
  export OS_REGION_NAME="RegionOne"
  export OS_INTERFACE=public
  export OS_IDENTITY_API_VERSION=3
  EOF
  chmod 600 ~/admin-openrc  # Secure the credentials file
  ```

- Modify the data collection script to match your environment:

  ```bash
  # Edit get-data-aio.sh and update the instance_server variable
  vim get-data-aio.sh
  # Change: instance_server="172.18.218.129:~/openstack-resource/data"
  # To match your web server's IP and path
  ```

### Running the Application (Development)

```bash
# Start the application in development mode
source venv-opre/bin/activate
python app.py
```

Access the application at `http://localhost:5005`.
### Running as a Service (Production)

```bash
# Copy the systemd service file
sudo cp aether.service /etc/systemd/system/

# Edit the service file to match your installation path
sudo vim /etc/systemd/system/aether.service

# Reload systemd, then enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable aether
sudo systemctl start aether
```

Access the application at `http://your-server-ip:5005`.
## Development Guide

### Setting Up a Development Environment

- Fork and clone the repository

- Set up a virtual environment and install dependencies

- Enable debug mode in `config.py`:

  ```python
  DEBUG = True
  ```

  Note: The application uses direct configuration in `config.py` - no environment files are needed.

- Create sample data files for testing:

  ```bash
  # Create sample data files with realistic structure
  mkdir -p data static/results

  # Create sample instance data
  cat > data/aio.csv << EOF
  Project|ID|Name|Status|Power State|Networks|Image Name|Image ID|Flavor Name|Flavor ID|Host|CPU|RAM
  admin|12345|test-instance|ACTIVE|Running|net=192.168.1.100|Ubuntu 20.04|abcdef|m1.medium|98765|compute-01|2|4G
  EOF

  # Create sample allocation data
  echo "1 compute-01 enabled up 15 10 20480 10240" > data/allocation.txt

  # Create sample flavor data
  cat > data/flavors.csv << EOF
  ID|Name|RAM|Disk|Ephemeral|VCPUs|Is Public|Swap|RXTX Factor|Properties
  98765|m1.medium|4096|40|0|2|True|0|1.0|hw_rng:allowed=True
  EOF

  # Create sample ratio data
  echo "compute-01, 4, 1.5" > data/ratio.txt

  # Create sample Ceph data
  cat > data/cephdf.txt << EOF
  --- RAW STORAGE ---
  CLASS SIZE AVAIL USED RAW USED %RAW USED
  TOTAL 6246G 4500G 1746G 1746G 27.95
  EOF

  # Create sample volume data
  echo '[{"ID":"vol-123","Name":"test-volume","Status":"in-use","Size":10,"Bootable":"true"}]' > data/volumes.json

  # Create sample user data
  echo '{"admin":"password"}' > data/users.json

  # Create sample reserved data
  echo '{"compute-01":{"CPU":"2","RAM":"4096","Kebutuhan":"Reserved for maintenance"}}' > data/reserved.json
  ```
### Application Structure

The application follows a modular blueprint-based structure:
- **Models:** Data structures and user authentication (see the sketch after this list)
  - `models/data_host.py`: Compute host data model
  - `models/__init__.py`: User model and authentication

- **Routes:** HTTP route handlers organized by feature
  - `routes/auth.py`: Authentication routes
  - `routes/compute.py`: Compute node management
  - `routes/instance.py`: Instance listing and management
  - `routes/volume.py`: Volume management
  - `routes/allocation.py`: Resource allocation
  - `routes/flavor.py`: Flavor catalog

- **Templates:** HTML templates with Jinja2 templating
  - Layout templates (`navbar.html`)
  - Feature-specific templates

- **Static:** CSS, JavaScript, and other static assets
  - Third-party libraries (DataTables, Chart.js)
  - Custom CSS for theming and responsive design

- **Utils:** Utility functions for data processing
  - `utils/data_utils.py`: Data processing functions
  - `utils/file_utils.py`: File operations
  - `utils/format_utils.py`: Data formatting
### Adding a New Feature

- Create a new blueprint in `routes/__init__.py`:

  ```python
  new_feature_bp = Blueprint('new_feature', __name__)
  blueprints.append(new_feature_bp)
  ```

- Create a new route file in the `routes/` directory:

  ```python
  # routes/new_feature.py
  from flask import render_template, request, jsonify
  from flask_login import login_required
  from routes import new_feature_bp
  import config

  @new_feature_bp.route('/')
  @login_required
  def index():
      """New feature main page"""
      return render_template('new_feature.html')

  @new_feature_bp.route('/api/data')
  @login_required
  def get_data():
      """API endpoint for new feature data"""
      # Process and return data
      return jsonify({"data": "example"})
  ```

- Create a template in the `templates/` directory:

  ```html
  <!-- templates/new_feature.html -->
  <!DOCTYPE html>
  <html lang="en">
  <head>
      <meta charset="UTF-8">
      <meta name="viewport" content="width=device-width, initial-scale=1.0">
      <title>New Feature</title>
      <link rel="stylesheet" href="static/tailwind.min.css">
      <link rel="stylesheet" href="static/modern-theme.css">
      <link rel="stylesheet" href="static/new-feature.css">
  </head>
  <body>
      {% include 'navbar.html' %}
      <div class="main-content">
          <main class="mx-4 mt-8">
              <h1 class="text-2xl font-bold mb-4">New Feature</h1>
              <!-- Feature content here -->
          </main>
      </div>
      <script src="static/new-feature.js"></script>
  </body>
  </html>
  ```

- Update the navigation in `templates/navbar.html`:

  ```html
  <!-- Add to the navigation links -->
  <a href="/new-feature" class="nav-link">New Feature</a>
  ```

- Create CSS and JS files if needed:

  ```bash
  touch static/new-feature.css static/new-feature.js
  ```
### Styling and Theming

The application uses a combination of Tailwind CSS and custom CSS for styling:

- `modern-theme.css`: Main theme styles and dark mode support
- Page-specific CSS: Individual styling for each page

To modify the theme:

- Global theme changes:

  ```css
  /* static/modern-theme.css */
  :root {
      --primary-color: #3b82f6;    /* Change primary color */
      --secondary-color: #10b981;  /* Change secondary color */
      /* Other theme variables */
  }
  ```

- Page-specific styling:

  ```css
  /* static/new-feature.css */
  .feature-card {
      border-radius: 0.5rem;
      box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
      /* Other styles */
  }
  ```

- Dark mode customization:

  ```css
  /* static/modern-theme.css */
  [data-theme="dark"] {
      --bg-color: #121212;
      --text-color: #f3f4f6;
      /* Other dark theme variables */
  }
  ```
### Testing

Manual testing should be performed for all components:

- Data collection scripts:

  ```bash
  # Test data collection with debug output
  bash -x ./get-data-aio.sh
  ```

- Data processing functions:

  ```python
  # Add debug prints to verify data processing
  print(f"Processing data: {data}")
  ```

- UI testing across browsers and devices:
  - Test on Chrome, Firefox, and Safari
  - Test on desktop and mobile devices
  - Verify responsive design at different screen sizes

- Authentication and session testing:
  - Verify login persistence with the "Remember me" option
  - Test session timeout behavior
  - Verify secure access to protected routes
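The project ships no automated test suite, but a lightweight smoke test with Flask's built-in test client can complement the manual checks above. This sketch assumes `app.py` exposes a module-level `app` object and that the login page lives at `/login`; adjust both to the actual names.

```python
# tests/test_smoke.py - a hedged sketch; assumes app.py exposes a module-level
# `app` and that the login page lives at /login (verify against routes/auth.py).
from app import app

def test_login_page_loads():
    """The login page should be reachable without authentication."""
    client = app.test_client()
    response = client.get("/login")
    assert response.status_code == 200

def test_protected_route_requires_login():
    """Anonymous requests to protected pages should redirect (302) or deny (401)."""
    client = app.test_client()
    response = client.get("/")
    assert response.status_code in (302, 401)
```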
## Deployment

### Standard Deployment

- Clone the repository on your production server:

  ```bash
  git clone https://github.com/Pepryan/openstack-resource.git /home/ubuntu/aether
  cd /home/ubuntu/aether
  ```

- Set up a virtual environment and install dependencies:

  ```bash
  python3 -m venv venv-opre
  source venv-opre/bin/activate
  pip install -r requirements.txt
  ```

- Configure the application for production:

  ```bash
  # Edit config.py for production settings
  vim config.py
  ```

  Key production settings in `config.py`:

  ```python
  # Use a fixed secret key for session persistence
  SECRET_KEY = 'your-fixed-production-secret-key'  # Change this!
  DEBUG = False
  HOST = "0.0.0.0"
  PORT = 5005

  # Session settings (already configured)
  SESSION_PERMANENT = True
  PERMANENT_SESSION_LIFETIME_DAYS = 30

  # File paths and constants (pre-configured)
  DATA_DIR = 'data'
  CORE_COMPUTE = 48            # Adjust to your environment
  CEPH_TOTAL_SIZE_TB = 6246.4  # Update with your Ceph size
  ```

  Production benefits:

  - No environment file management required
  - Consistent configuration across deployments
  - Simplified container deployments

- Create and secure the data directory:

  ```bash
  mkdir -p data static/results
  chmod 750 data
  ```

- Set up user authentication:

  ```bash
  # Create users.json with secure passwords
  vim data/users.json
  # Add user credentials in JSON format
  chmod 640 data/users.json
  ```
### Docker Deployment

For easier deployment and environment consistency, you can use Docker. The application is designed to work seamlessly with containers using the built-in configuration system.

The application has been tested with the following versions:

- Docker: v27.4.1 or newer
- Docker Compose: v2.35.1 or newer (Docker Compose V2)

Note: Docker Compose V1 (the `docker-compose` command) may have compatibility issues with newer Docker versions. We recommend Docker Compose V2 (the `docker compose` command, without the hyphen).

Docker-specific design choices:

- **No environment files:** Configuration is handled directly in `config.py`
- **Persistent sessions:** A fixed secret key ensures session persistence across container restarts
- **Volume mounting:** The data directory is mounted for easy data updates
- **Simplified deployment:** No complex environment variable management

If you encounter compatibility issues with Docker Compose V1, install Docker Compose V2:

```bash
# For Ubuntu/Debian
sudo apt-get update && sudo apt-get install -y docker-compose-plugin

# Verify installation
docker compose version
```
Quick start:

- Clone the repository:

  ```bash
  git clone https://github.com/Pepryan/openstack-resource.git aether
  cd aether
  ```

- Build and start the Docker container:

  ```bash
  # Using Docker Compose V2 (recommended)
  docker compose up -d

  # Or using Docker Compose V1 (legacy)
  docker-compose up -d
  ```

- Access the application at `http://your-server-ip:5005`
The Docker setup includes:

- `Dockerfile`: Defines the container image with all dependencies
- `docker-compose.yml`: Configures the application service with volumes and networking
- `docker-entrypoint.sh`: Initializes the container environment
- `docker-data-collector.sh`: Collects data from OpenStack and transfers it to the container
The application uses two volume bindings to manage data:

- **Data volume:** `./data:/app/data`
  - Maps the local `data` directory to `/app/data` in the container
  - Contains all OpenStack resource data (instances, volumes, flavors, etc.)
  - This is where the application reads data from

- **Results volume:** `./static/results:/app/static/results`
  - Maps the local `static/results` directory to `/app/static/results` in the container
  - Contains generated plots and visualization results
  - This is where the application stores output files
When using Docker, the data collection process works as follows:

- On your OpenStack server, run the data collection script:

  ```bash
  # Make the script executable
  chmod +x docker-data-collector.sh

  # Run the script
  ./docker-data-collector.sh
  ```

- The script will:
  - Collect data from OpenStack using the standard collection scripts
  - Transfer the data directly to the Docker container using `docker cp`
  - Skip any manual application restart - none is needed
To run the application in a different environment or server:

- **Option 1: Using Docker Hub (recommended for production)**

  a. Build and push the image to Docker Hub:

  ```bash
  # Build the image
  docker build -t yourusername/aether:latest .

  # Push to Docker Hub
  docker push yourusername/aether:latest
  ```

  b. On the target server, create the necessary directories:

  ```bash
  mkdir -p data static/results
  ```

  c. Create a `docker-compose.yml` file:

  ```yaml
  version: '3.8'

  services:
    app:
      image: yourusername/aether:latest
      container_name: openstack-resource
      restart: unless-stopped
      ports:
        - "5005:5005"
      volumes:
        - ./data:/app/data
        - ./static/results:/app/static/results
      environment:
        - SECRET_KEY=your-secure-secret-key
        - DEBUG=False
        - HOST=0.0.0.0
        - PORT=5005
  ```

  d. Start the container:

  ```bash
  docker compose up -d
  ```

  e. Transfer initial data to the new environment:

  ```bash
  # Create a minimal users.json file
  echo '{"admin": "admin"}' > data/users.json

  # Transfer OpenStack data from your collection server
  scp user@openstack-server:/path/to/data/* ./data/
  ```

- **Option 2: Using local image export/import**

  a. Save the Docker image to a file:

  ```bash
  docker save -o aether-image.tar yourusername/aether:latest
  ```

  b. Transfer the image file to the target server:

  ```bash
  scp aether-image.tar user@target-server:/path/to/destination/
  ```

  c. On the target server, load the image:

  ```bash
  docker load -i aether-image.tar
  ```

  d. Follow steps c-e from Option 1 to set up and run the container
When running the application in a different environment, you have several options for data management:

- **Manual data transfer:**
  - Copy your data files to the `data` directory on the new server
  - The application will read data from this directory

- **Automated data collection:**
  - Set up the `docker-data-collector.sh` script on your OpenStack server
  - Configure it to point to your new Docker container
  - Run it manually or via cron job to keep data updated

- **Sample data for testing:**
  - For testing purposes, you can create sample data files as described in the Development Guide
  - This allows you to test the application without an OpenStack environment
Common maintenance commands:

```bash
# Using Docker Compose V2 (recommended)

# View application logs
docker logs openstack-resource

# Restart the container
docker compose restart

# Update the application
git pull
docker compose up -d --build

# Using Docker Compose V1 (legacy)
docker-compose restart
docker-compose up -d --build
```

### Systemd Service Setup

Create a systemd service for automatic startup and management:
- Create the service file:

  ```bash
  sudo vim /etc/systemd/system/aether.service
  ```

- Add the following configuration:

  ```ini
  [Unit]
  Description=Aether - OpenStack Resource Manager
  After=network.target

  [Service]
  Type=simple
  User=ubuntu
  Group=ubuntu
  WorkingDirectory=/home/ubuntu/aether/
  ExecStart=/home/ubuntu/aether/venv-opre/bin/python3 -B /home/ubuntu/aether/app.py
  Restart=on-failure
  RestartSec=5
  StandardOutput=journal
  StandardError=journal
  SyslogIdentifier=aether
  Environment="PYTHONUNBUFFERED=1"

  [Install]
  WantedBy=multi-user.target
  ```

- Enable and start the service (the unit name is `aether`, matching the file above):

  ```bash
  sudo systemctl daemon-reload
  sudo systemctl enable aether
  sudo systemctl start aether
  ```

- Verify the service is running:

  ```bash
  sudo systemctl status aether
  ```

- Check the logs if needed:

  ```bash
  sudo journalctl -u aether -f
  ```
### Cron Job Setup

Set up cron jobs for automated data collection:

- Edit the crontab for the user with OpenStack access:

  ```bash
  crontab -e
  ```

- Add the following line to run every 2 hours:

  ```bash
  # Run data collection every 2 hours (at 11 minutes past the hour)
  # This ensures fresh data while maintaining optimal system performance
  11 */2 * * * /bin/bash /home/ubuntu/workdir/scripts/aether/get-data-aio.sh >> /home/ubuntu/workdir/scripts/aether/get-data-aio.log 2>&1
  ```

- Verify the cron job is scheduled:

  ```bash
  crontab -l
  ```

- Check the log file after the first scheduled run:

  ```bash
  tail -f /home/ubuntu/workdir/scripts/aether/get-data-aio.log
  ```
### Nginx Reverse Proxy

For production environments, it's recommended to use Nginx as a reverse proxy:

- Install Nginx:

  ```bash
  sudo apt update
  sudo apt install nginx
  ```

- Create a site configuration:

  ```bash
  sudo vim /etc/nginx/sites-available/aether
  ```

- Add the following configuration:

  ```nginx
  server {
      listen 80;
      server_name your-server-domain.com;  # Change to your domain or IP

      location / {
          proxy_pass http://127.0.0.1:5005;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
      }
  }
  ```

- Enable the site and restart Nginx:

  ```bash
  sudo ln -s /etc/nginx/sites-available/aether /etc/nginx/sites-enabled/
  sudo nginx -t
  sudo systemctl restart nginx
  ```

- Access the application at `http://your-server-domain.com`
## Data Collection

**Important:** Aether is designed as a scheduled data collection system, not a real-time monitoring tool. Data is collected and updated via automated cron jobs.

- **Update frequency:** Every 2 hours (configurable)
- **Data freshness:** The web interface shows the timestamp of the last data collection
- **Performance:** Optimized for efficiency with minimal impact on OpenStack infrastructure

The application follows a specific data flow architecture with scheduled updates:

- **Data Collection:** Scripts run on the OpenStack server every 2 hours to collect resource data
- **Data Transfer:** Collected data is transferred to the application server
- **Data Processing:** The application processes and visualizes the data
- **Data Storage:** Results are stored in the `static/results` directory
- **Data Refresh:** The web interface displays the most recent data from the last update cycle
- **Timestamp Display:** Each page shows when the data was last updated (see the sketch below)
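One simple way to implement the timestamp display is to read the modification time of a collected data file. Whether Aether derives the timestamp exactly this way is an assumption; the sketch below only illustrates the idea:

```python
# Illustrative: derive a "last updated" timestamp from a data file's mtime.
# Whether Aether does exactly this is an assumption; check the route handlers.
import os
from datetime import datetime

def last_updated(path: str = "data/aio.csv") -> str:
    """Return the data file's modification time as a display string."""
    mtime = os.path.getmtime(path)
    return datetime.fromtimestamp(mtime).strftime("%Y-%m-%d %H:%M")
```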
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  OpenStack API  │────►│  Data Scripts   │────►│   Data Files    │
└─────────────────┘     └─────────────────┘     └────────┬────────┘
                                                         │
                                                         ▼
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Visualization  │◄────│   Flask App     │◄────│  Data Transfer  │
└─────────────────┘     └─────────────────┘     └─────────────────┘
```
The application includes several scripts for collecting and verifying data from OpenStack:

- `get-data-aio.sh`: Main data collection script that gathers instance, flavor, allocation, and volume data
- `check-placement.sh`: Verifies placement allocations to detect inconsistencies between Nova and the Placement API
- `check-instance-ids.sh`: Verifies instance IDs when placement inconsistencies are found
- `docker-data-collector.sh`: Collects data and transfers it directly to a Docker container
This is the primary data collection script. It:

- Sources OpenStack credentials from `~/admin-openrc`
- Collects instance data for all projects using `openstack server list`
- Retrieves flavor information with `openstack flavor list`
- Gathers hypervisor allocation data with `openstack hypervisor list`
- Collects CPU and RAM allocation ratios from compute nodes via SSH
- Retrieves Ceph storage metrics with `ceph df`
- Collects volume data with `openstack volume list` for each project
- Transfers all collected data to the web server
- Triggers placement verification with `check-placement.sh`
- Restarts the web application service
```bash
# Key sections of get-data-aio.sh

# Collect instance data for all projects
for project_name in "${project_names[@]}"; do
    openstack server list --project "$project_name" --limit -1 --long -c ID -c Name -c Status -c "Power State" -c Networks -c "Flavor ID" -c "Flavor Name" -c "Image ID" -c "Image Name" -c Host -f csv | grep -v "ERROR" | sed 's/^"\(.*\)"$/\1/' > temp_aio_project.csv
    # Process and append to main file
    awk -v project="$project_name" -F "|" 'BEGIN {OFS="|"} {print project, $0}' temp_aio_project.csv >> "$output_file"
done

# Collect volume data for each project
project_list=$(openstack project list -f value -c ID -c Name)
while read -r project_id project_name; do
    # Get volumes for this project
    openstack volume list --project "$project_id" -f json > "volumes-$project_id.json"
    # Add project name to each volume
    python3 -c "
import json
with open('volumes-$project_id.json', 'r') as f:
    volumes = json.load(f)
for volume in volumes:
    volume['Project'] = '$project_name'
with open('volumes-$project_id.json', 'w') as f:
    json.dump(volumes, f)
"
    # Merge with all volumes
done <<< "$project_list"
```

This script is specifically designed for Docker deployments. It:
- Sources OpenStack credentials
- Checks that the Docker container is running
- Runs the main data collection script (`get-data-aio.sh`)
- Copies the collected data files directly to the Docker container
- Runs the placement check if needed
- Does not restart the container, as the application detects file changes
```bash
# Key sections of docker-data-collector.sh

# Set the Docker container's data directory
CONTAINER_DATA_DIR="/app/data"
CONTAINER_NAME="openstack-resource"

# Run the original data collection script
./get-data-aio.sh

# Copy the data files to the Docker container
docker cp data/aio.csv $CONTAINER_NAME:$CONTAINER_DATA_DIR/
docker cp data/allocation.txt $CONTAINER_NAME:$CONTAINER_DATA_DIR/
docker cp data/flavors.csv $CONTAINER_NAME:$CONTAINER_DATA_DIR/
docker cp data/ratio.txt $CONTAINER_NAME:$CONTAINER_DATA_DIR/
docker cp data/cephdf.txt $CONTAINER_NAME:$CONTAINER_DATA_DIR/
docker cp data/volumes.json $CONTAINER_NAME:$CONTAINER_DATA_DIR/
```

The application uses several data files, each collected from a specific OpenStack component:
| File | Source | Collection Method | Format | Purpose |
|---|---|---|---|---|
| `aio.csv` | Nova API | `openstack server list` | CSV with pipe delimiter | Instance data with project, flavor, and host information |
| `allocation.txt` | Nova API | `openstack hypervisor list` | Text file | Resource allocation data from hypervisors |
| `cephdf.txt` | Ceph | `ceph df` | Text file | Ceph storage metrics and utilization |
| `flavors.csv` | Nova API | `openstack flavor list` | CSV with pipe delimiter | Flavor definitions with resource specifications |
| `ratio.txt` | Compute Nodes | SSH to read Nova config | Text file | CPU/RAM allocation ratios from compute nodes |
| `volumes.json` | Cinder API | `openstack volume list` | JSON | Volume data with size and attachment info |
| `users.json` | Manual | Created manually | JSON | User credentials for authentication |
| `reserved.json` | Manual | Created manually | JSON | Reserved resources data for capacity planning |
| `placement_diff.json` | Placement API | `check-placement.sh` | JSON | Placement allocation verification results |
| `instance_ids_check.json` | Nova API | `check-instance-ids.sh` | JSON | Instance ID verification results |
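For orientation, loading these files in Python is straightforward, since the pipe-delimited CSVs map directly onto pandas. This is a sketch of the idea, not the project's verbatim loaders (those live in `utils/data_utils.py`):

```python
# Sketch of reading the collected data files; Aether's real loaders live in
# utils/data_utils.py, so treat the details here as illustrative.
import json

import pandas as pd

CSV_DELIMITER = "|"  # matches CSV_DELIMITER in config.py

# Instance and flavor data are pipe-delimited CSV files
instances = pd.read_csv("data/aio.csv", sep=CSV_DELIMITER)
flavors = pd.read_csv("data/flavors.csv", sep=CSV_DELIMITER)

# Volume data is plain JSON produced by `openstack volume list -f json`
with open("data/volumes.json") as f:
    volumes = json.load(f)

print(instances[["Project", "Name", "Status", "Host"]].head())
print(f"{len(volumes)} volumes loaded")
```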
In a standard (non-Docker) environment, data is managed as follows:

- **Collection:** Data is collected on the OpenStack server using `get-data-aio.sh`
- **Transfer:** Data is transferred to the application server using SCP
- **Storage:** Data is stored in the `data` directory on the application server
- **Access:** The Flask application reads data from the `data` directory
- **Visualization:** Results are stored in the `static/results` directory
In a Docker environment, data is managed as follows:

- **Collection:** Data is collected on the OpenStack server using `get-data-aio.sh`
- **Transfer:** Data is transferred directly to the Docker container using `docker cp`
- **Storage:** Data is stored in the `/app/data` directory inside the container
- **Access:** The Flask application reads data from the `/app/data` directory
- **Visualization:** Results are stored in the `/app/static/results` directory
When running the application in a new environment (different server or laptop):

- **Option 1: With OpenStack access**
  - Install the application on the new server
  - Configure OpenStack credentials
  - Run the data collection scripts
  - Data will be stored in the `data` directory

- **Option 2: Without OpenStack access**
  - Install the application on the new server
  - Transfer existing data files to the `data` directory
  - The application will use these static data files
  - Note: Data will not be updated automatically

- **Option 3: Using a Docker image**
  - Pull the Docker image on the new server
  - Create the necessary directories (`data` and `static/results`)
  - Transfer existing data files to the `data` directory
  - Start the Docker container with volume bindings
  - The application will use the data files from the host's `data` directory
The application supports several methods for refreshing data:

- **Manual refresh:**
  - Run the data collection scripts manually
  - Data files will be updated in the `data` directory
  - The application will use the updated data on the next request

- **Automated refresh (cron):**
  - Set up a cron job to run the data collection scripts periodically
  - Example: `11 */2 * * * /bin/bash /path/to/get-data-aio.sh`
  - This refreshes the data every 2 hours

- **Docker refresh:**
  - Run the `docker-data-collector.sh` script on the OpenStack server
  - Data will be updated directly in the Docker container
  - No container restart is needed
### Data Collection Requirements

- **OpenStack CLI access:**
  - OpenStack CLI tools installed (`python-openstackclient`)
  - Admin credentials with access to all projects
  - A valid `admin-openrc` file with authentication details

- **SSH access:**
  - SSH key-based authentication to compute nodes
  - Sudo access on compute nodes to read the Nova configuration

- **Ceph access (if using Ceph storage):**
  - Access to the Ceph CLI tools
  - Proper Ceph authentication configured

- **Network connectivity:**
  - Network access between the collection server and the web server
  - SSH access for secure file transfer
## Troubleshooting

- **Docker Compose Version Compatibility**

  - **Issue:** Error `HTTPConnection.request() got an unexpected keyword argument 'chunked'` when running `docker-compose up`

    ```bash
    # Check Docker and Docker Compose versions
    docker --version
    docker-compose --version

    # Install Docker Compose V2 (recommended solution)
    sudo apt-get update && sudo apt-get install -y docker-compose-plugin

    # Verify installation and use Docker Compose V2
    docker compose version
    docker compose up
    ```

  - **Issue:** Container starts but exits immediately

    ```bash
    # Check container logs
    docker logs openstack-resource

    # Verify volume bindings
    docker inspect openstack-resource

    # Ensure data directory exists and has correct permissions
    mkdir -p data
    chmod 750 data
    ```

  - **Issue:** Cannot access the application after the container starts

    ```bash
    # Check if container is running
    docker ps

    # Check container port bindings
    docker port openstack-resource

    # Check application logs
    docker logs openstack-resource

    # Verify firewall settings
    sudo ufw status
    ```
- **Data Management in Docker**

  - **Issue:** Data not updating in the container

    ```bash
    # Verify docker-data-collector.sh is copying files correctly
    bash -x docker-data-collector.sh

    # Manually copy a file to test
    docker cp data/aio.csv openstack-resource:/app/data/

    # Check the file exists in the container
    docker exec openstack-resource ls -la /app/data
    ```

  - **Issue:** Cannot create or access results in the container

    ```bash
    # Verify results directory exists and has correct permissions
    mkdir -p static/results
    chmod 755 static/results

    # Check volume binding
    docker inspect openstack-resource
    ```
- **Running in Different Environments**

  - **Issue:** Cannot run the container in a new environment

    ```bash
    # Create necessary directories
    mkdir -p data static/results

    # Create minimal users.json
    echo '{"admin": "admin"}' > data/users.json

    # Verify docker-compose.yml exists and is correct
    cat docker-compose.yml

    # Start container with verbose output
    docker compose up
    ```

  - **Issue:** Image not found when running in a new environment

    ```bash
    # Pull image from Docker Hub
    docker pull yourusername/aether:latest

    # Or load image from file
    docker load -i aether-image.tar

    # Verify image exists
    docker images
    ```
- **Data Collection Issues**

  - **OpenStack authentication failures:**

    ```bash
    # Check if OpenStack credentials are valid
    source ~/admin-openrc
    openstack token issue

    # Verify OpenStack endpoints
    openstack endpoint list
    ```

  - **SSH access problems:**

    ```bash
    # Test SSH access to compute nodes
    ssh compute-node-hostname "hostname"

    # Check SSH key permissions
    ls -la ~/.ssh/
    chmod 600 ~/.ssh/id_rsa
    ```

  - **Ceph access issues:**

    ```bash
    # Test Ceph access
    ceph status
    ceph df
    ```

  - **Data transfer failures:**

    ```bash
    # Test SCP connection
    touch test_file
    scp test_file ubuntu@web-server-ip:~/
    ```
- **Application Startup Problems**

  - **Missing dependencies:**

    ```bash
    # Verify Python dependencies
    source venv-opre/bin/activate
    pip list | grep -E "flask|pandas|matplotlib"

    # Install missing dependencies
    pip install -r requirements.txt
    ```

  - **Permission issues:**

    ```bash
    # Check data directory permissions
    ls -la data/

    # Fix permissions if needed
    chmod 750 data/
    chmod 640 data/users.json
    ```

  - **Service configuration:**

    ```bash
    # Check service status
    sudo systemctl status aether

    # View service logs
    sudo journalctl -u aether -n 100

    # Restart service
    sudo systemctl restart aether
    ```
- **UI and Rendering Issues**

  - **Browser cache problems:**
    - Clear the browser cache (Ctrl+F5 or Cmd+Shift+R)
    - Try a different browser to isolate the issue
    - Check the browser console for JavaScript errors (F12)

  - **CSS loading issues:**

    ```bash
    # Check if CSS files exist
    ls -la static/*.css

    # Verify file permissions
    chmod 644 static/*.css
    ```

  - **DataTables initialization:**
    - Check the browser console for DataTables errors
    - Verify the DataTables library is properly loaded
    - Check the column definitions in JavaScript
- **Authentication and Session Problems**

  - **Login failures:**

    ```bash
    # Verify users.json format
    cat data/users.json

    # Ensure it's valid JSON
    python -c "import json; json.load(open('data/users.json'))"
    ```

  - **Session expiration issues:**
    - Check `config.py` for session settings
    - Verify `SECRET_KEY` is consistent (not randomly generated on restart)
    - Check browser cookie settings

  - **"Remember me" not working:**
    - Verify the Flask-Login configuration in `app.py` (see the sketch below)
    - Check the `REMEMBER_COOKIE_DURATION` setting
    - Ensure cookies are not being blocked by the browser
Other common errors:

- **"No module named 'flask'":**

  ```bash
  source venv-opre/bin/activate
  pip install flask
  ```

- **"Permission denied" when accessing data files:**

  ```bash
  # Fix ownership and permissions
  sudo chown -R ubuntu:ubuntu /home/ubuntu/aether/
  chmod -R 750 /home/ubuntu/aether/
  chmod 640 data/users.json
  ```

- **"Connection refused" when accessing the web interface:**

  ```bash
  # Check if the application is running
  ps aux | grep app.py

  # Check firewall settings
  sudo ufw status

  # Allow the port if needed
  sudo ufw allow 5005/tcp
  ```
## Contributing

Contributions to improve Aether are welcome! Please follow these steps:

- Fork the repository
- Create your feature branch: `git checkout -b feature/amazing-feature`
- Make your changes and commit them: `git commit -m 'Add amazing feature'`
- Push to the branch: `git push origin feature/amazing-feature`
- Create a Pull Request
### Contribution Guidelines

- **Code style:** Follow PEP 8 for Python code
- **Documentation:** Add docstrings to functions and classes
- **Commit messages:** Write clear, concise commit messages
- **Testing:** Test your changes thoroughly before submitting
- **Branch naming:** Use descriptive branch names (`feature/`, `bugfix/`, etc.)

### Contribution Workflow

- **Pick an issue:** Start with existing issues or create a new one
- **Discuss:** For major changes, open an issue for discussion first
- **Develop:** Make your changes in a feature branch
- **Test:** Ensure your changes work as expected
- **Submit:** Create a pull request with a clear description
### Code Standards

- **Python code:**
  - Follow the PEP 8 style guide
  - Use meaningful variable and function names
  - Add docstrings to all functions and classes
  - Keep functions small and focused on a single task

- **JavaScript code:**
  - Follow ES6 standards where possible
  - Use camelCase for variable and function names
  - Add comments for complex logic

- **HTML/CSS:**
  - Use consistent indentation (2 or 4 spaces)
  - Follow the BEM naming convention for CSS classes
  - Keep CSS organized by component
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- OpenStack for the cloud infrastructure platform
- Flask for the web framework
- DataTables for the interactive table functionality
- Chart.js for data visualization
- Tailwind CSS for the styling framework



