This repository contains the configuration and setup scripts for a high-availability (HA) cluster implementation using multiple technologies including HAProxy, Corosync/Pacemaker, GlusterFS, and RAID storage.
The cluster setup consists of the following components:
- HAProxy: Load balancer with round-robin distribution
- Corosync/Pacemaker: Cluster resource manager for failover
- Virtual IP: Floating IP address (172.20.32.1) for transparent failover
- RAID 1: Software RAID for disk redundancy using mdadm
- GlusterFS: Distributed file system for data replication across nodes
- Ganglia: Cluster monitoring and performance metrics
- Nginx: Reverse proxy for Ganglia web interface
- Docker/Docker Compose: Container orchestration for services
Network configuration (a quick reachability check follows this list):

- Backend Servers:
  - Server1: 192.168.32.121:80
  - Server2: 192.168.32.122:80
- Virtual IP: 172.20.32.1/24
- Ganglia Web Interface: port 82
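A simple way to confirm the addressing above is to send a few requests through the virtual IP and check that they succeed; this sketch assumes the cluster is already running and the backends serve HTTP on port 80:

```bash
# Send a few requests through the virtual IP; with round-robin balancing,
# alternating requests should be served by Server1 and Server2.
for i in 1 2 3 4; do
  curl -s -o /dev/null -w "%{http_code}\n" http://172.20.32.1/
done
```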
Repository layout:

```
setup/
├── docker-compose.yml        # Container services configuration
├── glusterfs-setup.sh        # GlusterFS installation and setup
├── glusterfs.sh              # GlusterFS operations script
├── haproxy-setup.sh          # HAProxy and Pacemaker setup
├── mount-glusterfs.sh        # GlusterFS mounting script
├── raid-setup.sh             # RAID 1 configuration
├── web-setup.sh              # Docker and web services setup
├── web.sh                    # Web services management
├── conf/
│   ├── corosync-haproxy.conf # Corosync cluster configuration
│   ├── haproxy.cfg           # HAProxy load balancer config
│   └── nginx.conf            # Nginx reverse proxy config
└── ganglia/
    ├── ganglia-vhost.conf    # Ganglia virtual host
    ├── gmetad.conf           # Ganglia metadata daemon config
    ├── gmond.conf            # Ganglia monitoring daemon config
    └── monitor-setup.sh      # Ganglia monitoring setup
```
Prerequisites (a quick pre-flight check follows this list):

- Ubuntu/Debian-based Linux systems
- Root or sudo access
- Multiple nodes for cluster setup
- Additional storage devices for RAID (sdb, sdc)
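A pre-flight check before running the scripts might look like this; the disk names match the prerequisites above, and the node addresses are placeholders, so substitute the IPs of your own cluster members:

```bash
# Confirm the spare disks expected by raid-setup.sh exist and carry no data you need
lsblk /dev/sdb /dev/sdc

# Confirm the cluster nodes can reach each other (replace with your node IPs)
ping -c 1 192.168.32.121
ping -c 1 192.168.32.122
```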
Set up RAID 1 for storage redundancy:
```bash
sudo bash setup/raid-setup.sh
```

This script will (a manual equivalent is sketched after this list):
- Install mdadm
- Create RAID 1 array using /dev/sdb and /dev/sdc
- Format with ext4 filesystem
- Mount to /raid1
- Add to /etc/fstab for persistent mounting
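For reference, the manual steps are roughly the following; this is a sketch that assumes /dev/sdb and /dev/sdc are dedicated to the mirror, and raid-setup.sh remains the authoritative version:

```bash
# Create a two-disk RAID 1 mirror and mount it at /raid1
sudo apt-get install -y mdadm
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /raid1
sudo mount /dev/md0 /raid1

# Persist the array and the mount across reboots
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
echo '/dev/md0 /raid1 ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
```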
Install and configure distributed storage:
```bash
sudo bash setup/glusterfs-setup.sh
```

This will (a volume-creation sketch follows this list):
- Install GlusterFS server
- Enable and start glusterd service
- Create /raid1 directory for GlusterFS mount
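The replicated volume itself is created with commands along these lines; the hostnames (node1, node2), the brick path, and the volume name gv0 are assumptions here, so check glusterfs-setup.sh and glusterfs.sh for the values actually used:

```bash
# Join the second node to the trusted pool (run on node1)
sudo gluster peer probe node2

# Create and start a 2-way replicated volume backed by the RAID mount
# (gluster may warn about replica-2 split-brain risk and bricks on a mount
#  point; append "force" only if you accept those warnings)
sudo gluster volume create gv0 replica 2 \
  node1:/raid1/brick node2:/raid1/brick
sudo gluster volume start gv0
sudo gluster volume info gv0
```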
Configure load balancing and cluster management:
```bash
sudo bash setup/haproxy-setup.sh
```

This script configures (an equivalent pcs sketch follows this list):
- HAProxy load balancer
- Corosync cluster communication
- Pacemaker resource management
- Virtual IP resource (172.20.32.1)
- Resource colocation and constraints
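The Pacemaker side of this is roughly equivalent to the following pcs commands; the resource names match those listed later in this README, but haproxy-setup.sh remains the source of truth:

```bash
# Disable fencing for this simple lab-style cluster
sudo pcs property set stonith-enabled=false

# Floating IP and HAProxy as cluster-managed resources
sudo pcs resource create virtualip ocf:heartbeat:IPaddr2 \
  ip=172.20.32.1 cidr_netmask=24 op monitor interval=30s
sudo pcs resource create haproxy systemd:haproxy op monitor interval=30s

# Keep HAProxy on the node that holds the virtual IP, and start the IP first
sudo pcs constraint colocation add haproxy with virtualip INFINITY
sudo pcs constraint order virtualip then haproxy
```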
Install Docker and prepare container services:
```bash
sudo bash setup/web-setup.sh
```

Install Ganglia monitoring on each node:

```bash
sudo bash setup/ganglia/monitor-setup.sh
```

```bash
# Start containerized services
docker-compose -f setup/docker-compose.yml up -d
```
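To confirm the containers and backends came up (the backend addresses are taken from the network configuration above; adjust if yours differ):

```bash
# List the running services defined in the compose file
docker-compose -f setup/docker-compose.yml ps

# Each backend should answer on port 80
curl -I http://192.168.32.121:80
curl -I http://192.168.32.122:80
```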
Check the status of the cluster components:

```bash
# Check cluster status
sudo pcs status

# Check RAID status
cat /proc/mdstat

# Check GlusterFS status
sudo gluster volume status
```

Monitoring access:

- Ganglia Web Interface: http://your-server:82/ganglia/
- HAProxy Stats: configure stats in haproxy.cfg if needed
- Cluster Status: `sudo pcs status`
HAProxy configuration (conf/haproxy.cfg; an illustrative snippet follows this list):

- Frontend: binds to 0.0.0.0:80
- Backend: round-robin between Server1 (192.168.32.121) and Server2 (192.168.32.122)
- Health checks: enabled for both backend servers
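The relevant part of conf/haproxy.cfg will look roughly like the snippet below; the section and server labels are illustrative, so defer to the shipped file:

```
frontend http_front
    bind 0.0.0.0:80
    default_backend web_back

backend web_back
    balance roundrobin
    server server1 192.168.32.121:80 check
    server server2 192.168.32.122:80 check
```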
Pacemaker resources (verification commands follow this list):

- haproxy: HAProxy service resource
- virtualip: Virtual IP resource (172.20.32.1)
- Constraints: HAProxy and the Virtual IP are colocated
- STONITH: disabled (stonith-enabled=false)
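To verify the resources and constraints, and optionally exercise a failover (node1 is a placeholder for one of your cluster nodes):

```bash
# Show resources and constraints as configured by haproxy-setup.sh
sudo pcs status resources
sudo pcs constraint

# Optional failover test: drain one node, watch the virtual IP move, then restore it
# (older pcs versions use "pcs cluster standby/unstandby" instead)
sudo pcs node standby node1
sudo pcs status
sudo pcs node unstandby node1
```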
Storage layout:

- RAID 1: /dev/md0 mounted at /raid1
- GlusterFS: distributed across cluster nodes (a client-mount sketch follows this list)
- Filesystem: ext4 on the RAID volume
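Clients (or the nodes themselves) mount the replicated volume with something like the following; the volume name gv0 and the server hostname are assumptions, and mount-glusterfs.sh holds the real values:

```bash
# Mount the GlusterFS volume on a client
sudo apt-get install -y glusterfs-client
sudo mkdir -p /mnt/gluster
sudo mount -t glusterfs node1:/gv0 /mnt/gluster

# Persist the mount across reboots
echo 'node1:/gv0 /mnt/gluster glusterfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab
```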
Troubleshooting:

- Cluster Communication Problems

  ```bash
  sudo systemctl restart corosync pacemaker
  sudo pcs cluster start --all
  ```

- RAID Array Issues

  ```bash
  # Check RAID status
  cat /proc/mdstat
  # Detailed array info
  sudo mdadm --detail /dev/md0
  ```

- GlusterFS Mount Issues

  ```bash
  # Check GlusterFS processes
  sudo systemctl status glusterd
  # Restart if needed
  sudo systemctl restart glusterd
  ```

- HAProxy Not Starting

  ```bash
  # Check configuration syntax
  sudo haproxy -c -f /etc/haproxy/haproxy.cfg
  # Check logs
  sudo journalctl -u haproxy
  ```
Log locations:

- HAProxy: /var/log/haproxy.log
- Pacemaker: /var/log/pacemaker/pacemaker.log
- Corosync: /var/log/corosync/corosync.log
- Docker: `docker-compose logs`
Routine maintenance:

- Monitor cluster health: `sudo pcs status`
- Check RAID status: `cat /proc/mdstat`
- Review logs: check service logs regularly
- Update containers: `docker-compose pull && docker-compose up -d`
- Configuration backup: copy all config files from `setup/conf/`
- Data backup: back up GlusterFS volumes
- RAID backup: consider an external backup of /raid1
Security considerations (an example firewall rule set follows this list):

- Ensure proper firewall rules for cluster communication
- Secure access to the Ganglia web interface
- Apply regular security updates to all components
- Consider SSL/TLS termination at the HAProxy level
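As an example, a ufw rule set covering the default ports used by these components might look like this; verify each port against your actual configs before applying:

```bash
sudo ufw allow 80/tcp             # HAProxy frontend
sudo ufw allow 82/tcp             # Ganglia web interface (restrict by source if possible)
sudo ufw allow 2224/tcp           # pcsd (cluster management)
sudo ufw allow 5405/udp           # Corosync totem traffic
sudo ufw allow 24007:24008/tcp    # GlusterFS daemon and management
sudo ufw allow 49152:49251/tcp    # GlusterFS brick ports
sudo ufw allow 8649/udp           # Ganglia gmond metric exchange
```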
Performance tuning (illustrative commands follow this list):

- Adjust HAProxy timeout values based on application needs
- Monitor Ganglia metrics for performance bottlenecks
- Tune GlusterFS volume settings for the workload
- Revisit RAID parameters if storage becomes a bottleneck (chunk size applies to striped RAID levels, not RAID 1)
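Illustrative starting points for the GlusterFS side (the volume name gv0 is a placeholder; measure before and after any change, and adjust HAProxy timeouts in the defaults section of haproxy.cfg):

```bash
# Enlarge the read cache and I/O thread pool for busier workloads
sudo gluster volume set gv0 performance.cache-size 256MB
sudo gluster volume set gv0 performance.io-thread-count 16
```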
This project is intended for educational and system administration purposes.