The application is built on Go v1.23.4 and PostgreSQL. It uses Fiber as the HTTP framework, pgx as the database driver, and sqlx as the query builder.
- Ensure you have Go 1.23 or higher and Task installed on your machine:

  ```shell
  go version && task --version
  ```

- Create a copy of the `.env.example` file and rename it to `.env`:

  ```shell
  cp ./config/.env.example ./config/.env
  ```

  Update the configuration values as needed.
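For reference, a local `.env` might look like the sketch below. The variable names are taken from the production configuration shown later in this document; the values are placeholders you should replace with your own local settings:

```env
# Local development placeholders — adjust to match your setup.
DB_HOST=localhost
DB_PORT=5432
DB_USER=postgres
DB_PASS=postgres
DB_NAME=app
```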
- Install all dependencies:

  ```shell
  task
  ```

- Start the Docker containers:

  ```shell
  task service:up:build
  ```

- Create the database schema:

  ```shell
  task db:connect
  ```

- Run database migrations:

  ```shell
  task migrate:up
  ```
- To stop the Docker containers:

  ```shell
  task service:down
  ```

- To build the Docker containers:

  ```shell
  task service:build
  ```
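The actual `docker-compose.yml` lives in the repository; purely as an illustration of the services these `task service:*` commands manage, a compose file for this stack might look roughly like the following. The service names and image tags are assumptions, inferred from the `db` hostname and the ports mentioned elsewhere in this document:

```yaml
# Illustrative sketch only — consult the real docker-compose.yml in the repo.
services:
  db:                      # matches the "db" hostname used for migrations
    image: postgres:16
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
```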
To check the database connection, use the following command:

```shell
task service:db:connect
```

If you encounter the following error during migration or database connection:

```
error: failed to open database: dial tcp: lookup db on 127.0.0.53:53: server misbehaving
```

it may be due to a DNS resolution issue. You can resolve this by adding `db` to your local `/etc/hosts` file:

```shell
echo "127.0.0.1 db" | sudo tee -a /etc/hosts
```

Then retry:

```shell
task migrate:up
```
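Note that `tee -a` appends unconditionally, so rerunning the hosts fix leaves duplicate entries. A sketch of an idempotent variant is shown below; it targets a scratch file so it is safe to try, and for the real fix you would set `HOSTS_FILE=/etc/hosts` and append via `sudo tee -a` instead of `>>`:

```shell
# Append the "db" entry only if it is not already present.
# HOSTS_FILE defaults to a scratch file so this is safe to experiment with.
HOSTS_FILE="${HOSTS_FILE:-./hosts.scratch}"
touch "$HOSTS_FILE"

add_db_host() {
  grep -qE '^127\.0\.0\.1[[:space:]]+db$' "$HOSTS_FILE" \
    || echo "127.0.0.1 db" >> "$HOSTS_FILE"
}

add_db_host
add_db_host                 # second call is a no-op: entry already exists
grep -c 'db' "$HOSTS_FILE"  # prints 1
```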
- Navigate to the `test` folder and clone the repository:

  ```shell
  git clone https://github.com/ProjectSprint/Batch3Project2TestCase.git
  ```

- Install `k6` (if you don't have it installed): follow the instructions on the k6 installation page to install `k6` on your machine.

- Navigate to the folder where this is extracted/cloned in the terminal and run:

  ```shell
  BASE_URL=http://localhost:8080 make pull-test
  ```

- Ensure that Redis is installed and exposed on port 6379, then run:

  ```shell
  BASE_URL=http://localhost:8080 k6 run load_test.js
  ```
Before proceeding, start the VPN:

```shell
sudo openvpn --config /path/to/your/config.ovpn
```

Replace `/path/to/your/config.ovpn` with the correct path to your `.ovpn` configuration file.
- Update Go modules before building the production binary:

  ```shell
  task
  ```

- Build the application for production:

  ```shell
  task build
  ```

- Upload the binary to your EC2 instance using SCP:

  ```shell
  scp -i /path/to/your-key.pem mybinary ubuntu@<EC2_PUBLIC_IP>:/home/ubuntu/
  ```

  Replace `/path/to/your-key.pem` with the path to your private key, `mybinary` with the binary name, and `<EC2_PUBLIC_IP>` with your EC2 instance's public IP.

- Upload the `.env` configuration file:

  ```shell
  scp -i /path/to/your-key.pem -r config ubuntu@<EC2_PUBLIC_IP>:/home/ubuntu/
  ```

- Log in to your EC2 instance:

  ```shell
  ssh -i /path/to/your-key.pem ubuntu@<EC2_PUBLIC_IP>
  ```

- Make the binary executable:

  ```shell
  chmod +x /home/ubuntu/mybinary
  ```

- Run the binary:

  ```shell
  ./mybinary
  ```
- Update your `.env` file: ensure it contains the correct production database credentials:

  ```env
  DB_HOST=<PRODUCTION_DB_HOST>
  DB_PORT=<PRODUCTION_DB_PORT>
  DB_USER=<PRODUCTION_DB_USER>
  DB_PASS=<PRODUCTION_DB_PASSWORD>
  DB_NAME=<PRODUCTION_DB_NAME>
  ```

- Connect to the production database:

  ```shell
  task db:connect
  ```

- Run migrations:

  ```shell
  task migrate:up
  ```

- Roll back migrations (optional):

  ```shell
  task migrate:down
  ```

  Or force a specific migration version:

  ```shell
  task migrate:force CLI_ARGS=<VERSION>
  ```
To connect to the EC2 instance where Redis is hosted, use the following command:

```shell
task ec2:connect
```

This will prompt you to select which EC2 instance to connect to. Once connected, you can interact with the Redis server.

NOTE: place the key file in the project root.

To connect manually using SSH:

```shell
ssh -i /path/to/your-key.pem ubuntu@<EC2_PUBLIC_IP>
```

Replace `/path/to/your-key.pem` with your private key and `<EC2_PUBLIC_IP>` with the public IP of the EC2 instance.

To check if Redis is running:

```shell
redis-cli ping
```

If Redis is running correctly, it should return:

```
PONG
```
Prometheus UI: after running `docker-compose up`, you can access Prometheus at http://localhost:9090.

Grafana UI: after running `docker-compose up`, you can access Grafana at http://localhost:3000 (default credentials: `admin` / `admin`).

The steps below are performed automatically by `prometheus.yml` and the `grafana` folder inside the `deploy` folder; to set things up manually instead:

Add Prometheus as a data source in Grafana:
- Go to Configuration → Data Sources → Add Data Source → select Prometheus.
- Set the URL to `http://prometheus:9090` (the name of the Prometheus service in `docker-compose.yml`).
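The provisioning that automates this data-source step typically looks like the following file. This is a sketch using Grafana's standard datasource provisioning format; the file path and the actual contents under the `deploy/grafana` folder may differ:

```yaml
# e.g. deploy/grafana/provisioning/datasources/prometheus.yml (illustrative path)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090   # Prometheus service name from docker-compose.yml
    isDefault: true
```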
Create a Grafana dashboard:
You can now create dashboards with Prometheus queries such as:
- `http_requests_total`
- `rate(http_requests_total[1m])`
- `http_duration_seconds`
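The metrics behind these queries come from Prometheus scraping the application. A minimal `prometheus.yml` sketch is shown below; the scrape path, app service name, and port are assumptions, so check the actual file in the `deploy` folder:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: app
    metrics_path: /metrics          # assumed default exposition path
    static_configs:
      - targets: ["app:8080"]       # assumed app service name and port
```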