EasyShop is a modern, full-stack e-commerce platform built with Next.js 14, TypeScript, and MongoDB. It features a beautiful UI with Tailwind CSS, secure authentication, real-time cart updates, and a seamless shopping experience.
- 🎨 Modern and responsive UI with dark mode support
- 🔐 Secure JWT-based authentication
- 🛒 Real-time cart management with Redux
- 📱 Mobile-first design approach
- 🔍 Advanced product search and filtering
- 💳 Secure checkout process
- 📦 Multiple product categories
- 👤 User profiles and order history
- 🌙 Dark/Light theme support
EasyShop follows a three-tier architecture pattern:

**Presentation Tier (Frontend)**
- Next.js React Components
- Redux for State Management
- Tailwind CSS for Styling
- Client-side Routing
- Responsive UI Components

**Application Tier (Backend)**
- Next.js API Routes
- Business Logic
- Authentication & Authorization
- Request Validation
- Error Handling
- Data Processing

**Data Tier (Database)**
- MongoDB Database
- Mongoose ODM
- Data Models
- CRUD Operations
- Data Validation
> [!IMPORTANT]
> Before you begin setting up this project, make sure the following tools are installed and configured properly on your system.
- Install Terraform:

```bash
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install terraform
terraform -v
```

- Install the AWS CLI. The AWS CLI (Command Line Interface) allows you to interact with AWS services directly from the command line.

On Linux:

```bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install
```

On Windows:

```bash
msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
```
Configure the AWS CLI with your credentials:

```bash
aws configure
```

You will be prompted for:

- AWS Access Key ID
- AWS Secret Access Key
- Default region name
- Default output format
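For example, with placeholder values (the region here matches the EKS commands used later in this guide):

```bash
$ aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: eu-west-1
Default output format [None]: json
```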
> [!NOTE]
> Make sure the IAM user you're using has the necessary permissions and programmatic access enabled, along with an Access Key and Secret Key.
Follow the steps below to get your infrastructure up and running using Terraform:
- Clone the Repository:
First, clone this repo to your local machine:
```bash
git clone https://github.com/LondheShubham153/tws-e-commerce-app.git
cd terraform
```

- Generate SSH Key Pair: Create a new SSH key to access your EC2 instance:

```bash
ssh-keygen -f terra-key
```

This creates a key pair named terra-key (you'll be prompted for an optional passphrase).
- Private key permission: Restrict your private key's permissions:

```bash
chmod 400 terra-key
```

- Initialize Terraform: Initialize the Terraform working directory to download the required providers:

```bash
terraform init
```

- Review the Execution Plan: Before applying changes, always check the execution plan:

```bash
terraform plan
```

- Apply the Configuration: Now, apply the changes and create the infrastructure:

```bash
terraform apply
```

Confirm with `yes` when prompted.
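If the Terraform configuration defines an output for the instance's public IP (the output name below is an assumption, check your outputs), you can read it after apply:

```bash
# Output name is assumed; list all outputs with plain `terraform output`
terraform output ec2_public_ip
```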
- Access Your EC2 Instance:

After deployment, grab the public IP of your EC2 instance from the output or AWS Console, then connect using SSH:

```bash
ssh -i terra-key ubuntu@<public-ip>
```

- Update your kubeconfig: Run this on whichever machine you want to access your EKS cluster from, whether your local machine or the bastion server; it lets that machine interact with EKS.
> [!CAUTION]
> You need to configure the AWS CLI first (`aws configure`) before executing this command.

```bash
aws eks --region eu-west-1 update-kubeconfig --name tws-eks-cluster
```

- Check your cluster:

```bash
kubectl get nodes
```

> [!TIP]
> Check if the Jenkins service is running:

```bash
sudo systemctl status jenkins
```

Then access Jenkins using your public IP with port 8080: http://<public_IP>:8080
Enable and start the Jenkins service:

```bash
sudo systemctl enable jenkins
sudo systemctl restart jenkins
```

Get the Jenkins initial admin password:

```bash
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```
- Navigate to: Manage Jenkins → Plugins → Available Plugins
- Search and install the following:
- Docker Pipeline
- Pipeline View
- GitHub Credentials:
- Go to: Jenkins → Manage Jenkins → Credentials → (Global) → Add Credentials
- Use:
- Kind: Username with password
- ID: github-credentials
- DockerHub Credentials: Go to the same Global Credentials section
- Use:
- Kind: Username with password
- ID: docker-hub-credentials

> [!NOTE]
> Use these IDs in your Jenkins pipeline for secure access to GitHub and DockerHub.
Configure Trusted Pipeline Library:
- Go to: Jenkins → Manage Jenkins → Configure System, then scroll to the Global Pipeline Libraries section
Add a New Shared Library:

- Name: Shared
- Default Version: main
- Project Repository URL: https://github.com/<your-username>/jenkins-shared-libraries

> [!NOTE]
> Make sure the repo contains a proper directory structure, e.g. vars/
- Create New Pipeline Job
- Name: EasyShop
- Type: Pipeline
Press OK.

In the General section:

- Description: EasyShop
- Check the box: GitHub project
- GitHub Repo URL: https://github.com/<your-username>/tws-e-commerce-app

In the Triggers section:
- Check the box: GitHub hook trigger for GITScm polling

In the Pipeline section:
- Definition: Pipeline script from SCM
- SCM: Git
- Repository URL: https://github.com/<your-username>/tws-e-commerce-app
- Credentials: github-credentials
- Branch: master
- Script Path: Jenkinsfile
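For orientation, here is a minimal sketch of what a Jenkinsfile wired to these settings might look like; the stage names and image tag scheme are assumptions, and the Jenkinsfile in the repo is the source of truth:

```groovy
@Library('Shared') _  // loads the shared library named "Shared" configured above

pipeline {
    agent any
    environment {
        // Replace with your DockerHub username (see "Fork App Repo" below)
        DOCKER_USER = 'your-dockerhub-username'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/<your-username>/tws-e-commerce-app.git',
                    branch: 'master',
                    credentialsId: 'github-credentials'
            }
        }
        stage('Build & Push') {
            steps {
                script {
                    // docker-hub-credentials is the credentials ID created earlier
                    docker.withRegistry('https://index.docker.io/v1/', 'docker-hub-credentials') {
                        docker.build("${DOCKER_USER}/easyshop:${BUILD_NUMBER}").push()
                    }
                }
            }
        }
    }
}
```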
Fork App Repo:

- Open the Jenkinsfile
- Change the DockerHub username to yours
Fork Shared Library Repo:

- Edit vars/update_k8s_manifest.groovy
- Update it with your DockerHub username

Setup Webhook
In GitHub:

- Go to Settings → Webhooks
- Add a new webhook pointing to your Jenkins URL
- Select: GitHub hook trigger for GITScm polling in the Jenkins job

Trigger the Pipeline

Click Build Now in Jenkins.
Prerequisites:
Before configuring CD, make sure the following tools are installed:
- Installations Required:
kubectl
AWS CLI
SSH into Bastion Server
- Connect to your Bastion EC2 instance via SSH.
> [!NOTE]
> This is not the node where Jenkins is running. This is the intermediate EC2 (Bastion Host) used for accessing private resources like your EKS cluster.
8. Configure AWS CLI on Bastion Server
Run the AWS configure command:

```bash
aws configure
```

Add your Access Key and Secret Key when prompted.
9. Update Kubeconfig for EKS
Run the following command:

```bash
aws eks update-kubeconfig --region eu-west-1 --name tws-eks-cluster
```

- This command maps your EKS cluster to your Bastion server's kubeconfig.
- It lets the machine communicate with EKS components.
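A quick sanity check that the kubeconfig context and AWS identity are what you expect:

```bash
kubectl config current-context      # should point at the EKS cluster
aws sts get-caller-identity         # confirms which IAM identity you're using
```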
10. Install the AWS Load Balancer Controller, referring to the docs link below:
https://docs.aws.amazon.com/eks/latest/userguide/lbc-helm.html
11. Install the EBS CSI driver, referring to the docs link below:
https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html#eksctl_store_app_data
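One possible route, sketched here, is the EKS managed add-on; the IAM role for the add-on (IRSA) must exist first, as the linked docs describe:

```bash
# Account ID and role name are placeholders -- create the role per the docs first
aws eks create-addon \
  --cluster-name tws-eks-cluster \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::<account-id>:role/<ebs-csi-driver-role>
```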
12. Argo CD Setup
Create a namespace for Argo CD:

```bash
kubectl create namespace argocd
```

- Install Argo CD using Helm (https://artifacthub.io/packages/helm/argo/argo-cd):

```bash
helm repo add argo https://argoproj.github.io/argo-helm
helm install my-argo-cd argo/argo-cd --version 8.0.10 -n argocd
```

- Get the values file and save it:

```bash
helm show values argo/argo-cd > argocd-values.yaml
```

- Edit the values file and change the settings below.
```yaml
global:
  domain: argocd.example.com

configs:
  params:
    server.insecure: true

server:
  ingress:
    enabled: true
    controller: aws
    ingressClassName: alb
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/certificate-arn: <your-cert-arn>
      alb.ingress.kubernetes.io/group.name: easyshop-app-lb
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/backend-protocol: HTTP
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}, {"HTTPS":443}]'
      alb.ingress.kubernetes.io/ssl-redirect: '443'
    hostname: argocd.devopsdock.site
  aws:
    serviceType: ClusterIP # <- Used with target-type: ip
    backendProtocolVersion: GRPC
```
- Save the file and upgrade the Helm chart:

```bash
helm upgrade my-argo-cd argo/argo-cd -n argocd -f argocd-values.yaml
```
- Add a record in Route 53 for "argocd.devopsdock.site" with the load balancer DNS.
- Access it in the browser.
- Retrieve the initial admin secret for Argo CD:

```bash
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```

- Log in to Argo CD with username "admin" and the retrieved password.
- Change the password from the "User Info" tab in the UI.
Deploy Your Application in Argo CD GUI
On the Argo CD homepage, click on the “New App” button.
Fill in the following details:
- Application Name: Enter your desired app name
- Project Name: Select "default" from the dropdown
- Sync Policy: Choose "Automatic"
In the Source section:
- Repo URL: Add the Git repository URL that contains your Kubernetes manifests.
- Path: kubernetes (or the actual path inside the repo where your manifests reside)
In the “Destination” section:
- Cluster URL: https://kubernetes.default.svc (usually shown as "default")
- Namespace: tws-e-commerce-app (or your desired namespace)
Click on “Create”.
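If you prefer a declarative setup, roughly the same application can be described as an Argo CD `Application` manifest; this sketch mirrors the GUI fields above (the repo URL and namespace are the same placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: easyshop
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-username>/tws-e-commerce-app.git
    targetRevision: master
    path: kubernetes
  destination:
    server: https://kubernetes.default.svc
    namespace: tws-e-commerce-app
  syncPolicy:
    automated: {}
```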
NOTE: Before deploying, change your ingress settings and image tag in the YAMLs inside the "kubernetes" directory.
Ingress Annotations:

```yaml
annotations:
  alb.ingress.kubernetes.io/group.name: easyshop-app-lb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-south-1:876997124628:certificate/b69bb6e7-cbd1-490b-b765-27574080f48c
  alb.ingress.kubernetes.io/target-type: ip
  alb.ingress.kubernetes.io/backend-protocol: HTTP
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}, {"HTTPS":443}]'
  alb.ingress.kubernetes.io/ssl-redirect: '443'
  kubernetes.io/ingress.class: alb
```

- Add a record to Route 53 for "easyshop.devopsdock.site" with the load balancer DNS (see the lookup command below).
- Access your site now.
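To find the load balancer DNS name for the Route 53 record, you can read it off the ingress (the ADDRESS column); all ingresses in the easyshop-app-lb group share one ALB:

```bash
kubectl get ingress -n tws-e-commerce-app
```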
- Install the metrics server via its Helm chart:
https://artifacthub.io/packages/helm/metrics-server/metrics-server
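The install commands themselves are not shown above; a minimal sketch based on the chart's documentation:

```bash
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update
helm upgrade --install metrics-server metrics-server/metrics-server
```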
Verify the metrics server:

```bash
kubectl get pods -w
kubectl top pods
```
Create a namespace "monitoring":

```bash
kubectl create ns monitoring
```

Install the kube-prometheus-stack Helm chart:
https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack
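The install commands aren't shown above; a sketch based on the chart's documentation, using the release name the upgrade step later in this guide expects:

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install my-kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitoring
```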
Verify the deployment:

```bash
kubectl get pods -n monitoring
```

Get the Helm values and save them to a file:

```bash
helm show values prometheus-community/kube-prometheus-stack > kube-prom-stack.yaml
```

Edit the file and add the following in the params for Prometheus, Grafana, and Alertmanager.
Grafana:

```yaml
ingressClassName: alb
annotations:
  alb.ingress.kubernetes.io/group.name: easyshop-app-lb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-south-1:876997124628:certificate/b69bb6e7-cbd1-490b-b765-27574080f48c
  alb.ingress.kubernetes.io/target-type: ip
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}, {"HTTPS":443}]'
  alb.ingress.kubernetes.io/ssl-redirect: '443'
hosts:
  - grafana.devopsdock.site
```

Prometheus:

```yaml
ingressClassName: alb
annotations:
  alb.ingress.kubernetes.io/group.name: easyshop-app-lb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-south-1:876997124628:certificate/b69bb6e7-cbd1-490b-b765-27574080f48c
  alb.ingress.kubernetes.io/target-type: ip
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}, {"HTTPS":443}]'
  alb.ingress.kubernetes.io/ssl-redirect: '443'
labels: {}
hosts:
  - prometheus.devopsdock.site
paths:
  - /
pathType: Prefix
```

Alertmanager:

```yaml
ingressClassName: alb
annotations:
  alb.ingress.kubernetes.io/group.name: easyshop-app-lb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/target-type: ip
  alb.ingress.kubernetes.io/backend-protocol: HTTP
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}, {"HTTPS":443}]'
  alb.ingress.kubernetes.io/ssl-redirect: '443'
hosts:
  - alertmanager.devopsdock.site
paths:
  - /
pathType: Prefix
```

Alerting to Slack
Create a new workspace in Slack and create a new channel, e.g. "#alerts".

Go to https://api.slack.com/apps to create the webhook:

- Create an app "alertmanager"
- Go to Incoming Webhooks
- Create a webhook and copy it
Modify the Helm values:

```yaml
config:
  global:
    resolve_timeout: 5m
  route:
    group_by: ['namespace']
    group_wait: 30s
    group_interval: 5m
    repeat_interval: 12h
    receiver: 'slack-notification'
    routes:
      - receiver: 'slack-notification'
        matchers:
          - severity = "critical"
  receivers:
    - name: 'slack-notification'
      slack_configs:
        - api_url: '<your-slack-webhook-url>'
          channel: '#alerts'
          send_resolved: true
  templates:
    - '/etc/alertmanager/config/*.tmpl'
```

Note: You can refer to the docs for the Slack configuration: https://prometheus.io/docs/alerting/latest/configuration/#slack_config
Upgrade the chart:

```bash
helm upgrade my-kube-prometheus-stack prometheus-community/kube-prometheus-stack -f kube-prom-stack.yaml -n monitoring
```

Get the Grafana secret (user = admin):

```bash
kubectl --namespace monitoring get secrets my-kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo
```

You should now get notifications in the respective Slack channel.
- We will use Elasticsearch as the log store, Filebeat for log shipping, and Kibana for visualization.

NOTE: The EBS CSI driver we installed earlier is what lets Elasticsearch dynamically provision an EBS volume.
Install Elasticsearch:

```bash
kubectl create namespace logging
helm repo add elastic https://helm.elastic.co
helm install my-elasticsearch elastic/elasticsearch --version 8.5.1 -n logging
```

Create a StorageClass so that Elasticsearch can dynamically provision volumes in AWS.
storageclass.yaml:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-aws
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

Apply the YAML file.
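For example, assuming the manifest was saved as storageclass.yaml:

```bash
kubectl apply -f storageclass.yaml
kubectl get storageclass   # ebs-aws should be marked (default)
```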
Get the values for the Elasticsearch Helm chart:

```bash
helm show values elastic/elasticsearch > elasticsearch.yaml
```

Update the values:
```yaml
replicas: 1
minimumMasterNodes: 1
clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s"
```

Upgrade the chart:

```bash
helm upgrade my-elasticsearch elastic/elasticsearch -f elasticsearch.yaml -n logging
```

If the upgrade doesn't go through, uninstall the release and install it again.
Make sure the pod is running:

```bash
kubectl get po -n logging
NAME                     READY   STATUS    RESTARTS   AGE
elastic-operator-0       1/1     Running   0          6h33m
elasticsearch-master-0   1/1     Running   0          87m
```

FileBeat:
Install Filebeat for log shipping:

```bash
helm repo add elastic https://helm.elastic.co
helm install my-filebeat elastic/filebeat --version 8.5.1 -n logging
```

Get the values:

```bash
helm show values elastic/filebeat > filebeat.yaml
```

Filebeat runs as a DaemonSet. Check if it's up:
```bash
kubectl get po -n logging
NAME                         READY   STATUS    RESTARTS   AGE
elastic-operator-0           1/1     Running   0          6h38m
elasticsearch-master-0       1/1     Running   0          93m
my-filebeat-filebeat-g79qs   1/1     Running   0          25s
my-filebeat-filebeat-kh8mj   1/1     Running   0          25s
```

Install Kibana:
Install Kibana through Helm:

```bash
helm repo add elastic https://helm.elastic.co
helm install my-kibana elastic/kibana --version 8.5.1 -n logging
```

Verify that it runs:
```bash
kubectl get po -n logging
NAME                               READY   STATUS    RESTARTS       AGE
elastic-operator-0                 1/1     Running   0              8h
elasticsearch-master-0             1/1     Running   0              3h50m
my-filebeat-filebeat-g79qs         1/1     Running   0              138m
my-filebeat-filebeat-jz42x         1/1     Running   0              108m
my-filebeat-filebeat-kh8mj         1/1     Running   1 (137m ago)   138m
my-kibana-kibana-559f75574-9s4xk   1/1     Running   0              130m
```

Get the values:

```bash
helm show values elastic/kibana > kibana.yaml
```

Modify the values for the ingress settings:
```yaml
ingress:
  enabled: true
  className: "alb"
  pathtype: Prefix
  annotations:
    alb.ingress.kubernetes.io/group.name: easyshop-app-lb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-south-1:876997124628:certificate/b69bb6e7-cbd1-490b-b765-27574080f48c
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: logs-kibana.devopsdock.site
      paths:
        - path: /
```

Save the file and exit, then upgrade the Helm chart using the values file:

```bash
helm upgrade my-kibana elastic/kibana -f kibana.yaml -n logging
```

Add all the records to Route 53 with the load balancer DNS name as the value, and try to access them one by one.
Retrieve the Elasticsearch secret to use as Kibana's password; the username is "elastic":

```bash
kubectl get secrets --namespace=logging elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
```

Configure Filebeat to ship the application logs so they can be viewed in Kibana:
```yaml
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*easyshop*.log
```

Upgrade the Filebeat Helm chart (see the command below) and check in Kibana's UI whether the app logs are streaming.
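Following the same pattern as the earlier upgrades, the Filebeat upgrade would look like:

```bash
helm upgrade my-filebeat elastic/filebeat -f filebeat.yaml -n logging
```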