Binary file added: .DS_Store (contents not shown)
88 changes: 88 additions & 0 deletions .github/workflows/python-ci.yml

```yaml
name: Python CI

on:
  push:
    branches: [ "**" ]
  pull_request:
    branches: [ "**" ]

jobs:
  test:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./app_python

    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10"]

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pylint black

      - name: Format with Black
        run: black .

      - name: Lint with pylint
        run: |
          pylint --rcfile=.pylintrc *.py

      - name: Run tests with pytest
        run: |
          python -m pytest -v

  docker:
    needs: test
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./app_python

    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: ./app_python
          push: true
          tags: eleanorpi/moscow-time-app:latest

  security:
    needs: test
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./app_python

    steps:
      - uses: actions/checkout@v4

      - name: Run Snyk to check for vulnerabilities
        uses: snyk/actions/python-3.8@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          command: test
          args: --severity-threshold=high --project-name=app_python --file=app_python/requirements.txt --skip-unresolved
```

133 changes: 133 additions & 0 deletions 13.md
# Lab 13: ArgoCD for GitOps Deployment

## Overview

In this lab, we set up ArgoCD to automate Kubernetes application deployments using GitOps principles. We installed ArgoCD via Helm, configured it to manage applications, and simulated production-like workflows.

## Task 1: Deploy and Configure ArgoCD

### 1. Installation Steps Performed

1. Added the ArgoCD Helm repository:
```bash
helm repo add argo https://argoproj.github.io/argo-helm
```

2. Installed ArgoCD:
```bash
helm install argo argo/argo-cd --namespace argocd --create-namespace
```

3. Verified installation:
```bash
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=argocd-server -n argocd --timeout=90s
```

### 2. ArgoCD CLI Installation

1. Installed the ArgoCD CLI tool:
```bash
brew install argocd
```

2. Verified CLI installation:
```bash
argocd version
```

### 3. Accessing ArgoCD UI

1. Port-forwarded the ArgoCD server:
```bash
kubectl port-forward svc/argo-argocd-server -n argocd 8080:443 &
```

2. Retrieved and used the initial admin password:
```bash
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 --decode
argocd login localhost:8080 --insecure --username admin --password <password>
```

### 4. Python App Sync Configuration

1. Created ArgoCD application manifests in the `k8s/ArgoCD/` directory (a sketch of such a manifest appears below)
2. Deployed a sample application using ArgoCD
3. Verified the application deployment
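
A minimal sketch of such an Application manifest, using placeholder names, repository URL, and path rather than the exact contents of `k8s/ArgoCD/`:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: python-app                  # hypothetical name
  namespace: argocd                 # Application objects live in the argocd namespace
spec:
  project: default
  source:
    repoURL: https://github.com/example/repo.git    # placeholder repository URL
    targetRevision: HEAD
    path: k8s                       # placeholder path to the Kubernetes manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default              # placeholder target namespace
```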

## Task 2: Multi-Environment Deployment & Auto-Sync

### 1. Multi-Environment Configurations

1. Created namespaces for different environments:
```bash
kubectl create namespace dev
kubectl create namespace prod
```

2. Created ArgoCD application manifests for the dev and prod environments (the prod manifest's sync policy is sketched below)
3. Applied the configurations and synced the applications
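
A hedged sketch of the prod Application; the repository details are placeholders, but the `syncPolicy.automated` block with `selfHeal` is what drives the drift correction observed in the tests below:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: python-app-prod             # name matching the commands used below
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/repo.git    # placeholder repository URL
    targetRevision: HEAD
    path: k8s/prod                  # placeholder path to the prod configuration
  destination:
    server: https://kubernetes.default.svc
    namespace: prod                 # the namespace created in the step above
  syncPolicy:
    automated:
      selfHeal: true                # automatically revert manual changes made in the cluster
```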

### 2. Self-Heal Testing

#### Test 1: Manual Override of Replica Count

1. Modified the deployment's replica count manually:
```bash
kubectl patch deployment guestbook-ui -n prod --patch '{"spec":{"replicas": 3}}'
```

2. Observed pods before ArgoCD auto-sync:
```
NAME READY STATUS RESTARTS AGE
guestbook-ui-764d76f89d-7jjbl 1/1 Running 0 37s
guestbook-ui-764d76f89d-nvjgq 0/1 ContainerCreating 0 5s
guestbook-ui-764d76f89d-xf7rg 0/1 ContainerCreating 0 5s
```

3. Observed ArgoCD detecting the drift:
```
Sync Status: OutOfSync from HEAD (4773b9f)
```

4. ArgoCD auto-reverted the change (the application uses `syncPolicy.automated`; the sync can also be triggered manually):
```
argocd app sync python-app-prod
```

5. Pods after ArgoCD auto-sync:
```
NAME READY STATUS RESTARTS AGE
guestbook-ui-764d76f89d-7jjbl 1/1 Running 0 57s
```

#### Test 2: Delete a Pod (Replica)

1. Deleted a pod in the prod namespace:
```bash
kubectl delete pod -n prod -l app=guestbook-ui
```

2. Kubernetes automatically recreated the pod:
```
NAME READY STATUS RESTARTS AGE
guestbook-ui-764d76f89d-n6bcn 1/1 Running 0 67s
```

3. ArgoCD showed no drift (since pod deletions don't affect the desired state):
```
argocd app diff python-app-prod
```

## Understanding Configuration Drift vs. Runtime Events

### Configuration Drift
ArgoCD detects and corrects configuration drift, such as changes to the number of replicas, resource limits, or other fields defined in the manifests. When we manually changed the replica count to 3, ArgoCD detected this as drift from the desired state and automatically reverted it to 1, as defined in the manifest.
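
The drift in Test 1 comes down to a single field: the manifest in Git declares one replica, while the live object was patched to three. A simplified schematic, with values taken from the test above:

```yaml
# Desired state tracked in Git (simplified Deployment excerpt)
spec:
  replicas: 1        # the value ArgoCD enforces
---
# Live state after the manual patch, before self-heal
spec:
  replicas: 3        # reported as OutOfSync and reverted to 1
```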

### Runtime Events
Runtime events, such as pod deletions or restarts, are handled by Kubernetes itself rather than by ArgoCD. When we deleted a pod, Kubernetes' own controllers (in this case the ReplicaSet controller) recreated it to match the desired state. ArgoCD did not need to intervene because the desired configuration (1 replica) remained unchanged; only the runtime state changed temporarily.

This distinction shows the complementary roles of Kubernetes and ArgoCD:
- Kubernetes handles runtime state and reconciliation at the cluster level
- ArgoCD handles configuration state and reconciliation against Git as the source of truth

The combination ensures both configuration consistency and runtime resilience.
84 changes: 84 additions & 0 deletions 14.md
# Lab 14: Kubernetes StatefulSet Implementation

## Task 1: StatefulSet Implementation in Helm Chart

### Understanding StatefulSets
StatefulSets are designed to manage stateful applications with guarantees about the ordering and uniqueness of Pods. Unlike Deployments, StatefulSets:
- Provide stable, unique network identifiers
- Provide stable, persistent storage
- Perform ordered, graceful deployment and scaling
- Perform ordered, automated rolling updates

The primary features that distinguish StatefulSets from Deployments are:

1. **Stable, Unique Network Identifiers**: Pods in a StatefulSet have persistent identifiers that are maintained across rescheduling. The naming pattern is `$(statefulset name)-$(ordinal)`.

2. **Stable, Persistent Storage**: A StatefulSet can use `volumeClaimTemplates` to provide persistent storage for each pod. When a pod is rescheduled, the same PVC is reattached.

3. **Ordered Deployment and Scaling**: StatefulSets provide guarantees about the ordering of pod creation, scaling, and deletion.

### Implementation
I've converted the existing Deployment to a StatefulSet (a sketch of the key fields follows this list) by:
1. Renaming `deployment.yaml` to `statefulset.yaml`
2. Changing the Kind from `Deployment` to `StatefulSet`
3. Adding required StatefulSet-specific fields like `serviceName`
4. Implementing persistence with PVC templates
5. Updating the application to store state in the persistent volume mount at `/data`
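
A hedged sketch of what the resulting StatefulSet might look like after these changes; the names, port, and storage size are placeholders rather than the chart's exact values, but it illustrates `serviceName`, the `/data` mount, and the `volumeClaimTemplates` section:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-python                  # placeholder name; the Helm chart templates the real one
spec:
  serviceName: app-python           # headless Service that gives each pod a stable DNS name
  replicas: 1
  podManagementPolicy: Parallel     # pods start and stop in parallel (discussed under Task 2)
  selector:
    matchLabels:
      app: app-python
  template:
    metadata:
      labels:
        app: app-python
    spec:
      containers:
        - name: app
          image: eleanorpi/moscow-time-app:latest   # image built by the CI pipeline above
          ports:
            - containerPort: 8080                   # placeholder port
          volumeMounts:
            - name: data
              mountPath: /data                      # application state (e.g. the visit counter) lives here
  volumeClaimTemplates:                             # one PVC per pod, reattached on rescheduling
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi                            # placeholder size
```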

## Task 2: StatefulSet Exploration and Optimization

### Command Outputs
```
# Output of kubectl get po,sts,svc,pvc will be added here after deployment
```

### Persistent Storage Validation
When a pod in a StatefulSet is deleted, the PVC associated with it persists. When the pod is recreated, it reattaches to the same PVC, ensuring data persistence.

```
# Results of pod deletion and verification will be added here
```
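
The data survives because the StatefulSet controller names each PVC after the claim template and the pod's ordinal, so a recreated pod claims the very same volume. A schematic of the resulting PVC, assuming the placeholder names from the sketch in Task 1 (not captured output):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-app-python-0           # <template name>-<statefulset name>-<ordinal>
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi                  # placeholder size, matching the sketch above
```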

### Headless Service Access
A headless service (with `clusterIP: None`) allows direct DNS lookup of individual pods in the StatefulSet. This is essential for applications that need to communicate directly with specific pods (a sketch of such a Service appears below).

```
# DNS resolution test results will be added here
```
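
A minimal sketch of the headless Service, assuming the placeholder names from the StatefulSet sketch in Task 1; with `clusterIP: None` each pod becomes resolvable as `<pod-name>.<service-name>.<namespace>.svc.cluster.local`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-python                  # must match the StatefulSet's serviceName
spec:
  clusterIP: None                   # headless: DNS returns the individual pod addresses
  selector:
    app: app-python
  ports:
    - port: 8080                    # placeholder port
      targetPort: 8080
```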

### Monitoring & Alerts
I've implemented liveness and readiness probes in the StatefulSet to ensure pod health.

For stateful applications, health probes are critical because:
1. They ensure the application is ready to serve requests before traffic is directed to it
2. They detect when a pod becomes unhealthy and needs to be restarted
3. With stateful applications, proper health checking prevents data corruption or inconsistency
4. Unlike stateless applications, a malfunctioning stateful pod can't simply be terminated and replaced without considering data consistency

The probes I've implemented (sketched below):
- **Liveness Probe**: Checks if the application is running. If it fails, Kubernetes restarts the pod.
- **Readiness Probe**: Checks if the application is ready to receive traffic. If it fails, the pod is removed from service endpoints.
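
A hedged sketch of how these probes might look in the container spec of the StatefulSet; the endpoint, port, and timings are illustrative rather than the chart's exact values:

```yaml
# Excerpt of the container spec (placeholder path, port, and timings)
livenessProbe:
  httpGet:
    path: /                         # placeholder health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15                 # repeated failures restart the container
readinessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10                 # failures remove the pod from Service endpoints
```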

### Ordering Guarantee and Parallel Operations
Our application doesn't strictly require ordering guarantees because:
- Each pod instance operates independently
- The application doesn't rely on inter-pod communication in a specific order
- There's no leader/follower relationship that requires sequential startup

I've enabled parallel pod operations by setting `.spec.podManagementPolicy: Parallel` in the StatefulSet definition, which instructs the controller to launch and terminate pods without waiting for their predecessors. This is ideal for our application because:

1. It speeds up the deployment and scaling operations
2. Each pod operates independently with its own state
3. There's no dependency between pods that requires ordered startup

This optimization significantly improves the deployment and scaling time for our application, while still maintaining the unique identities and persistent storage benefits of StatefulSets.

## Differences Between Pods in the StatefulSet

StatefulSet pods maintain individual state. In our application, each pod maintains its own visit counter file at `/data/visits`. This means:

1. When you access a specific pod, it increments only its own counter
2. Due to the way Kubernetes services work, requests are load-balanced across pods
3. Each pod's counter reflects only the visits it has personally served

This behavior is different from a shared database where all pods would see the same counter value. In a production environment, we would typically use a database service for truly shared state, but this example demonstrates the per-pod persistence capability of StatefulSets.