697 changes: 664 additions & 33 deletions README-zh_CN.md

Large diffs are not rendered by default.

23 changes: 22 additions & 1 deletion README.md
Expand Up @@ -3953,10 +3953,31 @@ True

<details>
<summary>What is the workflow of retrieving data from Ceph?</summary><br><b>
The workflow is as follows:

1. The client sends a request to the Ceph cluster to retrieve data:
> **Client could be any of the following**
>> * Ceph Block Device
>> * Ceph Object Gateway
>> * Any third party ceph client


2. The client retrieves the latest cluster map from the Ceph Monitor
3. The client uses the CRUSH algorithm to map the object to a placement group. The placement group is then mapped to an OSD.
4. Once the placement group and the OSD Daemon are determined, the client can retrieve the data from the appropriate OSD
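A rough sketch of the mapping in steps 3-4 (illustrative only; a real client uses librados and the CRUSH algorithm with the current cluster map, not a simple hash/modulo):

```python
# Illustrative sketch of object -> placement group -> OSD mapping.
# Real Ceph clients use CRUSH with the cluster map; this hash/modulo logic is only a stand-in.
import hashlib

def object_to_pg(object_name: str, pg_num: int) -> int:
    """Hash the object name into one of pg_num placement groups."""
    digest = hashlib.md5(object_name.encode()).hexdigest()
    return int(digest, 16) % pg_num

def pg_to_osds(pg_id: int, osd_ids: list, replicas: int = 3) -> list:
    """Pick an ordered set of OSDs for a placement group (stand-in for CRUSH)."""
    start = pg_id % len(osd_ids)
    return [osd_ids[(start + i) % len(osd_ids)] for i in range(replicas)]

pg = object_to_pg("my-object", pg_num=128)
osds = pg_to_osds(pg, osd_ids=[0, 1, 2, 3, 4, 5])
print(f"object -> PG {pg} -> OSDs {osds} (first one is the primary)")
```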


</b></details>

<details>
<summary>What is the workflow of retrieving data from Ceph?</summary><br><b>
<summary>What is the workflow of writing data to Ceph?</summary><br><b>
The workflow is as follows:

1. The client sends a request to the Ceph cluster to write data
2. The client retrieves the latest cluster map from the Ceph Monitor
3. The client uses the CRUSH algorithm to map the object to a placement group. The placement group is then assigned to a Ceph OSD Daemon dynamically.
4. The client sends the data to the primary OSD of the determined placement group. If the data is stored in an erasure-coded pool, the primary OSD is responsible for encoding the object into data chunks and coding chunks, and distributing them to the other OSDs.
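To illustrate step 4, a minimal sketch of splitting an object into data chunks plus one parity (coding) chunk. Ceph's erasure-coded pools use configurable k+m schemes (Reed-Solomon by default), so the single XOR parity below is only a simplified stand-in:

```python
# Simplified erasure-coding sketch: k data chunks + 1 XOR parity chunk.
# Ceph uses configurable k+m erasure codes; XOR parity is only an illustration.
from functools import reduce

def encode(data: bytes, k: int = 3) -> list:
    chunk_size = -(-len(data) // k)  # ceiling division
    chunks = [data[i * chunk_size:(i + 1) * chunk_size].ljust(chunk_size, b"\0")
              for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*chunks))
    return chunks + [parity]  # the primary OSD would distribute these to the other OSDs

chunks = encode(b"some object payload", k=3)
print([len(c) for c in chunks])  # 3 data chunks + 1 coding chunk, all equal length
```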

</b></details>

<details>
Expand Down
2 changes: 1 addition & 1 deletion certificates/aws-cloud-practitioner.md
@@ -1,6 +1,6 @@
## AWS - Cloud Practitioner

A summary of what you need to know for the exam can be found [here](https://codingshell.com/aws-cloud-practitioner)
A summary of what you need to know for the exam can be found [here](https://aws.amazon.com/certification/certified-cloud-practitioner/)

#### Cloud 101

Expand Down
1 change: 1 addition & 0 deletions scripts/aws s3 event triggering/README.md
@@ -0,0 +1 @@
![sample](./sample.png)
122 changes: 122 additions & 0 deletions scripts/aws s3 event triggering/aws_s3_event_trigger.sh
@@ -0,0 +1,122 @@
#!/bin/bash

# Always document the script details: version, author, what it does, and which events trigger it.

###
# Author: Adarsh Rawat
# Version: 1.0.0
# Objective: Automate notifications for an object uploaded or created in an S3 bucket.
###

# print each command as it runs, for debugging
set -x

# all the commands below are AWS CLI commands (Abhishek Veermalla, DevOps days 4-5)

# store aws account id in a variable
aws_account_id=$(aws sts get-caller-identity --query 'Account' --output text)

# print the account id from the variable
echo "aws account id: $aws_account_id"

# set aws region, bucket name and other variables
aws_region="us-east-1"
aws_bucket="s3-lambda-event-trigger-bucket"
aws_lambda="s3-lambda-function-1"
aws_role="s3-lambda-sns"
email_address="adarshrawat8304@gmail.com"

# create iam role for the project
role_response=$(aws iam create-role --role-name "$aws_role" --assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com",
"s3.amazonaws.com",
"sns.amazonaws.com"
]
}
}]
}')

# jq is a JSON parser; here it parses the role we just created

# extract the role ARN from the JSON response and store it in a variable
role_arn=$(echo "$role_response" | jq -r '.Role.Arn')

# print the role arn
echo "Role ARN: $role_arn"

# attach permissions to the role
aws iam attach-role-policy --role-name $aws_role --policy-arn arn:aws:iam::aws:policy/AWSLambda_FullAccess
aws iam attach-role-policy --role-name $aws_role --policy-arn arn:aws:iam::aws:policy/AmazonSNSFullAccess

# create s3 bucket and get the output in a variable
bucket_output=$(aws s3api create-bucket --bucket "$aws_bucket" --region "$aws_region")

# print the output from the variable
echo "bucket output: $bucket_output"

# upload a file to the bucket
aws s3 cp ./sample.png s3://"$aws_bucket"/sample.png

# create a zip file to upload lambda function
zip -r s3-lambda.zip ./s3-lambda

sleep 5

# create a lambda function
aws lambda create-function \
--region "$aws_region" \
--function-name "$aws_lambda" \
--runtime "python3.8" \
--handler "s3-lambda/s3-lambda.lambda_handler" \
--memory-size 128 \
--timeout 30 \
--role "arn:aws:iam::$aws_account_id:role/$aws_role" \
--zip-file "fileb://./s3-lambda.zip"

# add permission for the s3 bucket to invoke the lambda, then configure the bucket notification
LambdaFunctionArn="arn:aws:lambda:$aws_region:$aws_account_id:function:$aws_lambda"
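# the bucket can only deliver events if the lambda's resource policy allows it;
# without this permission the notification configuration below is rejected
# (sketch: the statement id is an arbitrary assumed value)
aws lambda add-permission \
--function-name "$aws_lambda" \
--statement-id "s3-invoke-lambda" \
--action "lambda:InvokeFunction" \
--principal s3.amazonaws.com \
--source-arn "arn:aws:s3:::$aws_bucket"
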
aws s3api put-bucket-notification-configuration \
--region "$aws_region" \
--bucket "$aws_bucket" \
--notification-configuration '{
"LambdaFunctionConfigurations": [{
"LambdaFunctionArn": "'"$LambdaFunctionArn"'",
"Events": ["s3:ObjectCreated:*"]
}]
}'

# create an sns topic and save the topic arn to a variable
topic_arn=$(aws sns create-topic --name s3-lambda-sns --output json | jq -r '.TopicArn')

# print the topic arn
echo "SNS Topic ARN: $topic_arn"

# subscribe the email address to the sns topic
aws sns subscribe \
--topic-arn "$topic_arn" \
--protocol email \
--notification-endpoint "$email_address"

# publish sns
aws sns publish \
--topic-arn "$topic_arn" \
--subject "A new object created in s3 bucket" \
--message "Hey, a new data object just got delievered into the s3 bucket $aws_bucket"
1 change: 1 addition & 0 deletions scripts/aws s3 event triggering/s3-lambda/requirements.txt
@@ -0,0 +1 @@
boto3==1.17.95
38 changes: 38 additions & 0 deletions scripts/aws s3 event triggering/s3-lambda/s3-lambda.py
@@ -0,0 +1,38 @@
import boto3
import json

def lambda_handler(event, context):

# log the incoming event for debugging
print(event)

# extract relevant information from the s3 event trigger
bucket_name=event['Records'][0]['s3']['bucket']['name']
object_key=event['Records'][0]['s3']['object']['key']

# perform desired operations with the uploaded file
print(f"File '{object_key}' was uploaded to bucket '{bucket_name}'")

# example: send a notification via sns
sns_client=boto3.client('sns')
topic_arn='arn:aws:sns:us-east-1:<account-id>:s3-lambda-sns'
sns_client.publish(
TopicArn=topic_arn,
Subject='s3 object created !!',
Message=f"File '{object_key}' was uploaded to bucket '{bucket_name}"
)

# Example: Trigger another Lambda function
# lambda_client = boto3.client('lambda')
# target_function_name = 'my-another-lambda-function'
# lambda_client.invoke(
# FunctionName=target_function_name,
# InvocationType='Event',
# Payload=json.dumps({'bucket_name': bucket_name, 'object_key': object_key})
# )
# useful for queuing or chaining workflows, similar to a Netflix-style pipeline of triggered functions

return {
'statusCode': 200,
'body': json.dumps("Lambda function executed successfully !!")
}
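
# Local test sketch (hypothetical values, not part of the deployed handler):
# invoke the handler with a minimal synthetic S3 event to check the parsing logic.
# Note: the SNS publish above will fail unless topic_arn contains a real account id.
if __name__ == "__main__":
    fake_event = {
        "Records": [
            {"s3": {"bucket": {"name": "s3-lambda-event-trigger-bucket"},
                    "object": {"key": "sample.png"}}}
        ]
    }
    print(lambda_handler(fake_event, None))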
Binary file added scripts/aws s3 event triggering/sample.png
28 changes: 14 additions & 14 deletions topics/aws/README.md
@@ -1,16 +1,16 @@
# AWS

**Note**: Some of the exercises <b>cost $$$</b> and can't be performed using the free tier/resources
**Note**: Some of the exercises <b>cost $$$</b> and can't be performed using free tier resources

**2nd Note**: Provided solutions are using the AWS console. It's recommended you'll use IaC technologies to solve the exercises (e.g. Terraform, Pulumi).<br>
**2nd Note**: The provided solutions are using the AWS console. It's recommended you use IaC technologies to solve the exercises (e.g., Terraform, Pulumi).<br>

- [AWS](#aws)
- [Exercises](#exercises)
- [IAM](#iam)
- [EC2](#ec2)
- [S3](#s3)
- [ELB](#elb)
- [Auto Scaling Groups](#auto-scaling-groups)
- [Auto Scaling Groups](#auto-scaling-groups)
- [VPC](#vpc)
- [Databases](#databases)
- [DNS](#dns)
Expand All @@ -24,14 +24,14 @@
- [Global Infrastructure](#global-infrastructure)
- [IAM](#iam-1)
- [EC2](#ec2-1)
- [AMI](#ami)
- [EBS](#ebs)
- [Instance Store](#instance-store)
- [EFS](#efs)
- [Pricing Models](#pricing-models)
- [Launch Template](#launch-template)
- [ENI](#eni)
- [Placement Groups](#placement-groups)
- [AMI](#ami)
- [EBS](#ebs)
- [Instance Store](#instance-store)
- [EFS](#efs)
- [Pricing Models](#pricing-models)
- [Launch Template](#launch-template)
- [ENI](#eni)
- [Placement Groups](#placement-groups)
- [VPC](#vpc-1)
- [Default VPC](#default-vpc)
- [Lambda](#lambda-1)
Expand Down Expand Up @@ -63,7 +63,7 @@
- [SNS](#sns)
- [Monitoring and Logging](#monitoring-and-logging)
- [Billing and Support](#billing-and-support)
- [AWS Organizations](#aws-organizations)
- [AWS Organizations](#aws-organizations)
- [Automation](#automation)
- [Misc](#misc-2)
- [High Availability](#high-availability)
Expand Down Expand Up @@ -3485,6 +3485,6 @@ More details are missing to determine for sure but it might be better to decoupl
<details>
<summary>What's an ARN?</summary><br><b>

ARN (Amazon Resources Names) used for uniquely identifying different AWS resources.
It is used when you would like to identify resource uniqely across all AWS infra.
ARNs (Amazon Resource Names) are used to uniquely identify different AWS resources.
An ARN is used when you need to identify a resource uniquely across all of AWS infrastructure.
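The general format is `arn:partition:service:region:account-id:resource`, for example (illustrative values):

```
arn:aws:s3:::my-bucket                                                # S3 bucket (region/account are empty because bucket names are global)
arn:aws:lambda:us-east-1:123456789012:function:s3-lambda-function-1  # Lambda function
```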
</b></details>
55 changes: 55 additions & 0 deletions topics/aws/exercises/launch_ec2_web_instance/solution.md
Expand Up @@ -37,3 +37,58 @@ echo "<h1>I made it! This is is awesome!</h1>" > /var/www/html/index.html
9. In the security group section, add a rule to accept HTTP traffic (TCP) on port 80 from anywhere
10. Click on "Review" and then click on "Launch" after reviewing.
11. If you don't have a key pair, create one and download it.

### Solution using Terraform

```hcl

provider "aws" {
region = "us-east-1" // Or your desired region
}

resource "aws_instance" "web_server" {
ami = "ami-12345678" // Replace with the correct AMI for Amazon Linux 2
instance_type = "t2.micro" // Or any instance type with 1 vCPU and 1 GiB memory

tags = {
Name = "web-1"
Type = "web"
}

root_block_device {
volume_size = 8 // Or any desired size
delete_on_termination = true
}

provisioner "remote-exec" {
inline = [
"sudo yum update -y",
"sudo yum install -y httpd",
"sudo systemctl start httpd",
"sudo bash -c 'echo \"I made it! This is awesome!\" > /var/www/html/index.html'",
"sudo systemctl enable httpd"
]

connection {
type = "ssh"
user = "ec2-user"
private_key = file("~/.ssh/your_private_key.pem") // Replace with the path to your private key
host = self.public_ip
}
}

key_name = "your-key-pair-name" // Replace with a key pair matching the private key used in the connection block
vpc_security_group_ids = [aws_security_group.web_sg.id]
}

resource "aws_security_group" "web_sg" {
name = "web_sg"
description = "Security group for web server"

ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
```
Original file line number Diff line number Diff line change
Expand Up @@ -18,6 +18,7 @@ resource "aws_dynamodb_table" "users" {

global_secondary_index {
hash_key =

name =
projection_type =
}
}
2 changes: 1 addition & 1 deletion topics/cicd/README.md
Expand Up @@ -16,7 +16,7 @@

A development practice where developers integrate code into a shared repository frequently. The pace can range from a couple of changes per day or per week up to several changes per hour at larger scales.

Each piece of code (change/patch) is verified, to make the change is safe to merge. Today, it's a common practice to test the change using an automated build that makes sure the code can be integrated. It can be one build which runs several tests in different levels (unit, functional, etc.) or several separate builds that all or some has to pass in order for the change to be merged into the repository.
Each piece of code (change/patch) is verified to make sure that the change is safe to merge. Today, it's a common practice to test the change using an automated build that makes sure the code can be integrated. It can be one build which runs several tests at different levels (unit, functional, etc.) or several separate builds, all or some of which have to pass in order for the change to be merged into the repository.
</b></details>

<details>
Expand Down
12 changes: 11 additions & 1 deletion topics/cloud/README.md
Expand Up @@ -91,6 +91,16 @@ AWS definition: "AWS Auto Scaling monitors your applications and automatically a
Read more about auto scaling [here](https://aws.amazon.com/autoscaling)
</b></details>

<details>
<summary>What is the difference between horizontal scaling and vertical scaling?</summary><br><b>

[AWS Docs](https://wa.aws.amazon.com/wellarchitected/2020-07-02T19-33-23/wat.concept.horizontal-scaling.en.html):

A "horizontally scalable" system is one that can increase capacity by adding more computers to the system. This is in contrast to a "vertically scalable" system, which is constrained to running its processes on only one computer; in such systems the only way to increase performance is to add more resources into one computer in the form of faster (or more) CPUs, memory or storage.

Horizontally scalable systems are oftentimes able to outperform vertically scalable systems by enabling parallel execution of workloads and distributing those across many different computers.
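
As a sketch of the difference using boto3 (the Auto Scaling group name and instance ID below are placeholders):

```python
# Sketch: horizontal vs. vertical scaling via the AWS APIs (placeholder names/IDs).
import boto3

# Horizontal scaling: add more instances behind the same Auto Scaling group.
autoscaling = boto3.client("autoscaling")
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",  # placeholder
    DesiredCapacity=6,               # e.g., scale out from 3 to 6 instances
)

# Vertical scaling: resize a single instance (it must be stopped first).
ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.modify_instance_attribute(InstanceId=instance_id, InstanceType={"Value": "m5.2xlarge"})
ec2.start_instances(InstanceIds=[instance_id])
```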
</b></details>

<details>
<summary>True or False? Auto Scaling is about adding resources (such as instances) and not about removing resources</summary><br><b>

Expand All @@ -105,4 +115,4 @@ False. Auto scaling adjusts capacity and this can mean removing some resources b
* Instance should have minimal permissions needed. You don't want an instance-level incident to become an account-level incident
* Instances should be accessed through load balancers or bastion hosts. In other words, they should be off the internet (in a private subnet behind a NAT).
* Using latest OS images with your instances (or at least apply latest patches)
</b></details>
</b></details>