Commit 91ba079: add gsp 313 315 321 330 344 388
elmoallistair committed Mar 12, 2021 (1 parent: 034475c)
Showing 12 changed files with 319 additions and 52 deletions.
labs/gsp313_create-and-manage-cloud-resources/readme.md
# [Create and Manage Cloud Resources: Challenge Lab](https://www.qwiklabs.com/focuses/10258?parent=catalog)

## Overview

This lab is recommended for students who have enrolled in the labs in the [Create and Manage Cloud Resources](https://google.qwiklabs.com/quests/120) quest. Be sure to review those labs before starting this lab. Are you ready for the challenge?

Topics tested:
* Create an instance
* Create a 3-node Kubernetes cluster and run a simple service
* Create an HTTP(s) load balancer in front of two web servers

## Challenge Scenario

You have started a new role as a Junior Cloud Engineer for Jooli, Inc. You are expected to help manage the infrastructure at Jooli. Common tasks include provisioning resources for projects.

You are expected to have the skills and knowledge for these tasks, so step-by-step guides are not provided.

Some Jooli, Inc. standards you should follow:

* Create all resources in the default region or zone, unless otherwise directed.
* Naming normally uses the format team-resource; for example, an instance could be named nucleus-webserver1.
* Allocate cost-effective resource sizes. Projects are monitored, and excessive resource use will result in the containing project's termination (and possibly yours), so plan carefully. This is the guidance the monitoring team is willing to share: unless directed, use f1-micro for small Linux VMs, and use n1-standard-1 for Windows or other applications, such as Kubernetes nodes.
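A minimal sketch of these standards in practice (the instance name and zone here are illustrative assumptions, not lab requirements):

```shell
# Team "nucleus", resource "webserver1", cost-effective f1-micro size.
gcloud compute instances create nucleus-webserver1 \
  --zone=us-east1-b \
  --machine-type=f1-micro
```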
labs/gsp313_create-and-manage-cloud-resources/script.sh
# Create and Manage Cloud Resources: Challenge Lab
# https://www.qwiklabs.com/focuses/10258

# 1. Create a project jumphost instance (zone: us-east1-b)
gcloud compute instances create nucleus-jumphost \
--zone="us-east1-b" \
--machine-type="f1-micro" \
--boot-disk-size=10GB
# If the activity check fails, create the instance manually in the Console.

# 2. Create a Kubernetes service cluster
gcloud config set compute/zone us-east1-b
gcloud container clusters create nucleus-jumphost-webserver1
gcloud container clusters get-credentials nucleus-jumphost-webserver1
kubectl create deployment hello-app --image=gcr.io/google-samples/hello-app:2.0
kubectl expose deployment hello-app --type=LoadBalancer --port 8080
kubectl get service
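# Optional verification sketch: wait for the LoadBalancer to get an external
# IP, then curl the app (the IP can take a minute or two to be assigned).
EXTERNAL_IP=$(kubectl get service hello-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://$EXTERNAL_IP:8080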

# 3. Create the web server frontend
## 3.1 Create Instance Template
cat << EOF > startup.sh
#! /bin/bash
apt-get update
apt-get install -y nginx
service nginx start
sed -i -- 's/nginx/Google Cloud Platform - '"\$HOSTNAME"'/' /var/www/html/index.nginx-debian.html
EOF
gcloud compute instance-templates create nginx-template \
--metadata-from-file startup-script=startup.sh

## 3.2 Create Target Pool
gcloud compute target-pools create nginx-pool --region us-east1
# NOTE: the target pool must be created in the us-east1 region.

## 3.3 Create managed instance group
gcloud compute instance-groups managed create nginx-group \
--base-instance-name nginx \
--size 2 \
--template nginx-template \
--target-pool nginx-pool

## 3.4 Create firewall rule
gcloud compute firewall-rules create www-firewall --allow tcp:80
gcloud compute forwarding-rules create nginx-lb \
--region us-east1 \
--ports=80 \
--target-pool nginx-pool

## 3.5 Create health check
gcloud compute http-health-checks create http-basic-check
gcloud compute instance-groups managed set-named-ports nginx-group \
--named-ports http:80

## 3.6 Create backend service
gcloud compute backend-services create nginx-backend \
--protocol HTTP --http-health-checks http-basic-check --global
gcloud compute backend-services add-backend nginx-backend \
--instance-group nginx-group \
--instance-group-zone us-east1-b \
--global

## 3.7 Create url map
gcloud compute url-maps create web-map \
--default-service nginx-backend
gcloud compute target-http-proxies create http-lb-proxy \
--url-map web-map

## 3.8 Create forwarding rule
gcloud compute forwarding-rules create http-content-rule \
--global \
--target-http-proxy http-lb-proxy \
--ports 80
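# Optional end-to-end check (sketch): look up the load balancer IP and curl
# it; the load balancer can take several minutes before it starts serving.
LB_IP=$(gcloud compute forwarding-rules describe http-content-rule \
  --global --format='value(IPAddress)')
curl -m 5 http://$LB_IP || echo "LB not serving yet; retry in a few minutes"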
# [Perform Foundational Infrastructure Tasks in Google Cloud: Challenge Lab](https://www.qwiklabs.com/focuses/10379?parent=catalog)

## Your challenge

You are now asked to help a newly formed development team with some of their initial work on a new project around storing and organizing photographs, called memories. You have been asked to assist the memories team with the initial configuration of their application development environment, and you receive a request to complete the following tasks:

* Create a bucket for storing the photographs.
* Create a Pub/Sub topic that will be used by a Cloud Function you create.
* Create a Cloud Function.
* Remove the previous cloud engineer’s access from the memories project.

Some Jooli Inc. standards you should follow:

* Create all resources in the **us-east1** region and **us-east1-b** zone, unless otherwise directed.
* Use the project VPCs.
* Naming is normally team-resource; e.g., an instance could be named **kraken-webserver1**.
* Allocate cost-effective resource sizes. Projects are monitored, and excessive resource use will result in the containing project's termination (and possibly yours), so beware. This is the guidance the monitoring team is willing to share: unless directed, use **f1-micro** for small Linux VMs and **n1-standard-1** for Windows or other applications, such as Kubernetes nodes.
export PROJECT_ID=$DEVSHELL_PROJECT_ID

# 1. Create a bucket
gsutil mb gs://$PROJECT_ID

# 2. Create a Pub/Sub topic
gcloud pubsub topics create $PROJECT_ID

# 3. Create the Cloud Function
# Go to Cloud Functions > Create Function
# Trigger: Cloud Storage
# Event type: Finalize/Create
# Entry Point: thumbnail
# Runtime: Node.js
# fill index.js and package.json with given scripts
# on line 15 of index.js, replace the topic name placeholder with your project ID (the topic created above)
# upload one JPG or PNG image into the bucket
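# Alternative sketch: deploy the same function from Cloud Shell instead of
# the Console (assumes index.js and package.json are in the current
# directory; the Node.js runtime version is an assumption):
gcloud functions deploy thumbnail \
  --runtime nodejs10 \
  --trigger-resource $PROJECT_ID \
  --trigger-event google.storage.object.finalize \
  --entry-point thumbnail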

# 4. Remove the previous cloud engineer
# Go to IAM > find your second username > Click Pencil Icon > Delete
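# Alternative sketch via gcloud (the member email and role below are
# placeholders; check IAM for the second user's actual email and role):
gcloud projects remove-iam-policy-binding $PROJECT_ID \
  --member="user:SECOND_USERNAME" \
  --role="roles/viewer"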
# Set up and Configure a Cloud Environment in Google Cloud: Challenge Lab
# https://www.qwiklabs.com/focuses/10603?parent=catalog

# NOTE: Create all resources in the us-east1 region and us-east1-b zone, unless otherwise directed.

# Task 1: Create development VPC manually
- Go to Navigation menu > VPC Network > Create VPC Network
You are expected to have the skills and knowledge for these tasks, so don't expect step-by-step guides.

You need to complete the following tasks:

* Create a development VPC with three subnets manually
* Create a production VPC with three subnets using a provided Deployment Manager configuration
* Create a bastion that is connected to both VPCs
* Create a development Cloud SQL Instance and connect and prepare the WordPress environment
* Create a Kubernetes cluster in the development VPC for WordPress
* Prepare the Kubernetes cluster for the WordPress environment
* Create a WordPress deployment using the supplied configuration
* Enable monitoring of the cluster via Stackdriver
* Provide access for an additional engineer
# Implement DevOps in Google Cloud: Challenge Lab
# https://www.qwiklabs.com/focuses/13287?parent=catalog

# Open Cloud Shell and run:
gcloud config set compute/zone us-east1-b
git clone https://source.developers.google.com/p/$DEVSHELL_PROJECT_ID/r/sample-app
gcloud container clusters get-credentials jenkins-cd
printf $(kubectl get secret cd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
- username: admin
- password: see the Cloud Shell output above

# Back to Cloud Shell and run:
cd sample-app
kubectl create ns production
kubectl apply -f k8s/production -n production
git add .
git commit -m "initial commit"
git push origin master

# Back to Jenkins Dashboard > Manage Jenkins (left pane) > manage Credentials
# - Look at "Stores scoped", click Jenkins
# - Click Global credentials (unrestricted)
# - Click Add Credentials
# - Kind: Google Service Account from metadata
# - Project Name: <your_project_id>
# - Click OK
#
# Back to Jenkins Dashboard > New Item (left pane)
# Enter an item name: sample-app
# Click Multibranch Pipeline
# OK
# *in sample-app config*
# - Branch Sources: Git
# - Project Repository: https://source.developers.google.com/p/[PROJECT_ID]/r/sample-app
# - Credentials: qwiklabs service account
# - Scan Multibranch Pipeline Triggers, check "Periodically if not otherwise run"
# - Interval: 1 minute
# - SAVE  (the first build will take a long time)
# Note: repeat the scan if you see an error message in the Multibranch Pipeline Log
# - CHECK YOUR FIRST CHECKPOINT
#

# Back to Cloud Shell and run:
git checkout -b new-feature
nano main.go
# change the version number to "2.0.0".
# example: version string = "2.0.0" (in line 46)

nano html.go
# change both lines that contain the word blue to orange
# example: <div class="card orange"> (in line 37 and 81)

# Back to Cloud Shell and run:
git add Jenkinsfile html.go main.go
git commit -m "Version 2.0.0"
git push origin new-feature
# Check your Jenkins Dashboard
# Check your sample-app branches from jenkins dashboard (new-feature branch)

# Back to Cloud Shell and run:
curl http://localhost:8001/api/v1/namespaces/new-feature/services/gceme-frontend:80/proxy/version
kubectl get service gceme-frontend -n production
git checkout -b canary
git push origin canary
export FRONTEND_SERVICE_IP=$(kubectl get -o \
jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
git checkout master
git push origin master
# Check your Jenkins Dashboard
# Check your sample-app branches from jenkins dashboard (canary branch)

# Back to Cloud Shell and run:
export FRONTEND_SERVICE_IP=$(kubectl get -o \
jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1; done
# After you see 2.0.0 in the output, run:
kubectl get service gceme-frontend -n production
# Check your #2, #3, and #4 checkpoints (there may be a delay before they pass)

###############################################################################################

# Note: if task 4 is not yet marked complete, try running:
git merge canary
git push origin master
export FRONTEND_SERVICE_IP=$(kubectl get -o \
jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
# It may take a while before the progress check passes.
# [Serverless Firebase Development: Challenge Lab](https://www.qwiklabs.com/focuses/14677?parent=catalog)

## Prerequisites

In this challenge lab you will be assessed on your knowledge of the following areas:
* Firestore
* Cloud Run
* Cloud Build
* Container Registry

## Challenge scenario

In this lab you will create a frontend solution using a REST API and a Firestore database. Cloud Firestore is a NoSQL document database that is part of the Firebase platform, where you can store, sync, and query data for your mobile and web apps at scale. The lab content is based on resolving a real-world scenario using Google Cloud serverless infrastructure.
gcloud config set project $(gcloud projects list --format='value(PROJECT_ID)' --filter='qwiklabs-gcp')
git clone https://github.com/rosera/pet-theory.git

# 1. Firestore Database Create
Go to Firestore > Select Native Mode > Location: nam5 > Create Database

# 2. Firestore Database Populate
cd pet-theory/lab06/firebase-import-csv/solution
npm install
node index.js netflix_titles_original.csv

# 3. Cloud Build Rest API Staging
cd ~/pet-theory/lab06/firebase-rest-api/solution-01
npm install
gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.1
gcloud beta run deploy netflix-dataset-service --image gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.1 --allow-unauthenticated
# When prompted, choose (1) Cloud Run (fully managed), then region us-central1
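# The prompts can be avoided by passing the platform and region explicitly,
# e.g. (same deploy as above, non-interactive):
gcloud beta run deploy netflix-dataset-service \
  --image gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.1 \
  --allow-unauthenticated \
  --platform managed \
  --region us-central1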

# 4. Cloud Build Rest API Production
cd ~/pet-theory/lab06/firebase-rest-api/solution-02
npm install
gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.2
gcloud beta run deploy netflix-dataset-service --image gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.2 --allow-unauthenticated
# go to cloud run and click netflix-dataset-service then copy the url
SERVICE_URL=<copy url from your netflix-dataset-service>
curl -X GET $SERVICE_URL/2019

# 5. Cloud Build Frontend Staging
cd ~/pet-theory/lab06/firebase-frontend/public
nano app.js  # comment out line 3, uncomment line 4, and insert your netflix-dataset-service URL
npm install
cd ~/pet-theory/lab06/firebase-frontend
gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/frontend-staging:0.1
gcloud beta run deploy frontend-staging-service --image gcr.io/$GOOGLE_CLOUD_PROJECT/frontend-staging:0.1
# When prompted, choose (1) Cloud Run (fully managed), then region us-central1

# 6. Cloud Build Frontend Production
gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/frontend-production:0.1
gcloud beta run deploy frontend-production-service --image gcr.io/$GOOGLE_CLOUD_PROJECT/frontend-production:0.1
# When prompted, choose (1) Cloud Run (fully managed), then region us-central1
# 1. Check that Cloud Monitoring has been enabled
Open Monitoring from the Console

# 2. Check that the video queue length custom metric has been created
Check Monitoring > Dashboard > Media_Dashboard
Open VM Instances
- Stop the video-queue-monitor instance
- Edit the video-queue-monitor instance:
- Go to Custom metadata
- startup-script:
- replace MY_PROJECT_ID value with your PROJECT ID
- replace MY_GCE_INSTANCE_ID value with your video-queue-monitor Instance ID (find it at the top of the page)
- replace MY_GCE_INSTANCE_ZONE value with us-east1-b
- SAVE
- Start video-queue-monitor instance

# 3. Check that a custom log based metric for large video upload rate has been created
Go to log explorer
- enter textPayload=~"file_format\: ([4,8]K).*" in the query box and run it
- Click Actions > Create Metric, name it "large_video_upload_rate", then click Create Metric
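# Alternative sketch: the same log-based metric can be created via gcloud
# (the description text is an assumption):
gcloud logging metrics create large_video_upload_rate \
  --description="Rate of large (4K/8K) video uploads" \
  --log-filter='textPayload=~"file_format\: ([4,8]K).*"'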

# 4. Check that custom metrics for the video service have been added to the media dashboard
Go to Monitoring > Dashboard
- Click Media_Dashboard
- Add Chart
- Resource Type: VM Instance
- Metrics: OpenCensus/my.videoservice.org/measure/input_queue_size (uncheck Only show active)
- Filter: instance_id, click your video-queue-monitor instance id (from step 2) then Apply
- SAVE
- Add Chart
- Resource: VM Instance
- Metric: logging/user/large_video_upload_rate

# 5. Check that an alert has been created for large video uploads
Go to Monitoring > Alert
- Create Policy
- Metric: logging/user/large_video_upload_rate
- Threshold: 3
- For: 1 minute
- Name the alert "large video uploads", then Save
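# Alternative sketch: an alerting policy can also be created from a file
# (policy.json is a hypothetical file mirroring the Console settings above):
gcloud alpha monitoring policies create --policy-from-file=policy.json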