Commit 91ba079 (parent: 034475c), showing 12 changed files with 319 additions and 52 deletions.
# [Create and Manage Cloud Resources: Challenge Lab](https://www.qwiklabs.com/focuses/10258?parent=catalog)

## Overview

This lab is recommended for students who have enrolled in the labs in the [Create and Manage Cloud Resources](https://google.qwiklabs.com/quests/120) quest. Be sure to review those labs before starting this lab. Are you ready for the challenge?

Topics tested:
* Create an instance
* Create a 3-node Kubernetes cluster and run a simple service
* Create an HTTP(s) load balancer in front of two web servers

## Challenge Scenario

You have started a new role as a Junior Cloud Engineer for Jooli, Inc. You are expected to help manage the infrastructure at Jooli. Common tasks include provisioning resources for projects.

You are expected to have the skills and knowledge for these tasks, so step-by-step guides are not provided.

Some Jooli, Inc. standards you should follow:

* Create all resources in the default region or zone, unless otherwise directed.
* Naming normally uses the format team-resource; for example, an instance could be named nucleus-webserver1.
* Allocate cost-effective resource sizes. Projects are monitored, and excessive resource use will result in the containing project's termination (and possibly yours), so plan carefully. This is the guidance the monitoring team is willing to share: unless directed, use f1-micro for small Linux VMs, and use n1-standard-1 for Windows or other applications, such as Kubernetes nodes.
# Create and Manage Cloud Resources: Challenge Lab
# https://www.qwiklabs.com/focuses/10258

# 1. Create a project jumphost instance (zone: us-east1-b)
gcloud compute instances create nucleus-jumphost \
    --zone="us-east1-b" \
    --machine-type="f1-micro" \
    --boot-disk-size=10GB
# If the progress check fails, create the instance manually in the console.

# 2. Create a Kubernetes service cluster
gcloud config set compute/zone us-east1-b
gcloud container clusters create nucleus-jumphost-webserver1
gcloud container clusters get-credentials nucleus-jumphost-webserver1
kubectl create deployment hello-app --image=gcr.io/google-samples/hello-app:2.0
kubectl expose deployment hello-app --type=LoadBalancer --port 8080
kubectl get service

# 3. Create the web server frontend
## 3.1 Create instance template
cat << EOF > startup.sh
#! /bin/bash
apt-get update
apt-get install -y nginx
service nginx start
sed -i -- 's/nginx/Google Cloud Platform - '"\$HOSTNAME"'/' /var/www/html/index.nginx-debian.html
EOF
gcloud compute instance-templates create nginx-template \
    --metadata-from-file startup-script=startup.sh

## 3.2 Create target pool
gcloud compute target-pools create nginx-pool
# NOTE: create it in the us-east1 region

## 3.3 Create managed instance group
gcloud compute instance-groups managed create nginx-group \
    --base-instance-name nginx \
    --size 2 \
    --template nginx-template \
    --target-pool nginx-pool

## 3.4 Create firewall rule and forwarding rule for the target pool
gcloud compute firewall-rules create www-firewall --allow tcp:80
gcloud compute forwarding-rules create nginx-lb \
    --region us-east1 \
    --ports=80 \
    --target-pool nginx-pool

## 3.5 Create health check and set named ports
gcloud compute http-health-checks create http-basic-check
gcloud compute instance-groups managed set-named-ports nginx-group \
    --named-ports http:80

## 3.6 Create backend service
gcloud compute backend-services create nginx-backend \
    --protocol HTTP --http-health-checks http-basic-check --global
gcloud compute backend-services add-backend nginx-backend \
    --instance-group nginx-group \
    --instance-group-zone us-east1-b \
    --global

## 3.7 Create URL map and target HTTP proxy
gcloud compute url-maps create web-map \
    --default-service nginx-backend
gcloud compute target-http-proxies create http-lb-proxy \
    --url-map web-map

## 3.8 Create forwarding rule
gcloud compute forwarding-rules create http-content-rule \
    --global \
    --target-http-proxy http-lb-proxy \
    --ports 80
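
# (Optional) Verify the HTTP load balancer. This extra check is not part of the
# lab's graded tasks, and the forwarding rule's IP can take several minutes to
# start serving traffic.
LB_IP=$(gcloud compute forwarding-rules describe http-content-rule \
    --global --format='value(IPAddress)')
curl -s http://$LB_IP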

labs/gsp315_perform-foundational-infrastructure-tasks-in-google-cloud/readme.md (17 additions)
# [Perform Foundational Infrastructure Tasks in Google Cloud: Challenge Lab](https://www.qwiklabs.com/focuses/10379?parent=catalog)

## Your challenge

You are now asked to help a newly formed development team with some of their initial work on a new project around storing and organizing photographs, called memories. You have been asked to assist the memories team with the initial configuration of their application development environment; you receive a request to complete the following tasks:

* Create a bucket for storing the photographs.
* Create a Pub/Sub topic that will be used by a Cloud Function you create.
* Create a Cloud Function.
* Remove the previous cloud engineer's access from the memories project.

Some Jooli Inc. standards you should follow:

* Create all resources in the **us-east1** region and **us-east1-b** zone, unless otherwise directed.
* Use the project VPCs.
* Naming is normally team-resource, e.g. an instance could be named **kraken-webserver1**.
* Allocate cost-effective resource sizes. Projects are monitored and excessive resource use will result in the containing project's termination (and possibly yours), so beware. This is the guidance the monitoring team is willing to share: unless directed, use **f1-micro** for small Linux VMs and **n1-standard-1** for Windows or other applications such as Kubernetes nodes.

labs/gsp315_perform-foundational-infrastructure-tasks-in-google-cloud/script.sh (20 additions)
export PROJECT_ID=$DEVSHELL_PROJECT_ID

# 1. Create a bucket
gsutil mb gs://$PROJECT_ID

# 2. Create a Pub/Sub topic
gcloud pubsub topics create $PROJECT_ID

# 3. Create the Cloud Function
# Go to Cloud Functions > Create Function
#   Trigger: Cloud Storage
#   Event type: Finalize/Create
#   Entry point: thumbnail
#   Runtime: Node.js
# Fill index.js and package.json with the scripts given in the lab.
# Replace line 15 in index.js; in this case, fill it with your project ID.
# Upload one JPG or PNG image into the bucket.
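
# A hedged CLI alternative to the console steps above: a sketch that assumes the
# function source (index.js and package.json) is in the current directory and that
# a first-generation Node.js runtime such as nodejs14 is available in the lab.
gcloud functions deploy thumbnail \
    --region us-east1 \
    --runtime nodejs14 \
    --entry-point thumbnail \
    --trigger-bucket $PROJECT_ID \
    --source .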

# 4. Remove the previous cloud engineer's access
# Go to IAM > find your second username > click the pencil icon > remove the role
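
# A hedged CLI equivalent (a sketch; the member and role are placeholders, so use
# the second lab username and the role it actually holds, often Viewer in this lab):
# gcloud projects remove-iam-policy-binding $PROJECT_ID \
#     --member="user:SECOND_LAB_USERNAME" \
#     --role="roles/viewer"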

labs/gsp321_set-up-and-configure-a-cloud-environment-in-google-cloud-challenge-lab/guide.txt (1 addition, 1 deletion)

labs/gsp344_serverless-firebase-development-challenge-lab/readme.md (13 additions)
# [Serverless Firebase Development: Challenge Lab](https://www.qwiklabs.com/focuses/14677?parent=catalog)

## Prerequisites

In this challenge lab you will be assessed on your knowledge of the following areas:
* Firestore
* Cloud Run
* Cloud Build
* Container Registry

## Challenge scenario

In this lab you will create a frontend solution using a REST API and a Firestore database. Cloud Firestore is a NoSQL document database that is part of the Firebase platform, where you can store, sync, and query data for your mobile and web apps at scale. The lab content is based on resolving a real-world scenario through the use of Google Cloud serverless infrastructure.

labs/gsp344_serverless-firebase-development-challenge-lab/script.sh (40 additions)
gcloud config set project $(gcloud projects list --format='value(PROJECT_ID)' --filter='qwiklabs-gcp')
git clone https://github.com/rosera/pet-theory.git

# 1. Create the Firestore database
# Go to Firestore > select Native mode > Location: nam5 > Create Database
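
# A hedged CLI alternative (a sketch: flag names vary across gcloud releases, and
# older versions expose this command only under alpha/beta, so the console route
# above is the safe path):
# gcloud firestore databases create --region=nam5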

# 2. Populate the Firestore database
cd pet-theory/lab06/firebase-import-csv/solution
npm install
node index.js netflix_titles_original.csv

# 3. Cloud Build REST API staging
cd ~/pet-theory/lab06/firebase-rest-api/solution-01
npm install
gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.1
gcloud beta run deploy netflix-dataset-service --image gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.1 --allow-unauthenticated
# When prompted, choose option 1 (fully managed Cloud Run) and the us-central1 region

# 4. Cloud Build REST API production
cd ~/pet-theory/lab06/firebase-rest-api/solution-02
npm install
gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.2
gcloud beta run deploy netflix-dataset-service --image gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.2 --allow-unauthenticated
# Go to Cloud Run, click netflix-dataset-service, and copy the URL
SERVICE_URL=<copy url from your netflix-dataset-service>
curl -X GET $SERVICE_URL/2019
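
# A hedged way to grab the service URL without the console (a sketch; assumes the
# service was deployed to us-central1 on managed Cloud Run as above):
# SERVICE_URL=$(gcloud run services describe netflix-dataset-service \
#     --platform managed --region us-central1 --format='value(status.url)')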

# 5. Cloud Build frontend staging
cd ~/pet-theory/lab06/firebase-frontend/public
nano app.js  # comment line 3 and uncomment line 4, then insert your netflix-dataset-service URL
npm install
cd ~/pet-theory/lab06/firebase-frontend
gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/frontend-staging:0.1
gcloud beta run deploy frontend-staging-service --image gcr.io/$GOOGLE_CLOUD_PROJECT/frontend-staging:0.1
# When prompted, choose option 1 (fully managed Cloud Run) and the us-central1 region

# 6. Cloud Build frontend production
gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/frontend-production:0.1
gcloud beta run deploy frontend-production-service --image gcr.io/$GOOGLE_CLOUD_PROJECT/frontend-production:0.1
# When prompted, choose option 1 (fully managed Cloud Run) and the us-central1 region

labs/gsp388_monitor-and-log-with-google-cloud-operations-suite-challenge-lab/guide.txt (40 additions)
# 1. Check that Cloud Monitoring has been enabled
Open Monitoring from the console.

# 2. Check that the video queue length custom metric has been created
Check Monitoring > Dashboard > Media_Dashboard.
Open VM Instances:
- Stop the video-queue-monitor instance
- Edit the video-queue-monitor instance:
  - Go to Custom metadata
  - In startup-script:
    - replace the MY_PROJECT_ID value with your project ID
    - replace the MY_GCE_INSTANCE_ID value with your video-queue-monitor instance ID (shown near the top of the instance details page)
    - replace the MY_GCE_INSTANCE_ZONE value with us-east1-b
  - Save
- Start the video-queue-monitor instance
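
A hedged shortcut for looking up the values used above (a sketch; the instance name and zone follow the lab, and the metadata edit itself is still done in the console):
  gcloud config get-value project    # your project ID
  gcloud compute instances describe video-queue-monitor \
      --zone us-east1-b --format='value(id)'    # the instance ID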

# 3. Check that a custom log-based metric for large video upload rate has been created
Go to the Logs Explorer:
- Enter textPayload=~"file_format\: ([4,8]K).*" in the query box and run it
- Click Actions > Create Metric, name it "large_video_upload_rate", then click Create Metric
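
The same log-based metric can also be created from Cloud Shell (a sketch; the description text is an assumption, and the filter mirrors the query above):
  gcloud logging metrics create large_video_upload_rate \
      --description="Rate of large (4K/8K) video uploads" \
      --log-filter='textPayload=~"file_format\: ([4,8]K).*"'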

# 4. Check that custom metrics for the video service have been added to the media dashboard
Go to Monitoring > Dashboard:
- Click Media_Dashboard
- Add Chart:
  - Resource type: VM Instance
  - Metric: OpenCensus/my.videoservice.org/measure/input_queue_size (uncheck "Only show active")
  - Filter: instance_id, select your video-queue-monitor instance ID (from step 2), then Apply
  - Save
- Add Chart:
  - Resource type: VM Instance
  - Metric: logging/user/large_video_upload_rate

# 5. Check that an alert has been created for large video uploads
Go to Monitoring > Alerting:
- Create Policy
  - Metric: logging/user/large_video_upload_rate
  - Threshold: 3
  - For: 1 minute
  - Name the alert "large video uploads", then Save
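
A hedged CLI equivalent for the alert (a sketch; the policy file name is arbitrary, the JSON below is an assumed minimal policy document, and the aggregation settings may need adjusting for the grader):
cat > video_alert_policy.json << 'EOF'
{
  "displayName": "large video uploads",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "large_video_upload_rate above 3",
      "conditionThreshold": {
        "filter": "metric.type=\"logging.googleapis.com/user/large_video_upload_rate\" AND resource.type=\"gce_instance\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 3,
        "duration": "60s",
        "aggregations": [
          { "alignmentPeriod": "60s", "perSeriesAligner": "ALIGN_RATE" }
        ]
      }
    }
  ]
}
EOF
gcloud alpha monitoring policies create --policy-from-file=video_alert_policy.json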