- Authentication & Authorization in Google Cloud with Cloud IAM
- Asynchronous Communication in Google Cloud with Cloud Pub/Sub
- Operations in Google Cloud Platform
- Go to search bar > type 'compute engine' and hit enter > create
- Provide configurations like name, region, machine type, OS details, etc., and then click on create.
Readings:
- Once the VM is created, install Apache on it and, on the index page, let's just keep the VM's hostname and IP address.
- To launch the index page hosted on the VM, use the External IP. Make sure you have allowed HTTP/HTTPS traffic on this VM.
- Go to Search bar > type External IP Addresses > a list of all the external IP addresses assigned to VMs can be found here
- Now, to reserve an IP (create a static IP), click on Reserve Static Address and provide the configurations.
- Once the static IP is created, it needs to be assigned to the VM. For that, click on Change and select the VM.
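The same flow is available from the CLI; a hedged sketch (address name and region are placeholders, not from the demo):

```shell
# Reserve a static external IP address (name and region are illustrative)
gcloud compute addresses create my-static-ip --region=us-central1

# List all external IP addresses in the project
gcloud compute addresses list
```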
We will configure our startup script during VM creation. For that, while creating the VM and configuring the options, go to Management > under Automation, you will find the Startup script text area. Add your script there and create the instance.
To test it, click on the external IP and verify everything's working as expected.
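The startup script for this kind of demo is typically a short bash script; a minimal sketch, assuming a Debian-based image and the Apache index page described earlier:

```shell
#!/bin/bash
# Runs as root when the VM boots (Debian-based image assumed)
apt update
apt install -y apache2
# Publish the VM's hostname and IP address on the index page
echo "Hostname: $(hostname)" > /var/www/html/index.html
echo "IP: $(hostname -I)" >> /var/www/html/index.html
```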
Click on create and a new instance template will be created. Based on it, VMs can now be created.
To create an image from a disk, ensure that the VM instance attached to that disk is in a stopped state.
Provide all the required configurations and click on create. Once created, you can find it under the Images tab.
Now, you can create an instance template, use this custom image under the boot disk section, and create VM instances based on this template.
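These GUI steps have CLI equivalents; a hedged sketch with placeholder names:

```shell
# Create a custom image from a stopped VM's disk
gcloud compute images create my-image \
  --source-disk=my-disk --source-disk-zone=us-central1-a

# Create an instance template that boots from the custom image
gcloud compute instance-templates create my-template --image=my-image

# Create a VM instance based on the template
gcloud compute instances create my-vm \
  --source-instance-template=my-template --zone=us-central1-a
```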
Proceed with the VM creation and configure the options under 'Management, security, disks, networking, sole tenancy'.
Click on 'Create Budget' and configure as per your requirements.
There are 2 ways of creating GPU machines:
1st Way
2nd Way
Select machine type and then add GPU to it.
Provide configurations as per your requirements and click on create.
Rolling Update
To configure a rolling update, go to Instance Groups > Update VMs.
In this scenario, let's say you want this MIG to run on a different instance template. In summary, you want to update the instance template.
After that, select the update type. With the Automatic update type, you have a few more configuration options to select.
Rolling Restart
In the last section, we looked at a few ways to make updates to our managed instance group, and we concluded with setting the instance template. However, one thing that we did not discuss is: when would the template be updated? Will it be updated immediately?
Note that the managed instance group configuration would be updated immediately, but would the instances be immediately updated to that template? Will there be any downtime while making the update to that specific template?
The answer is that, after updating the instance template, you can configure how to do the update, that is, how you roll out the new template to the existing virtual machines.
There are multiple commands that you can make use of. You can make use of recreate-instances and update-instances, which we have used earlier. Or there is another command you can use, called rolling-action start-update.
When you're doing recreate-instances, you are doing everything manually. However, you can also automate the upgrade to the new template version in a more controlled fashion using rolling-action start-update.
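A sketch of both approaches (group, template, instance, and zone names are placeholders):

```shell
# Manual approach: recreate specific instances in the MIG yourself
gcloud compute instance-groups managed recreate-instances my-mig \
  --instances=my-instance --zone=us-central1-a

# Controlled, automated rollout of a new instance template
gcloud compute instance-groups managed rolling-action start-update my-mig \
  --version=template=my-template-v2 \
  --max-surge=2 --max-unavailable=0 \
  --zone=us-central1-a
```

max-surge controls how many extra instances can be created during the rollout, and max-unavailable controls how many can be down at a time; together they determine how aggressive the update is.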
Readings:
Select type of load balancing as per your requirement.
Let's create an HTTP(S) load balancer. For that, click on Start Configuration and provide the configurations.
Click on Continue and configure the backend, host and path rules, and the frontend.
Readings:
Readings:
- Go to search bar > type 'App Engine' > click on 'Create application'
- Select Region and click on 'Create app'
- Select language and environment and click on 'Next'
- The App Engine app will be created successfully.
- Now, to deploy a service to this app, you can use either the GCloud SDK or Cloud Shell. Let's use Cloud Shell.
- For demo purposes, let's write a simple Python Flask program via the inbuilt editor in Cloud Shell (focus only on the code inside the default-service directory for now).
- Inside app.yaml, you have the application configuration required for App Engine. For instance, 'runtime' is one of the configurations, which tells App Engine about the application environment: runtime: python39
- To deploy, use the command gcloud app deploy. Ensure you are on the right project.
- In this deployment, we didn't specify the service name, so it will be deployed as the default service. Also, we didn't specify a version for this specific deployment.
- Once this deployment is successful, you can go to the URL at which your app is exposed and check if it is working fine.
App Dashboards
In App Dashboards, you can see metrics and a summary.
Services
Versions
You can also get above information using commands:
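The commands referred to above are presumably along these lines:

```shell
# List the services of the current App Engine app
gcloud app services list

# List the deployed versions
gcloud app versions list
```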
Now, let's say you want to deploy another version of it. For that, you can do it like below:
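A plausible command for this, assuming a version id of v2 (the id is an illustrative placeholder):

```shell
# Deploy the same service again as a new, explicitly named version
gcloud app deploy --version=v2
```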
During the next version's deployment, your application will keep running on the current version, so there will be no downtime. As soon as the next version is deployed, traffic will be shifted to the new version.
To find out the link of the active version, you can use the gcloud app browse command. To find out the link for a specific version, just add the --version=<version> flag.
Previously, we saw that after deploying a new version, full traffic was automatically shifted to it. However, sometimes you don't really want this; rather, you want to slowly transfer the traffic to the new version after doing some testing.
Let's suppose, currently our app is running on v2 and now, we are updating it to v3.
If you deploy with the --no-promote flag, it will disable the transfer of all traffic to the newly deployed version.
After verifying that everything is working fine, you can now shift or split the traffic between multiple versions.
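A hedged sketch of this flow (version ids are placeholders):

```shell
# Deploy v3 without shifting traffic to it
gcloud app deploy --version=v3 --no-promote

# After testing v3, split traffic 50/50 between v2 and v3
gcloud app services set-traffic default --splits=v2=0.5,v3=0.5
```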
Readings:
In addition to the default service, you can create multiple services. In our example, we will create a new service whose code is inside my-first-service. We will deploy it the same way. After the deploy, the new service is created with its first version.
If you check the services, you will find 2 services running.
To find the link for the new service's version, do it like below. Just add the --version flag if you are looking for a specific version of that service.
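A plausible form of those commands (the service name is taken from the example above; the version id is a placeholder):

```shell
# Open/print the URL of the new service
gcloud app browse --service=my-first-service

# URL of a specific version of that service
gcloud app browse --service=my-first-service --version=v1
```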
In search bar, type 'Kubernetes Engine' > Create Cluster
You will see 2 cluster modes.
For now, click on Standard, provide the configurations, and click on create.
Now, once the cluster is created successfully, connect to it and you can perform K8s-related tasks using the kubectl CLI.
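Connecting usually looks like this (cluster name and zone are placeholders):

```shell
# Fetch credentials so that kubectl talks to the new cluster
gcloud container clusters get-credentials my-cluster --zone=us-central1-a

# Verify connectivity by listing the cluster's nodes
kubectl get nodes
```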
Readings:
Readings:
Readings:
Readings:
Readings:
Let's quickly see where you can actually look at these options.
So, when we create a virtual machine, that's when you can attach block storage devices with it. You can either attach persistent disks or local SSDs.
During creation of a VM, if you go to the boot disk section, you will see that a persistent disk is attached, from which your operating system is loaded. Whenever we create a virtual machine, a boot disk is automatically attached to it. So, by default, a persistent disk called the boot disk is attached to your VM.
If you'd want to have additional persistent disks attached, then go to Disks > Add New Disk
Remember, Local SSDs are available only with selected machine types.
Readings:
Readings:
We know that the boot disk is a persistent disk. However, if you delete your VM, the boot disk will also get deleted. To avoid that, you can configure it like below:
While creating the VM
Go to Boot Disk > Change > Show Advanced Configuration > Deletion Rule > Keep Boot Disk
In an existing VM
Go to your instance > Edit > Under Storage (Boot Disk) section > Deletion Rule > Keep Disk
Creating Snapshot
Once your snapshot is created, you can create a VM instance using it. But remember that, for this, the snapshot must be created from a boot disk.
You can also create a disk using snapshot.
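A CLI sketch of both operations (disk, snapshot, and zone names are placeholders):

```shell
# Create a snapshot of a disk
gcloud compute disks snapshot my-disk \
  --snapshot-names=my-snapshot --zone=us-central1-a

# Create a new disk from that snapshot
gcloud compute disks create my-restored-disk \
  --source-snapshot=my-snapshot --zone=us-central1-a
```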
Scheduling Snapshots
A snapshot schedule is not really linked to a disk; it's a general schedule. Once created, you need to explicitly assign it to a disk.
Assigning snapshot schedule to a disk.
After creating a machine image, instance can be created based on it.
To get an idea about object and cloud storage, we'll start with demo.
Go to search > type Cloud Storage and you might see a few buckets like below. A bucket is kind of a container for all the objects that you would want to place in cloud storage. We will talk about it at length later.
Let's create a new bucket. For that, click on 'Create Bucket'. Now you have to provide a bucket name, which must be globally unique.
Next, choose where to store the data.
Now choose the default storage class.
Let's keep other options with default values.
One of the important things to notice here is that there is no mention of the size of the storage that we would want. All that we said is that this is the bucket and this is where we want to create it.
Once bucket is created, you may go ahead and store your data in it.
In a bucket, objects are stored as key-value pairs.
In the below image, '2030/10/course1.png' is the key and the value is the content of the course1.png image.
Whenever we store files, we will not be updating them bit by bit. So, whenever we want to change a file, we will create a new image and then upload the entire image as such. We treat the entire object as a single unit; we don't do partial updates.
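The bucket-and-object workflow above can be sketched with gsutil (the bucket name is a placeholder and must be globally unique):

```shell
# Create a bucket
gsutil mb gs://my-unique-bucket-12345

# Upload an object; '2030/10/course1.png' becomes the object's key
gsutil cp course1.png gs://my-unique-bucket-12345/2030/10/course1.png

# List the objects in the bucket
gsutil ls gs://my-unique-bucket-12345/**
```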
Readings:
Storage classes can also be defined at object level. So, in the same bucket, you can have different objects with different storage classes.
To configure the lifecycle for your bucket, go to 'Lifecycle' tab and then 'Add a rule'
To configure server side encryption, go to 'Configuration' tab of your bucket and click on 'Encryption Type'.
You can also add encryption at the time of creating bucket in 'Advanced Setting' section.
- You have resources in the cloud (for example, a virtual server, a database, etc.).
- You have identities (human and non-human) that need to access those resources and perform actions.
  - e.g., start, stop, or terminate a virtual server
- How do you identify users in the cloud?
- How do you configure the resources they can access?
- How can you configure what actions to allow?
- In GCP, Identity and Access Management (Cloud IAM) provides this service:
  - Authentication - Is it the right user?
  - Authorization - Do they have the right access?
- Identities can be:
  - A GCP User (Google Account or Externally Authenticated User)
  - A Group of GCP Users
  - An Application running in GCP
  - An Application running in your data center
  - Unauthenticated users
- Provides very granular control:
  - to perform a single action
  - on a specific cloud resource
  - from a specific IP address
  - during a specific time window
Scenario
I want to provide access to manage a specific cloud storage bucket to a colleague of mine.
Important generic concepts:
- Member: My colleague
- Resource: Specific cloud storage bucket
- Action: Upload/Delete Objects
In Google Cloud IAM,
- Roles are sets of permissions to perform specific actions on specific resources.
  - Roles do NOT know about members. It is all about permissions.
- How do you assign permissions to a member?
  - By using a policy. In a policy, you assign (or bind) a role to a member.
Solution
- Choose a Role with the right permissions (Ex: Storage Object Admin).
- Create a policy which binds the member with that Role.
Note
IAM in AWS is very different from GCP.
Readings:
- Roles are permissions:
  - Perform some set of actions on some set of resources.
- There are 3 types of roles:
  - Basic Roles (or Primitive Roles) - Owner/Editor/Viewer
    - Viewer (roles.viewer) - Read-only actions
    - Editor (roles.editor) - Viewer + Edit actions
    - Owner (roles.owner) - Editor + Manage Roles and Permissions + Billing
    - Basic roles are the earliest version and are not recommended for use in production
  - Predefined Roles - Fine-grained roles predefined and managed by Google.
    - Different roles for different purposes, for example: Storage Admin, Storage Object Admin, Storage Object Viewer, Storage Object Creator
  - Custom Roles - When predefined roles are NOT sufficient, you can create your own custom roles
Readings:
Let's play with roles. For that, type roles in the search bar and press enter. You will be presented with the variety of roles available in GCP. Let's look at one role of each category.
1. Basic Roles
In the filter, type Name:roles/viewer and you will see that more than 2000 permissions are associated with this role. Analyze all such permissions associated with it.
2. Predefined Roles
In the filter, type Storage Object Admin and you will see that more than 10 permissions are associated with this role. Analyze all such permissions associated with it.
3. Custom Roles
Follow this link to know how to create custom roles.
GUI
The following things are covered in this demo:
- In the search bar, type IAM and try adding a new member and modifying existing members' roles.
- Check out the Policy Troubleshooter.
CLI
Below are a few important commands used to perform IAM-related tasks:
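Since the command list itself is not included in these notes, here is a hedged sketch of commonly used IAM commands (project id, member, and role are placeholders):

```shell
# View the IAM policy of a project
gcloud projects get-iam-policy my-project-id

# Bind a role to a member (add a policy binding)
gcloud projects add-iam-policy-binding my-project-id \
  --member="user:colleague@example.com" --role="roles/storage.objectAdmin"

# Remove that binding
gcloud projects remove-iam-policy-binding my-project-id \
  --member="user:colleague@example.com" --role="roles/storage.objectAdmin"

# Inspect the permissions included in a role
gcloud iam roles describe roles/viewer
```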
Readings:
Scenario
An application is running on a VM and we want to give this VM access to create a cloud storage bucket.
Solution
- Create a service account with the roles Compute Instance Admin & Storage Admin, and add users to grant access to this service account if required.
- Create a VM and attach this service account.
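A CLI sketch of the solution (service account, project, VM, and zone names are placeholders):

```shell
# Create the service account
gcloud iam service-accounts create my-sa --display-name="Demo service account"

# Grant it the Storage Admin role on the project
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:my-sa@my-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.admin"

# Create the VM with this service account attached
gcloud compute instances create my-vm \
  --service-account=my-sa@my-project-id.iam.gserviceaccount.com \
  --scopes=cloud-platform --zone=us-central1-a
```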
Use Case 1 - VM using Cloud Storage
Use Case 2 - Connect on premises machine to cloud storage
Until now, we have been talking about resources which are present in Google Cloud. Now, let's suppose there is a server present outside GCP.
Use Case 3 - Connect on premises machine to cloud storage but for few hours
Readings:
Readings:
Suppose you want to expose buckets, let's say, on a public website; below are the steps to do so:
Readings:
Readings:
Readings:
- To begin with Cloud SQL, in the search bar, type Cloud SQL and select SQL.
- Click on Create Instance, select the database provider, configure it, and create the instance.
- Once your instance is up, create the database.
- To interact with the database (performing database-related operations like creating tables, etc.), go to Overview > Connect using Cloud Shell.
- Once connected to your database, you can perform your DB-related tasks.
Commands Used in this demo
# Cloud SQL
gcloud sql connect my-first-cloud-sql-instance --user=root --quiet
gcloud config set project glowing-furnace-304608
gcloud sql connect my-first-cloud-sql-instance --user=root --quiet
use todos
create table user (id integer, username varchar(30) );
describe user;
insert into user values (1, 'Ranga');
select * from user;
- Search Bar > Cloud Spanner > Create Instance
- Configure it as per your requirements.
- In 'Allocate Compute Capacity', you can choose either 'node' or 'processing unit' (introduced recently).
- Once the Cloud Spanner instance is created, proceed with the database and table creation.
# Cloud Spanner
CREATE TABLE Users (
  UserId INT64 NOT NULL,
  UserName STRING(1024)
) PRIMARY KEY (UserId);
- Search Bar > Firestore > Select desired mode. Cloud Firestore is the next generation of Cloud Datastore. So, if you're creating new projects, the recommendation is to go for Native mode. However, if you have old Datastore projects that you are moving over to Firestore, the recommendation is to go with Datastore mode. Remember, once you choose the mode, it is permanent for your project.
- Once the mode is chosen, select the location, which can be regional or multi-regional.
- After doing all of the above, click on 'Create Database' to create the database.
- Now, to add data to the database, click on 'Start Collection'. A collection is a set of one or more documents that contain data. It is similar to a table in a relational database. Once the collection is created, to store data you have to create a document, under which you can define your fields.
- One of the important things to remember about Datastore or Firestore is that it is hierarchical. It means that inside a document, you can add another collection.
- Search Bar > Memorystore > Create Instance (for Redis or Memcached, based on your requirements)
- Provide configuration details and click on create.
Whenever you look at any of the import or export scenarios in Google Cloud Platform, you will see that cloud storage is typically involved. Whenever you want to move data from one place to another, you would first move it to cloud storage and then move it to the destination. That's the reason why most of these services support export to and from cloud storage.
Readings:
- Asynchronous Communication — Methods and Strategies
- Understanding Synchronous and Asynchronous Communication
Readings:
Few Commands for Reference
> gcloud config set project <project_name>
> gcloud pubsub topics create <topic_name>
> gcloud pubsub subscriptions create <subscription_name> --topic=<topic_name>
> gcloud pubsub topics publish <topic_name> --message=<message_content>
> gcloud pubsub subscriptions pull <subscription_name>
> gcloud pubsub subscriptions pull <subscription_name> --auto-ack
> gcloud pubsub topics list
> gcloud pubsub topics delete <topic_name>
> gcloud pubsub topics list-subscriptions <topic_name>
Readings:
Readings:
Follow the links below to see the creation of VPCs in GCP:
- How to Create VPC (Virtual Private Cloud) Network in GCP - linuxtechi
- VPC Creation on Google Cloud Platform(GCP) - Medium
Readings:
Readings:
In this section, let's look at how you can perform operations in the cloud. Developing applications is important. However, maintaining applications and maintaining them in production is very important as well. That's where monitoring, logging, tracing, debugging, and all these kind of things become really, really important.
Readings:
Readings:
This is a practice section where you need to create a bucket and then associate a Cloud Function with it, which will be triggered as soon as any object is uploaded to the bucket. In this Cloud Function, you can implement anything which generates a log when an object is uploaded to the bucket.
The whole purpose of this exercise is to demonstrate Cloud Logging.
Readings:
Note
In the free tier, we don't have organizations and folders. We directly create projects.
Readings:
Let's look at different types of IAM Members or Identities:
Readings:
Note
An organization policy always overrides whatever is configured in IAM. So, if an organization policy prohibits the creation of resources in, say, a specific region, then even though a user might have that access through IAM, they will not be able to create the resource in that specific region, because the organization policy has the highest priority.
Readings:
Click here to access Google Price Calculator.
Readings:
Readings: