diff --git a/README.md b/README.md
index 32a501a..3c35a67 100644
--- a/README.md
+++ b/README.md
@@ -1,25 +1,35 @@
 # Bedrock
-[![Build Status](https://dev.azure.com/epicstuff/bedrock/_apis/build/status/Microsoft.bedrock?branchName=master)](https://dev.azure.com/epicstuff/bedrock/_build/latest?definitionId=54&branchName=master)
-[![Go Report Card](https://goreportcard.com/badge/github.com/microsoft/bedrock)](https://goreportcard.com/report/github.com/microsoft/bedrock)
+Bedrock provides patterns, implementation, and automation for operating production Kubernetes clusters based on a GitOps workflow, building on the best practices we have discovered while operationalizing dozens of Kubernetes deployments with customers.
-Bedrock is automation and tooling for operationalizing production Kubernetes clusters with a [GitOps](./gitops) workflow. GitOps enables you to build a workflow around your deployments and infrastructure similar to a typical development workflow: pull request-based operational changes, point-in-time auditability into what was deployed on the Kubernetes cluster, and non-repudiation about who made those changes.
-
-This GitOps workflow revolves around [Fabrikate](https://github.com/Microsoft/fabrikate) definitions that enable you to specify your deployments at a higher level of abstraction that separates structure from configuration. This makes them easier to maintain than directly specifying them in Kubernetes resource manifest YAML or cobbling together shell scripts to build Kubernetes resource manifests from templating solutions. Fabrikate definitions also allow you to leverage common pieces across many deployments and to share structure amongst different clusters differentiated only by config.
-
-Bedrock also provides [guidance and automation](./gitops/README.md) for building GitOps pipelines with a variety of popular CI/CD orchestrators.
-
-Finally, Bedrock provides a set of Terraform environment templates for deploying your Kubernetes clusters, including automation for setting up the GitOps Operator [Flux](https://github.com/fluxcd/flux) in your cluster.
+Bedrock helps you:
+* Define and maintain infrastructure deployments across multiple clusters.
+* Deploy and automate a secure end-to-end GitOps workflow.
+* Deploy and manage service workloads from source code to their deployment in-cluster.
+* Observe ongoing deployments across multiple services, their revisions, and multiple clusters utilizing those services.
 ## Getting Started
+* [Installing Prerequisites](./tools/prereqs/README.md)
+* [Walkthrough: Deploying a First Workload](./docs/firstWorkload)
+* [Deep Dive: Why GitOps?](./docs/why-gitops.md)
+
+## Infrastructure Management
+* [Walkthrough: Single Cluster Infrastructure Deployment](./docs/singleKeyvault/README.md)
+* [Deep Dive: Multicluster and "Day 2" Infrastructure Scenarios](./docs/multicluster.md)
+* [CLI Reference](https://github.com/CatalystCode/spk/blob/master/guides/cloud-infra-management.md)
-A Bedrock deployment follows three steps at a high level:
+## GitOps Pipeline
+* [Walkthrough: GitOps Pipeline](./docs/hldToManifest.md)
+* [Deep Dive: The End to End Deployment Pipeline](./docs/gitops-pipeline.md)
-1. [Create and deploy](./cluster/README.md) a GitOps enabled Kubernetes cluster.
-2. Define a [Fabrikate](https://github.com/microsoft/fabrikate) high level deployment definition.
-3. 
[Setup a GitOps pipeline](./gitops/README.md) to automate deployments of this definition to this cluster based on typical application and cluster lifecycle events. +## Service Management +* [Walkthrough: Service Management](./docs/services.md) +* [Deep Dive: Service Lifecycle Management](https://github.com/CatalystCode/spk/blob/master/guides/building-helm-charts-for-spk.md) +* [CLI Reference](https://github.com/CatalystCode/spk/blob/master/guides/service-management.md) -The steps required to operationalize a production Kubernetes cluster can be pretty extensive, so we have also put together a [simple walkthrough for deploying a first cluster](./docs/azure-simple/README.md) that makes a great first step. +## Deployment Observability +* [Walkthrough: Observing Service Deployments](./docs/introspection.md) +* [CLI Reference](https://github.com/CatalystCode/spk/blob/master/guides/service-introspection.md) ## Community @@ -27,7 +37,7 @@ The steps required to operationalize a production Kubernetes cluster can be pret ## Contributing -We do not claim to have all the answers and would greatly appreciate your ideas and pull requests. +We do not claim to have all the answers and would greatly appreciate your ideas, issues, and pull requests. This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us diff --git a/docs/README.md b/docs/README.md deleted file mode 100644 index e75a24c..0000000 --- a/docs/README.md +++ /dev/null @@ -1,21 +0,0 @@ -# Bedrock documentation v1.0.0 - -Bedrock Documents --------- -* [Walkthrough azure-simple](./azure-simple/README.md) -* [Use Fabrikate in DevOps Pipeline to automate updates to Bedrock Deployment](./devops/README.md) - -Fabrikate Documents --------- -* [Generate manifest using Fabrikate to update running Bedrock cluster](./fabrikate/README.md) -* [Fabrikate](https://github.com/Microsoft/fabrikate) -* [Fabrikate cloud-native](https://github.com/microsoft/fabrikate-definitions/tree/master/definitions/fabrikate-cloud-native) - -Various --------- -* [Bedrock pipeline](https://github.com/microsoft/bedrock/blob/master/gitops/PipelineThinking.md) -* [Image Tag Release Pipeline](https://github.com/microsoft/bedrock/blob/master/gitops/azure-devops/ImageTagRelease.md) - -Slack Discussion Forum --------- -* [Bedrock Discussion](https://join.slack.com/t/bedrockco/shared_invite/enQtNjIwNzg3NTU0MDgzLTdiZGY4ZTM5OTM4MWEyM2FlZDA5MmE0MmNhNTQ2MGMxYTY2NGYxMTVlZWFmODVmODJlOWU0Y2U2YmM1YTE0NGI) diff --git a/docs/azure-simple/README.md b/docs/azure-simple/README.md deleted file mode 100644 index 7331c76..0000000 --- a/docs/azure-simple/README.md +++ /dev/null @@ -1,1069 +0,0 @@ -# A Walkthrough Deploying a Bedrock Environment - -This document walks through a Bedrock deployment. It does not include everything available using the [gitops](../../gitops/README.md) workflow. We deploy a Kubernetes cluster and create an empty repo for Flux updates. After the cluster is running we add a manifest file to the repo to demonstrate Flux automation. 
- -This walkthrough consists of the following steps: - -- [Prerequisites](#prerequisites) - - [Install the required tooling](#install-the-required-tooling) - - [Install the Azure CLI](#install-the-azure-cli) -- [Set Up Flux Manifest Repository](#set-up-flux-manifest-repository) - - [Generate an RSA key pair to use as the manifest repository deploy key](#generate-an-rsa-key-pair-to-use-as-the-manifest-repository-deploy-key) - - [Grant deploy key access to the manifest repository](#grant-deploy-key-access-to-the-manifest-repository) -- [Create an RSA Key Pair to use as node key](#create-an-rsa-key-pair--to-use-as-node-key) -- [Create an Azure Service Principal](#create-an-azure-service-principal) - - [Configure Terraform For Azure Access](#configure-terraform-for-azure-access) -- [Clone the Bedrock Repository](#clone-the-bedrock-repository) - - [Set Up Terraform Deployment Variables](#set-up-terraform-deployment-variables) - - [Deploy the Template](#deploy-the-template) - - [Terraform Init](#terraform-init) - - [Terraform Plan](#terraform-plan) - - [Terraform Apply](#terraform-apply) - - [Terraform State](#terraform-state) -- [Interact with the Deployed Cluster](#interact-with-the-deployed-cluster) - - [Deploy an update using Kubernetes manifest](#deploy-an-update-using-kubernetes-manifest) - -# Prerequisites - -Before starting the deployment, there are several required steps: - -- Install the required common tools (kubectl, helm, and terraform). See also [Required Tools](https://github.com/microsoft/bedrock/tree/master/cluster). Note: this tutorial currently uses [Terraform 0.12.6](https://releases.hashicorp.com/terraform/0.12.6/). -- Enroll as an Azure subscriber. The free trial subscription does not support enough cores to run this tutorial. -- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest). - -The following procedures complete the prerequisites and walk through the process of configuring Terraform and Bedrock scripts, deploying the cluster, and checking the deployed cluster's health. Then we add a new manifest file to demonstrate Flux update. - -## Install the required tooling - -This document assumes one is running a current version of Ubuntu. Windows users can install the [Ubuntu Terminal](https://www.microsoft.com/store/productId/9NBLGGH4MSV6) from the Microsoft Store. The Ubuntu Terminal enables Linux command-line utilities, including bash, ssh, and git that will be useful for the following deployment. _Note: You will need the Windows Subsystem for Linux installed to use the Ubuntu Terminal on Windows_. - -Ensure that the [required tools](https://github.com/microsoft/bedrock/tree/master/cluster#required-tools), are installed in your environment. Alternatively, there are [scripts](https://github.com/jmspring/bedrock-dev-env/tree/master/scripts) that will install `helm`, `terraform` and `kubectl`. In this case, use `setup_kubernetes_tools.sh` and `setup_terraform.sh`. The scripts install the tools into `/usr/local/bin`. - -## Install the Azure CLI - -For information specific to your operating system, see the [Azure CLI install guide](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest). You can also use [this script](https://github.com/jmspring/bedrock-dev-env/blob/master/scripts/setup_azure_cli.sh) if running on a Unix based machine. 
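As a convenience, on Debian/Ubuntu the install guide above amounts to a single documented command. The sketch below assumes a recent Ubuntu release; consult the linked guide for other platforms:

```bash
# Install the Azure CLI using Microsoft's documented Debian/Ubuntu install script
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Verify the install and sign in
az --version
az login
```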
-
-# Set Up Flux Manifest Repository
-
-We will deploy the Bedrock environment using the empty repo and then add a Kubernetes manifest that defines a simple Web application. The change to the repo will automatically update the deployment.
-
-To prepare the Flux manifest repository, we must:
-
-1. [Create the Flux Manifest Repository](#create-the-flux-manifest-repository)
-2. [Generate an RSA Key Pair to use as the Manifest Repository Deploy Key](#generate-an-rsa-key-pair-to-use-as-the-manifest-repository-deploy-key)
-3. [Grant Deploy Key access to the Manifest Repository](#grant-deploy-key-access-to-the-manifest-repository)
-
-## Create the Flux Manifest Repository
-
-[Create an empty git repository](https://github.com/new/) with a name that clearly signals that the repo is used for the Flux manifests. For example, `bedrock-deploy-demo`.
-
-Flux requires that the git repository have at least one commit. Initialize the repo with an empty commit.
-
-```bash
-git commit --allow-empty -m "Initializing the Flux Manifest Repository"
-```
-
-More documentation around Service Principals is available in the [Bedrock documentation].
-
-## Generate an RSA Key Pair to use as the Manifest Repository Deploy Key
-
-Generate the [deploy key](https://developer.github.com/v3/guides/managing-deploy-keys/#deploy-keys) using `ssh-keygen`. The public portion of the key pair will be uploaded to GitHub as a deploy key.
-
-Run: `ssh-keygen -b 4096 -t rsa -f ~/.ssh/gitops-ssh-key`.
-
-```bash
-$ ssh-keygen -b 4096 -t rsa -f ~/.ssh/gitops-ssh-key
-Generating public/private rsa key pair.
-Enter passphrase (empty for no passphrase):
-Enter same passphrase again:
-Your identification has been saved in /Users/jmspring/.ssh/gitops-ssh-key.
-Your public key has been saved in /Users/jmspring/.ssh/gitops-ssh-key.pub.
-The key fingerprint is:
-SHA256:jago9v63j05u9WoiNExnPM2KAWBk1eTHT2AmhIWPIXM jmspring@kudzu.local
-The key's randomart image is:
-+---[RSA 4096]----+
-|.=o.B= + |
-|oo E..= . |
-| + =..oo. |
-| . +.*o= |
-| o * S.. |
-| . * . . |
-|... o ... . |
-|... .o+.. . |
-| .o..===o. |
-+----[SHA256]-----+
-kudzu:azure-simple jmspring$
-```
-
-This creates the public and private keys for the Flux repository. We will add the public key to the repository under the following heading: [Grant Deploy Key Access to the Manifest Repository](#grant-deploy-key-access-to-the-manifest-repository). The private key is stored on the machine originating the deployment.
-
-## Grant Deploy Key Access to the Manifest Repository
-
-The public key of the [RSA key pair](#generate-an-rsa-key-pair-to-use-as-the-manifest-repository-deploy-key) previously created needs to be added as a deploy key. Note: _If you do not own the repository, you will have to fork it before proceeding_.
-
-First, display the contents of the public key: `more ~/.ssh/gitops-ssh-key.pub`. 
- -```bash -$ more ~/.ssh/gitops-ssh-key.pub -ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDTNdGpnmztWRa8RofHl8dIGyNkEayNR6d7p2JtJ7+zMj0HRUJRc+DWvBML4DvT29AumVEuz1bsVyVS2f611NBmXHHKkbzAZZzv9gt2uB5sjnmm7LAORJyoBEodR/T07hWr8MDzYrGo5fdTDVagpoHcEke6JT04AL21vysBgqfLrkrtcEyE+uci4hRVj+FGL9twh3Mb6+0uak/UsTFgfDi/oTXdXOFIitQgaXsw8e3rkfbqGLbhb6o1muGd1o40Eip6P4xejEOuIye0cg7rfX461NmOP7HIEsUa+BwMExiXXsbxj6Z0TXG0qZaQXWjvZF+MfHx/J0Alb9kdO3pYx3rJbzmdNFwbWM4I/zN+ng4TFiHBWRxRFmqJmKZX6ggJvX/d3z0zvJnvSmOQz9TLOT4lqZ/M1sARtABPGwFLAvPHAkXYnex0v93HUrEi7g9EnM+4dsGU8/6gx0XZUdH17WZ1dbEP7VQwDPnWCaZ/aaG7BsoJj3VnDlFP0QytgVweWr0J1ToTRQQZDfWdeSBvoqq/t33yYhjNA82fs+bR/1MukN0dCWMi7MqIs2t3TKYW635E7VHp++G1DR6w6LoTu1alpAlB7d9qiq7o1c4N+gakXSUkkHL8OQbQBeLeTG1XtYa//A5gnAxLSzxAgBpVW15QywFgJlPk0HEVkOlVd4GzUw== sl;jlkjgl@kudzu.local -``` - -Next, on the repository, select `Settings` -> `Deploy Keys` -> `Add deploy key`. Give your key a title and paste in the contents of your public key. Important: allow the key to have `Write Access`. - -![enter key](./images/addDeployKey.png) - -Click "Add key", and you should see: - -![key result](./images/deployKeyResult.png) - -## Create Azure Resource Group - -Note: You need to create a resource group in your subscription first before you apply terraform. Use the following command to create a resource group - -```bash -az group create -l westus2 -n testazuresimplerg -``` - -## Create an Azure Service Principal - -We use a single [Azure Service Principal](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals) for configuring Terraform and for the [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) cluster being deployed. In Bedrock, see the [Service Principal documention](https://github.com/microsoft/bedrock/tree/master/cluster/azure#create-an-azure-service-principal). - -[Login to the Azure CLI](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli) using the `az login` command. - -Get the Id of the subscription by running `az account show`. - -Then, create the Service Principal using `az ad sp create-for-rbac --role contributor --scopes "/subscriptions/7060bca0-1234-5-b54c-ab145dfaccef"` as follows: - -```bash -~$ az account show -{ - "environmentName": "AzureCloud", - "id": "7060bca0-1234-5-b54c-ab145dfaccef", - "isDefault": true, - "name": "jmspring trial account", - "state": "Enabled", - "tenantId": "72f984ed-86f1-41af-91ab-87acd01ed3ac", - "user": { - "name": "jmspring@kudzu.local", - "type": "user" - } -} -~$ az ad sp create-for-rbac --role contributor --scopes "/subscriptions/7060bca0-1234-5-b54c-ab145dfaccef" -{ - "appId": "7b6ab9ae-dead-abcd-8b52-0a8ecb5beef7", - "displayName": "azure-cli-2019-06-13-04-47-36", - "name": "http://azure-cli-2019-06-13-04-47-36", - "password": "35591cab-13c9-4b42-8a83-59c8867bbdc2", - "tenant": "72f988bf-86f1-41af-91ab-2d7cd011db47" -} -``` - -Take note of the following values. They will be needed for configuring Terraform as well as the deployment as described under the heading [Configure Terraform for Azure Access](#configure-terraform-for-azure-access): - -- Subscription Id (`id` from account): `7060bca0-1234-5-b54c-ab145dfaccef` -- Tenant Id: `72f984ed-86f1-41af-91ab-87acd01ed3ac` -- Client Id (appId): `7b6ab9ae-dead-abcd-8b52-0a8ecb5beef7` -- Client Secret (password): `35591cab-13c9-4b42-8a83-59c8867bbdc2` - -## Create an RSA Key Pair to use as Node Key - -The Terraform scripts use this node key to setup log-in credentials on the nodes in the AKS cluster. 
We will use this key when setting up the Terraform deployment variables. To generate the node key, run `ssh-keygen -b 4096 -t rsa -f ~/.ssh/node-ssh-key`: - -```bash -$ ssh-keygen -b 4096 -t rsa -f ~/.ssh/node-ssh-key -Generating public/private rsa key pair. -Enter passphrase (empty for no passphrase): -Enter same passphrase again: -Your identification has been saved in /home/jims/.ssh/node-ssh-key. -Your public key has been saved in /home/jims/.ssh/node-ssh-key.pub. -The key fingerprint is: -SHA256:+8pQ4MuQcf0oKT6LQkyoN6uswApLZQm1xXc+pp4ewvs jims@fubu -The key's randomart image is: -+---[RSA 4096]----+ -| ... | -| . o. o . | -|.. .. + + | -|... .= o * | -|+ ++ + S o | -|oo=..+ = . | -|++ ooo=.o | -|B... oo=.. | -|*+. ..oEo.. | -+----[SHA256]-----+ -``` - -## Configure Terraform For Azure Access - -Terraform supports a number of methods for authenticating with Azure. Bedrock uses [authenticating with a Service Principal and client secret](https://www.terraform.io/docs/providers/azurerm/auth/service_principal_client_secret.html). This is done by setting a few environment variables via the Bash `export` command. - -To set the variables, use the key created under the previous heading [Create an Azure Service Principal](#create-an-azure-service-principal). (The ARM_CLIENT_ID is `app_id` from the previous procedure. The ARM_SUBSCRIPTION_ID is account `id`.) - -Set the variables as follows: - -```bash -$ export ARM_SUBSCRIPTION_ID=7060bca0-1234-5-b54c-ab145dfaccef -$ export ARM_TENANT_ID=72f984ed-86f1-41af-91ab-87acd01ed3ac -$ export ARM_CLIENT_SECRET=35591cab-13c9-4b42-8a83-59c8867bbdc2 -$ export ARM_CLIENT_ID=7b6ab9ae-dead-abcd-8b52-0a8ecb5beef7 -``` - -If you execute `env | grep ARM` you should see: - -```bash -$ env | grep ARM -ARM_SUBSCRIPTION_ID=7060bca0-1234-5-b54c-ab145dfaccef -ARM_TENANT_ID=72f984ed-86f1-41af-91ab-87acd01ed3ac -ARM_CLIENT_SECRET=35591cab-13c9-4b42-8a83-59c8867bbdc2 -ARM_CLIENT_ID=7b6ab9ae-dead-abcd-8b52-0a8ecb5beef7 -``` - -## Clone the Bedrock Repository - -Clone the [Bedrock repository](https://github.com/microsoft/bedrock) with the command: `git clone https://github.com/microsoft/bedrock.git` - -```bash -$ git clone https://github.com/microsoft/bedrock.git -Cloning into 'bedrock'... -remote: Enumerating objects: 37, done. -remote: Counting objects: 100% (37/37), done. -remote: Compressing objects: 100% (32/32), done. -remote: Total 2154 (delta 11), reused 11 (delta 5), pack-reused 2117 -Receiving objects: 100% (2154/2154), 29.33 MiB | 6.15 MiB/s, done. -Resolving deltas: 100% (1022/1022), done. -``` - -To verify, navigate to the `bedrock/cluster/environments` directory and do an `ls` command: - -```bash -$ ls -l -total 0 -drwxr-xr-x 8 jmspring staff 256 Jun 12 09:11 azure-common-infra -drwxr-xr-x 15 jmspring staff 480 Jun 12 09:11 azure-multiple-clusters -drwxr-xr-x 6 jmspring staff 192 Jun 12 09:11 azure-simple -drwxr-xr-x 7 jmspring staff 224 Jun 12 09:11 azure-single-keyvault -drwxr-xr-x 7 jmspring staff 224 Jun 12 09:11 azure-velero-restore -drwxr-xr-x 3 jmspring staff 96 Jun 12 09:11 minikube -``` - -Each of the directories represent a common pattern supported within Bedrock. For more information see the [Bedrock github repo](https://github.com/microsoft/bedrock/tree/master/cluster/azure). - -## Set Up Terraform Deployment Variables - -As mentioned, we will be using `azure-simple`. 
Changing to that directory and doing an `ls -l` command reveals:
-
-```bash
-$ cd azure-simple
-$ ls -l
-total 32
--rw-r--r-- 1 jmspring staff 460 Jun 12 09:11 README.md
--rw-r--r-- 1 jmspring staff 1992 Jun 12 09:11 main.tf
--rw-r--r-- 1 jmspring staff 703 Jun 12 09:11 terraform.tfvars
--rw-r--r-- 1 jmspring staff 2465 Jun 12 09:11 variables.tf
-```
-
-The inputs for a Terraform deployment are specified in a `.tfvars` file. In the `azure-simple` environment, a skeleton exists in the form of `terraform.tfvars` with the following fields. To get the `ssh_public_key`, run: `more ~/.ssh/node-ssh-key.pub`. The path to the private key is `"/home//.ssh/gitops-ssh-key"`.
-
-```bash
-$ cat terraform.tfvars
-resource_group_name=""
-cluster_name=""
-agent_vm_count = "3"
-dns_prefix=""
-service_principal_id = ""
-service_principal_secret = ""
-ssh_public_key = "ssh-rsa ..." # from node-ssh-key.pub
-gitops_ssh_url = "git@github.com:/.git" # ssh url to manifest repo
-gitops_ssh_key = "/home//.ssh/gitops-ssh-key" # path to private gitops repo key
-vnet_name = ""
-
-#--------------------------------------------------------------
-# Optional variables - Uncomment to use
-#--------------------------------------------------------------
-# gitops_url_branch = "release-123"
-# gitops_poll_interval = "30s"
-# gitops_label = "custom-flux-sync"
-# gitops_path = "prod"
-# network_policy = "calico"
-# network_plugin = "azure"
-```
-
-From previous procedures, we already have values for `service_principal_id`, `service_principal_secret`, `ssh_public_key`, and `gitops_ssh_key`. For the purposes of this walkthrough, use the default `agent_vm_count = "3"`.
-
-To get the `gitops_ssh_url`, go back to the empty repository that was created in [Set Up Flux Manifest Repository](#set-up-flux-manifest-repository). This example uses SSH: `git@github.com:/bedrock-deploy-demo.git`.
-
-Define the remaining fields:
-
-- `resource_group_name`: `testazuresimplerg`
-- `cluster_name`: `testazuresimplecluster`
-- `dns_prefix`: `testazuresimple`
-- `vnet_name`: `testazuresimplevnet`
-
-Note: You need to create a resource group in your subscription before you apply Terraform. Use the following command to create a resource group:
-
-```bash
-az group create -l westus2 -n testazuresimplerg
-```
-
-The `gitops_ssh_key` is a _path_ to the RSA private key we created under [Set Up Flux Manifest Repository](#set-up-flux-manifest-repository). The `ssh_public_key` is the RSA public key that was created for [AKS node access](#create-an-rsa-key-pair-to-use-as-node-key).
-
-Make a copy of the `terraform.tfvars` file and name it `testazuresimple.tfvars` as a working copy. Next, using the values just defined, fill in the other values that were generated. Then, remove the old `terraform.tfvars` file. 
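A minimal sketch of that copy step, assuming you are working in the `azure-simple` environment directory and have the values from the previous sections at hand:

```bash
# Work from the azure-simple environment directory
cd bedrock/cluster/environments/azure-simple

# Create a working copy of the skeleton variables file
cp terraform.tfvars testazuresimple.tfvars

# Edit testazuresimple.tfvars and fill in the values gathered above,
# then remove the skeleton so Terraform does not pick it up by accident
rm terraform.tfvars
```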
- -When complete `testazuresimple.tfvars` should resemble: - -```bash -$ cat testazuresimple.tfvars -resource_group_name="testazuresimplerg" -cluster_name="testazuresimplecluster" -agent_vm_count = "3" -dns_prefix="testazuresimple" -service_principal_id = "7b6ab9ae-dead-abcd-8b52-0a8ecb5beef7" -service_principal_secret = "35591cab-13c9-4b42-8a83-59c8867bbdc2" -ssh_public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCo5cFB/HiJB3P5s5kL3O24761oh8dVArCs9oMqdR09+hC5lD15H6neii4azByiMB1AnWQvVbk+i0uwBTl5riPgssj6vWY5sUS0HQsEAIRkzphik0ndS0+U8QI714mb3O0+qA4UYQrDQ3rq/Ak+mkfK0NQYL08Vo0vuE4PnmyDbcR8Pmo6xncj/nlWG8UzwjazpPCsP20p/egLldcvU59ikvY9+ZIsBdAGGZS29r39eoXzA4MKZZqXU/znttqa0Eed8a3pFWuE2UrntLPLrgg5hvliOmEfkUw0LQ3wid1+4H/ziCgPY6bhYJlMlf7WSCnBpgTq3tlgaaEHoE8gTjadKBk6bcrTaDZ5YANTEFAuuIooJgT+qlLrVql+QT2Qtln9CdMv98rP7yBiVVtQGcOJyQyG5D7z3lysKqCMjkMXOCH2UMJBrurBqxr6UDV3btQmlPOGI8PkgjP620dq35ZmDqBDfTLpsAW4s8o9zlT2jvCF7C1qhg81GuZ37Vop/TZDNShYIQF7ekc8IlhqBpbdhxWV6ap16paqNxsF+X4dPLW6AFVogkgNLJXiW+hcfG/lstKAPzXAVTy2vKh+29OsErIiL3SDqrXrNSmGmXwtFYGYg3XZLiEjleEzK54vYAbdEPElbNvOzvRCNdGkorw0611tpCntbpC79Q/+Ij6eyfQ== user" -gitops_ssh_url = "git@github.com:/bedrock-deploy-demo.git" -gitops_ssh_key = "/home//.ssh/gitops-ssh-key" -vnet_name = "testazuresimplevnet" - -#-------------------------------------------------------------- -# Optional variables - Uncomment to use -#-------------------------------------------------------------- -# gitops_url_branch = "release-123" -# gitops_poll_interval = "30s" -# gitops_path = "prod" -# network_policy = "calico" -# network_plugin = "azure" -``` - -## Deploy the Template - -With the Terraform variables file, [testazuresimple.tfvars](#set-up-terraform-deployment-variables), it is time to do the Terraform deployment. There are three steps to this process: - -- `terraform init` which initializes the local directory with metadata and other necessities Terraform needs. -- `terraform plan` which sanity checks your variables against the deployment -- `terraform apply` which actually deploys the infrastructure defined - -Make sure you are in the `bedrock/cluster/environments/azure-simple` directory and that you know the path to `testazuresimple.tfvars` (it is assumed that is in the same directory as the `azure-simple` environment). - -### Terraform Init - -First execute `terraform init`: - -```bash -$ terraform init -Initializing modules... -- module.provider - Getting source "github.com/Microsoft/bedrock/cluster/azure/provider" -- module.vnet - Getting source "github.com/Microsoft/bedrock/cluster/azure/vnet" -- module.aks-gitops - Getting source "github.com/Microsoft/bedrock/cluster/azure/aks-gitops" -- module.provider.common-provider - Getting source "../../common/provider" -- module.aks-gitops.aks - Getting source "../../azure/aks" -- module.aks-gitops.flux - Getting source "../../common/flux" -- module.aks-gitops.kubediff - Getting source "../../common/kubediff" -- module.aks-gitops.aks.azure-provider - Getting source "../provider" -- module.aks-gitops.aks.azure-provider.common-provider - Getting source "../../common/provider" -- module.aks-gitops.flux.common-provider - Getting source "../provider" -- module.aks-gitops.kubediff.common-provider - Getting source "../provider" - -Initializing provider plugins... -- Checking for available provider plugins on https://releases.hashicorp.com... -- Downloading plugin for provider "null" (2.1.2)... -- Downloading plugin for provider "azurerm" (1.29.0)... -- Downloading plugin for provider "azuread" (0.3.1)... 
- -Terraform has been successfully initialized! - -You may now begin working with Terraform. All Terraform commands -should work. Try running "terraform plan" to see -any changes that are required for your infrastructure. - -If you ever set or change modules or backend configuration for Terraform, -rerun this command to reinitialize your working directory. If you forget, other -commands will detect it and remind you to do so if necessary. -``` - -### Terraform Plan - -Next, execute `terraform plan` and specify the location of our variables file: `$ terraform plan -var-file=testazuresimple.tfvars` - -```bash -$ terraform plan -var-file=testazuresimple.tfvars -Refreshing Terraform state in-memory prior to plan... -The refreshed state will be used to calculate this plan, but will not be -persisted to local or remote state storage. - - ------------------------------------------------------------------------- - -An execution plan has been generated and is shown below. -Resource actions are indicated with the following symbols: - + create - -Terraform will perform the following actions: - - # module.vnet.azurerm_subnet.subnet[0] will be created - + resource "azurerm_subnet" "subnet" { - + address_prefix = "10.10.1.0/24" - + id = (known after apply) - + ip_configurations = (known after apply) - + name = "testsimplecluster-aks-subnet" - + resource_group_name = "testsimplerg" - + service_endpoints = [] - + virtual_network_name = "testsimplevnet" - } - - # module.vnet.azurerm_virtual_network.vnet will be created - + resource "azurerm_virtual_network" "vnet" { - + address_space = [ - + "10.10.0.0/16", - ] - + dns_servers = [] - + id = (known after apply) - + location = "westus2" - + name = "testsimplevnet" - + resource_group_name = "testsimplerg" - + tags = { - + "environment" = "azure-simple" - } - - + subnet { - + address_prefix = (known after apply) - + id = (known after apply) - + name = (known after apply) - + security_group = (known after apply) - } - } - - # module.aks-gitops.module.aks.azurerm_kubernetes_cluster.cluster will be created - + resource "azurerm_kubernetes_cluster" "cluster" { - + dns_prefix = "testsimpledns" - + fqdn = (known after apply) - + id = (known after apply) - + kube_admin_config = (known after apply) - + kube_admin_config_raw = (sensitive value) - + kube_config = (known after apply) - + kube_config_raw = (sensitive value) - + kubernetes_version = "1.14.8" - + location = "westus2" - + name = "testsimplecluster" - + node_resource_group = (known after apply) - + resource_group_name = "testsimplerg" - + tags = (known after apply) - - + addon_profile { - - + oms_agent { - + enabled = false - + log_analytics_workspace_id = (known after apply) - } - } - - + agent_pool_profile { - + count = 3 - + dns_prefix = (known after apply) - + fqdn = (known after apply) - + max_pods = (known after apply) - + name = "default" - + os_disk_size_gb = 30 - + os_type = "Linux" - + type = "AvailabilitySet" - + vm_size = "Standard_D4s_v3" - + vnet_subnet_id = (known after apply) - } - - + linux_profile { - + admin_username = "k8sadmin" - - + ssh_key { - + key_data = "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQCo5cFB/HiJB3P5s5kL3O24761oh8dVArCs9oMqdR09+hC5lD15H6neii4azByiMB1AnWQvVbk+i0uwBTl5riPgssj6vWY5sUS0HQsEAIRkzphik0ndS0+U8QI714mb3O0+qA4UYQrDQ3rq/Ak+mkfK0NQYL08Vo0vuE4PnmyDbcR8Pmo6xncj/nlWG8UzwjazpPCsP20p/egLldcvU59ikvY9+ZIsBdAGGZS29r39eoXzA4MKZZqXU/znttqa0Eed8a3pFWuE2UrntLPLrgg5hvliOmEfkUw0LQ3wid1+4H/ziCgPY6bhYJlMlf7WSCnBpgTq3tlgaaEHoE8gTjadKBk6bcrTaDZ5YANTEFAuuIooJgT+qlLrVql+QT2Qtln9CdMv98rP7yBiVVtQGcOJyQyG5D7z3lysKqCMjkMXOCH2UMJBrurBqxr6UDV3btQmlPOGI8PkgjP620dq35ZmDqBDfTLpsAW4s8o9zlT2jvCF7C1qhg81GuZ37Vop/TZDNShYIQF7ekc8IlhqBpbdhxWV6ap16paqNxsF+X4dPLW6AFVogkgNLJXiW+hcfG/lstKAPzXAVTy2vKh+29OsErIiL3SDqrXrNSmGmXwtFYGYg3XZLiEjleEzK54vYAbdEPElbNvOzvRCNdGkorw0611tpCntbpC79Q/+Ij6eyfQ== jims@fubu" - } - } - - + network_profile { - + dns_service_ip = "10.0.0.10" - + docker_bridge_cidr = "172.17.0.1/16" - + load_balancer_sku = "basic" - + network_plugin = "azure" - + network_policy = "azure" - + pod_cidr = (known after apply) - + service_cidr = "10.0.0.0/16" - } - - + role_based_access_control { - + enabled = true - } - - + service_principal { - + client_id = "631b0647-b300-4611-8349-842864d1c301" - + client_secret = (sensitive value) - } - } - - # module.aks-gitops.module.aks.azurerm_log_analytics_solution.solution will be created - + resource "azurerm_log_analytics_solution" "solution" { - + id = (known after apply) - + location = "westus2" - + resource_group_name = "testsimplerg" - + solution_name = "ContainerInsights" - + workspace_name = (known after apply) - + workspace_resource_id = (known after apply) - - + plan { - + name = (known after apply) - + product = "OMSGallery/ContainerInsights" - + publisher = "Microsoft" - } - } - - # module.aks-gitops.module.aks.azurerm_log_analytics_workspace.workspace will be created - + resource "azurerm_log_analytics_workspace" "workspace" { - + id = (known after apply) - + location = "westus2" - + name = (known after apply) - + portal_url = (known after apply) - + primary_shared_key = (sensitive value) - + resource_group_name = "testsimplerg" - + retention_in_days = (known after apply) - + secondary_shared_key = (sensitive value) - + sku = "PerGB2018" - + tags = (known after apply) - + workspace_id = (known after apply) - } - - # module.aks-gitops.module.aks.local_file.cluster_credentials[0] will be created - + resource "local_file" "cluster_credentials" { - + directory_permission = "0777" - + file_permission = "0777" - + filename = "./output/bedrock_kube_config" - + id = (known after apply) - + sensitive_content = (sensitive value) - } - - # module.aks-gitops.module.aks.random_id.workspace will be created - + resource "random_id" "workspace" { - + b64 = (known after apply) - + b64_std = (known after apply) - + b64_url = (known after apply) - + byte_length = 8 - + dec = (known after apply) - + hex = (known after apply) - + id = (known after apply) - + keepers = { - + "group_name" = "testsimplerg" - } - } - - # module.aks-gitops.module.flux.null_resource.deploy_flux[0] will be created - + resource "null_resource" "deploy_flux" { - + id = (known after apply) - + triggers = { - + "enable_flux" = "true" - + "flux_recreate" = "false" - } - } - -Plan: 8 to add, 0 to change, 0 to destroy. - ------------------------------------------------------------------------- - -Note: You didn't specify an "-out" parameter to save this plan, so Terraform -can't guarantee that exactly these actions will be performed if -"terraform apply" is subsequently run. -``` - -As seen from the output, a number of objects have been defined for creation. 
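The note at the end of the plan output is worth acting on for production changes: saving the plan to a file guarantees that `terraform apply` executes exactly the actions you reviewed. A sketch of that optional step (standard Terraform usage, not specific to Bedrock; the plan file name is arbitrary):

```bash
# Save the reviewed plan to a file...
terraform plan -var-file=testazuresimple.tfvars -out=azure-simple.tfplan

# ...and apply exactly that plan (no -var-file is needed when applying a saved plan)
terraform apply azure-simple.tfplan
```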
- -### Terraform Apply - -The final step is to issue `terraform apply -var-file=testazuresimple.tfvars` which uses the file containing the variables we defined above (if you run `terraform apply` without `-var-file=` it will take any `*.tfvars` file in the folder, for example, the sample _terraform.tfvars_ file, if you didn't remove it, and start asking for the unspecified fields). - -The output for `terraform apply` is quite long, so the snippet below contains only the beginning and the end (sensitive output has been removed). The full output can be found in [./extras/terraform_apply_log.txt](./extras/terraform_apply_log.txt). Note the beginning looks similar to `terraform plan` and the output contains the status of deploying each component. Based on dependencies, Terraform deploys components in the proper order derived from a dependency graph. - -```bash -$ terraform apply -var-file=testazuresimple.tfvars - -An execution plan has been generated and is shown below. -Resource actions are indicated with the following symbols: - + create - -Terraform will perform the following actions: - - + azurerm_resource_group.cluster_rg - id: - location: "westus2" - name: "testazuresimplerg" - tags.%: - - + module.vnet.azurerm_resource_group.vnet - id: - location: "westus2" - name: "testazuresimplerg" - tags.%: - - + module.vnet.azurerm_subnet.subnet - id: - address_prefix: "10.10.1.0/24" - ip_configurations.#: - name: "testazuresimplecluster-aks-subnet" - resource_group_name: "testazuresimplerg" - service_endpoints.#: "1" - virtual_network_name: "testazuresimplevnet" - - + module.vnet.azurerm_virtual_network.vnet - id: - address_space.#: "1" - address_space.0: "10.10.0.0/16" - location: "westus2" - name: "testazuresimplevnet" - resource_group_name: "testazuresimplerg" - subnet.#: - tags.%: "1" - tags.environment: "azure-simple" - - + module.aks-gitops.module.aks.azurerm_kubernetes_cluster.cluster - id: - addon_profile.#: - agent_pool_profile.#: "1" - agent_pool_profile.0.count: "3" - agent_pool_profile.0.dns_prefix: - agent_pool_profile.0.fqdn: - agent_pool_profile.0.max_pods: - agent_pool_profile.0.name: "default" - agent_pool_profile.0.os_disk_size_gb: "30" - agent_pool_profile.0.os_type: "Linux" - agent_pool_profile.0.type: "AvailabilitySet" - agent_pool_profile.0.vm_size: "Standard_D2s_v3" - agent_pool_profile.0.vnet_subnet_id: "${var.vnet_subnet_id}" - dns_prefix: "testazuresimple" - fqdn: - kube_admin_config.#: - kube_admin_config_raw: - kube_config.#: - kube_config_raw: - kubernetes_version: "1.13.5" - linux_profile.#: "1" - linux_profile.0.admin_username: "k8sadmin" - linux_profile.0.ssh_key.#: "1" - linux_profile.0.ssh_key.0.key_data: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCo5cFB/HiJB3P5s5kL3O24761oh8dVArCs9oMqdR09+hC5lD15H6neii4azByiMB1AnWQvVbk+i0uwBTl5riPgssj6vWY5sUS0HQsEAIRkzphik0ndS0+U8QI714mb3O0+qA4UYQrDQ3rq/Ak+mkfK0NQYL08Vo0vuE4PnmyDbcR8Pmo6xncj/nlWG8UzwjazpPCsP20p/egLldcvU59ikvY9+ZIsBdAGGZS29r39eoXzA4MKZZqXU/znttqa0Eed8a3pFWuE2UrntLPLrgg5hvliOmEfkUw0LQ3wid1+4H/ziCgPY6bhYJlMlf7WSCnBpgTq3tlgaaEHoE8gTjadKBk6bcrTaDZ5YANTEFAuuIooJgT+qlLrVql+QT2Qtln9CdMv98rP7yBiVVtQGcOJyQyG5D7z3lysKqCMjkMXOCH2UMJBrurBqxr6UDV3btQmlPOGI8PkgjP620dq35ZmDqBDfTLpsAW4s8o9zlT2jvCF7C1qhg81GuZ37Vop/TZDNShYIQF7ekc8IlhqBpbdhxWV6ap16paqNxsF+X4dPLW6AFVogkgNLJXiW+hcfG/lstKAPzXAVTy2vKh+29OsErIiL3SDqrXrNSmGmXwtFYGYg3XZLiEjleEzK54vYAbdEPElbNvOzvRCNdGkorw0611tpCntbpC79Q/+Ij6eyfQ== jims@fubu" - location: "westus2" - name: "testazuresimplecluster" - network_profile.#: "1" - network_profile.0.dns_service_ip: "10.0.0.10" - 
network_profile.0.docker_bridge_cidr: "172.17.0.1/16" - network_profile.0.network_plugin: "azure" - network_profile.0.network_policy: "azure" - network_profile.0.pod_cidr: - network_profile.0.service_cidr: "10.0.0.0/16" - node_resource_group: - resource_group_name: "testazuresimplerg" - role_based_access_control.#: "1" - role_based_access_control.0.enabled: "true" - service_principal.#: "1" - service_principal.3262013094.client_id: "7b6ab9ae-7de4-4394-8b52-0a8ecb5d2bf7" - service_principal.3262013094.client_secret: - tags.%: - - + module.aks-gitops.module.aks.azurerm_resource_group.cluster - id: - location: "westus2" - name: "testazuresimplerg" - tags.%: - - + module.aks-gitops.module.aks.null_resource.cluster_credentials - directory_permission = "0777" - file_permission = "0777" - filename = "./output/bedrock_kube_config" - id = (known after apply) - sensitive_content = (sensitive value) - - + module.aks-gitops.module.flux.null_resource.deploy_flux - id: - triggers.%: "2" - triggers.enable_flux: "true" - triggers.flux_recreate: "" - - -Plan: 8 to add, 0 to change, 0 to destroy. - -Do you want to perform these actions? - Terraform will perform the actions described above. - Only 'yes' will be accepted to approve. - - Enter a value: yes - -module.vnet.azurerm_resource_group.vnet: Creating... - location: "" => "westus2" - name: "" => "testazuresimplerg" - tags.%: "" => "" -azurerm_resource_group.cluster_rg: Creating... - location: "" => "westus2" - name: "" => "testazuresimplerg" - tags.%: "" => "" -module.vnet.azurerm_resource_group.vnet: Creation complete after 3s (ID: /subscriptions/7060bca0-7a3c-44bd-b54c-...acfac/resourceGroups/testazuresimplerg) -azurerm_resource_group.cluster_rg: Creation complete after 3s (ID: /subscriptions/7060bca0-7a3c-44bd-b54c-...acfac/resourceGroups/testazuresimplerg) -module.aks-gitops.module.aks.azurerm_resource_group.cluster: Creating... - location: "" => "westus2" - name: "" => "testazuresimplerg" - tags.%: "" => "" -module.vnet.azurerm_virtual_network.vnet: Creating... - address_space.#: "" => "1" - address_space.0: "" => "10.10.0.0/16" - location: "" => "westus2" - name: "" => "testazuresimplevnet" - resource_group_name: "" => "testazuresimplerg" - subnet.#: "" => "" - tags.%: "" => "1" - tags.environment: "" => "azure-simple" -module.aks-gitops.module.aks.azurerm_resource_group.cluster: Creation complete after 1s (ID: /subscriptions/7060bca0-7a3c-44bd-b54c-...acfac/resourceGroups/testazuresimplerg) -module.vnet.azurerm_virtual_network.vnet: Still creating... (10s elapsed) -module.vnet.azurerm_virtual_network.vnet: Creation complete after 14s (ID: /subscriptions/7060bca0-7a3c-44bd-b54c-...rk/virtualNetworks/testazuresimplevnet) -module.vnet.azurerm_subnet.subnet: Creating... - address_prefix: "" => "10.10.1.0/24" - ip_configurations.#: "" => "" - name: "" => "testazuresimplecluster-aks-subnet" - resource_group_name: "" => "testazuresimplerg" - service_endpoints.#: "" => "1" - virtual_network_name: "" => "testazuresimplevnet" -... -module.aks-gitops.module.aks.null_resource.cluster_credentials: Creation complete after 0s (ID: 6839616869196222748) -module.aks-gitops.module.flux.null_resource.deploy_flux: Creating... - triggers.%: "" => "2" - triggers.enable_flux: "" => "true" - triggers.flux_recreate: "" => "" -module.aks-gitops.module.flux.null_resource.deploy_flux: Provisioning with 'local-exec'... 
-module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): Executing: ["/bin/sh" "-c" "echo 'Need to use this var so terraform waits for kubeconfig ' 6839616869196222748;KUBECONFIG=./output/bedrock_kube_config /home/jims/code/src/github.com/microsoft/bedrock/cluster/environments/azure-simple/.terraform/modules/7836162b7abd77fba9c644439dc54fd9/deploy_flux.sh -b 'master' -f 'https://github.com/weaveworks/flux.git' -g 'git@github.com:jmspring//manifests.git' -k '/home/jims/.ssh/gitops-ssh-key' -d 'testazuresimplecluster-flux' -c '5m' -e 'prod' -s 'true' -r 'docker.io/weaveworks/flux' -t '1.12.2'"] -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): Need to use this var so terraform waits for kubeconfig 6839616869196222748 -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): flux repo root directory: testazuresimplecluster-flux -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): creating testazuresimplecluster-flux directory -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): cloning https://github.com/weaveworks/flux.git -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): Cloning into 'flux'... -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): Note: checking out 'e366684a9e995d447e6471543a832a325ff87f5a'. - -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): You are in 'detached HEAD' state. You can look around, make experimental -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): changes and commit them, and you can discard any commits you make in this -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): state without impacting any branches by performing another checkout. - -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): If you want to create a new branch to retain commits you create, you may -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): do so (now or later) by using -b with the checkout command again. 
Example: - -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): git checkout -b - -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): creating manifests directory -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): generating flux manifests with helm template -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): wrote ./manifests/flux/templates/kube.yaml -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): wrote ./manifests/flux/templates/serviceaccount.yaml -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): wrote ./manifests/flux/templates/rbac.yaml -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): wrote ./manifests/flux/templates/service.yaml -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): wrote ./manifests/flux/templates/deployment.yaml -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): wrote ./manifests/flux/templates/memcached.yaml -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): creating kubernetes namespace flux if needed -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): namespace/flux created -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): creating kubernetes secret flux-ssh from key file path /home/jims/.ssh/gitops-ssh-key -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): secret/flux-ssh created -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): Applying flux deployment -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): deployment.apps/flux created -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): configmap/flux-kube-config created -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): deployment.apps/flux-memcached created -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): service/flux-memcached created -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): clusterrole.rbac.authorization.k8s.io/flux created -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): clusterrolebinding.rbac.authorization.k8s.io/flux created -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): service/flux created -module.aks-gitops.module.flux.null_resource.deploy_flux (local-exec): serviceaccount/flux created -module.aks-gitops.module.flux.null_resource.deploy_flux: Creation complete after 8s (ID: 495632976516241457) - -Apply complete! Resources: 8 added, 0 changed, 0 destroyed. -``` - -### Terraform State - -The results of `terraform apply` are enumerated in the `terraform.tfstate` file. 
For an overview of resources created, run `terraform state list`: - -```bash -~/bedrock/cluster/environments/azure-simple$ terraform state list - -azurerm_resource_group.cluster_rg -module.aks-gitops.module.aks.azurerm_kubernetes_cluster.cluster -module.aks-gitops.module.aks.azurerm_resource_group.cluster -module.aks-gitops.module.aks.null_resource.cluster_credentials -module.aks-gitops.module.flux.null_resource.deploy_flux -module.vnet.azurerm_resource_group.vnet -module.vnet.azurerm_subnet.subnet -module.vnet.azurerm_virtual_network.vnet -``` - -To see all the details, run `terraform show` - -To see one element, for example, run `terraform state show module.vnet.azurerm_virtual_network.vnet`: - -```bash -~/bedrock/cluster/environments/azure-simple$ terraform state show module.vnet.azurerm_virtual_network.vnet -id = /subscriptions/b59451c1-cd43-41b3-b3a4-74155d8f6cf6/resourceGroups/tst-az-simple-rg/providers/Microsoft.Network/virtualNetworks/testazuresimplevnet -address_space.# = 1 -address_space.0 = 10.10.0.0/16 -ddos_protection_plan.# = 0 -dns_servers.# = 0 -location = westus2 -name = testazuresimplevnet -resource_group_name = tst-az-simple-rg -subnet.# = 0 -tags.% = 1 -tags.environment = azure-simple -``` - -## Interact with the Deployed Cluster - -After `terraform apply` finishes, there is one critical output artifact: the Kubernetes config file for the deployed cluster that is generated and saved in the `output` directory. The default file is `output/bedrock_kube_config`. The following steps use this file to interact with the deployed Bedrock AKS cluster. - -Using the config file `output/bedrock_kube_config`, one of the first things we can do is list all pods deployed within the cluster: - -```bash -KUBECONFIG=./output/bedrock_kube_config kubectl get po --all-namespaces - -NAMESPACE NAME READY STATUS RESTARTS AGE -default spartan-app-7dc87b8c45-nrnnn 1/1 Running 0 70s -flux flux-5897d4679b-tckth 1/1 Running 0 2m3s -flux flux-memcached-757756884-w5xgz 1/1 Running 0 2m4s -kube-system azure-cni-networkmonitor-cl587 1/1 Running 0 3m14s -kube-system azure-cni-networkmonitor-pskl2 1/1 Running 0 3m12s -kube-system azure-cni-networkmonitor-wgdxb 1/1 Running 0 3m26s -kube-system azure-ip-masq-agent-2vdz9 1/1 Running 0 3m26s -kube-system azure-ip-masq-agent-ltfsc 1/1 Running 0 3m14s -kube-system azure-ip-masq-agent-wbksx 1/1 Running 0 3m12s -kube-system azure-npm-5cmx7 1/1 Running 1 3m26s -kube-system azure-npm-jqdch 1/1 Running 1 3m14s -kube-system azure-npm-vhm9h 1/1 Running 1 3m12s -kube-system coredns-6b58b8549f-gg6kr 1/1 Running 0 6m52s -kube-system coredns-6b58b8549f-wkmp7 1/1 Running 0 2m32s -kube-system coredns-autoscaler-7595c6bd66-bb2kc 1/1 Running 0 6m48s -kube-system kube-proxy-b7hsx 1/1 Running 0 3m12s -kube-system kube-proxy-bfsqt 1/1 Running 0 3m26s -kube-system kube-proxy-jsftr 1/1 Running 0 3m14s -kube-system kubernetes-dashboard-69b6c88658-99xdf 1/1 Running 1 6m51s -kube-system metrics-server-766dd9f7fd-zs7l2 1/1 Running 1 6m51s -kube-system tunnelfront-6988c794b7-z2clv 1/1 Running 0 6m48s -``` - -Note that there is also a namespace `flux`. As previously mentioned, Flux is managing the deployment of all of the resources into the cluster. 
Taking a look at the description for the flux pod `flux-5897d4679b-tckth`, we see the following: - -```bash -$ KUBECONFIG=./output/bedrock_kube_config kubectl describe po/flux-5897d4679b-tckth --namespace=flux -Name: flux-5897d4679b-tckth -Namespace: flux -Priority: 0 -PriorityClassName: -Node: aks-default-30249513-2/10.10.1.66 -Start Time: Tue, 18 Jun 2019 06:32:49 +0000 -Labels: app=flux - pod-template-hash=5897d4679b - release=flux -Annotations: -Status: Running -IP: 10.10.1.76 -Controlled By: ReplicaSet/flux-5897d4679b -Containers: - flux: - Container ID: docker://cc4cf38387a883f964cc65b9a1dd13439be756be3cf2d84fa1ca2ced69d98c3a - Image: docker.io/weaveworks/flux:1.12.2 - Image ID: docker-pullable://weaveworks/flux@sha256:368bc5b219feffb1fe00c73cd0f1be7754591f86e17f57bc20371ecba62f524f - Port: 3030/TCP - Host Port: 0/TCP - Args: - --ssh-keygen-dir=/var/fluxd/keygen - --k8s-secret-name=flux-ssh - --memcached-hostname=flux-memcached - --memcached-service= - --git-url=git@github.com:jmspring/manifests.git - --git-branch=master - --git-path=prod - --git-user=Weave Flux - --git-email=support@weave.works - --git-set-author=false - --git-poll-interval=5m - --git-timeout=20s - --sync-interval=5m - --git-ci-skip=false - --registry-poll-interval=5m - --registry-rps=200 - --registry-burst=125 - --registry-trace=false - State: Running - Started: Tue, 18 Jun 2019 06:33:18 +0000 - Ready: True - Restart Count: 0 - Requests: - cpu: 50m - memory: 64Mi - Environment: - KUBECONFIG: /root/.kubectl/config - Mounts: - /etc/fluxd/ssh from git-key (ro) - /etc/kubernetes/azure.json from acr-credentials (ro) - /root/.kubectl from kubedir (rw) - /var/fluxd/keygen from git-keygen (rw) - /var/run/secrets/kubernetes.io/serviceaccount from flux-token-d2h55 (ro) -Conditions: - Type Status - Initialized True - Ready True - ContainersReady True - PodScheduled True -Volumes: - kubedir: - Type: ConfigMap (a volume populated by a ConfigMap) - Name: flux-kube-config - Optional: false - git-key: - Type: Secret (a volume populated by a Secret) - SecretName: flux-ssh - Optional: false - git-keygen: - Type: EmptyDir (a temporary directory that shares a pod's lifetime) - Medium: Memory - SizeLimit: - acr-credentials: - Type: HostPath (bare host directory volume) - Path: /etc/kubernetes/azure.json - HostPathType: - flux-token-d2h55: - Type: Secret (a volume populated by a Secret) - SecretName: flux-token-d2h55 - Optional: false -QoS Class: Burstable -Node-Selectors: -Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s - node.kubernetes.io/unreachable:NoExecute for 300s -Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal Scheduled 3m30s default-scheduler Successfully assigned flux/flux-5897d4679b-tckth to aks-default-30249513-2 - Normal Pulling 3m22s kubelet, aks-default-30249513-2 pulling image "docker.io/weaveworks/flux:1.12.2" - Normal Pulled 3m12s kubelet, aks-default-30249513-2 Successfully pulled image "docker.io/weaveworks/flux:1.12.2" - Normal Created 2m57s kubelet, aks-default-30249513-2 Created container - Normal Started 2m57s kubelet, aks-default-30249513-2 Started container -``` - -What is more interesting is to take a look at the Flux logs, one can checkout the activities that `flux` is performing. A fuller log can be found [here](./extras/flux_log.txt). But a snippet: - -```bash -KUBECONFIG=./output/bedrock_kube_config kubectl log po/flux-5897d4679b-tckth --namespace=flux -log is DEPRECATED and will be removed in a future version. Use logs instead. 
-ts=2019-06-18T06:33:18.668235584Z caller=main.go:193 version=1.12.2 -ts=2019-06-18T06:33:18.781628775Z caller=main.go:350 component=cluster identity=/etc/fluxd/ssh/identity -ts=2019-06-18T06:33:18.781698175Z caller=main.go:351 component=cluster identity.pub="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDTNdGpnmztWRa8RofHl8dIGyNkEayNR6d7p2JtJ7+zMj0HRUJRc+DWvBML4DvT29AumVEuz1bsVyVS2f611NBmXHHKkbzAZZzv9gt2uB5sjnmm7LAORJyoBEodR/T07hWr8MDzYrGo5fdTDVagpoHcEke6JT04AL21vysBgqfLrkrtcgaXsw8e3rkfbqGLbhb6o1muGdEyE+uci4hRVj+FGL9twh3Mb6+0uak/UsTFgfDi/oTXdXOFIitQ1o40Eip6P4xejEOuIye0cg7rfX461NmOP7HIEsUa+BwMExiXXsbxj6Z0TXG0qZaQXWjvZF+MfHx/J0Alb9kdO3pYx3rJbzmdNFwbWM4I/zN+ng4TFiHBWRxRFmqJmKZX6ggJvX/d3z0zvJnvSmOQz9TLOT4lqZ/M1sARtABPGwFLAvPHAkXYnex0v93HUrEi7g9EnM+4dsGU8/6gx0XZUdH17WZ1dbEP7VQwDPnWCaZ/aaG7BsoJj3VnDlFP0QytgVweWr0J1ToTRQQZDfWdeSBvoqq/t33yYhjNA82fs+bR/1MukN0dCWMi7MqIs2t3TKYW635E7VHp++G1DR6w6LoTu1alpAlB7d9qiq7o1c4N+gakXSUkkHL8OQbQBeLeTG1XtYa//A5gnAxLSzxAgBpVW15QywFgJlPk0HEVkOlVd4GzUw==" -ts=2019-06-18T06:33:18.781740875Z caller=main.go:352 component=cluster host=https://10.0.0.1:443 version=kubernetes-v1.13.5 -ts=2019-06-18T06:33:18.781823975Z caller=main.go:364 component=cluster kubectl=/usr/local/bin/kubectl -ts=2019-06-18T06:33:18.783257271Z caller=main.go:375 component=cluster ping=true -ts=2019-06-18T06:33:18.790498551Z caller=main.go:508 url=git@github.com:jmspring/manifests.git user="Weave Flux" email=support@weave.works signing-key= sync-tag=flux-sync notes-ref=flux set-author=false -ts=2019-06-18T06:33:18.790571551Z caller=main.go:565 upstream="no upstream URL given" -ts=2019-06-18T06:33:18.791840947Z caller=main.go:586 addr=:3030 -ts=2019-06-18T06:33:18.819345472Z caller=loop.go:90 component=sync-loop err="git repo not ready: git repo has not been cloned yet" -ts=2019-06-18T06:33:18.819404372Z caller=images.go:18 component=sync-loop msg="polling images" -``` - -## Deploy an update using Kubernetes manifest - -Flux automation makes it easy to upgrade services or infrastructure deployed by Bedrock. In this example Flux watches the repo we set up previously under the heading [Set Up Flux Manifest Repository](#set-up-flux-manifest-repository). Now we add a simple Web application to the running deployment by pushing a .yaml manifest to the repo. The .yaml specification describes the service `mywebapp` and type: a `LoadBalancer`. It specifies the source the Docker image that contains it: `image: andrebriggs/goserver:v1.2` and how many containers to run: `replicas: 3`. The containers will be accessible through the load balancer. - -When the .yaml file is complete we will push it to the repo, or simply drop it on GitHub. Flux is querying the repo for changes and will deploy the new service replicas as defined by this manifest. - -Create the following .yaml file and name it something like myWebApp.yaml. The image for this application is specified by the line: `image: andrebriggs/goserver:v1.2`. 
- -```yaml -# mywebapp services -################################################################################################## -apiVersion: v1 -kind: Service -metadata: - name: mywebapp - labels: - app: mywebapp -spec: - type: LoadBalancer - ports: - - port: 8080 - name: http - selector: - app: mywebapp ---- -apiVersion: extensions/v1beta1 #TODO: Migrate to apps/v1 -kind: Deployment -metadata: - name: mywebapp-v1 -spec: - replicas: 3 - minReadySeconds: 10 # Wait 2 seconds after each new pod comes up before marked as "ready" - strategy: - type: RollingUpdate # describe how we do rolling updates - rollingUpdate: - maxUnavailable: 1 # When updating take one pod down at a time - maxSurge: 1 # When updating never have more than one extra pod. If replicas = 2 then never 3 pods when updating - template: - metadata: - labels: - app: mywebapp - version: v1 - spec: - containers: - - name: mywebapp - image: andrebriggs/goserver:v1.2 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 8080 ---- - -``` - -To see the changes as Flux picks them up and deploys them, open a bash command window and navigate to the `bedrock/cluster/environments/azure-simple` directory. - -Get your Flux pod name by running: `KUBECONFIG=./output/bedrock_kube_config kubectl get pod -n flux` - -Copy the name of the pod (the one that is not memcached). - -Then run the command: `KUBECONFIG=./output/bedrock_kube_config kubectl logs -f --namespace=flux`. This will display a running log of the deployment. - -Now, push or drop the myWebApp.yaml file to the empty repo created under the previous heading [Set Up Flux Manifest Repository](#set-up-flux-manifest-repository). You can click `Upload files` on the GitHub repo page and drop the .yaml file: - -![Set up empty Flux repository](./images/dropYAMLfile.png) - -Watch the running log for changes: - -```bash -ts=2019-07-12T19:49:23.759689114Z caller=loop.go:103 component=sync-loop event=refreshed url=git@github.com:MikeDodaro/bedrock-deploy-demo.git branch=master HEAD=e8b49abbc56f3a8d63a28da10aaf7366a92ff35a -ts=2019-07-12T19:49:26.185598493Z caller=sync.go:470 component=cluster method=Sync cmd=apply args= count=2 -ts=2019-07-12T19:49:27.449368158Z caller=sync.go:536 component=cluster method=Sync cmd="kubectl apply -f -" took=1.263687361s err=null output="service/mywebapp created\ndeployment.extensions/mywebapp-v1 created" -ts=2019-07-12T19:49:27.464471331Z caller=daemon.go:624 component=daemon event="Sync: e8b49ab, default:deployment/mywebapp-v1, default:service/mywebapp" logupstream=false -ts=2019-07-12T19:49:31.136091192Z caller=loop.go:441 component=sync-loop tag=flux-sync old=2444b17dafb7dd4a68059c6634ef943a99cbf725 new=e8b49abbc56f3a8d63a28da10aaf7366a92ff35a -ts=2019-07-12T19:49:31.411320788Z caller=loop.go:103 component=sync-loop event=refreshed url=git@github.com:MikeDodaro/bedrock-deploy-demo.git branch=master HEAD=e8b49abbc56f3a8d63a28da10aaf7366a92ff35a -ts=2019-07-12T19:50:18.649194507Z caller=warming.go:206 component=warmer updated=andrebriggs/goserver successful=3 attempted=3 -``` - -In this output, Flux has found the repo `bedrock-deploy-demo` and created the new service: `"kubectl apply -f -" took=1.263687361s err=null output="service/mywebapp created\ndeployment.extensions/mywebapp-v1 created"`. - -Open another bash window. When the new service is running, use `KUBECONFIG=./output/bedrock_kube_config kubectl get po --all-namespaces` to find the new namespaces in the deployment. 
- -Then run `KUBECONFIG=./output/bedrock_kube_config kubectl get svc --all-namespaces`. The output will include the `EXTERNAL-IP` address and `PORT` of the `mywebapp` load balancer: - -```bash -$ KUBECONFIG=./output/bedrock_kube_config kubectl get svc --all-namespaces -NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -default kubernetes ClusterIP 10.0.0.1 443/TCP 44m -default mywebapp LoadBalancer 10.0.96.208 52.175.216.214 8080:30197/TCP 23m -flux flux ClusterIP 10.0.139.133 3030/TCP 34m -flux flux-memcached ClusterIP 10.0.246.230 11211/TCP 34m -kube-system kube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP 44m -kube-system kubernetes-dashboard ClusterIP 10.0.222.104 80/TCP 44m -kube-system metrics-server ClusterIP 10.0.189.185 443/TCP 44m -``` - -The EXTERNAL-IP, in this case is: 52.175.216.214. Append the port and use http://52.175.216.214:8080 to run the service in a browser. - -![Deployed Web application running](./images/WebAppRunning.png) diff --git a/docs/firstWorkload/README.md b/docs/firstWorkload/README.md new file mode 100644 index 0000000..23eee8e --- /dev/null +++ b/docs/firstWorkload/README.md @@ -0,0 +1,701 @@ +# A First Workload With Bedrock + +The best way to start learning about Bedrock is walk through the deployment of a cluster and a first workload on it, enabling you to see how Bedrock makes deploying infrastructure easier and how GitOps works first hand. + +In this walkthrough, we will: +1. Create our GitOps resource manifest repo that will act as the source of truth for our in-cluster deployments. +2. Scaffold, generate, and deploy our first infrastructure deployment. +3. Make our first GitOps commit and see those changes deployed in the cluster by Flux. + +## Create and Configure GitOps Resource Manifest Repo + +In a GitOps workflow, a git repo is the source of truth of what should be deployed in our cluster. An operator in the Kubernetes cluster (Flux in the case of Bedrock) watches this repo and applies changes as they are made to the cluster such that the resources in the cluster exactly match the GitOps resource manifest repo. + +Our next step is to create and configure this repo for this workflow. + +### Create the Flux Manifest Repository + +Follow the instructions on [Create a project in Azure DevOps](https://docs.microsoft.com/en-us/azure/devops/organizations/projects/create-project?view=azure-devops&tabs=preview-page) to create a project in Azure DevOps and an empty git repository. **Note:** For the repository, use a name that signifies that the repo is used for a GitOps workflow (eg. `app-cluster-manifests`) and then clone it locally: + +```bash +$ git clone https://myOrganization@dev.azure.com/myOrganization/myProject/_git/app-cluster-manifests +``` + +You can find more detailed instructions on how to clone an Azure DevOps project [here](https://docs.microsoft.com/en-us/azure/devops/repos/git/clone?view=azure-devops&tabs=visual-studio). + +Flux requires that the git resource manifest repository have at least one commit, so let's initialize the repo with an empty commit: + +``` bash +$ cd app-cluster-manifests +$ git commit --allow-empty -m "Initializing GitOps Resource Manifest Repository" +$ git push origin master +``` + +### Generate a Deploy Key for the GitOps Resource Manifest Repo + +Flux pushes a tag to the git repo to track the last commit it has reconciled against once it finishes its reconcilitation of a commit. This operation requires authentication such that the repo can validate that Flux is authorized to push these tags. 
+ +For a Github repo, an SSH key is used for authentication. + +To create a GitOps SSH key: +1. Create a separate directory to store they key and other infrastructure deployment items: + +```bash +$ mkdir -p ~/cluster-deployment +``` + +2. Create a key pair for the GitOps workflow: + +```bash +$ mkdir -p ~/cluster-deployment/keys +$ ssh-keygen -b 4096 -t rsa -f ~/cluster-deployment/keys/gitops-ssh-key +Generating +lic/private rsa key pair. +Enter passphrase (empty for no passphrase): +Enter same passphrase again: +Your identification has been saved in /Users/myuser/.ssh/gitops-ssh-key. +Your public key has been saved in /Users/myuser/.ssh/gitops-ssh-key.pub. +The key fingerprint is: +SHA256:jago9v63j05u9WoiNExnPM2KAWBk1eTHT2AmhIWPIXM myuser@computer.local +The key's randomart image is: ++---[RSA 4096]----+ +|.=o.B= + | +|oo E..= . | +| + =..oo. | +| . +.*o= | +| o . S.. | +| . * . . | +|... o ... . | +|... .o+.. . | +| .o..===o. | ++----[SHA256]-----+ +``` + +This creates the private and public keys for our GitOps workflow: +1. Private key: for Flux to authenticate against the GitOps repo +2. Public key: for the GitOps repo to validate the passed credentials. + +The public key will be uploaded to GitHub as a deploy key. The private key will be used during the cluster deployment of Flux. + +### Add Deploy Key to the Manifest Repository +Prerequisites: +- Ownership permissions to the git repo + +Steps: +1. Copy the contents of the public key to your clipboard: + +**MacOS** + +```bash +$ pbcopy < ~/cluster-deployment/keys/gitops-ssh-key.pub +``` + +**Ubuntu (including WSL)** + +```bash +$ cat ~/cluster-deployment/keys/gitops-ssh-key.pub | xclip +``` + +1. Next, on AzureDevOps repository, open your security settings by browsing to the web portal and select your avatar in the upper right of the user interface. Select Security in the menu that appears. +![enter key](./images/ssh_profile_access.png) + +2. Select SSH public keys, and then select + New Key. +![new key](./images/ssh_accessing_security_key.png) + +3. Copy the contents of the public key (for example, id_rsa.pub) that you generated into the Public Key Data field. +![copy key](./images/ssh_key_input.png) + +4. Give the key a useful description (this description will be displayed on the SSH public keys page for your profile) so that you can remember it later. Select Save to store the public key. Once saved, you cannot change the key. You can delete the key or create a new entry for another key. There are no restrictions on how many keys you can add to your user profile. + +For more information on adding an ssh key to Azure DevOps, see [Use SSH key authentication](https://docs.microsoft.com/en-us/azure/devops/repos/git/use-ssh-keys-to-authenticate?view=azure-devops&tabs=current-page#step-2--add-the-public-key-to-azure-devops-servicestfs). + +## Scaffold Cluster Deployment + +With our GitOps resource manifest repo and key pair created, let’s move on to scaffolding out our cluster deployment. + +Creating, managing, and maintaining infrastructure deployment templates is a challenge, especially at scale. Large scale deployments can consist of dozens of nearly identical clusters differentiated only by slight differences in config based on the cloud region they are operating in or otherwise. + +Bedrock helps manage this complexity with infrastructure environment templates and definitions. 
Let’s see this in action by scaffolding out our first definition with Bedrock’s `spk` command line tool: + +```bash +$ cd ~/cluster-deployment +$ spk infra scaffold --name cluster --source https://github.com/microsoft/bedrock --version master --template cluster/environments/azure-simple +``` + +This fetches the specified deployment template, creates a `cluster` directory, and places a `definition.yaml` file in it: + +```yaml +name: cluster +source: 'https://github.com/microsoft/bedrock' +template: cluster/environments/azure-simple +version: master +variables: + agent_vm_count: '3' + agent_vm_size: Standard_D2s_v3 + acr_enabled: 'true' + gc_enabled: 'true' + cluster_name: + dns_prefix: + flux_recreate: 'false' + gitops_ssh_url: + gitops_ssh_key: + gitops_path: + gitops_url_branch: master + resource_group_name: + ssh_public_key: + service_principal_id: + service_principal_secret: + gitops_poll_interval: 5m + gitops_label: flux-sync + vnet_name: + service_cidr: 10.0.0.0/16 + dns_ip: 10.0.0.10 + docker_cidr: 172.17.0.1/16 + address_space: 10.10.0.0/16 + subnet_prefix: 10.10.1.0/24 + network_plugin: azure + network_policy: azure + oms_agent_enabled: 'false' +``` + +This `definition.yaml` is our first infrastructure definition. It contains a reference to a deployment template that is maintained by the Bedrock project in this case -- but this template could exist anywhere. + +This template is the base of our deployment. Note: The `spk` tool also extracted the variables for this Terraform template and provided them, with defaults (if available). + +## Completing our Deployment Definition + +Next we'll fill all of the empty items in this template with config values. + +### Cluster name, DNS prefix, VNET name, and resource group + +Choose a cluster name (for example: `myname-cluster`) and replace both `cluster_name` and `dns_prefix` with this. This will be the name of the cluster to be created in your subscription. + +Update the value for `resource_group_name` to be a variant of this, like `myname-cluster-rg`, and update `vnet_name` with a variant as well, like `myname-vnet`. Your cluster will be created in this resource group and VNET. + +### Configure GitOps Repo + +Update the `gitops_ssh_url` to your GitOps resource manifest repo, using the `ssh` url format available when you clone the repo from Github. For example: `git@ssh.dev.azure.com:v3/myOrganization/myProject/app-cluster-manifests`. + +Set the `gitops_ssh_key` to the GitOps private key we created previously. If you followed those steps, you can set this value to `~/cluster-deployment/keys/gitops-ssh-key`. + +In multi-cluster scenarios, we will often keep all of the resource manifests for all of our clusters in the same repo, but in this simple case, we are only managing one cluster, so we are going to use the root of our GitOps repo as our root for our in-cluster resource manifests. + +Given this, make `gitops_path` an empty string `""`. + +### Create an Azure Service Principal + +Our deployment specification includes references to values for the [Azure Service Principal](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals) for the [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) cluster: + +``` + service_principal_id: "" + service_principal_secret: "" +``` + +For this walkthrough, we will use one Service Principal to deploy with Terraform and for the AKS cluster itself. + +1. 
[login to the Azure CLI](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli): + +```bash +$ az login +``` + +2. Get the id for your subscription: + +```bash +$ az account show +{ + "environmentName": "AzureCloud", + "id": "7060bca0-1234-5-b54c-ab145dfaccef", + "isDefault": true, + "name": "Fabrikam Subscription", + "state": "Enabled", + "tenantId": "72f984ed-86f1-41af-91ab-87acd01ed3ac", + "user": { + "name": "olina@fabrikam.io", + "type": "user" + } +} +``` + +3. Create the Service Principal (using the subscription id above): + +```bash +$ mkdir -p ~/cluster-deployment/sp +$ az ad sp create-for-rbac --scopes "/subscriptions/7060bca0-1234-5-b54c-ab145dfaccef" > ~/cluster-deployment/sp/sp.json +$ cat ~/cluster-deployment/sp/sp.json +{ + "appId": "7b6ab9ae-dead-abcd-8b52-0a8ecb5beef7", + "displayName": "azure-cli-2019-06-13-04-47-36", + "name": "http://azure-cli-2019-06-13-04-47-36", + "password": "35591cab-13c9-4b42-8a83-59c8867bbdc2", + "tenant": "72f988bf-86f1-41af-91ab-2d7cd011db47" +} +``` + +Since these are sensitive secrets, we are going to use environment variables to pass them into our deployment to avoid accidentially checking them in. Given this, create the following environment variables using the following mapping: + +```bash +$ export ARM_SUBSCRIPTION_ID=(subscription id from above) +$ export ARM_TENANT_ID=(tenant from Service Principal) +$ export ARM_CLIENT_SECRET=(password from Service Principal) +$ export ARM_CLIENT_ID=(appId from Servive Principal) +``` + +or, with `jq` installed: +``` +$ export ARM_SUBSCRIPTION_ID=$(az account show | jq -r .id) +$ export ARM_TENANT_ID=$(cat ~/cluster-deployment/sp/sp.json | jq -r .tenant) +$ export ARM_CLIENT_ID=$(cat ~/cluster-deployment/sp/sp.json | jq -r .appId) +$ export ARM_CLIENT_SECRET=$(cat ~/cluster-deployment/sp/sp.json | jq -r .password) +``` + +Using the values from above, these environment variables would look like: + +```bash +$ export ARM_SUBSCRIPTION_ID=7060bca0-1234-5-b54c-ab145dfaccef +$ export ARM_TENANT_ID=72f984ed-86f1-41af-91ab-87acd01ed3ac +$ export ARM_CLIENT_SECRET=35591cab-13c9-4b42-8a83-59c8867bbdc2 +$ export ARM_CLIENT_ID=7b6ab9ae-dead-abcd-8b52-0a8ecb5beef +``` + +3. In the `definition.yaml`, delete `service_principal_id` and `service_principal_secret` + +4. Define the `service_principal_id` and `service_principal_secret` environment variables: + +```bash +$ export TF_VAR_service_principal_id=${ARM_CLIENT_ID} +$ export TF_VAR_service_principal_secret=${ARM_CLIENT_SECRET} +``` + +Documentation about Service Principals is available in the [Bedrock documentation](https://github.com/microsoft/bedrock/tree/master/cluster/azure#create-an-azure-service-principal). + +### Create a Node Key + +When you deploy an AKS cluster, you provide an SSH key pair that enables you, in rare circumstances, to shell into the nodes of the cluster. + +1. We’ll use the same process that we used to create a key pair for GitOps: + +```bash +$ ssh-keygen -b 4096 -t rsa -f ~/cluster-deployment/keys/node-ssh-key +Generating public/private rsa key pair. +Enter passphrase (empty for no passphrase): +Enter same passphrase again: +Your identification has been saved in /home/myuser/cluster-deployment/keys/node-ssh-key. +Your public key has been saved in /home/myuser/cluster-deployment/keys/node-ssh-key.pub. +The key fingerprint is: +SHA256:+8pQ4MuQcf0oKT6LQkyoN6uswApLZQm1xXc+pp4ewvs myuser@computer.local +The key's randomart image is: ++---[RSA 4096]----+ +| ... | +| . o. o . | +|.. .. + + | +|... 
.= o * | +|+ ++ + S o | +|oo=..+ = . | +|++ ooo=.o | +|B... oo=.. | +|*+. ..oEo.. | ++----[SHA256]-----+ +``` + +2. Copy the public key for this node key pair into your clipboard using the same method you did for the GitOps public key: + +**MacOS** +```bash +$ pbcopy < ~/cluster-deployment/keys/node-ssh-key.pub +``` + +**Ubuntu (& WSL)** +```bash +$ cat ~/cluster-deployment/keys/node-ssh-key.pub | xclip +``` + +3. Paste this into your `definition.yaml` file as the value for `ssh_public_key`. + +### Create Azure Resource Group + +You will need a resource group in your subscription before you do a `terraform apply`. + +To create a resource group: + +```bash +$ az group create -l westus2 -n myuser-cluster-rg +``` +### Generate Terraform Deployment + +With these prep steps completed, let’s generate Terraform templates from this cluster definition directory: + +```bash +$ cd ~/cluster-deployment/cluster +$ spk infra generate -p cluster +``` + +`spk` reads our `definition.yaml` file, downloads the template referred to in it, applies the parameters we have provided, and creates a generated Terraform script in a directory called `cluster-generated`. + +## Deploy Cluster + +From this `generated` directory we can `init` our Terraform deployment to fetch all of the upstream Terraform module dependencies. + +```bash +$ cd ~/cluster-deployment/cluster-generated/cluster +$ terraform init +Initializing modules... +Downloading github.com/microsoft/bedrock?ref=0.12.0//cluster/azure/aks-gitops for aks-gitops... +- aks-gitops in .terraform/modules/aks-gitops/cluster/azure/aks-gitops +- aks-gitops.aks in .terraform/modules/aks-gitops/cluster/azure/aks +- aks-gitops.aks.azure-provider in .terraform/modules/aks-gitops/cluster/azure/provider +- aks-gitops.aks.azure-provider.common-provider in .terraform/modules/aks-gitops/cluster/common/provider +- aks-gitops.flux in .terraform/modules/aks-gitops/cluster/common/flux +- aks-gitops.flux.common-provider in .terraform/modules/aks-gitops/cluster/common/provider +- aks-gitops.kubediff in .terraform/modules/aks-gitops/cluster/common/kubediff +- aks-gitops.kubediff.common-provider in .terraform/modules/aks-gitops/cluster/common/provider +Downloading github.com/microsoft/bedrock?ref=0.12.0//cluster/azure/provider for provider... +- provider in .terraform/modules/provider/cluster/azure/provider +- provider.common-provider in .terraform/modules/provider/cluster/common/provider +Downloading github.com/microsoft/bedrock?ref=0.12.0//cluster/azure/vnet for vnet... +- vnet in .terraform/modules/vnet/cluster/azure/vnet + +Initializing the backend... + +Initializing provider plugins... +- Checking for available provider plugins... +- Downloading plugin for provider "null" (hashicorp/null) 2.1.2... +- Downloading plugin for provider "random" (hashicorp/random) 2.2.1... +- Downloading plugin for provider "azuread" (hashicorp/azuread) 0.5.1... +- Downloading plugin for provider "azurerm" (hashicorp/azurerm) 1.32.1... + +Terraform has been successfully initialized! + +You may now begin working with Terraform. Try running "terraform plan" to see +any changes that are required for your infrastructure. All Terraform commands +should now work. + +If you ever set or change modules or backend configuration for Terraform, +rerun this command to reinitialize your working directory. If you forget, other +commands will detect it and remind you to do so if necessary. 
+``` + +Our next step is to plan the deployment, which will preflight our deployment script and the configured variables, and output the changes that would happen in our infrastructure if applied: + +```bash +$ terraform plan -var-file=spk.tfvars +Refreshing Terraform state in-memory prior to plan... +The refreshed state will be used to calculate this plan, but will not be +persisted to local or remote state storage. + +data.azurerm_resource_group.cluster_rg: Refreshing state... +module.aks-gitops.data.azurerm_resource_group.aksgitops: Refreshing state... +module.vnet.data.azurerm_resource_group.vnet: Refreshing state... +module.aks-gitops.module.aks.data.azurerm_resource_group.cluster: Refreshing state... + +------------------------------------------------------------------------ + +An execution plan has been generated and is shown below. +Resource actions are indicated with the following symbols: + + create + +Terraform will perform the following actions: + + # module.vnet.azurerm_subnet.subnet[0] will be created + + resource "azurerm_subnet" "subnet" { + + address_prefix = "10.10.1.0/24" + + id = (known after apply) + + ip_configurations = (known after apply) + + name = "myuser-cluster-aks-subnet" + + resource_group_name = "myuser-cluster-rg" + + service_endpoints = [] + + virtual_network_name = "myuser-cluster-vnet" + } + +.... snip .... + +Plan: 8 to add, 0 to change, 0 to destroy. + +------------------------------------------------------------------------ + +Note: You didn't specify an "-out" parameter to save this plan, so Terraform +can't guarantee that exactly these actions will be performed if +"terraform apply" is subsequently run. +``` + +Finally, since we are happy with these changes, we apply the Terraform template. Please confirm with "yes" for a prompt to perform the actions. + +``` +$ terraform apply -var-file=spk.tfvars +An execution plan has been generated and is shown below. +Resource actions are indicated with the following symbols: + + create + +Terraform will perform the following actions: + + + azurerm_resource_group.cluster_rg + id: + location: "westus2" + name: "myuser-cluster-rg" + tags.%: + +... snip ... + +Plan: 8 to add, 0 to change, 0 to destroy. + +Do you want to perform these actions? + Terraform will perform the actions described above. + Only 'yes' will be accepted to approve. + + Enter a value: yes + +module.vnet.azurerm_resource_group.vnet: Creating... + location: "" => "westus2" + name: "" => "testazuresimplerg" + tags.%: "" => "" +azurerm_resource_group.cluster_rg: Creating... + location: "" => "westus2" + name: "" => "testazuresimplerg" + tags.%: "" => "" + +.... snip ... + +Apply complete! Resources: 8 added, 0 changed, 0 destroyed. +``` + +You have successfully have deployed your first cluster with Bedrock! + +This might seem like a lot of overhead for creating a single cluster. The real advantage of this comes when you need to manage multiple clusters that are only slightly differentiated by config, or when you want to do upgrades to a new version of the template, and a variety of other “day 2” scenarios. You can read in detail about these scenarios in our infrastructure definitions documentation. + +### Using Terraform State + +Terraform stores the results of our `terraform apply` in a `terraform.tfstate` file. 
You can see an overview of resources created with: + +```bash +$ terraform state list +azurerm_resource_group.cluster_rg +module.aks-gitops.module.aks.azurerm_kubernetes_cluster.cluster +module.aks-gitops.module.aks.azurerm_resource_group.cluster +module.aks-gitops.module.aks.null_resource.cluster_credentials +module.aks-gitops.module.flux.null_resource.deploy_flux +module.vnet.azurerm_resource_group.vnet +module.vnet.azurerm_subnet.subnet +module.vnet.azurerm_virtual_network.vnet +```` + +You can see more details about any one of these created resources with: + +```bash +$ terraform state show module.vnet.azurerm_virtual_network.vnet +id = /subscriptions/b59451c1-cd43-41b3-b3a4-74155d8f6cf6/resourceGroups/tst-az-simple-rg/providers/Microsoft.Network/virtualNetworks/testazuresimplevnet +address_space.# = 1 +address_space.0 = 10.10.0.0/16 +ddos_protection_plan.# = 0 +dns_servers.# = 0 +location = westus2 +name = testazuresimplevnet +resource_group_name = tst-az-simple-rg +subnet.# = 0 +tags.% = 1 +tags.environment = azure-simple +``` + +And a full set of details with: + +```bash +$ terraform show +``` + +### Interacting with the Deployed Cluster + +The `azure-simple` Terraform template we used in this walkthrough automatically copies the Kubernetes config file from the cluster into the `output` directory. This config file has all of the details we need to interact with our new cluster. + +To utilize it, we first need to merge it into our own config file and make it the default configuration. We can do that with this: + +```bash +$ KUBECONFIG=./output/bedrock_kube_config:~/.kube/config kubectl config view --flatten > merged-config && mv merged-config ~/.kube/config +``` + +With this, you should be able to see the pods running in the cluster: + +```bash +$ kubectl get pods --all-namespaces +NAMESPACE NAME READY STATUS RESTARTS AGE +flux flux-5698f45759-ntnz5 1/1 Running 0 10m +flux flux-memcached-7c9c56b487-wcsvr 1/1 Running 0 10m +kube-system azure-cni-networkmonitor-7bjwt 1/1 Running 0 13m +kube-system azure-cni-networkmonitor-h7m64 1/1 Running 0 13m +kube-system azure-cni-networkmonitor-q7hn2 1/1 Running 0 13m +kube-system azure-ip-masq-agent-2xtng 1/1 Running 0 13m +kube-system azure-ip-masq-agent-5v6vz 1/1 Running 0 13m +kube-system azure-ip-masq-agent-jpb5h 1/1 Running 0 13m +kube-system azure-npm-l5flr 2/2 Running 1 13m +kube-system azure-npm-qsxnq 2/2 Running 0 13m +kube-system azure-npm-zs8hz 2/2 Running 0 13m +kube-system coredns-7fc597cc45-7m7cm 1/1 Running 0 11m +kube-system coredns-7fc597cc45-q2kr8 1/1 Running 0 18m +kube-system coredns-autoscaler-7ccc76bfbd-pfwjh 1/1 Running 0 18m +kube-system kube-proxy-c8p2j 1/1 Running 0 13m +kube-system kube-proxy-tnrd2 1/1 Running 0 13m +kube-system kube-proxy-wsqhn 1/1 Running 0 13m +kube-system kubernetes-dashboard-cc4cc9f58-qjbc2 1/1 Running 0 18m +kube-system metrics-server-58b6fcfd54-c2w6x 1/1 Running 0 18m +kube-system tunnelfront-5787f6d67-f84zn 1/1 Running 0 18m +``` + +As you can see, Flux was provisioned as part of cluster creation, and we can see the pods are running in the cluster. 
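+As a quick aside, you can also read the logs through the deployment rather than an individual pod, which avoids copying a generated pod name. This is plain `kubectl` and only assumes the deployment is named `flux`, which the pod names above suggest:
+
+```bash
+# Let kubectl pick a pod from the "flux" deployment instead of copying its generated name
+$ kubectl logs deployment/flux -n flux
+```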
+ +Copying the name of the flux pod, let’s fetch the logs for it: + +```bash +$ kubectl logs flux-5897d4679b-tckth -n flux +ts=2019-06-18T06:33:18.668235584Z caller=main.go:193 version=1.12.2 +ts=2019-06-18T06:33:18.781628775Z caller=main.go:350 component=cluster identity=/etc/fluxd/ssh/identity +ts=2019-06-18T06:33:18.781698175Z caller=main.go:351 component=cluster identity.pub="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDTNdGpnmztWRa8RofHl8dIGyNkEayNR6d7p2JtJ7+zMj0HRUJRc+DWvBML4DvT29AumVEuz1bsVyVS2f611NBmXHHKkbzAZZzv9gt2uB5sjnmm7LAORJyoBEodR/T07hWr8MDzYrGo5fdTDVagpoHcEke6JT04AL21vysBgqfLrkrtcgaXsw8e3rkfbqGLbhb6o1muGdEyE+uci4hRVj+FGL9twh3Mb6+0uak/UsTFgfDi/oTXdXOFIitQ1o40Eip6P4xejEOuIye0cg7rfX461NmOP7HIEsUa+BwMExiXXsbxj6Z0TXG0qZaQXWjvZF+MfHx/J0Alb9kdO3pYx3rJbzmdNFwbWM4I/zN+ng4TFiHBWRxRFmqJmKZX6ggJvX/d3z0zvJnvSmOQz9TLOT4lqZ/M1sARtABPGwFLAvPHAkXYnex0v93HUrEi7g9EnM+4dsGU8/6gx0XZUdH17WZ1dbEP7VQwDPnWCaZ/aaG7BsoJj3VnDlFP0QytgVweWr0J1ToTRQQZDfWdeSBvoqq/t33yYhjNA82fs+bR/1MukN0dCWMi7MqIs2t3TKYW635E7VHp++G1DR6w6LoTu1alpAlB7d9qiq7o1c4N+gakXSUkkHL8OQbQBeLeTG1XtYa//A5gnAxLSzxAgBpVW15QywFgJlPk0HEVkOlVd4GzUw==" +ts=2019-06-18T06:33:18.781740875Z caller=main.go:352 component=cluster host=https://10.0.0.1:443 version=kubernetes-v1.13.5 +ts=2019-06-18T06:33:18.781823975Z caller=main.go:364 component=cluster kubectl=/usr/local/bin/kubectl +ts=2019-06-18T06:33:18.783257271Z caller=main.go:375 component=cluster ping=true +ts=2019-06-18T06:33:18.790498551Z caller=main.go:508 url=git@github.com:jmspring/manifests.git user="Weave Flux" email=support@weave.works signing-key= sync-tag=flux-sync notes-ref=flux set-author=false +ts=2019-06-18T06:33:18.790571551Z caller=main.go:565 upstream="no upstream URL given" +ts=2019-06-18T06:33:18.791840947Z caller=main.go:586 addr=:3030 +ts=2019-06-18T06:33:18.819345472Z caller=loop.go:90 component=sync-loop err="git repo not ready: git repo has not been cloned yet" +ts=2019-06-18T06:33:18.819404372Z caller=images.go:18 component=sync-loop msg="polling images" +``` + +## Deploy an update using Kubernetes manifest + +The GitOps workflow we established with Flux and Bedrock makes it easy to control the workflow that is running in the cluster. Flux watches the GitOps resource manifest repo and applies any changes we make there to the cluster. + +Let’s try this by creating a YAML file with a set of Kubernetes resources for a simple service and committing it to the resource manifest repo. 
In your resource manifest git repo directory that we cloned earlier, create a file called `azure-vote-all-in-one-redis.yaml` and place the following into it: + +```yaml +apiVersion: apps/v1beta1 +kind: Deployment +metadata: + name: azure-vote-back +spec: + replicas: 1 + template: + metadata: + labels: + app: azure-vote-back + spec: + nodeSelector: + "beta.kubernetes.io/os": linux + containers: + - name: azure-vote-back + image: redis + ports: + - containerPort: 6379 + name: redis +--- +apiVersion: v1 +kind: Service +metadata: + name: azure-vote-back +spec: + ports: + - port: 6379 + selector: + app: azure-vote-back +--- +apiVersion: apps/v1beta1 +kind: Deployment +metadata: + name: azure-vote-front +spec: + replicas: 1 + strategy: + rollingUpdate: + maxSurge: 1 + maxUnavailable: 1 + minReadySeconds: 5 + template: + metadata: + labels: + app: azure-vote-front + spec: + nodeSelector: + "beta.kubernetes.io/os": linux + containers: + - name: azure-vote-front + image: microsoft/azure-vote-front:v1 + ports: + - containerPort: 80 + resources: + requests: + cpu: 250m + limits: + cpu: 500m + env: + - name: REDIS + value: "azure-vote-back" +--- +apiVersion: v1 +kind: Service +metadata: + name: azure-vote-front +spec: + type: LoadBalancer + ports: + - port: 80 + selector: + app: azure-vote-front +--- +``` + +This defines a multi-container application that includes a web front end and a Redis instance running in the cluster. + +![voting app](./images/voting-app-deployed-in-azure-kubernetes-service.png) + +Let’s commit this file and push it to our remote GitOps repo: + +```bash +$ git add azure-vote-all-in-one-redis.yaml +$ git commit -m "Add simple web application" +$ git push origin master +``` + +Watch the Flux pod logs again, but this time tailing them so we get updates with `-f`. Please note that it may take upto 5 minutes for the update to be reflected. + +```bash +$ kubectl logs flux-5897d4679b-tckth -n flux -f +``` + +Once Flux starts its next reconcilation, we should see at the end of the output that Flux has found the repo `app-cluster-manifests` and created the new service: + +`"kubectl apply -f -" took=1.263687361s err=null output="service/azure-vote-front created\ndeployment.extensions/azure-vote-front created"`. + +Once applied, we should be able to see the web app pods running in our cluster: + +```bash +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +azure-vote-back-6d4b4776b6-pfdpt 1/1 Running 0 21d +azure-vote-front-5ccf899cf6-wrtf4 1/1 Running 0 21d +``` + +We should also see the LoadBalancer service by querying the set of services in the cluster: + +``` +$ kubectl get services --all-namespaces +NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +default azure-vote-front LoadBalancer 10.0.6.209 52.143.80.54 80:30396/TCP 21d +default kubernetes ClusterIP 10.0.0.1 443/TCP 44m +default azure-vote-back ClusterIP 10.0.125.58 6379/TCP 21d +flux flux ClusterIP 10.0.139.133 3030/TCP 34m +flux flux-memcached ClusterIP 10.0.246.230 11211/TCP 34m +kube-system kube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP 44m +kube-system kubernetes-dashboard ClusterIP 10.0.222.104 80/TCP 44m +kube-system metrics-server ClusterIP 10.0.189.185 443/TCP 44m +``` + +External load balancers like this take time to provision. If the EXTERNAL-IP of service is still pending, keep trying periodically until it is provisioned. + +The EXTERNAL-IP, in the case above, is: 52.143.80.54. By appending the port our service is hosted on we can use http://52.143.80.54:80 to fetch the service in a browser. 
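+Had the `EXTERNAL-IP` column still shown `<pending>`, you could let `kubectl` poll for you rather than re-running the query by hand. This is standard `kubectl`, nothing Bedrock-specific:
+
+```bash
+# Re-prints the service line whenever it changes; Ctrl+C once an external IP appears
+$ kubectl get service azure-vote-front --watch
+```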
+ +![voting app deployed](./images/azure-voting-deployed.png) + +And that’s it. We have created a GitOps resource manifest repo, scaffolded and deployed an AKS cluster, and used GitOps to deploy a web app workload to it. + +As a final step, you probably want to delete your Kubernetes cluster to save on your wallet. Thankfully, Terraform has a command for that: + +```bash +$ cd ~/cluster-deployment/cluster-generated +$ terraform destroy -var-file=spk.tfvars +``` diff --git a/docs/azure-simple/images/WebAppRunning.png b/docs/firstWorkload/images/WebAppRunning.png similarity index 100% rename from docs/azure-simple/images/WebAppRunning.png rename to docs/firstWorkload/images/WebAppRunning.png diff --git a/docs/azure-simple/images/Win_Subsys_Linux.png b/docs/firstWorkload/images/Win_Subsys_Linux.png similarity index 100% rename from docs/azure-simple/images/Win_Subsys_Linux.png rename to docs/firstWorkload/images/Win_Subsys_Linux.png diff --git a/docs/azure-simple/images/addDeployKey.png b/docs/firstWorkload/images/addDeployKey.png similarity index 100% rename from docs/azure-simple/images/addDeployKey.png rename to docs/firstWorkload/images/addDeployKey.png diff --git a/docs/firstWorkload/images/azuve-voting-deployed.png b/docs/firstWorkload/images/azuve-voting-deployed.png new file mode 100644 index 0000000..3f32c89 Binary files /dev/null and b/docs/firstWorkload/images/azuve-voting-deployed.png differ diff --git a/docs/azure-simple/images/deployKeyResult.png b/docs/firstWorkload/images/deployKeyResult.png similarity index 100% rename from docs/azure-simple/images/deployKeyResult.png rename to docs/firstWorkload/images/deployKeyResult.png diff --git a/docs/azure-simple/images/dropYAMLfile.png b/docs/firstWorkload/images/dropYAMLfile.png similarity index 100% rename from docs/azure-simple/images/dropYAMLfile.png rename to docs/firstWorkload/images/dropYAMLfile.png diff --git a/docs/azure-simple/images/emptyRepository.png b/docs/firstWorkload/images/emptyRepository.png similarity index 100% rename from docs/azure-simple/images/emptyRepository.png rename to docs/firstWorkload/images/emptyRepository.png diff --git a/docs/azure-simple/images/empty_Repo.png b/docs/firstWorkload/images/empty_Repo.png similarity index 100% rename from docs/azure-simple/images/empty_Repo.png rename to docs/firstWorkload/images/empty_Repo.png diff --git a/docs/firstWorkload/images/ssh_accessing_security_key.png b/docs/firstWorkload/images/ssh_accessing_security_key.png new file mode 100644 index 0000000..c97e920 Binary files /dev/null and b/docs/firstWorkload/images/ssh_accessing_security_key.png differ diff --git a/docs/firstWorkload/images/ssh_key_input.png b/docs/firstWorkload/images/ssh_key_input.png new file mode 100644 index 0000000..c16cbfc Binary files /dev/null and b/docs/firstWorkload/images/ssh_key_input.png differ diff --git a/docs/firstWorkload/images/ssh_profile_access.png b/docs/firstWorkload/images/ssh_profile_access.png new file mode 100644 index 0000000..376f7d8 Binary files /dev/null and b/docs/firstWorkload/images/ssh_profile_access.png differ diff --git a/docs/firstWorkload/images/voting-app-deployed-in-azure-kubernetes-service.png b/docs/firstWorkload/images/voting-app-deployed-in-azure-kubernetes-service.png new file mode 100644 index 0000000..ab9bf12 Binary files /dev/null and b/docs/firstWorkload/images/voting-app-deployed-in-azure-kubernetes-service.png differ diff --git a/docs/gitops-pipeline.md b/docs/gitops-pipeline.md new file mode 100644 index 0000000..00682e6 --- /dev/null +++ 
b/docs/gitops-pipeline.md @@ -0,0 +1,131 @@ +# The End to End GitOps Deployment Pipeline + +This deep dive will cover the operational details around the core GitOps based pipeline in Bedrock. The average service devops person can be blissfully unware of the details because Bedrock largely automates these implementation details, but it is still helpful to understand how all of it functions and fits together. Its workflow centers around three repositories: the application repo, the high level definition repo, and the resource manifest repo: + +![End to End GitOps Pipeline](images/spk-resource-diagram.png) +

+*End to End GitOps Pipeline*

+ +We will describe the high level definition repo in more detail later, but at a high level, it contains a higher level description of what should be deployed in a customer such that human pull request reviewers can better understand what is being proposed in an operational change. The resource manifest repo, in contrast, is the raw YAML that is applied to Kubernetes to express the current state of the cluster. + +These repositories are linked together by Azure Devops pipelines. Between the application repo and the high level definition repo we have two types of pipelines. The first is the lifecycle management pipeline, which manages the creation and deletion of services and rings through pull requests on the high level definition repo. The second type of pipeline is created for each service that is contained in the application repo. This pipeline is responsible for building the container for the service and creating a pull request on the high level definition repo to release them. You can learn more about the automation around establishing these pipelines and performing day to day tasks in our [Service Management walkthrough](./services.md). + +Finally, the High Level Definition repository and the Resource Manifest repos are linked by a single pipeline that takes the high level definition of the deployment, builds it with Fabrikate, and checks the deployment YAML artifacts into the Resource Manifest repo. You can learn more about automation around establishing this pipeline in our [GitOps Pipeline walkthrough](./gitops-pipeline.md). + +## Deep Dive: High Level Definitions + +In [Why GitOps?](./why-gitops.md) we discussed how the git repo of record contains the low level Kubernetes resource manifests and that commits against this repo are reconciled with Kubernetes by Flux to bring the cluster into the same state. We also saw in our [first workload walkthrough](../firstWorkload) how we could commit a simple set of resource manifests to this repo and that Flux would reconcile that change with the cluster. + +In the real world, however, the Kubernetes resource manifests that comprise an application definition are typically very complex. For example, the resource manifests needed to deploy ElasticSearch can run to 500+ lines and the complete deployment of an Elasticsearch / Fluentd / Kibana (EFK) logging stack can be over 1200 lines. These resource manifests, by their YAML nature, are typically very dense, context free, and very indentation sensitive -- making them a dangerous surface to directly edit without introducing a high risk for operational disaster. + +This has traditionally been solved in the Kubernetes ecosystem with higher level tools like [helm](http://helm.sh). Helm provides templating for the boilerplate inherent in these resource definitions and also provide a reasonable set of default configuration values. Helm continues to be the best way to generate the resource manifests for applications, and we use Helm in our GitOps CI/CD process, checking the generated resource manifests into the resource manifest git repo that we described previously. + +That said, a second problem that you have to address when you start to compose a real world production Kubernetes deployment is that the resource manifests that describe the in-cluster workload tend to be composed of the combination of many Helm charts. 
For example, to deploy the EFK logging stack above, you might want to generate resource manifests using four charts from helm/charts: `stable/elasticsearch`, `stable/elasticsearch-curator`, `stable/fluentd-elasticsearch`, and `stable/grafana`. + +While you could utilize shell scripts to do this or even create a large helm with subdependencies, this is brittle and not easy to share between deployments, something that is essential in large company contexts where they may have hundreds of clusters running and where reuse, leverage, and central maintenance is critical. + +In Bedrock we utilize high level deployment definitions, which can themselves reference remote subcomponents. In this way, you can compose the overall deployment out a set of common, centrally managed components. Such a stack for the above EFK logging stack might look like: + +```json +{ + "name": "elasticsearch-fluentd-kibana", + "type": "static", + "path": "./manifests", + "subcomponents": [ + { + "name": "elasticsearch", + "type": "helm", + "source": "https://github.com/helm/charts", + "method": "git", + "path": "stable/elasticsearch" + }, + { + "name": "elasticsearch-curator", + "type": "helm", + "source": "https://github.com/helm/charts", + "method": "git", + "path": "stable/elasticsearch-curator" + }, + { + "name": "fluentd-elasticsearch", + "type": "helm", + "source": "https://github.com/helm/charts", + "method": "git", + "path": "stable/fluentd-elasticsearch" + }, + { + "name": "kibana", + "type": "helm", + "source": "https://github.com/helm/charts", + "method": "git", + "path": "stable/kibana" + } + ] +} +``` + +Such a deployment specification requires tooling and the Bedrock project maintains a tool called Fabrikate to generate the low level resource manifests from these high level definitions. It is intended to be executed as part of a CI/CD pipeline that sits between a high level definition of your deployment and the resource manifest repo that Flux watches. This enables the components of a deployment to be written at a higher (and hence less error prone) level and to be able to share those components amongst deployments. + +![GitOps pipeline](images/manifest-gen.png) +

+*GitOps Pipeline: High Level Definition to Resource Manifest Pipeline*

+ +A final problem that Fabrikate solves is that, in real world scale workloads, there are often multiple clusters deployed for the same workload for scale, reliability, and/or latency reasons. These clusters tend to only differ slightly in terms of their config and there is a strong desire to centralize the common config for these clusters such that it remains DRY. + +Fabrikate solves this with composable configuration files. These configuration files are loaded and applied at generation time to build the final set of configuration values that are used during `helm template`. Using our EFK stack example from above, and since we know the different subcomponents that make up this stack, we can preconfigure the connections between these different subcomponents with config values with a configuration file that looks like this such that we can do this once in one spot: + +```yaml +config: +subcomponents: + elasticsearch: + namespace: elasticsearch + injectNamespace: true + elasticsearch-curator: + namespace: elasticsearch + injectNamespace: true + config: + cronjob: + successfulJobsHistoryLimit: 0 + configMaps: + config_yml: |- + --- + client: + hosts: + - elasticsearch-master.elasticsearch.svc.cluster.local + port: 9200 + use_ssl: False + fluentd-elasticsearch: + namespace: fluentd + injectNamespace: true + config: + elasticsearch: + host: "elasticsearch-master.elasticsearch.svc.cluster.local" + kibana: + namespace: kibana + injectNamespace: true + config: + elasticsearchHosts: "http://elasticsearch-master.elasticsearch.svc.cluster.local:9200" +``` + +Fabrikate also enables you to override configuration such that you can utilize the same high level definition with a `common` set of configuration, but also differentiate the configuration applied to the `prod-east` and `prod-west` clusters with specific `prod-east` and `prod-west` configuration that preempts this `common` configuration. + +Our EFK preconfigured stack above can be itself checked into a git repo and referenced from another high level deployment definition file. For example, if we wanted to define a “cloud native” stack with all of the observability, service mesh, and management components included, we could express this with a deployment config that looks like: + +```yaml +name: "cloud-native" +type: static +path: "./manifests" +subcomponents: + - name: "elasticsearch-fluentd-kibana" + source: "../elasticsearch-fluentd-kibana" + - name: "prometheus-grafana" + source: "../prometheus-grafana" + - name: "linkerd2" + source: "../linkerd2" + - name: "kured" + source: "../kured" + - name: "jaeger" + source: "../jaeger-operator" + - name: "traefik" + source: "../traefik" +``` + +Such a hierarchical approach to specifying deployments allows for the reuse of lower level stacks (like the EFK example above) and for updates to these dependent stacks to be applied centrally at the source -- as opposed to having to make N downstream commits in each deployment repo. diff --git a/docs/hldToManifest.md b/docs/hldToManifest.md new file mode 100644 index 0000000..f5f1a21 --- /dev/null +++ b/docs/hldToManifest.md @@ -0,0 +1,199 @@ +# Setting up an HLD to Manifest pipeline + +In [First Workload](./firstWorkload/README.md) we deployed the Azure Voting App using a GitOps workflow by pushing the `azure-vote-all-in-one-redis.yaml` Kubernetes resource manifest file. In [High level Deployment Definitions](./high-level-definitions.md) we learned that, Kubernetes resource manifests that comprise an application definition are typically very complex. 
These resource manifests, by their YAML nature, are typically very dense, context free, and very indentation sensitive -- making them a dangerous surface to directly edit without introducing a high risk for operational disaster. + +We also learned that real world Kubernetes deployments tend to be composed of the combination of many Helm charts. Maintaining and generating various Helm charts can be a challenge and why Bedrock introduced the concept of high level definitions to meet this complexity. + +In this walkthrough, we will set up an Azure DevOps pipeline that generates a resource manifest from an HLD definition for the Azure Voting App and pushes it to the Manifest Repository. + +## Requirements + +There are a few requirements to use this automation: + +1. The application code and supporting repositories are hosted on + [Azure Devops](https://azure.microsoft.com/en-us/services/devops/). + - If starting from scratch, then first create a + [new Azure Devops Organization](https://docs.microsoft.com/en-us/azure/devops/user-guide/sign-up-invite-teammates?view=azure-devops), + then + [create a project](https://docs.microsoft.com/en-us/azure/devops/organizations/projects/create-project?view=azure-devops&tabs=preview-page). +2. A Manifest Repository inside the Azure DevOps project from Step 1. [Create a repository](https://docs.microsoft.com/en-us/azure/devops/repos/git/create-new-repo?view=azure-devops). +3. An HLD Repository inside the Azure DevOps project from Step 1. [Create a repository](https://docs.microsoft.com/en-us/azure/devops/repos/git/create-new-repo?view=azure-devops). +4. The application will be packaged and run using container images hosted on + [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) +5. The user running `spk` has full access to the above resources. +6. The user is running the latest `spk` + [release](https://github.com/catalystcode/spk/releases). +7. The user has + [Azure CLI installed](https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest). +8. The user is running [git](http://git-scm.org) version + [2.22](https://github.blog/2019-06-07-highlights-from-git-2-22/) or later. + +**Note**: If a user wishes to store helm charts in the application + repositories, then all repositories (application, high level definition, + materialized manifests) must be in the same Azure DevOps Organization AND + Project. This is what Step 2 and Step 3 are doing. + +## Setup SPK + +Download the latest version of `spk` from the +[releases](https://github.com/catalystcode/spk/releases) page and add it to your +PATH. + +To setup a local configuration: + +1. [Generate a Personal Access Token](#generating-personal-access-token) +2. [Create a spk config file](#create-spk-config-file) +3. [Initialize spk](#initializing-spk) + +## Generate Personal Access Token + +Generate a new Personal Access Token (PAT) to grant `spk` permissions in the +Azure Devops Project. Please grant PAT the following permissions: + +- Build (Read & execute) +- Code (Read, write, & manage) +- Variable Groups (Read, create, & manage) + +For help, follow the +[guide](https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&tabs=preview-page). + +## Create SPK config file + +Create a copy of `spk-config.yaml` from the starter +[template](./../spk-config.yaml) with the appropriate values for the +`azure_devops` section. 
+ +Your `azure_devops` section should look similar to this: +```yaml +azure_devops: + access_token: "8w98gzilabcde6aq5insk7tt64yasprnnetlemvcc2eubzwzwqppl" # This is a Personal Access Token with permission to modify and access the HLD, manifest and infra repos. Leave this empty if project is public. Details for the PAT at: https://github.com/CatalystCode/spk/blob/master/docs/project-service-management-guide.md#generating-personal-access-token + hld_repository: "https://dev.azure.com/myOrganization/myProject/_git/app-cluster-hlds" # Repository URL for your Bedrock HLDs + manifest_repository: "https://dev.azure.com/myOrganization/myProject/_git/app-cluster-manifests" # Repository URL for your materialized manifests generated by fabrikate. + infra_repository: "" # Repository URL that contains your terraform templates to be sed for scaffolding and generating infrastructure deployment templates. + org: "myOrganization" # Your AzDo Org + project: "myProject" # Your AzDo project +``` + +**Note:** This `spk-config.yaml` should not be commited anywhere, as it contains +sensitive credentials. For an alternative approach on how to add secrets to `spk-config.yaml` using environment variables, see these [instructions](https://github.com/CatalystCode/spk#environment-variables). + +## Initialize SPK + +Run `spk init -f ` where `` the path to the +configuation file. + +**Note:** When running `spk init -f `, `spk` will copy the +values from the config file and store it into local memory elsewhere. If you +wish to utilize `spk` with another project or target, then you must rerun +`spk init` with another configuration first OR, you may overwrite each commands +via flags. + + +## Repositories +Our next step is to onboard the repositories that support the +deployment of our services: + +1. The high level definition repository (Step 3 from the [Requirements](#requirements)) +2. The materialized manifest repository (Step 2 from the [Requirements](#requirements)) + +### High Level Definition Repository + +This repository holds the Bedrock High Level Deployment Definition (HLD) and +associated configurations. + +This HLD is processed via [fabrikate](https://github.com/microsoft/fabrikate) in +Azure Devops on each change to generate Kubernetes YAML manifests that are +applied to the Kubernetes cluster by Flux. + +#### Initializing the High Level Definition Repository + +- Make sure your SPK config points to the HLD repo you created in Step 3 of [Requirements](#requirements). When you change the values in the SPK config, make sure you re-initialize SPK by running `spk init -f `. +- [Clone the repository.](https://docs.microsoft.com/en-us/azure/devops/repos/git/create-new-repo?view=azure-devops#clone-the-repo-to-your-computer) +- Initialize via `spk`, this will add the fabrikate + [traefik2](https://github.com/microsoft/fabrikate-definitions/tree/master/definitions/traefik2) + as the initial sample component. This can be overridden via optional flags. + ``` + spk hld init --git-push + ``` + +**NOTE** `spk hld` command documentation can be found +[here](/guides/hld-management.md). + +If the initialization succeeded, you will see a message similar to this: +``` +info: Link to create PR: https://dev.azure.com/myOrganization/myProject/_git/app-cluster-hlds/pullrequestcreate?sourceRef=spk-hld-init&targetRef=master +``` + +This means that we were able to generate and HLD with the default traefik2 component and all the changes were added to a new branch and are ready to be added to a Pull Request. 
+ +To verify run: +``` +$ git branch -a +* master + remotes/origin/HEAD -> origin/master + remotes/origin/master + remotes/origin/spk-hld-init +``` +As you can see we now have a `spk-hld-init` branch. + +Go to the "Link to create a PR" that we got earlier after running the `spk hld init --git-push` command. You will see: +![hld new pr](./images/hld-new-pr.png) + + +If you scroll down, you will see several files were added: `component.yaml` and `manifest-generation.yaml`. These files contain the information for our traefik component and for the pipeline. +![hld pr](./images/hld-pr.png) + + +Click "Create" to create the PR. Then click "Complete". Finally click "Complete merge": +![hld pr complete](./images/hld-pr-complete-merge.png) + +Your changes should now be in the `master` branch. Pull the latest changes: +``` +$ git pull origin master +remote: Azure Repos +remote: Found 1 objects to send. (3 ms) +Unpacking objects: 100% (1/1), 238 bytes | 238.00 KiB/s, done. +From ssh.dev.azure.com:v3/myOrganization/myProject/app-cluster-hlds + * branch master -> FETCH_HEAD + 32b0b14..3ee2da1 master -> origin/master +Updating 32b0b14..3ee2da1 +Fast-forward + .gitignore | 1 + + component.yaml | 6 ++++++ + manifest-generation.yaml | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + 3 files changed, 65 insertions(+) + create mode 100644 .gitignore + create mode 100644 component.yaml + create mode 100644 manifest-generation.yaml +``` + +## Deploy Manifest Generation Pipeline + +Deploy a manifest generation pipeline between the high level definition repo and +the materialized manifests repo. Assuming you have configured `spk`, you can run +this without flag parameters from your HLD repo root: + +``` +$ spk hld install-manifest-pipeline +``` + +You can view the newly created pipeline in your Azure DevOps project: +![hld manifest pipeline](./images/hld-manifest-pipeline.png) + +Once the pipeline finishes running successfully, you will see that the manifests have been generated and pushed to the `app-cluster-manifests` repository: +![manifest repo](./images/manifest-repo.png) + +After some time, flux will apply the changes: +``` +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +traefik2-6f8ddc69cc-79n4g 0/1 ContainerCreating 0 8s +``` + +And we can also confirm the service is available: +``` +$ kubectl get services +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +kubernetes ClusterIP 10.0.0.1 443/TCP 21h +traefik2 LoadBalancer 10.0.209.68 137.135.15.52 80:31328/TCP,443:30149/TCP 19h +``` diff --git a/docs/images/cors-menu.png b/docs/images/cors-menu.png new file mode 100644 index 0000000..bfb558a Binary files /dev/null and b/docs/images/cors-menu.png differ diff --git a/docs/images/cors-settings.png b/docs/images/cors-settings.png new file mode 100644 index 0000000..cb66d5e Binary files /dev/null and b/docs/images/cors-settings.png differ diff --git a/docs/images/hld-manifest-pipeline.png b/docs/images/hld-manifest-pipeline.png new file mode 100644 index 0000000..36ce049 Binary files /dev/null and b/docs/images/hld-manifest-pipeline.png differ diff --git a/docs/images/hld-new-pr.png b/docs/images/hld-new-pr.png new file mode 100644 index 0000000..1bab28a Binary files /dev/null and b/docs/images/hld-new-pr.png differ diff --git a/docs/images/hld-pr-complete-merge.png b/docs/images/hld-pr-complete-merge.png new file mode 100644 index 0000000..6a11b96 Binary files /dev/null and b/docs/images/hld-pr-complete-merge.png differ diff --git a/docs/images/hld-pr.png b/docs/images/hld-pr.png new file mode 
100644 index 0000000..cd9f79f Binary files /dev/null and b/docs/images/hld-pr.png differ diff --git a/docs/images/manifest-gen.png b/docs/images/manifest-gen.png new file mode 100644 index 0000000..5a52db8 Binary files /dev/null and b/docs/images/manifest-gen.png differ diff --git a/docs/images/manifest-repo.png b/docs/images/manifest-repo.png new file mode 100644 index 0000000..4319aec Binary files /dev/null and b/docs/images/manifest-repo.png differ diff --git a/docs/images/pipelines-edit-save.png b/docs/images/pipelines-edit-save.png new file mode 100644 index 0000000..316affe Binary files /dev/null and b/docs/images/pipelines-edit-save.png differ diff --git a/docs/images/pipelines-edit.png b/docs/images/pipelines-edit.png new file mode 100644 index 0000000..a09cf3d Binary files /dev/null and b/docs/images/pipelines-edit.png differ diff --git a/docs/images/pipelines.png b/docs/images/pipelines.png new file mode 100644 index 0000000..f679f05 Binary files /dev/null and b/docs/images/pipelines.png differ diff --git a/docs/images/service-introspection-dashboard.png b/docs/images/service-introspection-dashboard.png new file mode 100644 index 0000000..ac3f6af Binary files /dev/null and b/docs/images/service-introspection-dashboard.png differ diff --git a/docs/images/service-introspection-sources.png b/docs/images/service-introspection-sources.png new file mode 100644 index 0000000..0881e36 Binary files /dev/null and b/docs/images/service-introspection-sources.png differ diff --git a/docs/images/service-introspection-tool.png b/docs/images/service-introspection-tool.png new file mode 100644 index 0000000..3494604 Binary files /dev/null and b/docs/images/service-introspection-tool.png differ diff --git a/docs/images/spk-resource-diagram.png b/docs/images/spk-resource-diagram.png new file mode 100644 index 0000000..e77ad58 Binary files /dev/null and b/docs/images/spk-resource-diagram.png differ diff --git a/docs/introspection.md b/docs/introspection.md new file mode 100644 index 0000000..2d8f613 --- /dev/null +++ b/docs/introspection.md @@ -0,0 +1,169 @@ +# Introspection in Service Deployments + +Many Kubernetes deployments are composed of not just one, but many microservices, and this complexity is compounded by, for latency, scalability, and/or reliability concerns, that these microservices are often also deployed across multiple clusters as well. This makes it hard to reason about the current state of any individual cluster -- and especially a collection of ones that all together constitute the workload. + +To help with this, Bedrock has a service introspection tool to provide better visibility the end to end deployment workflows. It integrates with the GitOps pipeline and service management that were setup in previous walkthrough guides. + +The service introspection tool exists in two forms: +1. Command line tool +2. Web Dashboard + +The service introspection tool provides views into the current status of any change in the system, from continuous integration build to tracking the deployment of the container containing that commit in each of the downstream clusters consuming that container. + +The service introspection main components are: +1. A Bedrock Gitops pipeline that reports back with telemetry data for each of the steps of the system. Currently supported in Azure DevOps +2. An Azure Storage Table that stores all of the telemetry reported back. +3. Introspection tools both for the command line and web dashboard. 
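+Once the storage table exists (it is created later in this guide), the raw telemetry can also be inspected outside of these tools with the standard Azure CLI, since it is an ordinary Azure Table. A sketch, where `mystorageaccount` and `deployments` stand in for whatever account and table names you choose during onboarding:
+
+```bash
+# Page through the most recent telemetry rows (hypothetical account, key, and table names)
+$ az storage entity query --account-name mystorageaccount --account-key $STORAGE_KEY --table-name deployments --num-results 10
+```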
+ +The following diagram shows how the introspection tool integrates with the Azure DevOps pipelines in a Bedrock GitOps Workflow. +![Service Introspection Tool](images/service-introspection-tool.png) + +The Web Dashboard is shown in the image below. In Bedrock, the status is displayed from newest to oldest, and we can see in the first line that a commit has triggered a container build that is currently deployed in the west cluster but that the east cluster hasn't yet been synchronized and is currently still running the previous version of the container. + +![Web Dashboard](images/service-introspection-dashboard.png) + +This walkthrough will cover how to set up introspection in your own deployments and how to use it to observe changes to your cluster. + +## Prerequisites +This guideline assumes you have completed the following: + +1. Set up GitOps. Guideline: [A First Workload With Bedrock](./firstWorkload/README.md) +2. Set up the HLD to Manifests pipeline. Guideline: [Setting up an HLD to Manifest pipeline](hldToManifest.md) +3. Onboard a Service Repository. Guideline: [Service Management](services.md) + + +## Setup an Azure Storage Table +Service introspection tool needs a database to store the information about your +pipelines, builds and deployments. Currently, service introspection supports +storage in the form of an Azure Storage table. Follow the steps below to create +it or use an existing one. + +### Create an Azure storage account +You can use spk to create a storage account if it does not already exist in your subscription in the given resource group. +The storage table will also be created in a newly created or in an existing storage account if it does not exist already. + +The `spk deployment onboard` command will create the storage account: + +``` +$ spk deployment onboard --storage-account-name $STORAGE_ACCOUNT --storage-table-name $TABLE_NAME --storage-location $LOCATION --storage-resource-group-name $RESOURCE_GROUP --service-principal-id $SP_APP_ID --service-principal-password $SP_PASS --tenant-id $SP_TENANT --subscription-id $SUBSCRIPTION +``` +Where: + +- `$STORAGE_ACCOUNT`: Azure storage account name +- `$TABLE_NAME`: Azure storage table name +- `$LOCATION`: Azure location to create new storage account when it does not exist +- `$RESOURCE_GROUP`: Name of the resource group to create new storage account when it does not exist (must already exist) +- `$SP_APP_ID`: Azure service principal id with `contributor` role in Azure Resource Group +- `$SP_PASS`: Azure service principal password +- `$SP_TENANT`: Azure AD tenant id of service principal +- `$SUBSCRIPTION`: Azure subscription id + +More information about its usage can be found [here](https://catalystcode.github.io/spk/commands/index.html#master@deployment_onboard). + +### Storage account CORS settings + +Configure the CORS settings for the storage account to allow requests from the +service introspection dasbhoard. + +1. Go to the [Azure portal](https://portal.azure.com) +2. Search for the name of your storage account +3. Click the CORS options on the menu on the left side: + +![cors menu option](./images/cors-menu.png) + +Add the following settings under **Table Service**: +![cors settings](./images/cors-settings.png) + +**Note:** If you are running the service introspection spk dashboard in a port +other than `4040`, add that entry in the settings instead. + +## Configure the Pipelines +The Bedrock GitOps pipelines need to be configured to start sending data to +`spk` service introspection. 
If you followed the guidelines from the [Prerequisites](#prerequisites), each pipeline `yaml` will already have the script needed for introspection.
+
+To send data from Azure pipelines to the Azure Storage table created
+previously, a variable group needs to be configured in Azure DevOps (where the
+pipelines are).
+
+### Create a Variable Group
+
+We next want to create a variable group that contains the configuration details our pipelines need in order to push telemetry about each pipeline run for observability.
+
+Create the following `introspection-values.yaml` file with the configuration details:
+
+```
+name: "introspection-vg"
+description: "Service introspection values"
+type: "Vsts"
+variables:
+  INTROSPECTION_ACCOUNT_KEY:
+    value: "Set this to the access key for your storage account"
+    isSecret: true
+  INTROSPECTION_ACCOUNT_NAME:
+    value: "Set this to the name of your storage account"
+  INTROSPECTION_PARTITION_KEY:
+    value: "Set this to a distinguishing key that identifies your source repository in storage; in this example, we use the name of the source repository, `hello-bedrock`"
+  INTROSPECTION_TABLE_NAME:
+    value: "Set this to the name of the table you created previously in [Create a table](#create-a-table)"
+```
+
+And then use spk's variable-group management to create the variable group:
+
+```
+$ spk variable-group create --file introspection-values.yaml --org-name $ORG_NAME --devops-project $DEVOPS_PROJECT --personal-access-token $ACCESS_TOKEN
+```
+
+Where `ORG_NAME` is the name of the Azure Devops org, `DEVOPS_PROJECT` is the name of your Azure Devops project, and `ACCESS_TOKEN` is the personal access token associated with the Azure DevOps org. We created a personal access token in [Setting up an HLD to Manifest pipeline](hldToManifest.md).
+
+### Update the Pipelines
+Next, we will update all the pipelines to include the variable group we created previously:
+
+```yaml
+variables:
+  - group: 
+```
+
+First, go to the pipelines in the Azure DevOps portal where you created your project:
+![Pipelines](images/pipelines.png)
+
+Next, click on the pipeline definition that you want to edit:
+![Edit pipeline](images/pipelines-edit.png)
+
+Add the name of the variable group you created in the file `introspection-values.yaml`. In this case, the name of the variable group we created was `introspection-vg`:
+![Edit pipeline](images/pipelines-edit-save.png)
+
+Repeat these steps for each pipeline definition.
+
+## Run the Introspection Tools
+
+If you haven't already, create a copy of `spk-config.yaml` from the starter
+[template](./../spk-config.yaml) with the appropriate values for the
+`introspection` section.
+
+```yaml
+introspection:
+  dashboard:
+    image: "samiyaakhtar/spektate:prod" # Use this default docker image unless you would like to use a custom one
+    name: "spektate"
+  azure: # This is the storage account for the service introspection tool.
+    account_name: "storage-account-name" # Must be defined to run spk deployment commands
+    table_name: "storage-account-table-name" # Must be defined to run spk deployment commands
+    partition_key: "storage-account-table-partition-key" # Must be defined to run spk deployment commands
+    key: "storage-access-key" # Must be defined to run spk deployment commands. Use ${env:INTROSPECTION_STORAGE_ACCESS_KEY} and set it in .env file
+    source_repo_access_token: "source_repo_access_token" # Optional.
Required only when source repository is private (in order to render the author column in dashboard)
+```
+
+Initialize `spk` to use these values:
+```
+$ spk init -f spk-config.yaml
+```
+
+Launch the dashboard:
+```
+$ spk deployment dashboard
+```
+
+Launch the command line tool:
+```
+$ spk deployment get
+``` diff --git a/docs/multicluster.md b/docs/multicluster.md new file mode 100644 index 0000000..fa98792 --- /dev/null +++ b/docs/multicluster.md @@ -0,0 +1,276 @@ +# Multicluster and Day 2 Infrastructure Scenarios
+
+One of the central problems in any cloud deployment is managing the infrastructure that supports the workload. This task can be very difficult in many large organizations as they may have hundreds of workloads — but many fewer folks working in operations and reliability engineering roles to maintain that infrastructure.
+
+The scale at which many of these organizations work also compounds this problem. Many of these workloads, for scale, latency, and/or reliability reasons, will span multiple clusters across multiple regions. These organizations need automation that enables them to leverage common deployment templates for all of these clusters to keep this complexity in check.
+
+They also need the ability to manage configuration across all of these clusters: centralizing config where possible such that it can be updated in one place while still being able to have per cluster config where it isn’t.
+
+If you followed our [single cluster infrastructure walkthrough](./singleKeyvault/README.md) you saw how Bedrock enables you to scaffold and generate Terraform deployment scripts. We will expand on that here to describe how Bedrock makes maintaining multiple Kubernetes clusters at scale easier.
+
+Bedrock leverages Terraform for infrastructure deployment, and the project itself maintains a number of base environment templates for common cluster deployment scenarios. These are just Terraform scripts and can be used directly (even independently of the rest of Bedrock’s automation).
+
+What Bedrock’s infrastructure automation adds is the ability to maintain cluster deployments at scale by separating the definition of the deployment from the Terraform template used for that deployment, such that our Terraform scripts are generated from these two components at deploy time.
+
+This approach has a couple of advantages:
+
+1. You can update the Terraform template in a central location and any of the downstream users of the template can take advantage of those improvements.
+2. You can [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) out deployment definitions when multiple clusters are deployed with largely the same parameters — but still retain the ability to set per cluster config.
+
+## Building a Multi-Cluster Definition
+
+Let’s have a look at how this works in practice by building our first deployment definition for an application called `search` with two clusters in the `east` and `west` regions. We are going to use Bedrock’s `spk` tool to automate this — so [install Bedrock's prerequisites](../tools/prereqs) if you haven’t already.
+
+We are going to leverage the `azure-single-keyvault` template from the Bedrock project, which provides a template for a single cluster with Azure Keyvault for secrets management.
We can scaffold out our infrastructure definition with this template with the following command: + +```bash +$ spk infra scaffold --name search --source https://github.com/microsoft/bedrock --version 1.0 --template cluster/environments/azure-single-keyvault +``` + +This `scaffold` command creates a directory called `search` and creates a definition.yaml file in it that looks like this: + +```yaml +name: search +source: 'https://github.com/microsoft/bedrock' +template: cluster/environments/azure-single-keyvault +version: 1.0 +backend: + storage_account_name: storage-account-name + access_key: storage-account-access-key + container_name: storage-account-container + key: tfstate-key +variables: + acr_enabled: 'true' + address_space: + agent_vm_count: + agent_vm_size: + cluster_name: + dns_prefix: + flux_recreate: + gc_enabled: 'true' + gitops_poll_interval: 5m + gitops_label: flux-sync + gitops_ssh_url: + gitops_url_branch: master + gitops_ssh_key: + gitops_path: + keyvault_name: + keyvault_resource_group: + resource_group_name: + ssh_public_key: + service_principal_id: + service_principal_secret: + subnet_prefixes: + vnet_name: + subnet_name: + network_plugin: azure + network_policy: azure + oms_agent_enabled: 'false' + enable_acr: 'false' + acr_name: +``` + +`scaffold` has downloaded the template locally, extracted all of the variables for the template, and provided defaults where possible for all of the variables. + +We want to deploy multiple clusters and share common configuration values between them. Given this, this particular definition, because it is the root definition for our workload as a whole across all of the clusters we are going to define, is where we are going to maintain those common values. So let's do that now: + +```yaml +name: search +source: 'https://github.com/microsoft/bedrock' +template: cluster/environments/azure-single-keyvault +version: 1.0 +backend: + storage_account_name: "searchops" + access_key: "7hDvyT4D2DNyD ... snip ... CiNvMEFYX1qTYHX3bT6XYva2tuN6Av+j+Kn259wQmA==" + container_name: "tfstate" +variables: + acr_enabled: true + agent_vm_count: 6 + agent_vm_size: Standard_D8s_v3 + flux_recreate: false + gc_enabled: true + gitops_label: flux-sync + gitops_poll_interval: 60s + gitops_ssh_url: git@ssh.dev.azure.com:v3/fabrikam/search/resource-manifests + gitops_url_branch: master + gitops_ssh_key: "../keys/gitops_repo_key" + keyvault_name: "search-keyvault" + keyvault_resource_group: "search-global-rg" + ssh_public_key: "ssh-rsa AAAAB3Nza ... snip ... lgodNP7GExxNLSLqcsZa9ZALc+P3FRjgYbLC/qMWtkzPH5TEHPU4P5KLbHr4ZN3kV2MiARTtjWOlYMnMnrGu6NYxCmjHsbZxfhhZ2rU3uIEvjUBo9rdtQ== johndoe@fabrikam.com" + service_principal_id: "deadbeef-3703-4842-8a96-9d8b1b7ea442" + service_principal_secret: "a0927660-70f7-4306-8e0f-deadbeef" + network_plugin: "azure" + network_policy: "azure" + oms_agent_enabled: "false" + enable_acr: "true" + acr_name: "fabrikam" +``` + +With our common definition completed, let’s scaffold out our first physical cluster in the `east` region from within our `search-cluster` directory: + +```bash +$ spk infra scaffold --name east --source https://github.com/microsoft/bedrock --version 1.0 --template cluster/environments/azure-single-keyvault +``` + +Scaffolding this cluster also creates a directory (called `east`) and a `definition.yaml` within it. When we go to generate a deployment from this, however, the tool will layer this hierarchy, taking the values from our common `definition.yaml` and then overlaying the values from our `east` definition on top. 
This is the mechanism that Bedrock uses to [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) out our deployment definitions, enabling you to define common variables in one place and have them inherited in each of the cluster definitions in directories underneath this central definition.
+
+With this, let’s fill in the cluster definition with the variables specific to the `east` cluster:
+
+```yaml
+name: east
+backend:
+  key: "search-east"
+variables:
+  address_space: "10.7.0.0/16"
+  cluster_name: "search-east"
+  dns_prefix: "search-east"
+  gitops_path: "azure-search-east"
+  resource_group_name: "search-east-rg"
+  subnet_prefixes: "10.7.0.0/16"
+  vnet_name: "search-east-vnet"
+  subnet_name: "search-east-subnet"
+```
+
+Note that we didn’t include the `template` and `version` in the cluster `definition.yaml`. These, and several of the common backend configuration variables, are shared amongst the clusters.
+
+With our `east` cluster defined, let’s scaffold out our final cluster:
+
+```bash
+$ spk infra scaffold --name west --source https://github.com/microsoft/bedrock --version 1.0 --template cluster/environments/azure-single-keyvault
+```
+
+And fill the cluster definition for this with variables specific to the `west` cluster:
+
+```yaml
+name: west
+backend:
+  key: "search-west"
+variables:
+  address_space: "10.8.0.0/16"
+  cluster_name: "search-west"
+  dns_prefix: "search-west"
+  gitops_path: "azure-search-west"
+  resource_group_name: "search-west-rg"
+  subnet_prefixes: "10.8.0.0/16"
+  vnet_name: "search-west-vnet"
+  subnet_name: "search-west-subnet"
+```
+
+So with this, we have an overall definition for the `search` service across two clusters that looks like this:
+
+```
+.
+└── search
+    ├── definition.yaml
+    ├── east
+    │   └── definition.yaml
+    └── west
+        └── definition.yaml
+```
+
+Again, when we go to generate the Terraform templates for the `west` cluster, it will first load the common `definition.yaml` at the root and then overlay the values from `west/definition.yaml` on top of it.
+
+## Generating Cluster Terraform Templates
+
+We can now generate the Terraform scripts for deploying our `search` clusters by executing this from our top level `search` directory:
+
+```bash
+$ spk infra generate --project east
+$ spk infra generate --project west
+```
+
+This will generate the `east` and `west` cluster definitions, combining the per cluster config with the central common config, and generate the Terraform scripts for each of the clusters from the base template that we specified, such that our directory structure now looks like this:
+
+```
+.
+├── east
+│   ├── README.md
+│   ├── acr.tf
+│   ├── backend.tfvars
+│   ├── main.tf
+│   ├── spk.tfvars
+│   └── variables.tf
+└── west
+    ├── README.md
+    ├── acr.tf
+    ├── backend.tfvars
+    ├── main.tf
+    ├── spk.tfvars
+    └── variables.tf
+```
+
+## Deploying the Clusters
+
+With our cluster infrastructure templates created, we can now apply the templates. Let’s start with the `east` cluster:
+
+```bash
+$ cd east
+$ terraform init -var-file=spk.tfvars -backend-config=./backend.tfvars
+$ terraform plan -var-file=spk.tfvars
+$ terraform apply -var-file=spk.tfvars
+```
+
+This deploys our `east` cluster.
We can naturally do the same thing for our `west` cluster with the same set of commands:
+
+```bash
+$ cd west
+$ terraform init -var-file=spk.tfvars -backend-config=./backend.tfvars
+$ terraform plan -var-file=spk.tfvars
+$ terraform apply -var-file=spk.tfvars
+```
+
+## Updating a Deployment Parameter
+
+Naturally, change is a constant in any real world deployment, and typically we need a way to evolve clusters over time. For example, and to make this discussion concrete, let’s say that our `search` workload has been wildly successful and that we want to expand the capacity of each of our clusters running it.
+
+In the example above, we can do this by modifying the central `definition.yaml` to use a larger value for `agent_vm_count`, increasing the size from 6 to 8 nodes.
+
+With this central change done, we can then regenerate the Terraform scripts for each of the clusters in the same manner that we did previously:
+
+```bash
+$ spk infra generate --project east
+$ spk infra generate --project west
+```
+
+And then, cluster by cluster, plan and apply the templates:
+
+```bash
+$ cd east
+$ terraform init -var-file=spk.tfvars -backend-config=./backend.tfvars
+$ terraform plan -var-file=spk.tfvars
+$ terraform apply -var-file=spk.tfvars
+```
+
+Since we are using remote backend state for these clusters, Terraform will examine the delta between the current and desired states, recognize that the cluster size has increased from 6 to 8 nodes, and perform that adjustment operation on our cluster.
+
+Once our `east` cluster has been successfully upgraded, we can upgrade our `west` cluster to 8 nodes in the same manner.
+
+## Upgrading Deployment Templates
+
+One of the key tenets of Bedrock’s infrastructure automation is reducing the differences between clusters to as few as possible such that it is easier for folks in reliability engineering roles to reason about them at scale.
+
+One way we enable that, as we mentioned previously, is to enable cluster deployments based off of a centrally managed template. This enables downstream service teams to focus on their service and upstream infrastructure teams to incrementally improve these templates and have them applied downstream.
+
+If you were watching closely as we specified our `search` workload deployment, you might have noticed that our central deployment definition specified a particular version of the deployment template:
+
+```yaml
+name: search
+source: 'https://github.com/microsoft/bedrock'
+template: cluster/environments/azure-single-keyvault
+version: 1.0
+...
+```
+
+This specifies that our deployment should use the `1.0` tag from the git repo specified in `source`, such that our deployment template is locked at this particular version. Version locking your deployment like this is important because you typically want to explicitly upgrade to new deployment templates rather than have your deployment templates change underneath you while deploying an unrelated change.
+
+Let’s say that your central infrastructure team has released the `1.1` version of this same template. We can upgrade our definition to that template by simply changing this version value:
+
+```yaml
+name: search
+source: 'https://github.com/microsoft/bedrock'
+template: cluster/environments/azure-single-keyvault
+version: 1.1
+...
+```
+
+And then regenerating and applying the cluster definition in the same manner that we did above when we changed a deployment parameter.
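Putting that together, the template upgrade flow is the same generate / plan / apply loop we used above. A minimal sketch, assuming the same `search` layout used throughout this walkthrough:

```bash
# Regenerate the Terraform scripts from the updated definition
# (run from the top level search directory).
$ spk infra generate --project east
$ spk infra generate --project west

# Then, cluster by cluster, review and apply the changes.
$ cd east
$ terraform init -var-file=spk.tfvars -backend-config=./backend.tfvars
$ terraform plan -var-file=spk.tfvars
$ terraform apply -var-file=spk.tfvars
```

As with the parameter change above, reviewing the `terraform plan` output before applying is the main safety check that the new template version does what you expect.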
diff --git a/docs/services.md b/docs/services.md new file mode 100644 index 0000000..450e89c --- /dev/null +++ b/docs/services.md @@ -0,0 +1,122 @@ +# Walkthrough: Service Management
+
+One of the most common activities a modern service team performs is deploying and updating a particular service -- or set of services. This walkthrough will cover onboarding a service and deploying an initial version of it into the cluster using the automation available in Bedrock.
+
+This workflow centers around the repositories that hold application code, associated Dockerfile(s), and helm deployment charts, in conjunction with the high level definition repo we have already established. We do not take a very opinionated view of how these repositories are structured: they can hold a single service or many services (monorepository), depending on your source control methodology.
+
+## Onboarding a Service Repository
+
+Note: Our automation currently only supports Azure Devops and Azure Devops Repos.
+
+In this walkthrough, we'll use the [Azure Voting App](https://github.com/Azure-Samples/azure-voting-app-redis) as an example service that we are deploying, but you can also swap in your own service.
+
+1. If you don't have an existing source code repository, [create one in the given Azure Devops Project](https://docs.microsoft.com/en-us/azure/devops/repos/git/create-new-repo?view=azure-devops#create-a-repo-using-the-web-portal)
+2. [Clone this repository to your local machine](https://docs.microsoft.com/en-us/azure/devops/repos/git/create-new-repo?view=azure-devops#clone-the-repo-to-your-computer)
+
+Our automation distinguishes between a `project` and a `service`. A `project` in Bedrock terminology is the same as a git repo, which contains one or more `services`.
+
+### Onboarding a Service Project
+
+Navigate to the root of the `project` (for the Azure Voting App example application, this is the root directory) and run the `project init` command:
+
+```
+$ spk project init
+$ git add -A
+$ git commit -m "Onboarding project directory"
+```
+
+This creates a `bedrock.yaml` file that maintains the set of `services` that are part of this `project`.
+
+It also creates a `hld-lifecycle.yaml` Azure Devops definition that manages the lifecycle of this `project` in the high level definition.
+
+Finally, it creates a `maintainers.yaml` file with a list of the named maintainers of the project.
+
+### Creating the Lifecycle Pipeline
+
+We next want to create the lifecycle pipeline for our `project`, which automatically manages adding services (and in advanced scenarios, rings) to our high level deployment definition.
+
+The first step to do that is to create a common variable group in Azure Devops that contains a set of secrets that we will use in our pipeline:
+
+```
+$ export VARIABLE_GROUP_NAME=voting-app-vg
+$ spk project create-variable-group $VARIABLE_GROUP_NAME -r $ACR_NAME -u $SP_APP_ID -t $SP_TENANT -p $SP_PASS
+$ git add -A
+$ git commit -m "Adding Project Variable Group."
+$ git push -u origin --all
+```
+
+where `ACR_NAME` is the name of the Azure Container Registry for the project, `SP_APP_ID` is the service principal's id, `SP_PASS` is the service principal's password, and `SP_TENANT` is the service principal's tenant. This service principal is expected to have read and write access to the Azure Container Registry.
+
+This creates the variable group in Azure Devops and also adds it to our `bedrock.yaml` and `hld-lifecycle.yaml` such that it will be used by the pipeline.
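For reference, a variable group is consumed by an Azure DevOps pipeline definition through a `variables` block. The sketch below is illustrative only; `spk` wires this reference into the generated `hld-lifecycle.yaml` for you, using the group name created above:

```yaml
# Illustrative sketch: how an Azure DevOps pipeline YAML references a variable group.
# The group name matches the one created with `spk project create-variable-group`.
variables:
  - group: voting-app-vg
```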
+ +With this created, we can deploy the lifecycle-pipeline itself with: + +``` +$ spk project install-lifecycle-pipeline --org-name $ORG_NAME --devops-project $DEVOPS_PROJECT --repo-url $VOTING_APP_REPO_URL --pipeline-name $PIPELINE_NAME +``` + +where `ORG_NAME` is the name of the Azure Devops org, `DEVOPS_PROJECT` is the name of your Azure Devops project, `SOURCE_REPO_URL` is the git url that you used to clone your application from Azure Devops, and `PIPELINE_NAME` is the name of the pipeline (eg. `azure-voting-app-pipeline` in the case of our sample) that you'd like to create. + +Note: If you are using a repo per service source control strategy you should run install-lifecycle-pipeline once for each repo. + +Once this lifecycle pipeline is created, it will run and create a pull request on your high level definition that adds the `project` as a component to your root. Go to your high level definition repo and accept that pull request. + +## Onboarding a Service + +With that, we have set up all of the pipelines for the project itself, so let's onboard our first service. + +We can do that with `spk service create` which, like all of the spk service and project commands, runs from the root of the repo. In this case, `azure-vote` refers to the path from the root of the repo to the service. + +``` +$ spk service create azure-vote \ +--display-name azure-voting-frontend \ +--helm-config-git https://github.com/edaena/helm-charts \ +--helm-config-path charts/azure-vote \ +--helm-config-branch master \ +--k8s-backend azure-voting-frontend-svc +``` + +As part of service creation, we need to provide to SPK what we want it to deploy in the form of a helm chart. This helm chart is largely freeform, but requires the following elements in its `values.yaml` such that Bedrock can deploy new builds. + +``` +image: + tag: latest + repository: some.acr.io/repo +serviceName: "fabrikam" +``` + +Once completed, `service create` will add the service to your `bedrock.yaml` file for the `project` and add a `build-update-hld.yaml` Azure Devops file to your `service`. + +For this first walkthrough, we are not going to utilize the more advanced ring management functionality that Bedrock provides, so we need to make a small edit to our bedrock.yaml file. After the `displayName` line, add `disableRouteScaffold: true` to prevent scaffolding of ring routing: + +```yaml +rings: + master: + isDefault: true +services: + ./: + displayName: azure-voting-frontend + disableRouteScaffold: true + helm: + chart: +``` + +Then commit all of these files and push them to your Azure Devops repo: + +``` +$ git add -A +$ git commit -m "Onboard voting-app service" +$ git push origin master +``` + +Our final step is to create the source code to container build pipeline for our service. We can do that with: + +``` +$ spk service install-build-pipeline azure-vote -n azure-vote-build-pipeline -o $ORG_NAME -u $VOTING_APP_REPO_URL -d $DEVOPS_PROJECT +``` + +This should create the build pipeline and build the current version of your service into a container using its Dockerfile. It will then create a pull request on the high-level-definition repo for this new image tag. + +Accept this and the azure-voting-app frontend application will be deployed into your cluster. 
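Once the pull requests from these pipelines have been merged and the manifest generation pipeline and Flux have both run, you can confirm the workload is live with standard `kubectl` commands. This assumes your kubeconfig points at the target cluster; exact pod and service names depend on your helm chart, so the names below are placeholders based on this walkthrough's example:

```
$ kubectl get pods
$ kubectl get service azure-voting-frontend-svc
```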
diff --git a/docs/singleKeyvault/README.md b/docs/singleKeyvault/README.md new file mode 100644 index 0000000..293643b --- /dev/null +++ b/docs/singleKeyvault/README.md @@ -0,0 +1,370 @@ +# Deploying Single Cluster with Keyvault + +If you followed our first workload walkthrough you saw how Bedrock enables you to scaffold and generate Terraform deployment scripts. In the first workload walkthrough, to demonstrate a simple GitOps workflow, we deployed a simple cluster using Azure Simple Terraform template. However in the real world, in order to have secure deployments, we need a way to store the essential secrets in Keyvault. In addition, we will also deploy a vnet to provide isolation to our cloud infrastructure. + +In upcoming advanced scenarios, we will be using the Bedrock automation to repeat the cluster creation process by scaffolding configurations to deploy multiple clusters. Since all these clusters use common resources like Keyvault, Storage Account and a Vnet, we will deploy these resources using azure-common-infra template. The environment provisioned using this template is a dependency for other environments (azure-single-keyvualt) we will be using in the subsequent walkthroughs. + +Note: This walkthrough assumes that you already have set all the environment variables as part of [first walkthrough](../Firstworkload/README.md). + +## Deplying the common infrastructure: + +Before you deploy infrastructure environments, you will need to create an Azure Storage Account. You can do this in Azure Portal, or by using the Azure CLI: + +### Resource Group Requirement: + +This environment requires a resource group. The requisite variable is `global_resource_group_name`. To use the Azure CLI to create the resource group, see [here](https://github.com/microsoft/bedrock/blob/master/cluster/azure/README.md). + +To create a resource group, you can use the following command + +``` +$ az group create -l westus2 -n my-global-rg +``` + +### Create Storage Account in Azure: + +Before attempting to deploy the infrastructure environments, you will also need to create an Azure Storage Account. You can do this in Azure Portal, or by using the Azure CLI: + +``` +az storage account create \ + --name mystorageaccount \ + --resource-group my-global-rg \ + --location eastus \ + --sku Standard_LRS \ + --encryption blob +``` + +The Azure CLI needs your storage account credentials for most of the commands in this tutorial. While there are several options for doing so, one of the easiest ways to provide them is to set `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_KEY` environment variables. + +First, display your storage account keys by using the az storage account keys list command: + +``` +az storage account keys list \ + --account-name mystorageaccount \ + --resource-group my-global-rg \ + --output table +``` + +Now, set the `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_KEY` environment variables. You can do this in the Bash shell by using the export command: + +``` +export AZURE_STORAGE_ACCOUNT="mystorageaccount" +export AZURE_STORAGE_KEY="myStorageAccountKey" +``` + +Blobs are always uploaded into a container. You can organize groups of blobs similar to the way you organize your files on your computer in folders. + +Create a container for storing blobs with the az storage container create command. 
+ +``` +az storage container create --name mystoragecontainer +``` + +Next, let's create a folder called azure-common-infra +``` +mkdir azure-common-infra +cd azure-common-infra +``` +Next, use Bedrock cli command to scaffold the configuration for common-infra template using the following command. Here we are using the Bedrock’s predefined `azure-common-infra` template to create configuration parameters for westus cluster. + +``` +spk infra scaffold --name westus --source https://github.com/microsoft/bedrock --version master --template cluster/environments/azure-common-infra +``` + +This `scaffold` command creates a directory called `westus` and creates a definition.yaml file in it that looks like this: + +```yaml +name: westus +source: 'https://github.com/microsoft/bedrock' +template: cluster/environments/azure-common-infra +version: master +backend: + storage_account_name: + access_key: + container_name: + key: tfstate-common-infra +variables: + address_space: + keyvault_name: + global_resource_group_name: + service_principal_id: + subnet_name: + subnet_prefix: + vnet_name: +``` +`scaffold` has downloaded the template locally, extracted all of the variables for the template, and provided defaults where possible for all of the variables. + +Let's fill in the variables for common-infra infrastructure variables. +Note: `global_resource_group_name' is the resource group that was created in the [Resource Group Requirement](#Resource-Group-Requirement:). + +```yaml +name: westus +source: 'https://github.com/microsoft/bedrock' +template: cluster/environments/azure-common-infra +version: master +backend: + storage_account_name: 'mystorageaccount' + access_key: 'CENp3G0qvo4jB1HduRO10ga0jNrN+b4gMibuAp63qZBDRNzXYZrPQvSIS7dUu8XM4lca6HL4RobXCfhjvetWsD+Drw==' + container_name: 'mystoragecontainer' + key: tfstate-common-infra +variables: + address_space: '10.39.0.0/16' + keyvault_name: 'mykeyvault' + global_resource_group_name: 'my-global-rg' + service_principal_id: '91896545-0aa8-4444-5555-111461be44a6' + subnet_name: 'mysubnet' + subnet_prefix: '10.39.0.0/24' + vnet_name: 'myvnet' +``` +Now that we have these variables filled in, we will use 'spk generate' command to generate terraform tfvars file that we will use to provision the infrastructure. Navigate to azure-common-infra/westus folder and run the following command. + +``` +spk infra generate -p westus +``` +This command creates westus-generated directory inside azure-common-infra directory. Navigate to `azure-common-infra/westus-generated` directory. Notice that this directory has terraform variable files. + +Let's provision the common infrastructure by running the following commands from this folder + +``` +terraform init -backend-config=./backend.tfvars +terraform plan -var-file=spk.tfvars +``` +If the plan succeeds, run the following command +``` +terraform apply -var-file=spk.tfvars +... +Do you want to perform these actions? + Terraform will perform the actions described above. + Only 'yes' will be accepted to approve. + + Enter a value: yes +... +``` +This should provision keyvault, vnet in your azure subscription. + +You can reuse the common infrastructure components for multiple clusters. + +## Deplying Azure Single Cluster with Keyvault: + +Now that we have common infrastructure components in place, we are ready to deploy AKS cluster using Bedrock Azure Single Cluster with Keyvault environment. The `azure-single-keyvault` environment deploys a single production level AKS cluster configured with Flux and Azure Keyvault. 
+ +### Resource Group Requirement:
+
+This environment requires another resource group to be created. The requisite variable is `resource_group_name`. To use the Azure CLI to create the resource group, see [here](https://github.com/microsoft/bedrock/blob/master/cluster/azure/README.md).
+
+To create a resource group, you can use the following command:
+
+```
+$ az group create -l westus2 -n my-cluster-rg
+```
+Next, to scaffold the infrastructure, we will use the `spk infra scaffold` command at both the root level and the cluster level.
+
+At the same level as your azure-common-infra directory, run the following command:
+
+```
+spk infra scaffold --name azure-single-keyvault --source https://github.com/microsoft/bedrock --version master --template cluster/environments/azure-single-keyvault
+```
+This creates a directory named `azure-single-keyvault` and places a global definition.yaml inside it. Now, navigate to this directory and create the cluster specific configuration by running the following command:
+
+```
+$ cd azure-single-keyvault
+$ spk infra scaffold --name westus --source https://github.com/microsoft/bedrock --version master --template cluster/environments/azure-single-keyvault
+```
+This creates a subdirectory named `westus` inside the `azure-single-keyvault` directory. Navigate to this directory and open the definition.yaml file:
+
+```yaml
+name: azure-single-keyvault
+source: 'https://github.com/microsoft/bedrock'
+template: cluster/environments/azure-single-keyvault
+version: master
+backend:
+  storage_account_name: storage-account-name
+  access_key: storage-account-access-key
+  container_name: storage-account-container
+  key: tfstate-key
+variables:
+  acr_enabled: 'true'
+  address_space: 
+  agent_vm_count: 
+  agent_vm_size: 
+  cluster_name: 
+  dns_prefix: 
+  flux_recreate: 
+  gc_enabled: 'true'
+  gitops_poll_interval: 5m
+  gitops_label: flux-sync
+  gitops_ssh_url: 
+  gitops_url_branch: master
+  gitops_ssh_key: 
+  gitops_path: 
+  keyvault_name: 
+  keyvault_resource_group: 
+  kubernetes_version: 1.15.7
+  resource_group_name: 
+  ssh_public_key: 
+  service_principal_id: 
+  service_principal_secret: 
+  subnet_prefixes: 
+  vnet_name: 
+  subnet_name: 
+  network_plugin: azure
+  network_policy: azure
+  oms_agent_enabled: 'false'
+  enable_acr: 'false'
+  acr_name: 
+```
+
+Next we'll fill all of the empty items in this template with config values.
+Note: Use `storage_account_name`, `access_key`, `container_name`, `keyvault_name`, `keyvault_resource_group` and `vnet_name` from the previous [Deploying the common infrastructure](#deploying-the-common-infrastructure) step.
+
+Here, we will be using the manifest repo you created for the first workload. However, let's copy the manifest file from the root directory to a subdirectory called `prod`.
+
+Navigate to the devops repo folder that you cloned in the [first walkthrough](../firstWorkload/README.md). Create a subdirectory named prod and copy azure-vote-all-in-one-redis.yaml to that subdirectory. In a future walkthrough, we can have different subdirectories for each cluster with slight variations to the manifest; this step is in preparation for those walkthroughs.
+
+```
+$ mkdir prod
+$ cp azure-vote-all-in-one-redis.yaml prod/
+$ git add .
+$ git commit -m "copy manifest to new folder" +$ git push origin master +``` + +```yaml +name: cluster +source: 'https://github.com/microsoft/bedrock' +template: cluster/environments/azure-single-keyvault +version: master +backend: + storage_account_name: 'mystorageaccount' + access_key: 'CENp3G0qvo4jB1HduRO10ga0jNrN+b4gMibuAp63qZBDRNzXYZrP7dUu8XM4lca6HL4RobXCfhjvAAAAAbbbb==' + container_name: 'mystoragecontainer' + key: tfstate-single-keyvault +variables: + acr_name: 'jhansiacr2' + agent_vm_count: '3' + agent_vm_size: Standard_D2s_v3 + acr_enabled: 'true' + gc_enabled: 'true' + cluster_name: 'spk-aks2' + dns_prefix: 'spk' + flux_recreate: 'false' + gitops_ssh_url: 'git@ssh.dev.azure.com:v3/myorg/app-cluster-manifests' + gitops_path: 'prod' + gitops_ssh_key: '~/cluster-deployment/keys' + gitops_url_branch: master + keyvault_name: 'mykeyvault' + keyvault_resource_group: 'my-global-rg' + resource_group_name: 'my-cluster-rg' + ssh_public_key: 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDUcBzqjBc59Ypa+Y2Cc9z+wDldZnSEGoJt+sUbux/KrczQmHmKqpdW50zMSY4MYhdfJsn902mZad4qOVj/KvQwwl7cGyWqzK+yEw/CrgqCX9wzloJrq75M1V3Qaaaaaaaaaaaaabbbbbbb91lnmtzGyOIJJlHoxm4TPR8tRhWeAcb6mRBKOeGQSSNekyi08dtYhYHlWFXaSZzqVevgiNYCGkcgXbPE1fE6Da2SAmOdBwANCHE8OZXnh yourname@org.com' + gitops_poll_interval: 2m + gitops_label: flux-sync + vnet_name: 'myvnet' + service_principal_id: '46b1b7dc-168a-ccc-bbb-aaaaaaa' + service_principal_secret: 'aaaa-bbbb-43eb-9ead-dddddd' + kubernetes_version: '1.15.7' + subnet_name: 'mysubnet' + service_cidr: 10.0.0.0/16 + dns_ip: 10.0.0.10 + docker_cidr: 172.17.0.1/16 + address_space: 10.10.0.0/16 + subnet_prefix: 10.10.1.0/24 + subnet_prefixes: 10.10.1.0/24 + network_plugin: azure + network_policy: azure + oms_agent_enabled: 'yes' + ``` +Navigate to azure-single-keyvault folder and use the following command to generate terraform variables using spk + +``` +$cd ~/azure-single-keyvault +$spk infra generate -p westus +``` +spk reads our definition.yaml file, downloads the template referred to in it, applies the parameters we have provided, and creates a generated Terraform script in a directory called azure-single-keyvault-generated which is at the same level as azure-single-keyvault folder. Navigate to azure-single-keyvault-generated/westus folder. Now you are ready to provision the cluster using Terraform + +``` +$terraform init -backend-config=./backend.tfvars -var-file=spk.tfvars +``` +Our next step is to plan the deployment, which will preflight our deployment script and the configured variables, and output the changes that would happen in our infrastructure if applied: + +``` +$terraform plan -var-file=spk.tfvars +``` + +Finally, once plan shows no errors, we can apply the changes + +``` +$terraform apply -var-file=spk.tfvars +... +Do you want to perform these actions? + Terraform will perform the actions described above. + Only 'yes' will be accepted to approve. + + Enter a value: yes +... +``` +It will take few miniutes to get the cluster deployed. + +### Interacting with deployed cluster: + +The `azure-single-keyvault` Terraform template we used in this walkthrough automatically copies the Kubernetes config file from the cluster into the output directory. This config file has all of the details we need to interact with our new cluster. + +To utilize it, we first need to merge it into our own config file and make it the default configuration. 
We can do that with this: + +$ KUBECONFIG=./output/bedrock_kube_config:~/.kube/config kubectl config view --flatten > merged-config && mv merged-config ~/.kube/config + +With this, you should be able to see the pods running in the cluster: + +``` +NAMESPACE NAME READY STATUS RESTARTS AGE +default azure-vote-back-77dff7bbd5-xlxxf 1/1 Running 0 3h50m +default azure-vote-front-7f7c8c5766-8xpdh 1/1 Running 0 3h50m +flux flux-5997784678-s8qvc 1/1 Running 0 3h51m +flux flux-memcached-6547454f96-w9dz5 1/1 Running 0 3h51m +kube-system azure-cni-networkmonitor-8b57g 1/1 Running 0 3h53m +kube-system azure-cni-networkmonitor-lrdhg 1/1 Running 0 3h53m +kube-system azure-cni-networkmonitor-tssrz 1/1 Running 0 3h53m +kube-system azure-ip-masq-agent-74jg4 1/1 Running 0 3h53m +kube-system azure-ip-masq-agent-76vsj 1/1 Running 0 3h53m +kube-system azure-ip-masq-agent-n47j9 1/1 Running 0 3h53m +kube-system azure-npm-dndnw 1/1 Running 0 3h53m +kube-system azure-npm-nd6l7 1/1 Running 0 3h53m +kube-system azure-npm-q59w5 1/1 Running 0 3h53m +kube-system coredns-698c77c5d7-h2pwp 1/1 Running 0 3h52m +kube-system coredns-698c77c5d7-vqgvf 1/1 Running 0 3h56m +kube-system coredns-autoscaler-79b778686c-qqknv 1/1 Running 0 3h56m +kube-system kube-proxy-n2sdl 1/1 Running 0 33m +kube-system kube-proxy-z2md2 1/1 Running 0 33m +kube-system kube-proxy-zgl4f 1/1 Running 0 33m +kube-system kubernetes-dashboard-74d8c675bc-7zk84 1/1 Running 0 3h56m +kube-system metrics-server-69df9f75bf-hhzgj 1/1 Running 0 3h56m +kube-system tunnelfront-865f7d9f5d-wb4xx 1/1 Running 0 3h56m +kv keyvault-flexvolume-jld5p 1/1 Running 0 3h51m +kv keyvault-flexvolume-qc5tp 1/1 Running 0 3h51m +kv keyvault-flexvolume-szbc8 1/1 Running 0 3h51m +``` +You can get external IP by running the following command + +``` +$ kubectl get services --all-namespaces +NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +default azure-vote-back ClusterIP 10.0.182.151 6379/TCP 4h15m +default azure-vote-front LoadBalancer 10.0.30.238 35.889.68.30 80:32100/TCP 4h15m +default kubernetes ClusterIP 10.0.0.1 443/TCP 4h21m +flux flux ClusterIP 10.0.195.4 3030/TCP 4h16m +flux flux-memcached ClusterIP 10.0.226.253 11211/TCP 4h16m +kube-system kube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP 4h21m +kube-system kubernetes-dashboard ClusterIP 10.0.172.46 80/TCP 4h21m +kube-system metrics-server ClusterIP 10.0.208.44 443/TCP 4h21m +``` + +External load balancers like this take time to provision. If the EXTERNAL-IP of service is still pending, keep trying periodically until it is provisioned. + +The EXTERNAL-IP, in the case above, is: 35.889.68.30. By appending the port our service is hosted on we can use http://35.889.68.30:80 to fetch the service in a browser. + +![voting app](../firstWorkload/images/voting-app-deployed-in-azure-kubernetes-service.png) + +Congratulations, you have successfully deployed a Azure Kubernetes Cluster with Keyvault using this walkthrough. + + + diff --git a/docs/why-gitops.md b/docs/why-gitops.md new file mode 100644 index 0000000..0607b22 --- /dev/null +++ b/docs/why-gitops.md @@ -0,0 +1,49 @@ +# Why GitOps? + +Kubernetes is, at its heart, a declarative system. You apply definitions, typically described in YAML document form, of what you want to have exist in the cluster, and Kubernetes works to make that the current state of affairs. More importantly, it works to keep it that way – working to restore this state through operational failures like the that of a pod or of a node that hosts a set of pods. 
+ +A sample resource definition for a Service (which is the Kubernetes concept of an internal endpoint backed by a set of pods) looks like this:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app: user-service
+  name: user-service
+  namespace: services
+spec:
+  ports:
+  - name: http
+    port: 80
+    protocol: TCP
+    targetPort: 80
+  selector:
+    app: user-service
+  sessionAffinity: None
+  type: ClusterIP
+```
+
+This declarative approach, and the textual format for these definitions, makes it a natural fit for being stored in source control. Using source control as the central system of record like this, known as GitOps, is increasingly utilized for running large scale Kubernetes deployments in production.
+
+In a GitOps based deployment, the cluster has an operator that is configured during the creation of the cluster to watch a specific git repo that is designated to always contain the set of resource manifests that should be running in the cluster.
+
+One such implementation of this approach (and the one we use in Bedrock) is Flux, a CNCF project. It periodically reconciles the commits made to this manifest repo and applies them to the Kubernetes cluster as shown in Figure 1.
+
+TODO: Add simple diagram with Flux pulling from git repo
+
+Figure 1: Kubernetes cluster with Flux pulling from a git repo
+
+There are two main security advantages to this pull based approach:
+* Flux is able to verify with TLS that it is talking to the correct git repo (and not a man in the middle).
+* We do not need to expose the Kubernetes API to manage what is running in our cluster, which is inherently more secure.
+
+Besides matching up well with Kubernetes' operating model and being more secure, building your operations with a GitOps workflow enables you to perform operational tasks in a style similar to a typical development workflow:
+
+1. Pull Request based workflow: Your team can review each other’s operational changes just like you do with code level changes.
+2. Point in time auditability into what is deployed in your cluster: Since the state of the git repo defines what Flux will apply in Kubernetes, you have point in time visibility into what was deployed on the cluster.
+3. Understand operational changes between commits: As the workflow is based on git, you can inspect the exact set of changes that were made to the cluster.
+4. Nonrepudiation of changes: The git commit log identifies who made a change and when they made it.
+5. Easy disaster recovery: Since the current operational state of the cluster is stored in git, recovering from a lost cluster entails spinning up a new cluster and pointing it at the git repo.
+
+These advantages make GitOps, in our opinion, a superior operational model to other traditional push based approaches based around `helm install` or more ad hoc methods like `kubectl apply` from a CI/CD system.
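As a concrete illustration of points 2 through 4 above, the audit trail is just the git history of the resource manifest repo. Assuming you have the manifest repo cloned locally (the repo and path names here are placeholders), ordinary git commands answer the usual operational questions:

```bash
# What changed in the cluster's desired state, and when?
$ git log --oneline -- prod/

# Exactly what did a given change do?
$ git show <commit-sha>

# Who made the last few changes, and when?
$ git log -3 --format='%h %an %ad %s'
```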
diff --git a/pipelines/bedrock-scheduled.yml b/pipelines/bedrock-scheduled.yml index 9f07d0e..951eb7e 100644 --- a/pipelines/bedrock-scheduled.yml +++ b/pipelines/bedrock-scheduled.yml @@ -169,4 +169,4 @@ stages: ARM_BACKEND_STORAGE_CONTAINER: $(ARM_BACKEND_STORAGE_CONTAINER) workingDirectory: '$(modulePath)/test' displayName: 'Integration Test: Bedrock_Azure-Common-MultiCluster ' - \ No newline at end of file + diff --git a/tools/prereqs/README.md b/tools/prereqs/README.md new file mode 100644 index 0000000..d7d0f34 --- /dev/null +++ b/tools/prereqs/README.md @@ -0,0 +1,52 @@ +# Bedrock and SPK Prerequisites + +Bedrock utilizes existing tools from the cloud and cloud native ecosystem. You'll need to install the following prerequisites if you haven't already: + +- Azure CLI +- Kubectl +- Helm +- Fabrikate +- Terraform +- SPK + +We maintain an individual script for each prerequisite to make this easier. + +NOTE: You do not need to use these scripts if you are already utilizing a package manager like `apt` or `brew` to install these. + +To use this, clone the bedrock repository locally and then navigate to `tools/prereqs` and execute the appropriate scripts for the prerequisites you need to install: + +## Azure CLI + +```bash +$ sudo ./setup_azure_cli.sh +``` + +## kubectl + +```bash +$ sudo ./setup_kubectl.sh +``` + +## Helm + +```bash +$ sudo ./setup_helm.sh +``` + +## Fabrikate + +```bash +$ sudo ./setup_fabrikate.sh +``` + +## Terraform + +```bash +$ sudo ./setup_terraform.sh +``` + +## SPK + +```bash +$ sudo ./setup_spk.sh +``` diff --git a/tools/prereqs/common_funcs.sh b/tools/prereqs/common_funcs.sh new file mode 100755 index 0000000..a26b8f6 --- /dev/null +++ b/tools/prereqs/common_funcs.sh @@ -0,0 +1,87 @@ +#!/bin/bash + +function require_root() { + # verify we are running as root + if [[ "$EUID" != 0 ]]; then + echo "Script must be run as root or sudo." 
+ exit 1 + fi +} + +function linux_distro() { + local distroname + if [ -n "$(command -v lsb_release)" ]; then + distroname=$(lsb_release -s -d) + elif [ -f "/etc/os-release" ]; then + distroname=$(grep PRETTY_NAME /etc/os-release | sed 's/PRETTY_NAME=//g' | tr -d '="') + elif [ -f "/etc/debian_version" ]; then + distroname="Debian $(cat /etc/debian_version)" + elif [ -f "/etc/redhat-release" ]; then + distroname=$(cat /etc/redhat-release) + else + distroname="$(uname -s) $(uname -r)" + fi + echo "${distroname}" +} + +function os_type() { + local ostype + if [[ "$OSTYPE" == "linux-gnu" ]]; then + ostype="linux" + elif [[ "$OSTYPE" == "darwin"* ]]; then + ostype="macos" + elif [[ "$OSTYPE" == "cygwin" ]]; then + ostype="cygwin" + elif [[ "$OSTYPE" == "msys" ]]; then + ostype="mingwin" + elif [[ "$OSTYPE" == "win32" ]]; then + ostype="windows" + elif [[ "$OSTYPE" == "freebsd"* ]]; then + ostype="freebsd" + else + ostype="" + fi + echo "${ostype}" +} + +function is_macos() { + local os=`os_type` + if [ "$os" = "macos" ]; then + return 1 + fi + return 0 +} + +function is_ubuntu() { + local os=`os_type` + if [ "$os" != "linux" ]; then + return 0 + fi + + local distro=`linux_distro` + if [ "$distro" == "Ubuntu"* ]; then + return 1 + fi + return 0 +} + +function is_debian() { + local os=`os_type` + if [ "$os" != "linux" ]; then + return 0 + fi + + local distro=`linux_distro` + if [ "$distro" == "Debian"* ]; then + return 1 + fi + return 0 +} + +function is_apt_system() { + local apt=$((`is_debian` + `is_ubuntu`)) + if [ "$apt" -gt 0 ]; then + return 1 + fi + return 0 +} \ No newline at end of file diff --git a/tools/prereqs/setup_azure_cli.sh b/tools/prereqs/setup_azure_cli.sh new file mode 100755 index 0000000..e93ff86 --- /dev/null +++ b/tools/prereqs/setup_azure_cli.sh @@ -0,0 +1,96 @@ +#!/bin/bash + +# load common functions +SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)" +. $SCRIPT_DIR/common_funcs.sh + +require_root + +function apt_install() { + # prompt for confirmation + echo "This script will install the latest version of the Azure CLI using the Microsoft APT repo." + read -p "Do you wish to continue? " -n 1 -r + echo + if [[ ! $REPLY =~ ^[Yy]$ ]] + then + exit 1 + fi + + # install base set of tools + apt-get update + apt-get install -y curl apt-transport-https lsb-release gnupg + + # install microsoft apt repo key + curl -sL https://packages.microsoft.com/keys/microsoft.asc | \ + gpg --dearmor | \ + tee /etc/apt/trusted.gpg.d/microsoft.asc.gpg > /dev/null + + # configure azure cli repo + AZ_REPO=$(lsb_release -cs) + echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \ + tee /etc/apt/sources.list.d/azure-cli.list + + # install azure cli + apt-get update + apt-get install -y azure-cli +} + +function manual_install() { + # prompt for confirmation + echo "This script will install the latest version of the Azure CLI using the" + echo "manual install method which launches a script. The script will prompt" + echo "for a few questions, like where to install the Azure CLI." + read -p "Do you wish to continue? " -n 1 -r + echo + if [[ ! $REPLY =~ ^[Yy]$ ]] + then + exit 1 + fi + + which curl + if [ "$?" != "0" ]; then + echo "curl is required to install the script." + exit 1 + fi + + curl -L https://aka.ms/InstallAzureCli | bash +} + +function macos_brew_install() { + # prompt for confirmation + echo "This script will install the latest version of the Azure CLI using brew." + read -p "Do you wish to continue? " -n 1 -r + echo + if [[ ! 
$REPLY =~ ^[Yy]$ ]] + then + exit 1 + fi + + brew update && brew install azure-cli +} + +function macos_install() { + which brew + if [ "$?" -eq "0" ]; then + macos_brew_install + else + manual_install + fi +} + +# Determine the operating system and call the appropriate +# install method. If there isn't a specific specialized +# install, then use the Azure CLI manual install. +ostype=`os_type` +if [ "$ostype" == "linux" ]; then + `is_apt_system` + if [ "$?" -eq "1" ]; then + apt_install + else + manual_install + fi +elif [ "$ostype" == "macos" ]; then + macos_install +else + manual_install +fi diff --git a/tools/prereqs/setup_fabrikate.sh b/tools/prereqs/setup_fabrikate.sh new file mode 100755 index 0000000..08075e4 --- /dev/null +++ b/tools/prereqs/setup_fabrikate.sh @@ -0,0 +1,50 @@ +#!/bin/bash + +# load common functions +SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)" +. $SCRIPT_DIR/common_funcs.sh + +require_root + +function finish { + if [ ! -z "$tmp_dir" ]; then + rm -rf $tmp_dir + fi +} +trap finish EXIT + +# prompt for confirmation +echo "This script will install the latest version of Fabrikate from github." +echo "The script requires that unzip be installed." +read -p "Do you wish to continue? " -n 1 -r +echo +if [[ ! $REPLY =~ ^[Yy]$ ]] +then + exit 1 +fi + +# determine os type +ostype=`os_type` +if [ "$ostype" == "linux" ]; then + arch="linux-amd64" +elif [ "$ostype" == "macos" ]; then + arch="darwin-amd64" +else + echo "OS ($ostype) not supported." + exit 1 +fi + +# create a temporary directory to do work in +tmp_dir=$(mktemp -d -t fab-inst-XXXXXXXXXX) +cd $tmp_dir + +FABRIKATE_FILE=`curl -s -L https://github.com/microsoft/fabrikate/releases/latest | grep "$arch" | sed -n "s/.*\(fab-v[0-9]*.[0-9]*.[0-9]*-$arch.zip\).*/\1/p" | sort -u` + +FABRIKATE_VERSION=`echo $FABRIKATE_FILE | sed -n 's/.*v\([0-9]*.[0-9]*.[0-9]*\).*/\1/p'` + +curl -s -LO https://github.com/microsoft/fabrikate/releases/download/$FABRIKATE_VERSION/$FABRIKATE_FILE +unzip $FABRIKATE_FILE -d /usr/local/bin + +cd - + +echo "fab installed into /usr/local/bin" \ No newline at end of file diff --git a/tools/prereqs/setup_helm.sh b/tools/prereqs/setup_helm.sh new file mode 100755 index 0000000..c17a431 --- /dev/null +++ b/tools/prereqs/setup_helm.sh @@ -0,0 +1,39 @@ +#!/bin/bash + +HELM_DESIRED_VERSION="v2.16.1" + +# load common functions +SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)" +. $SCRIPT_DIR/common_funcs.sh + +require_root + +function finish { + if [ ! -z "$tmp_dir" ]; then + rm -rf $tmp_dir + fi +} +trap finish EXIT + +# prompt for confirmation +echo "This script will install version $HELM_DESIRED_VERSION of helm from Github." +echo "Bedrock currently only supports version 2.x of helm." +read -p "Do you wish to continue? " -n 1 -r +echo +if [[ ! $REPLY =~ ^[Yy]$ ]] +then + exit 1 +fi + +# create a temporary directory to do work in +tmp_dir=$(mktemp -d -t ci-XXXXXXXXXX) +cd $tmp_dir + +# retrieve and install helm +# Currently, Bedrock only works with Helm 2.x, so specifying a specific version +curl -LO https://git.io/get_helm.sh +chmod 700 ./get_helm.sh +./get_helm.sh --version $HELM_DESIRED_VERSION +cd - + +echo "heml installed into /usr/local/bin" \ No newline at end of file diff --git a/tools/prereqs/setup_kubectl.sh b/tools/prereqs/setup_kubectl.sh new file mode 100755 index 0000000..57ff47a --- /dev/null +++ b/tools/prereqs/setup_kubectl.sh @@ -0,0 +1,45 @@ +#!/bin/bash + +# load common functions +SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)" +. $SCRIPT_DIR/common_funcs.sh + +require_root + +function finish { + if [ ! 
-z "$tmp_dir" ]; then + rm -rf $tmp_dir + fi +} +trap finish EXIT + +# prompt for confirmation +echo "This script will install the latest version of kubectl." +read -p "Do you wish to continue? " -n 1 -r +echo +if [[ ! $REPLY =~ ^[Yy]$ ]] +then + exit 1 +fi + +# create a temporary directory to do work in +tmp_dir=$(mktemp -d -t ci-XXXXXXXXXX) +cd $tmp_dir + +# determine os type +ostype=`os_type` +if [ "$ostype" == "linux" ]; then + arch="linux/amd64" +elif [ "$ostype" == "macos" ]; then + arch="darwin/amd64" +else + echo "OS ($ostype) not supported." + exit 1 +fi + +# retrieve and install kubectl +curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/$arch/kubectl +mv kubectl /usr/local/bin/ +chmod +x /usr/local/bin/kubectl + +echo "kubectl installed into /usr/local/bin" \ No newline at end of file diff --git a/tools/prereqs/setup_spk.sh b/tools/prereqs/setup_spk.sh new file mode 100755 index 0000000..447c77c --- /dev/null +++ b/tools/prereqs/setup_spk.sh @@ -0,0 +1,47 @@ +#!/bin/bash +set -e + +# load common functions +SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)" +. $SCRIPT_DIR/common_funcs.sh + +require_root + +function finish { + if [ ! -z "$tmp_dir" ]; then + rm -rf $tmp_dir + fi +} +trap finish EXIT + +# prompt for confirmation +echo "This script will install the latest version of Spk from github." +read -p "Do you wish to continue? " -n 1 -r +echo +if [[ ! $REPLY =~ ^[Yy]$ ]] +then + exit 1 +fi + +# create a temporary directory to do work in +tmp_dir=$(mktemp -d -t ci-XXXXXXXXXX) +cd $tmp_dir + +# determine os type +ostype=`os_type` +if [ "$ostype" == "linux" ]; then + arch="linux" +elif [ "$ostype" == "macos" ]; then + arch="macos" +else + echo "OS ($ostype) not supported." + exit 1 +fi + +SPK_VERSION=`curl -s -L https://github.com/catalystcode/spk/releases/latest | grep "spk\/archive" | grep zip | awk -F"archive/" '{print $2}' | awk -F ".zip" '{print $1}'` + +curl -s -LO https://github.com/catalystcode/spk/releases/download/$SPK_VERSION/spk-$arch +cp spk-$arch /usr/local/bin/spk +chmod +x /usr/local/bin/spk + +echo "spk installed into /usr/local/bin" \ No newline at end of file diff --git a/tools/prereqs/setup_terraform.sh b/tools/prereqs/setup_terraform.sh new file mode 100755 index 0000000..b3f0c34 --- /dev/null +++ b/tools/prereqs/setup_terraform.sh @@ -0,0 +1,45 @@ +#!/bin/bash + +# load common functions +SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)" +. $SCRIPT_DIR/common_funcs.sh + +require_root + +function finish { + if [ ! -z "$tmp_dir" ]; then + rm -rf $tmp_dir + fi +} +trap finish EXIT + +# prompt for confirmation +echo "This script will install the latest version of Terraform from github." +read -p "Do you wish to continue? " -n 1 -r +echo +if [[ ! $REPLY =~ ^[Yy]$ ]] +then + exit 1 +fi + +# create a temporary directory to do work in +tmp_dir=$(mktemp -d -t ci-XXXXXXXXXX) +cd $tmp_dir + +# determine os type +ostype=`os_type` +if [ "$ostype" == "linux" ]; then + arch="linux_amd64" +elif [ "$ostype" == "macos" ]; then + arch="darwin_amd64" +else + echo "OS ($ostype) not supported." 
+ exit 1 +fi + +TERRAFORM_VERSION=`curl -L -s https://github.com/hashicorp/terraform/releases/latest | grep archive | grep zip | awk -F"/v" '{print $2}' | awk -F".zip" '{print $1}'` +curl -LO -s https://releases.hashicorp.com/terraform/$TERRAFORM_VERSION/terraform_"$TERRAFORM_VERSION"_$arch.zip + +unzip terraform_"$TERRAFORM_VERSION"_$arch.zip -d /usr/local/bin/ + +echo "terraform installed in /usr/local/bin" \ No newline at end of file
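After running the setup scripts above, a quick sanity check is to confirm that each tool ended up on your PATH. A minimal sketch (the tool names match the binaries the scripts above install):

```bash
#!/bin/bash
# Verify the Bedrock prerequisites are installed and on the PATH.
for tool in az kubectl helm fab terraform spk; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: not found"
  fi
done
```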