How To Guides

Johannes Rudolph edited this page Nov 2, 2023 · 31 revisions

🕹️ How to install Unipipe CLI

Run the command below depending on your operating system.

Binary distribution

Check the content of the file to be sure that the install script is safe.

Linux

curl -sf -L https://raw.githubusercontent.com/meshcloud/unipipe-service-broker/master/cli/install.sh | sudo bash

Mac OS

curl -sf -L https://raw.githubusercontent.com/meshcloud/unipipe-service-broker/master/cli/install.sh | sh

Windows

Invoke-WebRequest -Uri https://raw.githubusercontent.com/meshcloud/unipipe-service-broker/master/cli/install.ps1 -OutFile install.ps1
.\install.ps1

Install Unipipe CLI with Deno

If you already have a Deno runtime installed on your system, you can install and upgrade the unipipe CLI via:

deno install --unstable --allow-read --allow-write --allow-env --allow-net --allow-run https://raw.githubusercontent.com/meshcloud/unipipe-service-broker/master/cli/unipipe/main.ts

How to upgrade Unipipe CLI

Use the built-in unipipe upgrade command to switch to a particular official version of the CLI.

🚢 How to deploy UniPipe Service Broker

Given that UniPipe Service Broker is simply a docker container that can run anywhere, there are many different ways to deploy it.

Deploy UniPipe Service Broker with Terraform

The recommended method is using Terraform scripts created by unipipe generate. This is currently supported for Azure Container Instances (ACI) and GCP Cloud Run.

You'll need the unipipe CLI installed to follow the instructions in this guide (see 🕹️ How to install Unipipe CLI above).

Use the unipipe CLI to generate a deployment.

# If you want to deploy on Azure run
unipipe generate unipipe-service-broker-deployment --deployment aci_tf --destination .
# If you want to deploy on GCP run
unipipe generate unipipe-service-broker-deployment --deployment gcp_cloudrun_tf --destination .

Open the resulting file and follow the instructions at the top.

When you are done, apply the Terraform template.

terraform apply

The terraform output contains an SSH public key. The service broker uses this key to access the Git repository that stores service instances. Grant write access on that repository to this SSH key.

  • For a git repository hosted on GitHub, add the SSH key as a "Deploy Key". Follow the official GitHub Docs for this.
  • For a git repository hosted on GitLab, add the SSH key as a "Deploy Key". Follow the official GitLab Docs for this.
  • For a git repository hosted on a different platform, consult the documentation of the platform provider.
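Before registering the key, you can inspect its type and fingerprint with ssh-keygen. The snippet below generates a throwaway keypair purely for illustration; in practice you would run the second command against the public key saved from the terraform output:

```shell
# Illustration only: generate a throwaway ed25519 keypair without a passphrase
# (the real key comes from the terraform output; this is just a key to inspect)
ssh-keygen -t ed25519 -N "" -f demo_key -q

# Print the key's size, fingerprint, and type before adding it as a deploy key
ssh-keygen -lf demo_key.pub
```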

Run UniPipe Service Broker using Docker

We publish unipipe-service-broker container images to GitHub Container Registry. These images are built via GitHub Actions and are publicly available.

docker pull ghcr.io/meshcloud/unipipe-service-broker:latest

Run the container with configuration passed as environment variables.

Environment variables are described in the Configuration Reference.

Note: We used to publish old versions of unipipe-service-broker as GitHub packages (not GHCR) which unfortunately can't be deleted from GitHub. Please make sure to use ghcr.io to pull the latest versions and not the legacy docker.pkg.github.com URLs.

Note: You should attach a persistent volume to the container to make sure changes to the local git repository are persisted in case the application terminates unexpectedly or restarts.

Deploy UniPipe Service Broker to Cloud Foundry

Configure a Manifest file like this:

applications:
- name: unipipe-service-broker
  memory: 1024M
  path: build/libs/unipipe-service-broker-0.9.0.jar
  env:
    GIT_REMOTE: <https or ssh url for remote git repo>
    GIT_USERNAME: <if you use https, enter username here>
    GIT_PASSWORD: <if you use https, enter password here>
    APP_BASIC-AUTH-USERNAME: <the username for securing the OSB API itself>
    APP_BASIC-AUTH-PASSWORD: <the password for securing the OSB API itself>
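One way to fill APP_BASIC-AUTH-PASSWORD is to generate a random value, for example with openssl (an assumption; any password generator works):

```shell
# Generate 24 random bytes and base64-encode them (yields a 32-character string)
APP_PASSWORD=$(openssl rand -base64 24)
printf '%s\n' "$APP_PASSWORD"
```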

Environment variables are described in the Configuration Reference. Build and deploy using the manifest file from above:

# Build the jar
./gradlew build
# Deploy it to CF
cf push -f cf-manifest.yml

Manually deploy an Azure Container Group for UniPipe Service Broker + Caddy (SSL)

You need to prepare the following:

  • an Azure subscription where UniPipe is deployed to
  • a git repository where your catalog.yml and service instances will be stored
  • a defined catalog.yml

If you provide your own SSL certificates, use Microsoft's guide for deploying a TLS sidecar container using nginx instead: https://docs.microsoft.com/en-us/azure/container-instances/container-instances-container-group-ssl

Log in with the Azure CLI to prepare the following resources in your Azure subscription.

Create a Resource Group in your Azure subscription

az group create --location westeurope --resource-group rs-osb-azure --tags {OSB} --subscription subscriptionId

Create a Storage Account in your resource group

# Create the storage account with the parameters below
az storage account create \
    --resource-group rs-osb-azure \
    --name osbstorage \
    --location westeurope \
    --sku Standard_LRS \
    --subscription subscriptionId
# 1.) you can also choose other options for the sku:
# https://docs.microsoft.com/en-us/rest/api/storagerp/srp_sku_types
# 2.) only lowercase letters and numbers can be used for the name

Create a shared file in the Storage Account

az storage share create \
  --name acishare \
  --account-name osbstorage \
  --subscription subscriptionId

You need the following three pieces of information for your deployment file configuration (next section of this guide):

  • storage-account name: in the above example it would be "osbstorage"

  • share name: in the above example it would be the name used for the creation of the shared file "acishare"

  • Storage account key: use the following command to get the Storage Account key:

    az storage account keys list \
    	--resource-group rs-osb-azure \
    	--subscription subscriptionId \
    	--account-name osbstorage --query "[0].value" --output tsv

Copy the output; it will be used later.
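One way to get the key into the deployment YAML (covered in the next section) is simple placeholder substitution. The file names and the dummy key below are assumptions for illustration; in practice STORAGE_KEY would hold the output of the az command above:

```shell
# Minimal stand-in for the deployment YAML with the placeholder used in this guide
printf 'storageAccountKey: <Storage account key>\n' > deployment-template.yml

# In practice: STORAGE_KEY=$(az storage account keys list ... --output tsv)
STORAGE_KEY="dummy-key-value"

# Substitute the placeholder and write the final deployment file
sed "s|<Storage account key>|$STORAGE_KEY|" deployment-template.yml > deployment.yml
cat deployment.yml    # storageAccountKey: dummy-key-value
```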

Create the deployment files. Two files are needed to deploy the Azure Container Group:

  • the deployment YAML file, which defines your container group
  • the Caddyfile, the Caddy configuration for the HTTPS reverse proxy

  1. Caddyfile

You need to create a file named Caddyfile - it must not have a file extension.

Here is a template for the content of the Caddyfile:

[Domain for the Azure Container Group] {
    reverse_proxy localhost:8075
}

Here is an example using the Azure DNS label

example-osb.westeurope.azurecontainer.io {
    reverse_proxy localhost:8075
}

Note: You must ensure that the Azure DNS label is not already used by another resource. You can check by creating an Azure Container Instance. Skip everything and go straight to the Network Tab and enter a DNS Label.

You can also use your own DNS entry if you want.

You need to get the base64 string of the file, e.g. by executing the following command:

base64 [pathToFile]

For the example Caddyfile it looks like this:

base64 Caddyfile

output:
ABC123=
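Since a corrupted secret only fails at container start, it can be worth verifying that the string decodes back to the original before pasting it into the deployment YAML. A quick roundtrip check (GNU coreutils base64 assumed):

```shell
# Write the example Caddyfile from above
printf 'example-osb.westeurope.azurecontainer.io {\n    reverse_proxy localhost:8075\n}\n' > Caddyfile

# Encode it; strip newlines so the secret fits on one YAML line
B64=$(base64 < Caddyfile | tr -d '\n')

# Decode the string again and compare it against the original file
printf '%s' "$B64" | base64 -d | cmp -s - Caddyfile && echo "roundtrip ok"
```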

  2. Deployment YAML file

Here is a template of the deployment YAML file:

api-version: 2019-12-01
location: westeurope
name: generic-osb-with-ssl        # configure a name for your azure container group
properties:
  containers:
  - name: caddy                   # rename the container if needed; be aware of the Azure constraints for container names
    properties:
      image: registry.hub.docker.com/library/caddy
      ports:
      - port: 443
        protocol: TCP
      resources:
        requests:                 # adjust container resources according to your needs
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
        - name: data
          mountPath: /data
        - name: config
          mountPath: /config
        - name: caddy-config
          mountPath: /etc/caddy
  - name: unipipeservicebroker              # rename the container if needed; be aware of the Azure constraints for container names
    properties:
      image: ghcr.io/meshcloud/unipipe-service-broker:v1.0.10
      ports:
        - port: 8075
          protocol: TCP
      resources:                 # adjust container resources according to your needs
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      environmentVariables:       # configure the unipipe service broker environment variables according to your setup; delete entries you do not need
        - name: GIT_REMOTE
          secureValue:
        - name: GIT_REMOTE_BRANCH
          secureValue:
        - name: GIT_LOCAL_PATH
          secureValue:
        - name: GIT_SSH_KEY
          secureValue:
        - name: GIT_USERNAME
          secureValue:
        - name: GIT_PASSWORD
          secureValue:
        - name: APP_BASIC_AUTH_USERNAME
          secureValue:
        - name: APP_BASIC_AUTH_PASSWORD
          secureValue:
  volumes:
  - secret:
      Caddyfile:
    name: caddy-config
  - name: data
    azureFile:
      shareName: acishare
      storageAccountName: osbstorage
      storageAccountKey: <Storage account key>
  - name: config
    azureFile:
      shareName: acishare
      storageAccountName: osbstorage
      storageAccountKey: <Storage account key>
  ipAddress:
    ports:
    - port: 443
      protocol: TCP
    type: Public
    dnsNameLabel: example-osb     # only use this if you use the Azure DNS label, otherwise delete this line. Azure appends the region suffix (e.g. westeurope.azurecontainer.io) while deploying the Azure Container Group
  osType: Linux
tags: null
type: Microsoft.ContainerInstance/containerGroups

For further information regarding the environmentVariables for the UniPipe service broker, visit https://github.com/meshcloud/unipipe-service-broker

Here is an example of a working deployment file (secrets are left empty for security reasons 😉):

api-version: 2019-12-01
location: westeurope
name: unipipe-service-broker-with-ssl
properties:
  containers:
  - name: caddy
    properties:
      image: registry.hub.docker.com/library/caddy
      ports:
      - port: 443
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
      - name: caddy-config
        mountPath: /etc/caddy
  - name: genericosb
    properties:
      image: ghcr.io/meshcloud/unipipe-service-broker:v1.0.10
      ports:
        - port: 8075
          protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      environmentVariables:
        - name: GIT_REMOTE
          secureValue:
        - name: GIT_REMOTE_BRANCH
          secureValue:
        - name: GIT_USERNAME
          secureValue:
        - name: GIT_PASSWORD
          secureValue:
        - name: APP_BASIC_AUTH_USERNAME
          secureValue:
        - name: APP_BASIC_AUTH_PASSWORD
          secureValue:
  volumes:
  - secret:
      Caddyfile: "ABC123="
    name: caddy-config
  - name: config
    azureFile:
      shareName: acishare
      storageAccountName: osbstorage
      storageAccountKey: <Storage account key>
  ipAddress:
    ports:
    - port: 443
      protocol: TCP
    type: Public
    dnsNameLabel: example-osb
  osType: Linux
tags: null
type: Microsoft.ContainerInstance/containerGroups

secureValue ensures that the environment variables are not visible in the properties of the Azure container instance.

Deploy the container group by executing the following Azure CLI command:

az container create --file [deploymentYAML] --resource-group [resourceGroupName] --subscription [subscriptionId]

NOTE: Please be aware that Caddy's automatic certificate provisioning is limited to 5 certificates per week if no persistent storage is configured → this limits restarts as well as stops and starts of the container (and container group) to at most 5 per week.

Run garbage collection for deleted instances

You can clean up deleted instances periodically, for example as a nightly job in a CI/CD pipeline.

The command below deletes the instance folders of deleted instances:

# Run this command from the repository root
unipipe list --deleted -o json | jq '.[].instance.serviceInstanceId' | xargs -I % sh -c 'rm -rf instances/%'
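To see what the pipeline does without a unipipe checkout at hand, here is a self-contained simulation with sample data (the JSON shape is taken from the command above; the instance IDs are made up):

```shell
# Fake instance folders: two "deleted" instances plus one that should survive
mkdir -p instances/abc-123 instances/def-456 instances/keep-me

# Stand-in for the output of `unipipe list --deleted -o json`
cat > deleted.json <<'EOF'
[{"instance": {"serviceInstanceId": "abc-123"}},
 {"instance": {"serviceInstanceId": "def-456"}}]
EOF

# Same jq/xargs pipeline as above, fed from the sample file
jq '.[].instance.serviceInstanceId' deleted.json | xargs -I % sh -c 'rm -rf instances/%'

ls instances    # only keep-me remains
```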