[release-ocm-2.4] MGMT-7351 - Move deployment scripts from assisted-service into test-infra scripts #1276

Merged
2 changes: 1 addition & 1 deletion Makefile
@@ -398,7 +398,7 @@ redeploy_nodes_with_install: destroy_nodes deploy_nodes_with_install
############

clear_operator:
-	DISKS="${LSO_DISKS}" ./assisted-service/deploy/operator/destroy.sh
+	DISKS="${LSO_DISKS}" ./scripts/operator/destroy.sh

deploy_assisted_operator: clear_operator
$(MAKE) start_load_balancer START_LOAD_BALANCER=true
2 changes: 1 addition & 1 deletion scripts/deploy_assisted_service.sh
@@ -112,7 +112,7 @@ EOF
export ASSISTED_NAMESPACE=${NAMESPACE}
export SERVICE_IMAGE=${SERVICE}

-./assisted-service/deploy/operator/deploy.sh
+./scripts/operator/deploy.sh
echo "Installation of Assisted Install operator passed successfully!"

# Update the LB configuration to point to the service route endpoint
105 changes: 105 additions & 0 deletions scripts/operator/README.md
@@ -0,0 +1,105 @@
# Assisted Service Operator

This directory covers two main workflows:
* The Assisted Service operator installation flow, including
installation of the Local Storage Operator and the Hive operator.
* The ZTP flow for spoke clusters.

## Dependencies

Operator installation requires an OCP 4.8 cluster to serve as the "Hub Cluster".
The ZTP flow additionally requires a node with enough CPU cores, memory, and disk
space, connected to a vBMC system.
To get a working setup, you can use
[dev-scripts](https://github.com/openshift-metal3/dev-scripts) with the following configuration:

```
IP_STACK=v4 # disconnected env is not yet fully supported

# ZTP-related configurations:

# This will define our single-node host, which is eligible
# for installation by assisted-service standards
NUM_EXTRA_WORKERS=1
EXTRA_WORKER_VCPU=8
EXTRA_WORKER_MEMORY=32768
EXTRA_WORKER_DISK=120

# This enables provisioning BMHs through BMAC with the
# redfish-virtualmedia driver, and allows the assisted-installer
# to reboot the host
PROVISIONING_NETWORK_PROFILE=Disabled
REDFISH_EMULATOR_IGNORE_BOOT_DEVICE=True
```

## Operator Installation

A complete installation of the hub cluster consists of the following:

* Setting up several (virtual) disks for persistent storage.
* Installing Local Storage Operator and creating a storage class.
* Installing Hive Operator.
* Installing Assisted Service Operator.
* Configuring BMO to watch all namespaces for BMH objects.

Installation of the operator is pretty simple:

```
# replace with the kubeconfig path of any eligible cluster on your system:
export KUBECONFIG=/home/test/dev-scripts/ocp/ostest/auth/kubeconfig

cd scripts/operator/
./deploy.sh
```
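
Once `deploy.sh` finishes, a quick sanity check of the hub cluster is to list
the operator CSVs and the storage class. This is only a sketch, assuming the
default namespaces used by these scripts (`openshift-local-storage` for LSO,
`hive` for Hive, and `assisted-installer` for the Assisted Service operator):

```
oc get csv -n openshift-local-storage   # Local Storage Operator
oc get pods -n hive                     # Hive controllers
oc get csv -n assisted-installer        # Assisted Service operator
oc get storageclass                     # storage class backing the local volumes
```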

By default, this will define the disks sdb through sdf on worker nodes if
present, or on master nodes if there are no dedicated workers. If you want to
control which disks are created, use:

```
DISKS=$(echo sd{b..d}) ./deploy.sh
```

Some other configurations are also available:

```
export INSTALL_LSO=false # in case LSO is already installed
export STORAGE_CLASS_NAME=storage-class # if you want to define this name by yourself
./deploy.sh
```

## Running ZTP Flow (with BMH, BMAC, and other friends)

Again, it's quite easy:

```
# replace with your paths:
export ASSISTED_PULLSECRET_JSON=/home/test/dev-scripts/pull_secret.json
export EXTRA_BAREMETALHOSTS_FILE=/home/test/dev-scripts/ocp/ostest/extra_baremetalhosts.json

cd scripts/operator/ztp/
./deploy_spoke_cluster.sh
```

The script performs the following actions:
* Creates secrets for the pull secret and the private SSH key.
* Creates a BMH object for the extra host specified in the provided JSON file.
* Creates the following objects as well: cluster-deployment, infra-env,
cluster-image-set, agent-cluster-install.
* Waits for an agent object to be created, indicating the host has joined the cluster.
* Waits for the installation to complete successfully.
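
As a rough sketch (assuming the default resource names listed below), you can
follow this progress manually while the script runs:

```
# Watch for the agent object that appears once the host boots the discovery ISO
kubectl get agents --all-namespaces --watch

# Inspect the installation state reported by the agent-cluster-install object
kubectl get agentclusterinstall --all-namespaces -o wide
```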

You can customize this script with the following environment variables:
```
export ASSISTED_NAMESPACE=assisted-installer
export ASSISTED_CLUSTER_NAME=assisted-test-cluster
export DS_OPENSHIFT_VERSION=openshift-v4.8.0 # this will be the name of the cluster-image-set object
export OPENSHIFT_INSTALL_RELEASE_IMAGE=quay.io/openshift-release-dev/ocp-release:4.8.0-fc.3-x86_64
export ASSISTED_CLUSTER_DEPLOYMENT_NAME=assisted-test-cluster
export ASSISTED_AGENT_CLUSTER_INSTALL_NAME=assisted-agent-cluster-install
export ASSISTED_INFRAENV_NAME=assisted-infra-env
export ASSISTED_PULLSECRET_NAME=assisted-pull-secret
export ASSISTED_PRIVATEKEY_NAME=assisted-ssh-private-key
export SPOKE_CONTROLPLANE_AGENTS=1 # currently only single-node is supported
```
47 changes: 47 additions & 0 deletions scripts/operator/common.sh
@@ -0,0 +1,47 @@
__dir=${__dir:-"$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"}
__root=${__root:-"$(realpath ${__dir}/../..)"}

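# Disks to use for LSO local volumes; defaults to sdb..sdf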
if [ -z "${DISKS:-}" ]; then
export DISKS=$(echo sd{b..f})
fi

export DISCONNECTED="${DISCONNECTED:-false}"
if [ "${DISCONNECTED}" = "true" ]; then
export LOCAL_REGISTRY="${LOCAL_REGISTRY_DNS_NAME}:${LOCAL_REGISTRY_PORT}"
fi

##############
# Deployment #
##############

export ASSISTED_DEPLOYMENT_METHOD="${ASSISTED_DEPLOYMENT_METHOD:-from_index_image}"
export HIVE_DEPLOYMENT_METHOD="${HIVE_DEPLOYMENT_METHOD:-with_olm}"

export ASSISTED_NAMESPACE="${ASSISTED_NAMESPACE:-assisted-installer}"
export SPOKE_NAMESPACE="${SPOKE_NAMESPACE:-assisted-spoke-cluster}"
export HIVE_NAMESPACE="${HIVE_NAMESPACE:-hive}"
export ASSISTED_UPGRADE_OPERATOR="${ASSISTED_UPGRADE_OPERATOR:-false}"
export ASSISTED_SERVICE_OPERATOR_CATALOG="assisted-service-operator-catalog"

############
# Versions #
############
DEFAULT_OS_IMAGES="${DEFAULT_OS_IMAGES:-$(cat ${__root}/assisted-service/data/default_os_images.json)}"
DEFAULT_RELEASE_IMAGES="${DEFAULT_RELEASE_IMAGES:-$(cat ${__root}/assisted-service/data/default_release_images.json)}"

# Get sorted release images relevant for the operator (only default cpu architecture)
SORTED_RELEASE_IMAGES=$(echo ${DEFAULT_RELEASE_IMAGES} | jq -rc 'map(select(.cpu_architecture=="x86_64")) | sort_by(.openshift_version)')

if [[ "${ASSISTED_UPGRADE_OPERATOR}" == "false" ]]; then
RELEASE_IMAGE=$(echo ${SORTED_RELEASE_IMAGES} | jq -rc '[.[].url][-1]')
VERSION=$(echo ${SORTED_RELEASE_IMAGES} | jq -rc '[.[].openshift_version][-1]')
else
# Before the AI operator upgrade, we install the OCP version prior to the most current one.
# E.g. if the most current OCP version we install is 4.9, the version before it is 4.8.
RELEASE_IMAGE=$(echo ${SORTED_RELEASE_IMAGES} | jq -rc '[.[].url][-2]')
VERSION=$(echo ${SORTED_RELEASE_IMAGES} | jq -rc '[.[].openshift_version][-2]')
fi

export ASSISTED_OPENSHIFT_VERSION="${ASSISTED_OPENSHIFT_VERSION:-openshift-v${VERSION}}"
export ASSISTED_OPENSHIFT_INSTALL_RELEASE_IMAGE="${ASSISTED_OPENSHIFT_INSTALL_RELEASE_IMAGE:-${RELEASE_IMAGE}}"
export OS_IMAGES=$(echo ${DEFAULT_OS_IMAGES} | jq -rc 'map(select(.openshift_version>="4.8"))')
74 changes: 74 additions & 0 deletions scripts/operator/deploy.sh
@@ -0,0 +1,74 @@
#!/usr/bin/env bash

set -o nounset
set -o pipefail
set -o errexit

__dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
__root="$(realpath ${__dir}/../..)"

source ${__dir}/common.sh
source ${__dir}/utils.sh
source ${__dir}/mirror_utils.sh

#########
# Setup #
#########

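# Prepares mirroring and deployment-method overrides for disconnected environments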
function setup_disconnected_parameters() {
# Some of the variables over here can be sourced from dev-scripts
# source common.sh
# source utils.sh
# source network.sh
# set +x
# export -f wrap_if_ipv6 ipversion

if [ "${OPENSHIFT_CI:-false}" = "false" ]; then
export ASSISTED_DEPLOYMENT_METHOD="from_community_operators"
fi

export HIVE_DEPLOYMENT_METHOD="from_upstream"

export MIRROR_BASE_URL="http://$(wrap_if_ipv6 ${PROVISIONING_HOST_IP})/images"
export AUTHFILE="${XDG_RUNTIME_DIR}/containers/auth.json"
mkdir -p $(dirname ${AUTHFILE})

merge_authfiles "${PULL_SECRET_FILE}" "${REGISTRY_CREDS}" "${AUTHFILE}"

${__root}/hack/setup_env.sh hive_from_upstream

ocp_mirror_release \
"${PULL_SECRET_FILE}" \
"${ASSISTED_OPENSHIFT_INSTALL_RELEASE_IMAGE}" \
"${LOCAL_REGISTRY}/$(get_image_repository_only ${ASSISTED_OPENSHIFT_INSTALL_RELEASE_IMAGE})"
}

set -o xtrace

if [ "${DISCONNECTED}" = "true" ]; then
setup_disconnected_parameters
fi

#######
# LSO #
#######

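# Create the libvirt disks that LSO will consume as local volumes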
${__dir}/libvirt_disks.sh create

if [ "${INSTALL_LSO:-true}" = "true" ]; then
${__dir}/setup_lso.sh install_lso
fi

${__dir}/setup_lso.sh create_local_volume

########
# Hive #
########

${__dir}/setup_hive.sh "${HIVE_DEPLOYMENT_METHOD}"

############
# Assisted #
############

${__dir}/setup_assisted_operator.sh "${ASSISTED_DEPLOYMENT_METHOD}"
29 changes: 29 additions & 0 deletions scripts/operator/destroy.sh
@@ -0,0 +1,29 @@
#!/usr/bin/env bash

set -o nounset

__dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source ${__dir}/common.sh

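# Delete the spoke cluster resources created by the ZTP flow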
function destroy_spoke() {
kubectl delete namespace "${SPOKE_NAMESPACE}"
kubectl delete clusterimageset "${ASSISTED_OPENSHIFT_VERSION}"
}

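# Delete the hub-side resources: the assisted namespace, the AgentServiceConfig,
# the LSO local volume, the catalog source, the libvirt disks, and any leftover PVs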
function destroy_hub() {
kubectl delete namespace "${ASSISTED_NAMESPACE}"
kubectl delete agentserviceconfigs.agent-install.openshift.io agent
kubectl delete localvolume -n openshift-local-storage assisted-service
kubectl delete catalogsource assisted-service-catalog -n openshift-marketplace

${__dir}/libvirt_disks.sh destroy
kubectl get pv -o=name | xargs -r kubectl delete
}

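# With no arguments, tear down both spoke and hub; otherwise run only the
# named function, e.g. ./destroy.sh destroy_hub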
if [ $# -eq 0 ]; then
destroy_spoke
destroy_hub
fi

"$@"