Differences between {product-title} 3 and 4

{product-title} {product-version} introduces architectural changes and enhancements. The procedures that you used to manage your {product-title} 3 cluster might not apply to {product-title} 4.

For information on configuring your {product-title} 4 cluster, review the appropriate sections of the {product-title} documentation. For information on new features and other notable technical changes, review the OpenShift Container Platform 4.12 release notes.

It is not possible to upgrade your existing {product-title} 3 cluster to {product-title} 4. You must start with a new {product-title} 4 installation. Tools are available to assist in migrating your control plane settings and application workloads.

Architecture

With {product-title} 3, administrators individually deployed {op-system-base-full} hosts, and then installed {product-title} on top of these hosts to form a cluster. Administrators were responsible for properly configuring these hosts and performing updates.

{product-title} 4 represents a significant change in the way that {product-title} clusters are deployed and managed. {product-title} 4 includes new technologies and functionality, such as Operators, machine sets, and {op-system-first}, which are core to the operation of the cluster. This technology shift enables clusters to self-manage some functions previously performed by administrators. This also ensures platform stability and consistency, and simplifies installation and scaling.

For more information, see OpenShift Container Platform architecture.

Immutable infrastructure

{product-title} 4 uses {op-system-first}, which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. {op-system} is an immutable container host, rather than a customizable operating system like {op-system-base}. {op-system} enables {product-title} 4 to manage and automate the deployment of the underlying container host. {op-system} is a part of {product-title}, which means that everything runs inside a container and is deployed using {product-title}.

In {product-title} 4, control plane nodes must run {op-system}, ensuring that full-stack automation is maintained for the control plane. This makes rolling out updates and upgrades a much easier process than in {product-title} 3.

For more information, see {op-system-first}.

Operators

Operators are a method of packaging, deploying, and managing a Kubernetes application. Operators ease the operational complexity of running another piece of software. They watch over your environment and use the current state to make decisions in real time. Advanced Operators are designed to upgrade and react to failures automatically.

For more information, see Understanding Operators.
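For example, in {product-title} 4 many add-on services are installed by creating a Subscription object that Operator Lifecycle Manager (OLM) acts on. The following is a minimal sketch; the Operator name and channel are hypothetical placeholders:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator            # hypothetical Operator name
  namespace: openshift-operators
spec:
  channel: stable                   # assumed update channel
  name: example-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----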

Installation and upgrade

Installation process

To install {product-title} 3.11, you prepared your {op-system-base-full} hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster.

In {product-title} {product-version}, you use the OpenShift installation program to create a minimum set of resources required for a cluster. After the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, {op-system-first} systems are managed by the Machine Config Operator (MCO) that runs in the {product-title} cluster.

For more information, see Installation process.
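As an illustration rather than a complete procedure, the installation program creates the cluster from a directory of configuration assets, and the MCO then manages node configuration through machine config pools. The directory path is a placeholder:

[source,terminal]
----
$ openshift-install create cluster --dir <installation_directory>
$ oc get machineconfigpools
----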

If you want to add {op-system-base-full} worker machines to your {product-title} {product-version} cluster, you use an Ansible playbook to join the {op-system-base} worker machines after the cluster is running. For more information, see Adding {op-system-base} compute machines to an {product-title} cluster.

Infrastructure options

In {product-title} 3.11, you installed your cluster on infrastructure that you prepared and maintained. In addition to providing your own infrastructure, {product-title} 4 offers an option to deploy a cluster on infrastructure that the {product-title} installation program provisions and the cluster maintains.

Upgrading your cluster

In {product-title} 3.11, you upgraded your cluster by running Ansible playbooks. In {product-title} {product-version}, the cluster manages its own updates, including updates to {op-system-first} on cluster nodes. You can easily upgrade your cluster by using the web console or by using the oc adm upgrade command from the OpenShift CLI, and the Operators automatically upgrade themselves. If your {product-title} {product-version} cluster has {op-system-base} worker machines, then you still need to run an Ansible playbook to upgrade those worker machines.

For more information, see Updating clusters.
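For example, you can review and start an update from the OpenShift CLI. This is a minimal sketch; review the update documentation before applying it to a production cluster:

[source,terminal]
----
$ oc adm upgrade
$ oc adm upgrade --to-latest=true
----

The first command shows the current version and the available update versions; the second starts an update to the latest available version.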

Migration considerations

Review the changes and other considerations that might affect your transition from {product-title} 3.11 to {product-title} 4.

Storage considerations

Review the following storage changes to consider when transitioning from {product-title} 3.11 to {product-title} {product-version}.

Local volume persistent storage

Local storage is only supported by using the Local Storage Operator in {product-title} {product-version}. It is not supported to use the local provisioner method from {product-title} 3.11.

For more information, see Persistent storage using local volumes.
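For example, local storage is exposed by creating a LocalVolume custom resource that the Local Storage Operator reconciles. This is a minimal sketch; the resource name, storage class name, and device path are assumptions that vary by environment:

[source,yaml]
----
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks                   # hypothetical name
  namespace: openshift-local-storage
spec:
  storageClassDevices:
    - storageClassName: local-sc      # hypothetical storage class name
      volumeMode: Filesystem
      fsType: xfs
      devicePaths:
        - /dev/sdb                    # example device path; varies by host
----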

FlexVolume persistent storage

The FlexVolume plugin location changed from {product-title} 3.11. The new location in {product-title} {product-version} is /etc/kubernetes/kubelet-plugins/volume/exec. Attachable FlexVolume plugins are no longer supported.

For more information, see Persistent storage using FlexVolume.

Container Storage Interface (CSI) persistent storage

Persistent storage using the Container Storage Interface (CSI) was Technology Preview in {product-title} 3.11. {product-title} {product-version} ships with several CSI drivers. You can also install your own driver.
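A CSI driver is consumed through a standard StorageClass that names the driver as its provisioner. The following sketch assumes the AWS EBS CSI driver; the storage class name and parameters are placeholders:

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-example-sc                # hypothetical name
provisioner: ebs.csi.aws.com          # depends on the installed CSI driver
parameters:
  type: gp3                           # driver-specific parameter, shown as an assumption
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
----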

Red Hat OpenShift Data Foundation

OpenShift Container Storage 3, which is available for use with {product-title} 3.11, uses Red Hat Gluster Storage as the backing storage.

{rh-storage-first} 4, which is available for use with {product-title} 4, uses Red Hat Ceph Storage as the backing storage.

Unsupported persistent storage options

Support for the following persistent storage options from {product-title} 3.11 has changed in {product-title} {product-version}:

  • GlusterFS is no longer supported.

  • CephFS as a standalone product is no longer supported.

  • Ceph RBD as a standalone product is no longer supported.

If you used one of these in {product-title} 3.11, you must choose a different persistent storage option for full support in {product-title} {product-version}.

For more information, see Understanding persistent storage.

Migration of in-tree volumes to CSI drivers

{product-title} 4 is migrating in-tree volume plug-ins to their Container Storage Interface (CSI) counterparts. In {product-title} {product-version}, CSI drivers are the new default for the following in-tree volume types:

  • Amazon Web Services (AWS) Elastic Block Storage (EBS)

  • Azure Disk

  • Google Cloud Platform Persistent Disk (GCP PD)

  • OpenStack Cinder

All aspects of the volume lifecycle, such as creation, deletion, mounting, and unmounting, are handled by the CSI driver.

For more information, see CSI automatic migration.

Networking considerations

Review the following networking changes to consider when transitioning from {product-title} 3.11 to {product-title} {product-version}.

Network isolation mode

The default network isolation mode for {product-title} 3.11 was ovs-subnet, though users frequently switched to use ovs-multitenant. The default network isolation mode for {product-title} {product-version} is controlled by a network policy.

If your {product-title} 3.11 cluster used the ovs-subnet or ovs-multitenant mode, it is recommended to switch to a network policy for your {product-title} {product-version} cluster. Network policies are supported upstream, are more flexible, and they provide the functionality that ovs-multitenant does. If you want to maintain the ovs-multitenant behavior while using a network policy in {product-title} {product-version}, follow the steps to configure multitenant isolation using network policy.

For more information, see About network policy.
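For example, a policy similar to the following allows traffic only from pods in the same namespace, which is one part of reproducing ovs-multitenant behavior; additional policies for ingress and monitoring traffic are typically also required. The namespace name is a placeholder:

[source,yaml]
----
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
  namespace: example-project          # hypothetical project name
spec:
  podSelector: {}                     # applies to all pods in the namespace
  ingress:
    - from:
        - podSelector: {}             # accept traffic only from pods in the same namespace
----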

OVN-Kubernetes as the default networking plug-in in Red Hat OpenShift Networking

In {product-title} 3.11, OpenShift SDN was the default networking plug-in in Red Hat OpenShift Networking. In {product-title} {product-version}, OVN-Kubernetes is now the default networking plug-in.

For information on migrating to OVN-Kubernetes from OpenShift SDN, see Migrating from the OpenShift SDN network plug-in.
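As a quick check before planning a migration, you can confirm which network plug-in a cluster is currently using with a read-only command:

[source,terminal]
----
$ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
----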

Logging considerations

Review the following logging changes to consider when transitioning from {product-title} 3.11 to {product-title} {product-version}.

Deploying OpenShift Logging

{product-title} 4 provides a simple deployment mechanism for OpenShift Logging, by using a Cluster Logging custom resource.

For more information, see Installing OpenShift Logging.
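For example, a ClusterLogging custom resource similar to the following deploys the logging stack. This is a minimal sketch; the node count is an assumption and the example omits the storage and resource settings you would normally define:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3                    # assumed size for illustration
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
----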

Aggregated logging data

You cannot transition your aggregate logging data from {product-title} 3.11 into your new {product-title} 4 cluster.

For more information, see About OpenShift Logging.

Unsupported logging configurations

Some logging configurations that were available in {product-title} 3.11 are no longer supported in {product-title} {product-version}.

For more information on the explicitly unsupported logging cases, see Maintenance and support.

Security considerations

Review the following security changes to consider when transitioning from {product-title} 3.11 to {product-title} {product-version}.

Unauthenticated access to discovery endpoints

In {product-title} 3.11, an unauthenticated user could access the discovery endpoints (for example, /api/* and /apis/*). For security reasons, unauthenticated access to the discovery endpoints is no longer allowed in {product-title} {product-version}. If you do need to allow unauthenticated access, you can configure the RBAC settings as necessary; however, be sure to consider the security implications as this can expose internal cluster components to the external network.
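If you do choose to restore unauthenticated access, a cluster role binding similar to the following grants the default system:discovery cluster role to unauthenticated users. The binding name is a placeholder, and you should weigh the security implications described above before creating it:

[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: discovery-unauthenticated     # hypothetical binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:discovery
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:unauthenticated
----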

Identity providers

Configuration for identity providers has changed for {product-title} 4, including the following notable changes:

  • The request header identity provider in {product-title} {product-version} requires mutual TLS, whereas in {product-title} 3.11 it did not.

  • The configuration of the OpenID Connect identity provider was simplified in {product-title} {product-version}. It now obtains data that previously had to be specified in {product-title} 3.11 from the provider’s /.well-known/openid-configuration endpoint. An example configuration follows this list.
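The following OAuth configuration is a minimal sketch of an OpenID Connect identity provider in {product-title} 4; the provider name, client ID, secret name, and issuer URL are placeholders:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: example-oidc                  # hypothetical provider name
      mappingMethod: claim
      type: OpenID
      openID:
        clientID: example-client-id       # placeholder
        clientSecret:
          name: oidc-client-secret        # secret in the openshift-config namespace
        claims:
          preferredUsername:
            - preferred_username
          email:
            - email
        issuer: https://oidc.example.com  # discovery data is read from this issuer's /.well-known/openid-configuration endpoint
----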

OAuth token storage format

Newly created OAuth HTTP bearer tokens no longer match the names of their OAuth access token objects. The object names are now a hash of the bearer token and are no longer sensitive. This reduces the risk of leaking sensitive information.

Default security context constraints

The restricted security context constraints (SCC) in {product-title} 4 can no longer be accessed by any authenticated user, as the restricted SCC could in {product-title} 3.11. Broad authenticated access is now granted to the restricted-v2 SCC instead, which is more restrictive than the old restricted SCC. The restricted SCC still exists, but users who want to use it must be specifically granted permission to do so.

For more information, see Managing security context constraints.
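For example, one way to grant specific users or service accounts access to the restricted SCC is an RBAC role that allows the use verb on that SCC, which you then bind to the relevant subjects. This is a sketch; the role name is a placeholder:

[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-restricted-scc            # hypothetical role name
rules:
  - apiGroups:
      - security.openshift.io
    resources:
      - securitycontextconstraints
    resourceNames:
      - restricted
    verbs:
      - use
----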

Monitoring considerations

Review the following monitoring changes to consider when transitioning from {product-title} 3.11 to {product-title} {product-version}. You cannot migrate Hawkular configurations and metrics to Prometheus.

Alert for monitoring infrastructure availability

The default alert that triggers to ensure the availability of the monitoring infrastructure was called DeadMansSwitch in {product-title} 3.11. This was renamed to Watchdog in {product-title} 4. If you had PagerDuty integration set up with this alert in {product-title} 3.11, you must set up the PagerDuty integration for the Watchdog alert in {product-title} 4.

For more information, see Applying custom Alertmanager configuration.
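For example, an Alertmanager configuration fragment similar to the following routes the Watchdog alert to a PagerDuty receiver. The receiver names and integration key are placeholders, and the short repeat interval reflects that Watchdog is expected to fire continuously:

[source,yaml]
----
route:
  receiver: default
  routes:
    - match:
        alertname: Watchdog
      receiver: watchdog-pagerduty
      repeat_interval: 5m
receivers:
  - name: default
  - name: watchdog-pagerduty
    pagerduty_configs:
      - service_key: '<pagerduty-integration-key>'   # placeholder for your integration key
----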