DOC-737 K8s 6.0.20-4 release notes #1398

Merged 5 commits on Jun 14, 2021
content/platforms/release-notes/k8s-6-0-20-4.md (73 additions, 0 deletions)
---
Title: Redis Enterprise for Kubernetes Release Notes 6.0.20-4 (May 2021)
description: Release notes for version 6.0.20-4
weight: 69
alwaysopen: false
categories: ["Platforms"]
---

The Redis Enterprise K8s [6.0.20-4](https://github.com/RedisLabs/redis-enterprise-k8s-docs/releases/tag/v6.0.20-4) release is a *major release* on top of [6.0.8-20](https://github.com/RedisLabs/redis-enterprise-k8s-docs/releases/tag/v6.0.8-20). It provides support for [Redis Enterprise Software release 6.0.20-69]({{< relref "/rs/release-notes/rs-6-0-20-april-2021.md" >}}) and includes several enhancements and bug fixes.

## Overview

This release of the operator provides:
* New features
* Various bug fixes

To upgrade your deployment to this latest release, see ["Upgrading a Redis Enterprise Cluster in Operator-based Architecture"]({{< relref "/platforms/kubernetes/tasks/upgrading-with-operator.md" >}}).
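
A minimal sketch of the operator upgrade step described there, assuming the default `bundle.yaml` and operator deployment name from the redis-enterprise-k8s-docs repository; the namespace `my-namespace` is a placeholder:

```sh
# Apply the updated operator bundle for this release.
kubectl apply -f bundle.yaml -n my-namespace

# Wait for the operator deployment to roll out, then confirm the image tag.
kubectl rollout status deployment/redis-enterprise-operator -n my-namespace
kubectl get deployment/redis-enterprise-operator -n my-namespace \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```

The cluster and database upgrade steps that follow are covered in the linked upgrade document.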

## Images
This release includes the following container images; a sketch of pinning the Redis Enterprise image tag in the cluster spec follows the list:
* **Redis Enterprise**: redislabs/redis:6.0.20-69 or redislabs/redis:6.0.20-69.rhel7-openshift
* **Operator and Bootstrapper**: redislabs/operator:6.0.20-4
* **Services Rigger**: redislabs/k8s-controller:6.0.20-4 or redislabs/services-manager:6.0.20-4 (on the Red Hat registry)
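
A minimal, hedged sketch of pinning the Redis Enterprise image tag listed above in the cluster spec. The field name `redisEnterpriseImageSpec` follows the operator's documented custom resource, but treat it as an assumption and verify it against the CRD shipped with this release:

```sh
# Assumed field names; check the RedisEnterpriseCluster CRD shipped with this release.
kubectl apply -f - <<EOF
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec            # OLM deployments require the name "rec" (see Known limitations)
spec:
  nodes: 3
  redisEnterpriseImageSpec:
    repository: redislabs/redis
    versionTag: 6.0.20-69
EOF
```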

## New features
* Support for OpenShift 4.7
* Support for Kubernetes 1.20

## New preview features
* HashiCorp Vault integration - REC secret
* HashiCorp Vault integration - REDB secrets

## Important fixes

- Fixed upgrade issue with custom container repositories specifying port numbers
- The REDB controller no longer performs reconciliation until the Redis Enterprise software version is compatible with the operator
- Removed unused node.js package from Services Rigger image
- Fixed operator crash on change of uiServiceType
- Avoided excessive logging within the RS pod (envoy_access.log)

## Known limitations

- 6.0.20-4 does not appear in the Operator Lifecycle Manager (OLM) for OpenShift 4.7. As a workaround, deploy the operator manually. A future maintenance release will address this.
- The HashiCorp Vault integration does not support Gesher. There is no workaround at this time.
- In some cases, when the Redis Enterprise Cluster container in a Redis Enterprise Cluster (REC) pod restarts, the REC node remains down. To work around this, restart the pod while ensuring that a majority of the REC nodes remain available.
- When a pod's status is 'CrashLoopBackOff', the cluster recovery process will not complete. The workaround is to delete the crashing pods manually; the recovery process then continues.
- A cluster name longer than 20 characters results in a rejected route configuration because the host part of the domain name exceeds 63 characters. The workaround is to limit the cluster name to 20 characters or fewer.
- A cluster CR specification error is not reported if two or more invalid CR resources are updated in sequence.
- When a cluster is in an unreachable state, the cluster is still shown as running, instead of being reported as an error.
- STS Readiness probe fails to mark a node as "not ready" when running 'rladmin status' on the node fails.
- The 'redis-enterprise-operator' role is missing permissions on replica sets.
- OpenShift 3.11 does not support DockerHub private registries. This is a known OpenShift issue.
- DNS conflicts are possible between the cluster 'mdns_server' and the Kubernetes DNS. This only impacts DNS resolution from within cluster nodes for Kubernetes DNS names.
- Kubernetes-based 5.4.10 deployments seem to negatively impact existing 5.4.6 deployments that share a Kubernetes cluster.
- In Kubernetes, the node CPU usage reported is for the Kubernetes worker node hosting the REC pod.
- When the operator is deployed via the OLM, the security context constraints (SCC) are bound to a specific service account name ("rec"), so the cluster deployment will fail if the cluster is not named "rec". To avoid deployment failure, name the cluster "rec".
- The master pod is not always labeled in Rancher.
- When REC clusters are deployed on Kubernetes clusters with unsynchronized clocks, the REC cluster does not start correctly. The fix is to use NTP to synchronize the underlying K8s nodes.
- When a REC cluster is deployed in a project (namespace) that also contains REDB resources, the REDB resources must be deleted before the REC; otherwise, deleting the project will hang. Delete the REDB resources first, then the REC, and then the project (see the sketch after this list).
- In Kubernetes 1.15 or older, the PVC labels come from the match selectors and not the PVC templates, so these versions cannot support PVC labels. If you require this feature, the only fix is to upgrade the K8s cluster to a newer version.
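
A hedged sketch of the REDB-before-REC deletion order described above. The resource and project names are placeholders; `redb` and `rec` are the short names registered by the operator's CRDs:

```sh
# Placeholder names; delete databases first, then the cluster, then the project.
kubectl delete redb my-database -n my-project
kubectl delete rec my-cluster -n my-project
kubectl delete namespace my-project   # on OpenShift: oc delete project my-project
```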

## Compatibility notes

- OpenShift 4.7 and Rancher/kOps 1.20 are now supported
- OpenShift 4.1, 4.2, 4.3 (previously deprecated) are no longer supported
- GKE K8s version 1.14 (previously deprecated) is no longer supported
- kOps (upstream K8s) 1.13, 1.14 (previously deprecated) are no longer supported

## Deprecation notice

- OpenShift 4.4 (no longer supported by Red Hat) is deprecated
- GKE K8s versions 1.15, 1.16 (no longer supported by Google) are deprecated
- kOps (upstream K8s) 1.15 is deprecated