Releases: percona/percona-xtradb-cluster-operator
v1.18.0
Release Highlights
This release of Percona Operator for MySQL based on Percona XtraDB Cluster includes the following new features and improvements:
PMM3 support
The Operator is natively integrated with PMM 3, enabling you to monitor the health and performance of your Percona Distribution for MySQL deployment and at the same time enjoy enhanced performance, new features, and improved security that PMM 3 provides.
Note that the Operator supports both PMM2 and PMM3. The PMM version used depends on the authentication method you provide in the Operator configuration: PMM2 uses API keys while PMM3 uses service account tokens. If the Operator configuration contains both authentication methods with non-empty values, PMM3 takes priority.
To use PMM, ensure that the PMM client image is compatible with the PMM Server version. Check Percona certified images for the correct client image.
For how to configure monitoring with PMM, see the documentation.
Improved monitoring for clusters in multi-region or multi-namespace deployments in PMM
Now you can define a custom name for your clusters deployed in different data centers. This name helps Percona Monitoring and Management (PMM) Server to correctly recognize clusters as connected and monitor them as one deployment. Similarly, PMM Server identifies clusters deployed with the same names in different namespaces as separate ones and correctly displays performance metrics for you on dashboards.
To assign a custom name, define this configuration in the Custom Resource manifest for your cluster:
```yaml
spec:
  pmm:
    customClusterName: testClusterName
```
More resilient database restores without matching user Secrets
You no longer need matching user Secrets between your backup and your target cluster to perform a restore. The Operator now has a post-restore step that changes user passwords in the restored database to the ones from the local Secret. Also, it creates missing system users and adds missing grants.
This flow is the same regardless of whether you restore to the same cluster or to a completely new one.
Removing this major roadblock of requiring a matching Secret for restores makes your disaster recovery process smoother and more reliable. This enhancement makes managing databases on Kubernetes more robust and operator-friendly.
Improved backup retention for streamlined management of scheduled backups in cloud storage
A new backup retention configuration gives you more control over how backups are managed in storage and retained in Kubernetes.
With the `deleteFromStorage` flag, you can disable automatic deletion from AWS S3 or Azure Blob storage and instead rely on native cloud lifecycle policies. This makes backup cleanup more efficient and better aligned with flexible storage strategies.
The legacy `keep` option is now deprecated and mapped to the new `retention` block for compatibility. We encourage you to start using the `backup.schedule.retention` configuration:
```yaml
schedule:
  - name: "sat-night-backup"
    schedule: "0 0 * * 6"
    retention:
      count: 3
      type: count
      deleteFromStorage: true
    storageName: s3-us-west
```
Note that if you have both `backup.schedule.keep` and `backup.schedule.retention` defined, `backup.schedule.retention` takes precedence.
Added labels to identify the version of the Operator
The Custom Resource Definition (CRD) is compatible with the last three Operator versions. To know which Operator version is attached to it, we've added labels to all Custom Resource Definitions. The labels help you identify the current Operator version and decide if you need to update the CRD. To view the labels, run:

```shell
kubectl get crd perconaxtradbclusters.pxc.percona.com --show-labels
```
Cross-site replication is now supported for Percona XtraDB Cluster 8.4
Cross-site replication is now available with Percona XtraDB Cluster 8.4.x, lifting one of the limitations in the Operator for this database version. This enhancement marks a significant step toward general availability of Percona XtraDB Cluster 8.4 in the Operator by enabling multi-site deployments and improving resilience across distributed environments.
Deprecation, Rename and Removal
- The `pxc.expose.loadBalancerIP`, `haproxy.exposePrimary.loadBalancerIP`, `haproxy.exposeReplicas.loadBalancerIP` and `proxysql.expose.loadBalancerIP` keys are deprecated and scheduled for removal in future releases. The `loadBalancerIP` field is also deprecated upstream in Kubernetes due to its inconsistent behavior across cloud providers and lack of dual-stack support. As a result, its usage is strongly discouraged. We recommend using cloud provider-specific annotations instead, as they offer more predictable and portable behavior for managing load balancer IP assignments.
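As an illustrative sketch of the recommended approach, a provider-specific Service annotation replaces the deprecated field. The annotation key below is a placeholder (every cloud provider documents its own key), and the `exposePrimary.annotations` placement should be verified against the Custom Resource reference:

```yaml
spec:
  haproxy:
    exposePrimary:
      type: LoadBalancer
      annotations:
        # placeholder key: substitute your cloud provider's
        # load-balancer IP assignment annotation
        example.com/load-balancer-ip: "203.0.113.10"
```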
- The `backup.schedule.keep` field is deprecated and will be removed after release 1.21.0. We recommend using `backup.schedule.retention` instead as follows:

  ```yaml
  schedule:
    - name: "sat-night-backup"
      schedule: "0 0 * * 6"
      retention:
        count: 3
        type: count
        deleteFromStorage: true
      storageName: s3-us-west
  ```
- New repositories for Percona XtraBackup and Logcollector. Now the Operator uses the official Percona Docker images for the `percona-xtrabackup` and `logcollector` components. Pay attention to the new image repositories when you upgrade the Operator and the database. Check the Percona certified images for exact image names.
- Changes for Helm charts:
  - PMM3 is now the default. To keep using PMM2, set `pmm.tag: 2.44.1`
  - If you install or upgrade the Operator with default manifests using Helm charts on OpenShift 4.19, you must use the `docker.io` registry prefix to guarantee a successful download from the DockerHub `percona-xtradb-cluster` repository. Read the Considerations for using OpenShift 4.19 section for more information.
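A minimal sketch of the PMM2 override as a Helm values fragment, assuming the chart key is exactly `pmm.tag` as stated above:

```yaml
# values.yaml fragment: pin the PMM Client image tag to stay on PMM2
pmm:
  tag: "2.44.1"
```

Pass it with `helm upgrade ... -f values.yaml`, or directly via `--set pmm.tag=2.44.1`.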
Known limitations
Considerations for using OpenShift 4.19
Starting with OpenShift 4.19, the way images with names that are not fully qualified are pulled has changed for repositories that share the same repository name on DockerHub and Red Hat Marketplace. By default, the tags are pulled from Red Hat Marketplace. Specifying image names that are not fully qualified may result in the `ImagePullBackOff` error.
- OLM installation: Images are provided with fully qualified names and are pulled from the Red Hat Marketplace/DockerHub registry.
- Manual install/update with default manifests: Images must use the `docker.io` registry prefix to guarantee a successful download from the DockerHub `percona-xtradb-cluster` repository.
See our documentation for manual installation or update.
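For example, a fully qualified image reference in the Custom Resource would look like the following sketch; the tag is illustrative, so take the exact image name from the Percona certified images list:

```yaml
spec:
  pxc:
    # fully qualified name: registry prefix + repository + tag
    image: docker.io/percona/percona-xtradb-cluster:8.0.42-33.1
```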
Changelog
New Features
- K8SPXC-1284 - Add the ability to configure protocol for peer-list DNS SRV lookups
- K8SPXC-1599 - Allowed setting the `loadBalancerClass` field on LoadBalancer Services and using a custom implementation of a load balancer rather than the cloud provider default one
Improvements
- K8SPXC-1375 - Added a new retention configuration to allow users to delegate backup cleanup to cloud lifecycle policies (Thank you user Tristan for reporting this issue)
- K8SPXC-1376 - Added the ability to restore from backup without a matching Secret resource
- K8SPXC-1399 - Added documentation on how to set up a disaster recovery system and transfer workloads between sites
- K8SPXC-1415 - Updated the `percona-xtrabackup` image to use the official `percona-xtrabackup` Docker image
- K8SPXC-1430 - Improved handling of autogenerated certificates depending on the `delete-ssl` finalizer configuration
- K8SPXC-1448, K8SPXC-1449 - Improved the `pvc-resize` test by using a custom storage class for EKS, reducing errors and improving the quota handling during resize
- K8SPXC-1450 - Improved PVC resizing behavior when reducing the storage size by reverting the values when the quota is reached
- K8SPXC-1472 - Deprecated the `loadBalancerIP` field due to its deprecation upstream
- K8SPXC-1513 - Added PXC 8.4 support for version service
- K8SPXC-1529 - Added support for cross-site replication with MySQL 8.4.0 by adding the use of `authentication_policy` instead of `default_authentication_plugin`
- K8SPXC-1553 - Added support for PMM v3
- [K8SPXC-1560](https://...
v1.17.0
Release Highlights
Improved observability for HAProxy and ProxySQL
Get insights into the HAProxy and ProxySQL performance by connecting to their statistics pages. Use the `cluster-name-haproxy:8084` and `cluster-name-proxysql:6070` endpoints to do so. Learn about other available ports in the documentation.
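As a quick check, the statistics page can be reached by port-forwarding the corresponding Service; the cluster name `cluster1` below is an assumption, substitute your own:

```shell
# Forward the HAProxy statistics port to the local machine
kubectl port-forward svc/cluster1-haproxy 8084:8084 &
# Fetch the statistics page
curl -s http://localhost:8084
```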
Improved cluster load management during backups
If parallel backups overload your cluster, you can turn off parallel execution to prevent this. Previously, this meant that you could only run one backup at a time - no new backups could start until the current one was finished. Now, the Operator queues backups and runs them one after another automatically. You can fine-tune the backup sequence by setting the start time for all backups or for a specific on-demand one using the `spec.backup.startingDeadlineSeconds` Custom Resource option. This provides greater control over backup operations.
Another improvement is for the case when your database cluster becomes unhealthy, for example, when a Pod crashes or restarts. The Operator suspends running backups to reduce the cluster's load. Once the cluster recovers and reports a Ready status, the Operator resumes the suspended backup. To further offload the cluster during an unhealthy state, you can configure how long a backup remains suspended by using the `spec.backup.suspendedDeadlineSeconds` Custom Resource option. If this time expires before the cluster recovers, the backup is marked as "failed."
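A sketch of both options in the Custom Resource manifest; the timeout values here are illustrative, not defaults:

```yaml
spec:
  backup:
    # fail a backup that could not start within 1 hour
    startingDeadlineSeconds: 3600
    # fail a suspended backup if the cluster stays unhealthy for 20 minutes
    suspendedDeadlineSeconds: 1200
```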
Monitor PMM Client health and status
Percona Monitoring and Management (PMM) is a great tool to monitor the health of your database cluster. Now you can also learn if PMM itself is healthy using probes - a Kubernetes diagnostics mechanism to check the health and status of containers. Use the `spec.pmm.readinessProbes.*` and `spec.pmm.livenessProbes.*` Custom Resource options to fine-tune Readiness and Liveness probes for PMM Client.
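These options accept the standard Kubernetes probe fields; a minimal sketch with illustrative thresholds:

```yaml
spec:
  pmm:
    readinessProbes:
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 3
    livenessProbes:
      initialDelaySeconds: 60
      timeoutSeconds: 5
```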
Improved observability of binary log backups
Get insights into the success and failure rates of binlog operations, timeliness of processing and uploads, and potential gaps or inconsistencies in binlog data with the Prometheus metrics added for the Operator. Gather this data by connecting to the `<pitr-pod-service>:8080/metrics` endpoint. Learn more about the available metrics in the documentation.
Deprecation, Rename and Removal
The `spec.haproxy.exposePrimary.enabled` field is deprecated. If HAProxy is enabled via `spec.haproxy.enabled`, the HAProxy primary service is already exposed.
New Features
- K8SPXC-747, K8SPXC-1473 - Add the ability to access the statistics pages for HAProxy and ProxySQL
- K8SPXC-1366 - Add the ability to queue backups and run them sequentially, and to optimize the cluster load with the ability to suspend backups for an unhealthy cluster. A user can assign the start time and suspension time to backups to manage them better.
- K8SPXC-1432 - Enable users to configure cluster-wide Operator deployments in the OpenShift certified catalog using OLM.
Improvements
- K8SPXC-1367 - Now a user can configure Readiness and Liveness probes for the PMM Client container to check its health and status
- K8SPXC-1461 - Improve logging for resizing PVC with the information about successful and failed PVC resize. Log errors on resize attempts if the Storage Class doesn't support resizing.
- K8SPXC-1466 - Mark the containers that provide the service as default ones with the annotation. This enables a user to connect to a Pod without explicitly specifying a container.
- K8SPXC-1473 - Add the ability to connect to the built-in statistics pages for HAProxy and ProxySQL by exposing the ports for those pages
- K8SPXC-1475 - Update the backup image to use AWS CLI instead of MinIO CLI due to the license change
- K8SPXC-1510 - Add the ability to suppress messages about the use of deprecated features in the MySQL Error Log by adding the `log_error_suppression_list` key from the `my.cnf` configuration file and defining the message number in the `spec.pxc.configuration` subsection of the Custom Resource manifest. See how to change MySQL options for steps. This improves readability of the MySQL error log.
- K8SPXC-1512 - For Percona XtraDB Cluster version 8.4 and above, binary log user defined functions for point-in-time recovery (`binlog_utils_udf`) are now installed as a component instead of a plugin. This improves their compatibility across platforms and provides automatic dependency handling.
- K8SPXC-1542 - Improve binlog upload for large files to Azure blob storage with the ability to define the block size and the number of concurrent writers for the upload (Thanks to user dcaputo-harmoni for contribution)
- K8SPXC-1543 - Set the PITR controller reference for the binlog-collector deployment the same way as it's set for PXC and proxy StatefulSets. This creates a connection between the PITR deployment and the cluster resource (Thank you Vlad Gusev for the contribution)
- K8SPXC-1544 - Improve observability of the binlog collector by adding the support of basic Prometheus metrics (Thank you Vlad Gusev for the contribution)
- K8SPXC-1567 - Normalize duplicate slashes if the bucket path for the binlog collector ends with a slash (`/`) (Thank you Vlad Gusev for the contribution)
- K8SPXC-1596 - Assign a correct status to a backup if data upload fails due to an incomplete backup
- K8SPXC-1620 - Fixed the issue with a failing backup by adding retry logic to the cloud storage cleanup task to check for uploaded files and clean them up before uploading new files
Bugs Fixed
- K8SPXC-1152 - Fixed the issue with the restore process being stuck when the Operator is restarted by setting annotations on the `perconaxtradbclusterrestores` object
- K8SPXC-1482 - Fixed the issue with excessive connection resets on every Pod recreation because the cluster's peer-list is not aware of the Time To Live (TTL) defined for Kubernetes DNS records. Now there's a 30-second waiting period after a peer update (Thank you Vlad Gusev for reporting this issue and contributing to it)
- K8SPXC-1483 - Fixed the bug where the point-in-time recovery collector process hangs if `mysqlbinlog` cannot connect to the database and start. Now the named pipe is created with the `O_RDONLY` (Open for Read Only) and `O_NONBLOCK` (Non-Blocking Mode) flags to unlock the point-in-time recovery collector process. (Thank you Vlad Gusev for reporting this issue and contributing to it)
- K8SPXC-1509 - Fixed the bug where the cluster enters the error state temporarily if point-in-time recovery is enabled for it.
- K8SPXC-1534 - Fixed the issue with inconsistent Secret reconciliation by improving the controller's behavior to timely sync the Secret cache and create an internal Secret immediately after its reconciliation.
- K8SPXC-1538 - Fixed the issue with the Operator failing when it tries to reconcile the Custom Resource for the `haproxy-replica` service if the `haproxy-primary` service has the type `LoadBalancer` and the `loadBalancerSourceRanges` value defined. Now the `haproxy-replica` service inherits this configuration.
- K8SPXC-1546, K8SPXC-1549 - Fixed the issue with the PITR Pod crashing on an attempt to assign a GTID set to each binlog if the database cluster has a large number of binlogs by caching the `binlog->gtid` set pairs
- K8SPXC-1547 - Removed the outdated example from the `backup.yaml` manifest and updated the documentation on how to track backup progress
- K8SPXC-1616 - Fixed a bug where ProxySQL fails to be configured if the password for the `proxysqladmin` user starts with a star (`*`) character by reporting an error and making the Operator regenerate a new password that doesn't start with a star...
v1.16.1
Bugs Fixed
- K8SPXC-1536: Fix a bug where scheduled backups were not working because the Operator was creating Kubernetes resources with names exceeding the allowed length (Thanks to Vlad Gusev for contribution)
Supported Platforms
The Operator was developed and tested with Percona XtraDB Cluster versions 8.4.2-2.1 (Tech preview), 8.0.39-30.1, and 5.7.44-31.65. Other options may also work but have not been tested. Other software components include:
- Percona XtraBackup versions 8.4.0-1, 8.0.35-30.1 and 2.4.29
- HAProxy 2.8.11
- ProxySQL 2.7.1
- LogCollector based on fluent-bit 3.2.2
- PMM Client 2.44.0
Percona Operators are designed for compatibility with all CNCF-certified Kubernetes distributions. Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 1.16.0:
- Google Kubernetes Engine (GKE) 1.28 - 1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.28 - 1.31
- Azure Kubernetes Service (AKS) 1.28 - 1.31
- OpenShift 4.14.42 - 4.17.8
- Minikube 1.34.0 based on Kubernetes 1.31.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v1.16.0
Release Highlights
Declarative user management (technical preview)
Before the Operator version 1.16.0, custom MySQL users had to be created manually. Now the declarative creation of custom MySQL users is supported via the `users` subsection in the Custom Resource. You can specify a new user in the `deploy/cr.yaml` manifest, setting the user's login name, the hosts this user is allowed to connect from, `passwordSecretRef` (a reference to a key in a Secret resource containing the user's password), as well as the databases the user is going to have access to and the appropriate permissions:
```yaml
users:
  - name: my-user
    dbs:
      - db1
      - db2
    hosts:
      - localhost
    grants:
      - SELECT
      - DELETE
      - INSERT
    withGrantOption: true
    passwordSecretRef:
      name: my-user-pwd
      key: my-user-pwd-key
...
```
See documentation to find more details about this feature with additional explanations and the list of current limitations.
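The referenced Secret can be created up front; a minimal sketch matching the names used in the example above (the password value is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-user-pwd
type: Opaque
stringData:
  my-user-pwd-key: "replace-with-a-strong-password"
```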
Percona XtraDB Cluster 8.4 support (technical preview)
Percona XtraDB Cluster based on Percona Server for MySQL 8.4 versions is now supported by the Operator in addition to 8.0 and 5.7 versions. The appropriate images for Percona XtraDB Cluster and Percona XtraBackup are included into the list of Percona-certified images. Being a technical preview, Percona XtraDB Cluster 8.4 is not yet recommended for production environments.
New Features
- K8SPXC-377: It is now possible to create and manage users via the Custom Resource
- K8SPXC-1456: Now the user can run Percona XtraDB Cluster Pods initContainers with a security context different from the Pods security context, useful to customize deployment on tuned Kubernetes environments (Thanks to Vlad Gusev for contribution)
Improvements
- K8SPXC-1230 and K8SPXC-1378: Now the Operator assigns labels to all Kubernetes objects it creates (backups/restores, Secrets, Volumes, etc.) to make them clearly distinguishable
- K8SPXC-1411: Enabling/disabling TLS on a running cluster is now possible simply by toggling the appropriate Custom Resource option
- K8SPXC-1451: The automated storage scaling is now disabled by default and needs to be explicitly enabled with the `enableVolumeExpansion` Custom Resource option
- K8SPXC-1462: A restart of Percona XtraDB Cluster Pods is now triggered by the monitor user's password change if the user secret is used within a sidecar container, which can be useful for custom monitoring solutions (Thanks to Vlad Gusev for contribution)
- K8SPXC-1503: Improved logic keeps the logs free of a number of temporary non-critical errors related to ProxySQL user sync and the non-presence of point-in-time recovery files (Thanks to dcaputo-harmoni for contribution)
- K8SPXC-1500: A new `backup.activeDeadlineSeconds` Custom Resource option was added to fail the backup job automatically after the specified timeout (Thanks to Vlad Gusev for contribution)
- K8SPXC-1532: The peer-list tool used by the Operator was removed from standard HAProxy, ProxySQL and PXC Docker images because recent Operator versions are adding it with the initContainer approach
Bugs Fixed
- K8SPXC-1222: Fix a bug where upgrading a cluster with hundreds of thousands of tables would fail due to a timeout
- K8SPXC-1398: Fix a bug which sporadically prevented the scheduled backup job Pod from successfully completing the process
- K8SPXC-1413 and K8SPXC-1458: Fix the Operator Pod segfault which was occurring when restoring a backup without backupSource Custom Resource subsection or without storage specified in the backupSource
- K8SPXC-1416: Fix a bug where disabling parallel backups in Custom Resource caused all backups to get stuck in presence of any failed backup
- K8SPXC-1420: Fix a bug where HAProxy exposed at the time of point-in-time restore could make conflicting transactions, causing the PITR Pod stuck on the duplicate key error
- K8SPXC-1422: Fix the cluster endpoint change from the external IP to the service name when upgrading the Operator
- K8SPXC-1444: Fix a bug where Percona XtraDB Cluster initial creation state was changing to “error” if the backup restore was taking too long
- K8SPXC-1454: Fix a bug where the Operator erroneously generated SSL secrets when upgrading from 1.14.0 to 1.15.0 with the `allowUnsafeConfigurations: true` Custom Resource option
Deprecation, Rename and Removal
- Operator versions older than 1.14.1 become incompatible with new HAProxy, ProxySQL and PXC Docker images due to the absence of the peer-list tool in them. If you are still using the older Operator version, make sure to update the Operator before switching to the latest database and proxy images. You can see the list of Percona certified images for the current release, and check image versions certified for previous releases in the documentation archive.
Supported Platforms
The Operator was developed and tested with Percona XtraDB Cluster versions 8.4.2-2.1 (Tech preview), 8.0.39-30.1, and 5.7.44-31.65. Other options may also work but have not been tested. Other software components include:
- Percona XtraBackup versions 8.4.0-1, 8.0.35-30.1 and 2.4.29
- HAProxy 2.8.11
- ProxySQL 2.7.1
- LogCollector based on fluent-bit 3.2.2
- PMM Client 2.44.0
Percona Operators are designed for compatibility with all CNCF-certified Kubernetes distributions. Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 1.16.0:
- Google Kubernetes Engine (GKE) 1.28 - 1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.28 - 1.31
- Azure Kubernetes Service (AKS) 1.28 - 1.31
- OpenShift 4.14.42 - 4.17.8
- Minikube 1.34.0 based on Kubernetes 1.31.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v1.14.1
Bugs Fixed
- K8SPXC-1476: Fix a bug where an upgrade could put the cluster into a non-operational state if using Storage Classes without the Volume Expansion capability, by introducing a new `enableVolumeExpansion` Custom Resource option toggling this functionality
Deprecation, Change, Rename and Removal
- The new `enableVolumeExpansion` Custom Resource option allows disabling the automated storage scaling with the Volume Expansion capability. The default value of this option is false, which means that the automated scaling is turned off by default.
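To opt in, set the flag in the cluster's Custom Resource; a minimal sketch (verify the exact placement under `spec` against the CR reference for your Operator version):

```yaml
spec:
  # opt in to automated PVC scaling via Volume Expansion (off by default)
  enableVolumeExpansion: true
```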
Supported Platforms
The Operator was developed and tested with Percona XtraDB Cluster versions 8.0.35-27.1 and 5.7.44-31.65. Other options may also work but have not been tested. Other software components include:
- Percona XtraBackup versions 2.4.29-1 and 8.0.35-30.1
- HAProxy 2.8.5-1
- ProxySQL 2.5.5-1.1
- LogCollector based on fluent-bit 2.1.10-1
- PMM Client 2.41.1
The following platforms were tested and are officially supported by the Operator 1.14.1:
- Google Kubernetes Engine (GKE) 1.25 - 1.29
- Amazon Elastic Container Service for Kubernetes (EKS) 1.24 - 1.29
- Azure Kubernetes Service (AKS) 1.26 - 1.28
- OpenShift 4.12.50 - 4.14.13
- Minikube 1.32.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v1.15.1
Bugs Fixed
- K8SPXC-1476: Fix a bug where an upgrade could put the cluster into a non-operational state if using Storage Classes without the Volume Expansion capability, by introducing a new `enableVolumeExpansion` Custom Resource option toggling this functionality
Deprecation, Change, Rename and Removal
- The new `enableVolumeExpansion` Custom Resource option allows disabling the automated storage scaling with the Volume Expansion capability. The default value of this option is false, which means that the automated scaling is turned off by default.
Supported Platforms
The Operator was developed and tested with Percona XtraDB Cluster versions 8.0.36-28.1 and 5.7.44-31.65. Other options may also work but have not been tested. Other software components include:
- Percona XtraBackup versions 8.0.35-30.1 and 2.4.29-1
- HAProxy 2.8.5
- ProxySQL 2.5.5
- LogCollector based on fluent-bit 3.1.4
- PMM Client 2.42.0
The following platforms were tested and are officially supported by the Operator 1.15.0:
- Google Kubernetes Engine (GKE) 1.27 - 1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.28 - 1.30
- Azure Kubernetes Service (AKS) 1.28 - 1.30
- OpenShift 4.13.46 - 4.16.7
- Minikube 1.33.1 based on Kubernetes 1.30.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v1.15.0
Release Highlights
General availability of the automated volume resizing
The possibility to resize Persistent Volumes by just changing the value of the resources.requests.storage option in the PerconaXtraDBCluster custom resource, introduced in the previous release as a technical preview, graduates to general availability.
Allowing haproxy-replica Service to cycle through the reader instances only
By default, the haproxy-replica Service directs connections to all Pods of the database cluster in a round-robin manner. The new `haproxy.exposeReplicas.onlyReaders` Custom Resource option lets you modify this behavior: setting it to true excludes the current MySQL primary instance (writer) from the list, leaving only the reader instances. By default the option is set to false, which means that haproxy-replicas sends traffic to all Pods, including the active writer. The feature can be useful to simplify the application logic by splitting read and write MySQL traffic on the Kubernetes level.
Also, it should be noted that changing the `haproxy.exposeReplicas.onlyReaders` value will cause HAProxy Pods to restart.
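A sketch of the option in the Custom Resource manifest:

```yaml
spec:
  haproxy:
    exposeReplicas:
      # route haproxy-replicas traffic to reader instances only;
      # changing this value restarts the HAProxy Pods
      onlyReaders: true
```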
Fixing the overloaded allowUnsafeConfigurations flag
In previous Operator versions, the allowUnsafeConfigurations Custom Resource option was used to allow configuring a cluster with unsafe parameters, such as starting it with less than 3 Percona XtraDB Cluster instances. In fact, setting this option to true resulted in a wide range of reduced safety features without the user's explicit intent: disabling TLS, allowing backups in unhealthy clusters, etc.
With this release, a separate unsafeFlags Custom Resource section is introduced for the fine-grained control of the safety loosening features:
```yaml
unsafeFlags:
  tls: false
  pxcSize: false
  proxySize: false
  backupIfUnhealthy: false
```
If the appropriate option is set to false and the Operator detects unsafe parameters, it sets cluster status to error, and prints an error message in the log.
Also, TLS configuration is now enabled or disabled by setting the `unsafeFlags.tls` and `tls.enabled` Custom Resource options to true or false.
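For example, running a cluster without TLS now takes both options together; a sketch:

```yaml
spec:
  unsafeFlags:
    # explicitly allow the unsafe (non-TLS) configuration
    tls: true
  tls:
    enabled: false
```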
New Features
- K8SPXC-1330: A new `haproxy.exposeReplicas.onlyReaders` Custom Resource option makes the haproxy-replicas Service forward requests to reader instances of the MySQL cluster, avoiding the primary (writer) instance.
- K8SPXC-1355: Finalizers were renamed to contain fully qualified domain names (FQDNs), avoiding potential conflicts with other finalizer names in the same Kubernetes environment
Improvements
- K8SPXC-1357: HAProxy Pod no longer restarts when the operator user’s password changes, which is useful for applications with persistent connections to MySQL
- K8SPXC-1358: Removing allowUnsafeConfigurations Custom Resource option in favor of fine-grained safety control in the unsafeFlags subsection
- K8SPXC-1368: Kubernetes PVC DataSources for Percona XtraDB Cluster Volumes are now officially supported via the pxc.volumeSpec.persistentVolumeClaim.dataSource subsection in the Custom Resource
- K8SPXC-1385: Dynamic Volume resize now checks resource quotas and the PVC storage limits
- K8SPXC-1423: The percona.com/delete-pxc-pvc finalizer is now able to delete also temporary secrets created by the Operator
Bugs Fixed
- K8SPXC-1067: Fix a bug where changing gracePeriod, nodeSelector, priorityClassName, runtimeClassName, and schedulerName fields in the haproxy Custom Resource subsection didn’t propagate changes to the haproxy StatefulSet
- K8SPXC-1338: Fix a bug where binary log collector Pod had unnecessary restart during the Percona XtraDB Cluster rolling restart
- K8SPXC-1364: Fix a bug where log rotation functionality didn’t work when the proxy_protocol_networks option was enabled in the Percona XtraDB Cluster custom configuration
- K8SPXC-1365: Fix pxc-operator Helm chart bug where it wasn’t able to create namespaces if multiple namespaces were specified in the watchNamespace option
- K8SPXC-1371: Fix a bug in pxc-db Helm chart which had wrong Percona XtraDB Cluster version for the 1.14.0 release and tried to downgrade the database in case of the helm chart upgrade
- K8SPXC-1380: Fix a bug due to which values in the resources requests for the restore job Pod were overwritten by the resources limits ones
- K8SPXC-1381: Fix a bug where the HAProxy check script was not correctly identifying all possible ”offline” states of a PXC instance, causing applications to connect to an instance NOT able to serve the query
- K8SPXC-1382: Fix a bug where creating a backup on S3 storage failed automatically if s3.credentialsSecret Custom Resource option was not present
- K8SPXC-1396: The xtrabackup user didn’t have rights to grant privileges available at its own privilege level to other users, which caused point-in-time recovery to fail due to access denied
- K8SPXC-1408: Fix a bug where the Operator blocked all restores (including ones without PiTR) in case of a binlog gap detected, instead of only blocking PiTR restores
- K8SPXC-1418: Fix a bug where CA Certificate generated by cert-manager had expiration period of 1 year instead of the 3 years period used by the Operator for other generated certificates, including ones used for internal and external communications
Deprecation, Rename and Removal
- Starting from now, the `allowUnsafeConfigurations` Custom Resource option is deprecated in favor of a number of options under the unsafeFlags subsection. Also, starting from now the Operator will not set safe defaults automatically. Upgrading existing clusters with `allowUnsafeConfigurations=false` and a configuration considered unsafe (i.e. `pxc.size<3` or `tls.enabled=false`) will print errors in the log and the cluster will have error status until the values are fixed.
- Finalizers were renamed to contain fully qualified domain names:
  - `delete-pxc-pods-in-order` renamed to `percona.com/delete-pxc-pods-in-order`
  - `delete-ssl` renamed to `percona.com/delete-ssl`
  - `delete-proxysql-pvc` renamed to `percona.com/delete-proxysql-pvc`
  - `delete-pxc-pvc` renamed to `percona.com/delete-pxc-pvc`
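After the rename, the finalizers in a cluster's metadata look like this sketch:

```yaml
metadata:
  finalizers:
    - percona.com/delete-pxc-pods-in-order
    - percona.com/delete-ssl
```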
- The pxc-operator Helm chart's `createNamespace` option is now set to false by default, resulting in not creating any namespaces unless the user explicitly allows it to do so
Supported Platforms
The Operator was developed and tested with Percona XtraDB Cluster versions 8.0.36-28.1 and 5.7.44-31.65. Other options may also work but have not been tested. Other software components include:
- Percona XtraBackup versions 8.0.35-30.1 and 2.4.29-1
- HAProxy 2.8.5
- ProxySQL 2.5.5
- LogCollector based on fluent-bit 3.1.4
- PMM Client 2.42.0
The following platforms were tested and are officially supported by the Operator 1.15.0:
- Google Kubernetes Engine (GKE) 1.27 - 1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.28 - 1.30
- Azure Kubernetes Service (AKS) 1.28 - 1.30
- OpenShift 4.13.46 - 4.16.7
- Minikube 1.33.1 based on Kubernetes 1.30.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.