Percona Operator for MySQL automates deploying and operating Percona Server for MySQL clusters on Kubernetes. This document explains what components the Operator uses and how they work together to provide a highly available MySQL database. Also, read more about [How the Operator works](how-it-works.md).

## Components

The [StatefulSet :octicons-link-external-16:](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) deployed with the Operator includes the following components:

* [Percona Server for MySQL :octicons-link-external-16:](https://www.percona.com/doc/percona-server/LATEST/index.html) - a free, fully compatible, enhanced, and open source drop-in replacement for any MySQL database
* [Percona XtraBackup :octicons-link-external-16:](https://www.percona.com/doc/percona-xtrabackup/8.0/index.html) - a hot backup utility for MySQL-based servers that doesn't lock your database during the backup
* [Orchestrator :octicons-link-external-16:](https://github.com/openark/orchestrator) - a replication topology manager for MySQL used when [asynchronous replication :octicons-link-external-16:](https://dev.mysql.com/doc/refman/8.0/en/group-replication-primary-secondary-replication.html) between MySQL instances [is turned on](operator.md#mysqlclustertype)
* [HAProxy :octicons-link-external-16:](https://haproxy.org) - a proxy and load balancing service serving as the entry point to your database cluster, compatible with both [asynchronous replication :octicons-link-external-16:](https://dev.mysql.com/doc/refman/8.0/en/group-replication-primary-secondary-replication.html) and [group replication :octicons-link-external-16:](https://dev.mysql.com/doc/refman/8.0/en/group-replication.html) between MySQL instances
* [MySQL Router :octicons-link-external-16:](https://dev.mysql.com/doc/mysql-router/8.0/en/) - a proxy solution which can be used instead of HAProxy for MySQL clusters where [group replication :octicons-link-external-16:](https://dev.mysql.com/doc/refman/8.0/en/group-replication.html) [is turned on](operator.md#mysqlclustertype)
* [Percona Toolkit :octicons-link-external-16:](https://docs.percona.com/percona-toolkit/) - a set of tools for debugging MySQL Pods

It can also include sidecar containers such as PMM Client or your custom ones, depending on how you further fine-tune your cluster. Learn more about [sidecar containers](sidecar.md).

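As an illustration, sidecar containers can be declared directly in the Custom Resource. The fragment below is a sketch that assumes a `sidecars` list under the `mysql` section; the exact field layout may differ by Operator version, so check the Custom Resource reference before using it:

```yaml
# Fragment of a PerconaServerMySQL Custom Resource (deploy/cr.yaml).
# The `sidecars` list and the container name are illustrative only.
spec:
  mysql:
    sidecars:
      - name: my-sidecar            # hypothetical sidecar container name
        image: busybox
        command: ["sleep", "30d"]   # keep the container running alongside MySQL
```

The Operator adds such containers to every MySQL Pod it manages, next to the database container itself.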
## Replication types

Each MySQL node in your cluster contains a complete copy of your data, replicated across all nodes.

The Operator supports two replication types, each with different characteristics for performance, consistency, and availability. You [choose the replication type](operator.md#mysqlclustertype) when configuring your cluster.

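For example, the replication type is selected through the `clusterType` field of the Custom Resource. This is a sketch; verify the exact field name and accepted values in the Custom Resource reference for your Operator version:

```yaml
# deploy/cr.yaml fragment: choosing the replication type.
spec:
  mysql:
    clusterType: group-replication   # or "async" for asynchronous replication
    size: 3                          # recommended minimum of 3 MySQL instances
```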
### Asynchronous replication (Beta)

With asynchronous replication, writes complete on the primary instance without waiting for replicas. After a write completes, the primary records the change in its binary log, and replicas apply these changes independently.

**Characteristics:**

* **Read scaling** - You can distribute application read requests to different replica instances, improving read throughput.
* **Performance** - Writes are faster than with group replication because the primary does not wait for replicas to acknowledge a transaction.
* **Consistency** - Eventual consistency: replicas may lag behind the primary instance, which can affect applications requiring real-time data. Some transactions committed on the primary may be lost if it fails before replicas catch up.
* **Write scaling** - Does not allow horizontal write scaling; scaling writes relies on vertical scaling, that is, increasing the resources (RAM, CPU) of the primary instance, rather than adding more write nodes.
* **Status** - Currently in Beta and not recommended for production use.

### Group replication

With group replication, write transactions require consensus from the group before completing. Read transactions can execute on any instance, while writes occur only on the primary.

**Characteristics:**

* **Consistency** - Provides strong consistency and, when set to a high transaction consistency level, helps prevent stale reads.
* **Read scaling** - Enables horizontal scaling of reads without stale reads when set with a high transaction consistency level.
* **Performance** - Write operations are slower than with asynchronous replication due to the group consensus mechanism.
* **Failover** - The built-in group membership protocol automatically handles member recovery and primary election.
* **Limitations** - Group replication limits the cluster to a maximum of 9 MySQL instances per group. Large transactions can noticeably slow down the system, and especially large transactions may even trigger a [replication member fault :octicons-link-external-16:](https://dev.mysql.com/doc/refman/8.0/en/group-replication-limitations.html#group-replication-limitations-transaction-size) if the transaction message cannot be copied between group members within a 5-second network window.
* **Status** - General Availability (GA) and recommended for production use.

!!! note

    MySQL documentation may also use the terms "source/replica" instead of "primary/replica".

## Proxy solutions

The proxy you use depends on your replication type and requirements:

* **HAProxy** - Works with both asynchronous replication and group replication. Provides load balancing, health checks, and connection pooling.
* **MySQL Router** - Available only for group replication. Offers intelligent routing, connection pooling, and read-write splitting capabilities.

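The choice between the two proxies is made in the Custom Resource. The fragment below assumes a `proxy` section with `haproxy` and `router` subsections, as used in recent Operator versions; verify the field names against the Custom Resource reference:

```yaml
# deploy/cr.yaml fragment: enabling one proxy or the other.
spec:
  proxy:
    haproxy:
      enabled: true    # works with both replication types
      size: 3
    router:
      enabled: false   # group replication only; enable instead of HAProxy
```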
| Aspect | Asynchronous replication | Group replication |
|--------|--------------------------|-------------------|
| Max nodes | Higher (practical limits) | 9 per group |
| Proxy | HAProxy | HAProxy or MySQL Router |

**Tip:** Choose group replication for stronger consistency and read scaling; choose asynchronous replication for lower write latency and a simpler topology once it reaches GA status.

You can change the replication type if needed. Refer to the [Change replication type](change-replication-type.md) guide for step-by-step instructions. Note that replication type change is not supported on a running cluster.
## High availability

The Operator provides high availability through multiple layers of protection.

### Pod distribution

The Operator uses [node affinity :octicons-link-external-16:](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) to distribute Percona Server for MySQL instances across separate worker nodes when possible. This prevents a single node failure from taking down multiple database instances.

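As a sketch, Pod distribution can be controlled with the simplified anti-affinity knob Percona Operators expose in the Custom Resource; the `antiAffinityTopologyKey` field is assumed here, so confirm it in the Custom Resource reference for your version:

```yaml
# deploy/cr.yaml fragment: spreading MySQL Pods across worker nodes.
spec:
  mysql:
    affinity:
      # One MySQL Pod per worker node; use a zone topology key
      # to spread Pods across availability zones instead.
      antiAffinityTopologyKey: "kubernetes.io/hostname"
```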
### Automatic recovery

If a node fails, Kubernetes automatically reschedules the affected Pod on another healthy node. Inside your cluster, automatic recovery is handled as follows:

* In **asynchronous replication** clusters, Orchestrator detects the failure, promotes a healthy replica to primary, and updates the replication topology.



* In **group replication** clusters, the native group membership protocol automatically handles member removal, primary election, and topology recovery.



### Client connectivity

Clients connect through HAProxy or MySQL Router, which automatically route traffic to healthy MySQL instances. These proxies detect failures and redirect connections away from failed nodes, ensuring your applications always connect to available database instances.

For configuration details, see:

* [HAProxy configuration](haproxy-conf.md)
* [MySQL Router configuration](router-conf.md)

## What to read next

Now that you understand the architecture, explore these topics:

* [Operator configuration and Custom Resources](operator.md) - Learn how to configure your cluster
* [Backups](backups.md) - Understand backup and restore operations
* [Scaling](scaling.md) - Scale your cluster horizontally or vertically
* [High availability configuration](constraints.md) - Configure anti-affinity and Pod distribution
* [Updating and upgrades](update.md) - Keep your cluster up to date

# Control Pod scheduling on specific Kubernetes nodes with affinity, anti-affinity and tolerations

The Operator automatically assigns Pods to nodes with sufficient resources for balanced distribution across the cluster. You can also configure Pods to be scheduled on specific nodes, for example, for improved performance on an SSD-equipped machine, or for cost optimization by choosing nodes in the same availability zone.

The Percona Operator for MySQL acts as your assistant in managing databases on Kubernetes. It extends the Kubernetes API with a custom `PerconaServerMySQL` resource. Think of it as a blueprint that defines how you want your MySQL database to look and behave.

Whenever you create or update a `PerconaServerMySQL` Custom Resource, the Operator steps in and handles the hard work for you. It automatically does the following:

1. Creates and manages the necessary Kubernetes resources (StatefulSets, Services, Pods)
2. Ensures your cluster matches the desired state you've defined
3. Monitors the cluster health and automatically recovers from failures
4. Coordinates upgrades and scaling operations

These operations ensure that your actual database environment always matches your request.

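The steps above are all driven by a single manifest you apply to the cluster. A minimal sketch of such a Custom Resource follows; the `apiVersion`, cluster name, and required fields are illustrative and depend on your Operator version, so start from the `deploy/cr.yaml` shipped with the Operator:

```yaml
# Minimal illustrative PerconaServerMySQL Custom Resource.
apiVersion: ps.percona.com/v2
kind: PerconaServerMySQL
metadata:
  name: my-cluster        # hypothetical cluster name
spec:
  mysql:
    size: 3               # three MySQL instances for high availability
```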
Each MySQL node in your cluster contains a complete copy of your data, synchronized across all nodes.



The recommended configuration is to use at least 3 nodes. Such a setup provides high availability: if any node fails, the cluster continues operating normally. Read more about [high availability](architecture.md#high-availability).

To keep your data safe and persistent, the Operator uses Kubernetes storage objects called Persistent Volumes (PVs) and PersistentVolumeClaims (PVCs). When you request storage for your database, a PVC automatically finds and attaches available storage for you. If a node fails, the Kubernetes storage system can reattach your data to another node, keeping your database available and your data protected.

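Storage is requested per MySQL Pod in the Custom Resource. The fragment below assumes the `volumeSpec.persistentVolumeClaim` layout used by Percona Operators; verify it against the Custom Resource reference for your Operator version:

```yaml
# deploy/cr.yaml fragment: requesting persistent storage for MySQL Pods.
spec:
  mysql:
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 10Gi   # each Pod gets its own PersistentVolumeClaim
```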
Ready to get started? Continue to the [quickstart guide](quickstart.md) to deploy your first cluster, or explore the [architecture overview](architecture.md) to understand the inner workings of the Operator. For hands-on steps and best practices, check out [What next?](what-next.md).