PG-1127 Revamped HA solution (17) #679

# ETCD

`etcd` is one of the key components in a high-availability architecture; therefore, it's important to understand how it works.

`etcd` is a distributed key-value consensus store that helps applications store and manage cluster configuration data and perform distributed coordination of a PostgreSQL cluster.

`etcd` runs as a cluster of nodes that communicate with each other to maintain a consistent state. The primary node in the cluster is called the "leader", and the remaining nodes are the "followers".

## How `etcd` works

Each node in the cluster stores data in a structured format and keeps a copy of the same data to ensure redundancy and fault tolerance. When you write data to `etcd`, the change is sent to the leader node, which then replicates it to the other nodes in the cluster. This ensures that all nodes remain synchronized and maintain data consistency.

When a client wants to change data, it sends the request to the leader. The leader accepts the write and proposes this change to the followers. The followers vote on the proposal. If a majority of the members (including the leader) agree, the change is committed, ensuring consistency. The leader then confirms the change to the client.

This flow corresponds to the Raft consensus algorithm, on which `etcd` is built. Read more about it in the [`etcd` Raft consensus](#etcd-raft-consensus) section.
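
To make this flow concrete, here is a minimal, illustrative Python sketch (not the actual `etcd` implementation) of a leader committing a change only after a majority of the cluster acknowledges it:

```python
from dataclasses import dataclass, field

@dataclass
class Follower:
    """A follower that acknowledges and stores replicated entries."""
    log: list = field(default_factory=list)
    reachable: bool = True

    def append_entry(self, entry) -> bool:
        if self.reachable:
            self.log.append(entry)
            return True
        return False  # an unreachable follower cannot acknowledge the proposal


def commit(entry, followers) -> bool:
    """The leader proposes `entry` and commits it only on majority agreement."""
    acks = 1 + sum(f.append_entry(entry) for f in followers)  # the leader's own ack
    majority = (len(followers) + 1) // 2 + 1                   # majority of the whole cluster
    return acks >= majority


# 3-node cluster with one follower down: 2 of 3 acknowledgements is still a majority
followers = [Follower(), Follower(reachable=False)]
print(commit(("PUT", "config/max_connections", "200"), followers))  # True
```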

## Leader election

An `etcd` cluster can have only one leader node at a time. The leader is responsible for receiving client requests, proposing changes, and ensuring they are replicated to the followers. When an `etcd` cluster starts, or if the current leader fails, the nodes hold an election to choose a new leader. Each node waits for a random amount of time before sending a vote request to other nodes, and the first node to get a majority of votes becomes the new leader. The cluster remains available as long as a majority of nodes (quorum) are still running.
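
The following simplified, illustrative Python sketch simulates such an election round; it is not how `etcd` implements it internally, but it shows why a quorum is required to elect a leader:

```python
import random

def elect_leader(cluster, alive):
    """Simulate one election round among the reachable nodes."""
    majority = len(cluster) // 2 + 1
    if len(alive) < majority:
        return None  # no quorum: the cluster stays without a leader
    # Every live node waits a random time before asking for votes;
    # the first one to time out requests votes and wins the round.
    timeouts = {node: random.uniform(150, 300) for node in alive}  # milliseconds (illustrative)
    return min(timeouts, key=timeouts.get)

cluster = ["etcd1", "etcd2", "etcd3"]
print(elect_leader(cluster, alive=["etcd1", "etcd3"]))  # 2 of 3 nodes: a leader is elected
print(elect_leader(cluster, alive=["etcd3"]))           # 1 of 3 nodes: None, no quorum
```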

### How many members to have in a cluster

The recommended approach is to deploy an odd-sized cluster (e.g., 3, 5, or 7 nodes). The odd number of nodes ensures that there is always a majority of nodes available to make decisions and keep the cluster running smoothly. This majority is crucial for maintaining consistency and availability, even if one node fails. For a cluster with `n` members, the majority is `(n/2)+1`.

To better illustrate this concept, compare clusters with 3 and 4 nodes. In a 3-node cluster, if one node fails, the remaining 2 nodes still form a majority (2 out of 3), and the cluster can continue to operate. In a 4-node cluster, the majority is 3 nodes, so it also survives only a single node failure: if two nodes fail, the remaining 2 nodes cannot form a majority and the cluster stops accepting writes. The fourth node therefore adds replication overhead without adding fault tolerance.
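
The same arithmetic can be checked quickly with a few lines of Python:

```python
def majority(n: int) -> int:
    """Quorum size for an n-member cluster: (n // 2) + 1."""
    return n // 2 + 1

for n in (3, 4, 5, 7):
    print(f"{n} members: majority = {majority(n)}, tolerated failures = {n - majority(n)}")

# 3 members: majority = 2, tolerated failures = 1
# 4 members: majority = 3, tolerated failures = 1  <- the 4th node adds no tolerance
# 5 members: majority = 3, tolerated failures = 2
# 7 members: majority = 4, tolerated failures = 3
```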

## `etcd` Raft consensus

The heart of `etcd`'s reliability is the Raft consensus algorithm. Raft ensures that all nodes in the cluster agree on the same data. This provides a consistent view of the data, even if some nodes are unavailable or experiencing network issues.

An example of Raft's role in `etcd` is the situation when there is no majority in the cluster. If a majority of nodes can't communicate (for example, due to a network partition), no new leader can be elected and no new changes can be committed. This prevents the system from getting into an inconsistent state: it waits for the network to heal and a majority to be re-established. This behavior is crucial for data integrity.

You can also check [this resource :octicons-link-external-17:](https://thesecretlivesofdata.com/raft/) to learn more about Raft and understand it better.

## `etcd` logs and performance considerations

`etcd` keeps a detailed log of every change made to the data. These logs are essential for several reasons: they ensure consistency, enable fault tolerance and leader elections, and support auditing, all of which helps maintain a consistent state across nodes. For example, if a node fails, it can use the logs to catch up with the other nodes and restore its data. The logs also provide a history of all changes, which can be useful for debugging and security analysis if needed.
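
As a rough illustration of the catch-up mechanism (a conceptual sketch, not the actual `etcd` log format), a restarted node simply replays the committed entries it missed, in order:

```python
def catch_up(node_log, cluster_log):
    """Replay the committed entries this node missed while it was down."""
    missed = cluster_log[len(node_log):]  # everything committed after the node failed
    node_log.extend(missed)               # apply the entries in commit order
    return node_log

cluster_log = [("PUT", "a", "1"), ("PUT", "b", "2"), ("DELETE", "a", None)]
node_log = [("PUT", "a", "1")]             # this node crashed after the first entry
print(catch_up(node_log, cluster_log))     # the node's log now matches the cluster's
```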

### Slow disk performance

`etcd` is very sensitive to disk I/O performance. Writing to the logs is a frequent operation and will be slow if the disk is slow. This can lead to timeouts, delayed consensus, instability, and even data loss. In extreme cases, slow disk performance can cause a leader to fail health checks, triggering unnecessary leader elections. Always use fast, reliable storage for `etcd`.
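
As a quick sanity check, you can estimate the write-plus-fsync latency of the volume that will hold the `etcd` data directory. The Python snippet below is only a rough illustration (the path is a placeholder); `etcd` itself exposes more precise numbers through its disk-related metrics:

```python
import os
import tempfile
import time

def fsync_latency_ms(path: str, samples: int = 50) -> float:
    """Average time of a small write followed by fsync, in milliseconds."""
    fd, tmp = tempfile.mkstemp(dir=path)
    try:
        total = 0.0
        for _ in range(samples):
            start = time.perf_counter()
            os.write(fd, b"x" * 4096)  # a small write, roughly the size of a log entry
            os.fsync(fd)               # wait until the data is durable on disk
            total += time.perf_counter() - start
        return total / samples * 1000
    finally:
        os.close(fd)
        os.unlink(tmp)

# "/var/lib/etcd" is a placeholder; point it at the volume you plan to use for etcd.
print(f"average write+fsync latency: {fsync_latency_ms('/var/lib/etcd'):.2f} ms")
```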

### Slow or high-latency networks

Communication between `etcd` nodes is critical. A slow or unreliable network can delay data replication and increase the risk of stale reads. It can also trigger premature timeouts, so leader elections happen more frequently, and in some cases it can delay those elections, impacting performance and stability. Also keep in mind that if nodes cannot reach each other in a timely manner, the cluster may lose quorum and become unavailable.

## `etcd` locks

`etcd` provides a distributed locking mechanism, which helps applications coordinate actions across multiple nodes and control access to shared resources, preventing conflicts. Locks ensure that only one process can hold a resource at a time, avoiding race conditions and inconsistencies. Patroni is an example of an application that uses `etcd` locks for primary election control in the PostgreSQL cluster.
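
As an illustration, the sketch below uses the third-party `python-etcd3` client (an example choice, not something this architecture requires) to take a named lock so that only one process performs a coordinated action at a time:

```python
import etcd3  # third-party python-etcd3 client, used here only for illustration

def run_maintenance():
    print("only one process across the cluster runs this at a time")

# Connect to any cluster member. The ttl makes the lock expire automatically
# if the holder crashes without releasing it.
client = etcd3.client(host="127.0.0.1", port=2379)

with client.lock("maintenance-task", ttl=30):
    run_maintenance()
```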

### Deployment considerations

Running `etcd` on separate hosts has the following benefits:

* Both PostgreSQL and `etcd` are highly dependent on I/O, and running them on separate hosts improves performance.

* Higher resilience. If one or even two PostgreSQL nodes crash, the `etcd` cluster remains healthy and can trigger a new primary election.

* Scalability and better performance. You can scale the `etcd` cluster separately from PostgreSQL based on the load and thus achieve better performance.

Note that separate deployment increases the complexity of the infrastructure and requires additional maintenance effort. Also, pay close attention to the network configuration to eliminate latency that might occur due to the communication between `etcd` and Patroni nodes over the network.

If a separate dedicated host for `etcd` is not a viable option, you can use the same host machines that run Patroni and PostgreSQL. The majority of deployments use such a setup to reduce the cost.

# Architecture

In the [overview of high availability](high-availability.md), we discussed the components required to achieve high availability.

Our recommended minimal approach to a highly available deployment is a three-node PostgreSQL cluster with cluster management and failover mechanisms, a load balancer, and a backup/restore solution.

The following diagram shows this architecture with the tools we recommend using. If the cost or the number of nodes is a constraint, refer to the [Bare-minimum architecture](#bare-minimum-architecture) section.

![High availability architecture](ha-architecture.svg)

## Components

The components in this architecture are:

### Database layer

- PostgreSQL nodes bearing the user data.

- Patroni - an automatic failover system. Patroni requires and uses the Distributed Configuration Store to store the cluster configuration, health, and status.

- watchdog - a mechanism that resets the whole system when it does not receive a keepalive heartbeat within a specified timeframe. This adds an additional fail-safe layer in case the usual Patroni split-brain protection mechanisms fail.

### DCS layer

- etcd - a Distributed Configuration Store. It stores the state of the PostgreSQL cluster and handles the election of a new primary. An odd number of nodes (minimum three) is required to always have a majority to agree on updates to the cluster state.

### Load balancing layer

- HAProxy - the load balancer and the single point of entry to the cluster for client applications. A minimum of two instances is required for redundancy.

- keepalived - a high-availability and failover solution for HAProxy. It provides a virtual IP (VIP) address for HAProxy and prevents it from being a single point of failure by failing over the services to the operational instance.

- (Optional) pgbouncer - a connection pooler for PostgreSQL. The aim of pgbouncer is to lower the performance impact of opening new connections to PostgreSQL.

### Services layer

- pgBackRest - the backup and restore solution for PostgreSQL. It should also be redundant to eliminate a single point of failure.

- (Optional) Percona Monitoring and Management (PMM) - a solution to monitor the health of your cluster.

## Bare-minimum architecture

There may be constraints that prevent you from using the [recommended reference architecture](#architecture), such as the number of available servers or the cost of additional hardware. You can still achieve high availability with a minimum of two database nodes and three `etcd` instances. The following diagram shows this architecture:

![Bare-minimum architecture of the high-availability](ha-minimalistic-architecture.svg)

Using such an architecture has the following limitations:

* This setup only protects against a single node failure, either a database or an `etcd` node. Losing more than one node results in a read-only database.
* The application must be able to connect to multiple database nodes and fail over to the new primary in case of an outage, as shown in the sketch after this list.
* The application must act as the load balancer. It must be able to distinguish read/write from read-only requests and distribute them across the cluster.
* The `pgBackRest` component is optional but highly recommended for disaster recovery. To eliminate a single point of failure, it should also be redundant, but we're not discussing redundancy in this solution. [Contact us](https://www.percona.com/about/contact) to discuss it if this is a requirement for you.
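
For example, the application can rely on the multi-host connection support in `libpq` to always reach the current primary. The sketch below uses `psycopg2`; the host names, database, and user are placeholders:

```python
import psycopg2

# libpq tries the listed hosts in order and, with target_session_attrs set to
# "read-write", connects only to the node that currently accepts writes (the primary).
conn = psycopg2.connect(
    host="pg-node1,pg-node2",          # placeholder host names of the database nodes
    port="5432,5432",
    dbname="appdb",                    # placeholder database name
    user="app",                        # placeholder user
    target_session_attrs="read-write", # reject standbys; fail over to the other host
)
with conn.cursor() as cur:
    cur.execute("SELECT pg_is_in_recovery()")  # returns false on the primary
    print(cur.fetchone())
conn.close()
```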

## Additional reading

[How components work together](ha-components.md){.md-button}

## Next steps

[Deployment - initial setup](ha-init-setup.md){.md-button}