# PG-1127 Revamped HA solution (17) #679

Status: Open. nastena1606 wants to merge 26 commits into `17` from `PG-1127-HA-rewamp-17` (base: `17`).
## Commits (26, all by nastena1606)

- bb9e170 Extended description of architecture
- 8f3a3e8 PG-1127 Rewamp HA solution
- 5673f82 Split setup into individual components pages
- bcb0094 Reworked the intro
- f86706a Added diagrams to overview
- f9ab900 Moved components interaction to a separate page
- f5cde85 Updated how components work together
- cbf2a6d Reworked Components part
- 0ca8732 Added Patroni description
- 626f125 Markup polishing
- 2f4b499 Patroni and pgBackRest config'
- c6d2b03 Fixed commands, added disclaier about datadir cleanup for Patroni
- a740284 keepalived setup
- 0d5d349 Added HAproxy description to components
- 4a4380a Added pgBackRest info
- 3cd7746 Updated How components work page
- 1ffa9b7 Updated images, archtecture with 2 types of diagrams, added 3rd HApro…
- 82ede94 Updated diagram with watchdog component
- 6b32bc0 Updated Patroni config for 4.0.x versions
- 5cc3143 Added Troubleshoot Patroni startup options subsection
- 51481a5 Modified HAProxy and keepalived configuration
- 4f9fdf4 Updated after the review. P1
- 70b5e20 Merge branch '17' into PG-1127-HA-rewamp-17
- 62a10ad Updated after the review. P2
- c99d038 Restoring the new navigation
- c385b6b Updated after the review. P3
## Files changed

@@ -0,0 +1,64 @@
# ETCD

`etcd` is one of the key components in a high availability architecture, so it's important to understand how it works.

`etcd` is a distributed key-value consensus store that helps applications store and manage cluster configuration data and perform distributed coordination of a PostgreSQL cluster.

`etcd` runs as a cluster of nodes that communicate with each other to maintain a consistent state. The primary node in the cluster is called the "leader", and the remaining nodes are the "followers".
## How `etcd` works

Each node in the cluster stores data in a structured format and keeps a copy of the same data to ensure redundancy and fault tolerance. When you write data to `etcd`, the change is sent to the leader node, which then replicates it to the other nodes in the cluster. This ensures that all nodes remain synchronized and maintain data consistency.

When a client wants to change data, it sends the request to the leader. The leader accepts the write and proposes the change to the followers. The followers vote on the proposal. If a majority of nodes (including the leader) agree, the change is committed, ensuring consistency. The leader then confirms the change to the client.

This flow corresponds to the Raft consensus algorithm, on which `etcd` is based. Read more about it in the [`etcd` Raft consensus](#etcd-raft-consensus) section.
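To make the write flow concrete, here is a minimal sketch of a write and a read using the `etcdctl` v3 client. The endpoint address and the key name are placeholders; any write you issue is forwarded to the leader and acknowledged only after a majority of members accept it:

```
# Write a key; the leader replicates the change to the followers
# and confirms it once a majority has accepted it.
# The endpoint is a placeholder for one of your cluster members.
$ etcdctl --endpoints=http://10.0.0.1:2379 put /demo/greeting "hello"

# Read the key back from the cluster.
$ etcdctl --endpoints=http://10.0.0.1:2379 get /demo/greeting
```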
## Leader election

An `etcd` cluster can have only one leader node at a time. The leader is responsible for receiving client requests, proposing changes, and ensuring they are replicated to the followers. When an `etcd` cluster starts, or if the current leader fails, the nodes hold an election to choose a new leader. Each node waits for a random amount of time before sending a vote request to other nodes, and the first node to get a majority of votes becomes the new leader. The cluster remains available as long as a majority of nodes (quorum) are still running.
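You can check which member currently holds leadership with the `endpoint status` subcommand (a quick sketch; the endpoint list is a placeholder for your own members):

```
# Show each member's status, including the IS LEADER column.
# The endpoints are placeholders for your three cluster members.
$ etcdctl --endpoints=http://10.0.0.1:2379,http://10.0.0.2:2379,http://10.0.0.3:2379 \
    endpoint status --write-out=table
```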
### How many members to have in a cluster

The recommended approach is to deploy an odd-sized cluster (for example, 3, 5, or 7 nodes). An odd number of nodes ensures that a majority is always available to make decisions and keep the cluster running smoothly, even if one node fails. This majority is crucial for maintaining consistency and availability. For a cluster with `n` members, the majority (quorum) is `(n/2)+1`, rounded down.

To better illustrate this concept, compare clusters with 3 and 4 nodes. In a 3-node cluster, the majority is 2: if one node fails, the remaining 2 nodes still form a majority, and the cluster continues to operate. In a 4-node cluster, the majority is 3: the cluster survives the failure of one node, but losing a second node leaves only 2 nodes, which is below the required majority, and the cluster stops functioning. A 4-node cluster therefore tolerates no more failures than a 3-node one, while adding the cost of an extra member.
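The relationship between cluster size, quorum, and fault tolerance follows directly from the `(n/2)+1` formula:

| Cluster size | Majority (quorum) | Failures tolerated |
|--------------|-------------------|--------------------|
| 3            | 2                 | 1                  |
| 4            | 3                 | 1                  |
| 5            | 3                 | 2                  |
| 7            | 4                 | 3                  |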
## `etcd` Raft consensus

The heart of `etcd`'s reliability is the Raft consensus algorithm. Raft ensures that all nodes in the cluster agree on the same data. This provides a consistent view of the data, even if some nodes are unavailable or experiencing network issues.

An example of Raft's role in `etcd` is the situation when there is no majority in the cluster. If a majority of nodes can't communicate (for example, due to network partitions), no new leader can be elected and no new changes can be committed. This prevents the system from getting into an inconsistent state. The system waits for the network to heal and a majority to be re-established, which is crucial for data integrity.
You can also check [this resource :octicons-link-external-17:](https://thesecretlivesofdata.com/raft/) to learn more about Raft and understand it better.
## `etcd` logs and performance considerations

`etcd` keeps a detailed log of every change made to the data. These logs are essential for several reasons: they ensure consistency, enable fault tolerance and leader elections, and support auditing, maintaining a consistent state across nodes. For example, if a node fails, it can use the logs to catch up with the other nodes and restore its data. The logs also provide a history of all changes, which can be useful for debugging and security analysis if needed.
### Slow disk performance

`etcd` is very sensitive to disk I/O performance. Writing to the logs is a frequent operation and will be slow if the disk is slow. This can lead to timeouts, delayed consensus, instability, and even data loss. In extreme cases, slow disk performance can cause a leader to fail health checks, triggering unnecessary leader elections. Always use fast, reliable storage for `etcd`.
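You can get a rough baseline of whether your storage can keep up with `etcd`'s workload using its built-in benchmark (a quick sketch; the endpoint is a placeholder):

```
# Run a short synthetic load against one endpoint and report
# whether it passes etcd's performance thresholds.
$ etcdctl --endpoints=http://10.0.0.1:2379 check perf
```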
### Slow or high-latency networks

Communication between `etcd` nodes is critical. A slow or unreliable network delays data replication and increases the risk of stale reads. It can also trigger premature timeouts, causing leader elections to happen more frequently, and in some cases can delay the elections themselves, impacting performance and stability. Also keep in mind that if nodes cannot reach each other in a timely manner, the cluster may lose quorum and become unavailable.
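A quick way to spot network trouble is to check the health and response time of every member (a sketch, assuming the `--cluster` flag is available in your `etcdctl` version):

```
# Query each cluster member and report whether it is healthy
# and how long the health check took.
$ etcdctl --endpoints=http://10.0.0.1:2379 endpoint health --cluster
```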
## etcd Locks

`etcd` provides a distributed locking mechanism, which helps applications coordinate actions across multiple nodes and control access to shared resources, preventing conflicts. Locks ensure that only one process can hold a resource at a time, avoiding race conditions and inconsistencies. Patroni is an example of an application that uses `etcd` locks for primary election control in the PostgreSQL cluster.
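You can try the locking mechanism from the command line (a minimal sketch; `demo-lock` is an arbitrary lock name). A second invocation blocks until the first one releases the lock:

```
# Acquire a lock named demo-lock and run a command while holding it;
# the lock is released automatically when the command exits.
$ etcdctl lock demo-lock echo "I hold the lock"

# In another terminal, this call waits until demo-lock is free.
$ etcdctl lock demo-lock
```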
### Deployment considerations

Running `etcd` on separate hosts has the following benefits:

* Both PostgreSQL and `etcd` are highly dependent on I/O, so running them on separate hosts improves performance.

* Higher resilience. If one or even two PostgreSQL nodes crash, the `etcd` cluster remains healthy and can trigger a new primary election.

* Scalability and better performance. You can scale the `etcd` cluster separately from PostgreSQL based on the load and thus achieve better performance.

Note that separate deployment increases the complexity of the infrastructure and requires additional maintenance effort. Also, pay close attention to the network configuration to eliminate latency that might occur due to the communication between `etcd` and Patroni nodes over the network.
If a separate dedicated host for `etcd` is not a viable option, you can use the same host machines used for Patroni and PostgreSQL. The majority of deployments use such a setup to reduce the cost.
@@ -0,0 +1,60 @@
# Architecture

In the [overview of high availability](high-availability.md), we discussed the components required to achieve high availability.

Our recommended minimalistic approach to a highly available deployment is a three-node PostgreSQL cluster with cluster management and failover mechanisms, a load balancer, and a backup / restore solution.

The following diagram shows this architecture, including all additional components. If you are considering a simple and cost-effective setup, refer to the [Bare-minimum architecture](#bare-minimum-architecture) section.
 | ||
|
||
## Components

The components in this architecture are:
### Database layer

- PostgreSQL nodes bearing the user data.

- Patroni - an automatic failover system. Patroni requires and uses the Distributed Configuration Store to store the cluster configuration, health, and status (see the sketch after this list).

- watchdog - a mechanism that resets the whole system when it does not receive a keepalive heartbeat within a specified timeframe. This adds an additional layer of failsafe in case the usual Patroni split-brain protection mechanisms fail.
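As a sketch of what Patroni's view of the cluster looks like, you can list the members and their roles with `patronictl` (the configuration file path is a placeholder for your own setup):

```
# Show cluster members, their roles (Leader / Replica), state, and lag.
# The config path is a placeholder; use the file your Patroni service loads.
$ patronictl -c /etc/patroni/patroni.yml list
```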
### DCS layer

- etcd - a Distributed Configuration Store. It stores the state of the PostgreSQL cluster and handles the election of a new primary. An odd number of nodes (minimum three) is required so that there is always a majority to agree on updates to the cluster state.
### Load balancing layer

- HAProxy - the load balancer and the single point of entry to the cluster for client applications. A minimum of two instances is required for redundancy.

- keepalived - a high-availability and failover solution for HAProxy. It provides a virtual IP (VIP) address for HAProxy and prevents it from becoming a single point of failure by failing over the services to the operational instance.

- (Optional) pgbouncer - a connection pooler for PostgreSQL. The aim of pgbouncer is to lower the performance impact of opening new connections to PostgreSQL.
### Services layer

- pgBackRest - the backup and restore solution for PostgreSQL. It should also be redundant to eliminate a single point of failure.

- (Optional) Percona Monitoring and Management (PMM) - the solution to monitor the health of your cluster.
## Bare-minimum architecture

There may be constraints on using the [reference architecture with all additional components](#architecture), such as the number of available servers or the cost of additional hardware. You can still achieve high availability with a minimum of two database nodes and three `etcd` instances. The following diagram shows this architecture:

Using such an architecture has the following limitations:

* This setup only protects against a single node failure, either a database or an `etcd` node. Losing more than one node results in a read-only database.
* The application must be able to connect to multiple database nodes and fail over to the new primary in the case of an outage.
* The application must act as the load balancer. It must be able to distinguish read/write from read-only requests and distribute them across the cluster.
* The `pgBackRest` component is optional as it doesn't serve the purpose of high availability. However, it is highly recommended for disaster recovery and is a must for production environments. [Contact us](https://www.percona.com/about/contact) to discuss backup configurations and retention policies.
## Additional reading

[How components work together](ha-components.md){.md-button}

## Next steps

[Deployment - initial setup](ha-init-setup.md){.md-button}