Lift the etcd limit from 8GiB to 100GiB #1071
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: ronaldngounou. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
Force-pushed ba37b27 to c93d626, then c93d626 to 49407a6.
Lint issues fixed.
If you're doing this refactoring, I'd like to make it clear to users that 100GB is a recommended maximum size, not a hard limit. This would mean different text in a couple of places. I don't know what the actual hard limit is; we probably need to look at the boltDB code.
Could you please suggest a wording that we should use in the meantime?
Old:
> also means more memory usage**. Just I mentioned in the beginning of this post, the suggested max value is 8GB. Of course,
> If your VM has big memory (e.g. 64GB), it's OK to set a value > 8GB.

New:
> also means more memory usage**. Just I mentioned in the beginning of this post, the suggested max value is 100GB. Of course,
> If your VM has big memory (e.g. 64GB), it's OK to set a value > 100GB.
Suggested change:

Old:
> If your VM has big memory (e.g. 64GB), it's OK to set a value > 100GB.

New:
> If your VM has big memory (e.g. 128GB), it's OK to set a value > 100GB.
A smaller VM with 64GB RAM may be fine with an 8GB database, but if your DB is 100GB and your VM has only 64GB RAM, it can drastically slow down operations. To suit the context of this doc, you should suggest a larger VM RAM, such as 128GB.
Quoted section: `## Memory`

Old:
> etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 8GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly.

New:
> etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 100GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly.
Suggested change:

Old:
> etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 100GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly.

New:
> etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 8GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly. 100GB is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it.
Within the context of this doc, "etcd has a relatively small memory footprint ... Typically 8GB is enough. ... 100GB is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it" makes more sense to me.
Do we actually have a warning at 100GB? I don't have a machine I can test that on.
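One way to check without a large machine: start a throwaway etcd with a quota just above 100GB and watch its startup log. The sketch below is illustrative only; the exact log text (and whether a warning exists at all, which is the open question here) varies by version, so the etcd invocation is commented out and only the byte arithmetic runs.

```shell
# Sketch: probing for a startup warning on an oversized quota.
# One byte over 100 GiB, expressed for --quota-backend-bytes:
QUOTA=$((100 * 1024 * 1024 * 1024 + 1))
echo "$QUOTA"   # 107374182401

# Illustrative invocation (data dir is a placeholder; grep pattern is a guess):
# etcd --data-dir=/tmp/etcd-quota-test \
#      --quota-backend-bytes="$QUOTA" 2>&1 | grep -i quota
```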
Quoted section: `## Storage size limit`

Old:
> The default storage size limit is 2GB, configurable with `--quota-backend-bytes` flag; supports up to 8GB.

New:
> The default storage size limit is 2GB, configurable with `--quota-backend-bytes` flag; supports up to 100GB.
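For readers landing here from the doc: `--quota-backend-bytes` takes a raw byte count, so whichever ceiling the doc settles on has to be converted. A minimal sketch (the flag is real; the data dir in the commented invocation is a placeholder):

```shell
# 8 GiB in bytes for --quota-backend-bytes (0 or unset means the 2GiB default):
QUOTA_BYTES=$((8 * 1024 * 1024 * 1024))
echo "$QUOTA_BYTES"   # 8589934592

# Illustrative invocation (path is a placeholder):
# etcd --data-dir=/var/lib/etcd --quota-backend-bytes="$QUOTA_BYTES"
```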
etcd v3.1.0 to v3.3.0 were released from 2017 to before May 2019. I believe the optimizations that scale boltDB beyond the 8GB limit only apply to versions released after 2019. That means, IMHO, you only need to update the docs for v3.5.0 through v3.7.0.
We don't need to update older versions.
I'd restrict that further, and only update 3.5 and up.
Quoted section: `## Memory`

Old:
> etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 8GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly.

New:
> etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 100GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly.
Let's make this a limit, not a recommendation:

Suggested change:

Old:
> etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 100GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly.

New:
> etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 8GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly, up to a recommended maximum of 100GB.
This points to the need to really remove the storage size limit from this doc. It doesn't belong in the dev guide, it belongs in the operations guide.
May I ask whether compaction and defragmentation affect the cluster after storing 50GB of data? And how long do large-scale insertion/query operations take after completing those operations?
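On the compaction/defragmentation question: the usual sequence is to compact key history up to the current revision, then defragment each member so freed pages are actually returned to the filesystem. A hedged sketch below (it exits early when `etcdctl` is not installed, and the revision extraction from the `fields` output is an assumption about that format):

```shell
#!/bin/sh
# Sketch: reclaiming backend space after heavy writes.
if ! command -v etcdctl >/dev/null 2>&1; then
  echo "etcdctl not found; skipping"   # keeps the sketch runnable anywhere
  exit 0
fi
# Compact away key history up to the current revision...
rev=$(etcdctl endpoint status --write-out=fields | awk -F': ' '/Revision/ {print $2}')
etcdctl compact "$rev"
# ...then defragment so the db file on disk actually shrinks.
etcdctl defrag
```

Note that `etcdctl defrag` blocks the member it runs against, which is part of why this discussion matters for a 50GB database.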
Per the performance improvements below, the etcd size limit has been re-evaluated to 100GB instead of 8GB:
https://www.cncf.io/blog/2019/05/09/performance-optimization-of-etcd-in-web-scale-data-scenario/
Contributes to issue #588