set kubelet_status_update_frequency to 120s #12106
Conversation
On a ~300 node cluster updating this setting to 120s brought Prometheus' Bytes allocated and freed per second from 1Gi down to ~450Mi and the CPU time from ~7 seconds down to ~3 seconds. It also brought GC cycles per minute from ~8 down to ~3.3.

Signed-off-by: Patrick O’Brien <patrick.obrien@thetradedesk.com>
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: poblahblahblah
Hi @poblahblahblah. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I don't know if we'll switch the default (we should investigate why we diverged from upstream k8s on this), but I think that would need an "action required" note in the release notes, since it changes a default.
I think a better way to do this would be to document the recommendation rather than change the defaults.
edit: I re-read the comment I wrote and I could have worded it better. I would love to know the reasoning behind 10s being the default here.
I'll dig in git next week to see why this was done this way.
In the meantime, would you happen to have good background documentation on that particular setting at hand?
Also, if there is a scale concern which depends on the number of nodes, if we change it, we could consider making the default a function of the number of nodes (something like `groups['k8s_cluster'] | dict2items | length * some factor` )
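The "default as a function of the number of nodes" idea above could look something like the following sketch (purely illustrative: the function name, per-node factor, and bounds are assumptions, not actual kubespray behavior):

```python
def node_status_update_frequency(node_count: int,
                                 seconds_per_node: float = 0.4,
                                 floor_s: int = 10,
                                 ceiling_s: int = 120) -> int:
    """Scale the kubelet status-update period with cluster size.

    Clamped to [floor_s, ceiling_s] so small clusters keep the upstream
    10s default and very large clusters don't grow unbounded.
    """
    return int(max(floor_s, min(ceiling_s, node_count * seconds_per_node)))
```

With these illustrative numbers, a 10-node cluster stays at the 10s floor while a ~300-node cluster lands at the 120s ceiling proposed in this PR.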
/ok-to-test
Are we talking about `--node-status-update-frequency`? Because from what I can tell the default is 10s, like in kubespray: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ (In that case we're unlikely to change this; deviating from Kubernetes defaults has a higher bar.)
c07d60b -> commit which introduced the parameter.
What type of PR is this?
What this PR does / why we need it:
The default setting out of the box for this is 5 minutes. As far as I can tell this is what EKS, AKS, and ACK all use. On our metal clusters we were seeing a lot more GC activity, causing memory contention and higher CPU usage. It was eventually traced to how often the Node Conditions were updating. You can see when we rolled out this change to this cluster in the screenshot below.
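To put the two frequencies in rough numbers, here is a back-of-envelope sketch using the 300-node figure from this PR (it counts only node-status updates per minute; per-update cost is not modeled):

```python
def updates_per_minute(nodes: int, period_s: int) -> int:
    """Node-status updates the API server receives per minute."""
    return nodes * 60 // period_s

# Compare the current 10s default against the proposed 120s on 300 nodes.
for period_s in (10, 120):
    print(f"{period_s:>3}s -> {updates_per_minute(300, period_s)} updates/min")
```

Going from 10s to 120s cuts the update volume by 12x, which is consistent with the large drop in allocation and GC activity reported above.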
I recognize that updating this setting is called out in the Large Deployment docs, but I think 10s is a bit low for most cases. Even on our small clusters this was causing a noticeable difference with Prometheus.
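For reference, the kubespray variable in this PR's title maps to the `nodeStatusUpdateFrequency` field of the kubelet's `KubeletConfiguration`. The fragment below is a sketch assuming the 120s value proposed here:

```yaml
# kubespray inventory variable (value proposed in this PR):
#   kubelet_status_update_frequency: 120s
#
# which corresponds to the kubelet configuration field:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
nodeStatusUpdateFrequency: 120s
```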
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: