kubectl: Cache Verifier.HasSupport calls #132003
base: master
Conversation
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Hi @tchap. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here.
/sig cli
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: tchap. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
/ok-to-test
```go
func (cv *cachingVerifier) HasSupport(gvk schema.GroupVersionKind) error {
	// Try to get the cached value.
	// Once cached, the value never changes.
```
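For context, a full wrapper along these lines might look roughly like the following self-contained sketch. The names beyond `cachingVerifier.HasSupport` are assumptions, a stand-in `GroupVersionKind` replaces the real `schema.GroupVersionKind`, and the locking strategy is illustrative rather than the PR's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// GroupVersionKind is a stand-in for schema.GroupVersionKind from
// k8s.io/apimachinery, used here to keep the sketch self-contained.
type GroupVersionKind struct {
	Group, Version, Kind string
}

// Verifier mirrors the interface being wrapped: HasSupport reports whether
// the server supports a given kind, returning nil on success.
type Verifier interface {
	HasSupport(gvk GroupVersionKind) error
}

// cachingVerifier memoizes HasSupport results per GVK so the expensive
// schema decoding in the delegate runs at most once per kind.
type cachingVerifier struct {
	delegate Verifier
	mu       sync.Mutex
	cache    map[GroupVersionKind]error
}

func newCachingVerifier(delegate Verifier) *cachingVerifier {
	return &cachingVerifier{
		delegate: delegate,
		cache:    map[GroupVersionKind]error{},
	}
}

func (cv *cachingVerifier) HasSupport(gvk GroupVersionKind) error {
	cv.mu.Lock()
	defer cv.mu.Unlock()
	// Try the cached value first; within a single kubectl invocation the
	// result is assumed not to change.
	if err, ok := cv.cache[gvk]; ok {
		return err
	}
	// Cache miss: perform the expensive check once and remember the result.
	err := cv.delegate.HasSupport(gvk)
	cv.cache[gvk] = err
	return err
}

// countingVerifier counts delegate calls, to demonstrate the caching.
type countingVerifier struct{ calls int }

func (c *countingVerifier) HasSupport(GroupVersionKind) error {
	c.calls++
	return nil
}

func main() {
	underlying := &countingVerifier{}
	cv := newCachingVerifier(underlying)
	gvk := GroupVersionKind{Group: "apps", Version: "v1", Kind: "Deployment"}
	for i := 0; i < 5; i++ {
		_ = cv.HasSupport(gvk)
	}
	fmt.Println(underlying.calls) // prints 1
}
```

Holding the mutex across the delegate call serializes first-time checks, but guarantees each kind is decoded only once even under concurrency.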
This is only partially correct, because OpenAPI specs change over time. At some point, this cache starts serving stale data.
Yeah, I need to check more carefully how OpenAPI caching works and whether the schema can actually change within a single kubectl invocation. This change is related to apply, so I may try to enable it in that command only, so that I don't have to worry about other cases.
Well, unless I am missing something, the OpenAPI schema is fetched once and cached anyway, so this doesn't make the situation any worse. The wrapper only caches the results of HasSupport, since processing the schema takes time.
Force-pushed from b9688d4 to 7c07abb.
Force-pushed from 7c07abb to d2fb076.
/cc @seans3
/retest-required
1 similar comment
/retest-required
What type of PR is this?
/kind feature
What this PR does / why we need it:
The underlying implementation decodes OpenAPI schema on every HasSupport call. This causes these calls to be expensive and noticeable when many resources are being applied.
This patch wraps the verifier with a thread-safe cache so that when many resources of the same kind are being checked, only the first call includes schema decoding.
This solution is specifically limited to the kubectl codebase without any changes required in client-go, for now, although that is where the issue is actually originating.
This can be removed transparently later if deemed unnecessary; it is more of a first step toward addressing the issue.
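The description calls for a cache that is both thread-safe and guarantees that only the first call pays for schema decoding. One way to get both properties without serializing unrelated kinds, sketched here with hypothetical names rather than the PR's actual code, is a per-kind sync.Once:

```go
package main

import (
	"fmt"
	"sync"
)

// gvk is a stand-in for schema.GroupVersionKind, kept local so the
// sketch is self-contained.
type gvk struct{ Group, Version, Kind string }

// entry holds the memoized result for one kind; once.Do ensures the
// expensive check runs at most once per kind.
type entry struct {
	once sync.Once
	err  error
}

// onceVerifier guards the entries map with a short-lived mutex and
// delegates the expensive work to a per-entry sync.Once, so concurrent
// callers for different kinds never block each other.
type onceVerifier struct {
	check   func(gvk) error // the expensive underlying HasSupport
	mu      sync.Mutex
	entries map[gvk]*entry
}

func (v *onceVerifier) HasSupport(k gvk) error {
	v.mu.Lock()
	e, ok := v.entries[k]
	if !ok {
		e = &entry{}
		v.entries[k] = e
	}
	v.mu.Unlock()
	// The lock is not held during the expensive check; sync.Once still
	// guarantees the check runs at most once for this kind.
	e.once.Do(func() { e.err = v.check(k) })
	return e.err
}

func main() {
	calls := 0
	v := &onceVerifier{
		check:   func(gvk) error { calls++; return nil },
		entries: map[gvk]*entry{},
	}
	k := gvk{"apps", "v1", "Deployment"}

	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); _ = v.HasSupport(k) }()
	}
	wg.Wait()
	fmt.Println(calls) // prints 1
}
```

Compared with holding a single mutex across the check, this costs one extra allocation per kind but keeps first-time checks for different kinds concurrent.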
Which issue(s) this PR fixes:
Fixes kubernetes/kubectl#1682
Does this PR introduce a user-facing change?