xdsclient: race around resource subscriptions and unsubscriptions #8125
Full debugging logs: https://paste.googleplex.com/6320159020810240, from the repro in purnesh42H#16.
This repro demonstrates an issue where closing the first client (cc1) in a separate goroutine, after a full xDS lifecycle and a successful RPC, can lead to problems for a subsequent client (cc2). Here's the breakdown:
The key problem here is that cc2's subscription request goes out with the same version and nonce that were received while cc1 was subscribed.
One way to solve this problem could be to modify the stream implementation so that it forgets the last seen version for a resource type once it no longer has any active subscriptions for that type.
The xDS protocol spec does say that the xDS client should send the last seen version in DiscoveryRequests after the initial request (https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol#ack-nack-and-resource-type-instance-version). But if the StreamImpl is no longer tracking any active subscriptions for a particular resource type, isn't it logical for it to "forget" the last known version for that type on that stream? At that point we have already finished cleaning up all the caches, so when a new subscription for that type comes in, it is effectively a fresh start for that type from the stream's perspective.
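To make that idea concrete, here is a minimal sketch of forgetting the last seen version, assuming a hypothetical per-type state struct inside the stream. The names resourceTypeState and unsubscribe are illustrative and are not the actual grpc-go types:

```go
// Minimal sketch, not the actual grpc-go types: per-resource-type state an
// ADS stream could keep, where the recorded version (and nonce) is dropped
// once the last subscription for that type goes away, so the next
// subscription starts fresh instead of looking like an ACK of old state.
package sketch

type resourceTypeState struct {
	version       string          // last version ACKed for this type on this stream
	nonce         string          // last nonce received for this type on this stream
	subscriptions map[string]bool // currently subscribed resource names of this type
}

func (s *resourceTypeState) unsubscribe(name string) {
	delete(s.subscriptions, name)
	if len(s.subscriptions) == 0 {
		// No active subscriptions remain for this type on this stream:
		// forget the last seen version and nonce so that a future
		// DiscoveryRequest for this type does not carry stale state.
		s.version = ""
		s.nonce = ""
	}
}
```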
However, the race happens because, even though the unsubscription callback from cc1 executes after the re-subscription from cc2, there is no guarantee that the messages are sent on the wire in the correct sequence (unsubscription, then re-subscription). If the re-subscription is sent before the unsubscription request has completed, the server will not send the resource, because the request carries the same version info from cc1. So, a better solution would be to retain the cached resource until the unsubscription request has been fully processed by the server.
The gRPC C++ implementation addressed a similar issue in PR #38698, where the cleanup of unwatched resources (cache eviction) is performed just before sending the request message that effectively unsubscribes the resource from the xDS server. However, I believe an even more reliable approach for grpc-go is to perform this cleanup after receiving a response from the management server that acknowledges the state change: specifically, at the end of processing in handleResourceUpdate (for an update that implicitly confirms the unsubscription) or handleResourceDoesNotExist (which explicitly confirms the resource is no longer being sent). I've implemented this in the same repro PR purnesh42H@3240f37#diff-d83abbbb9296e17020a2bced581392dc6ca31c8100f8987bee1b96eb2cbc2402 and this change has fixed the test case that was failing due to this race condition. It's worth noting that this modification will alter the expectations for some existing tests, as we are no longer immediately deleting resources from the cache or closing xDS channels upon triggering an unwatch operation.
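A rough sketch of that deferred-eviction idea, under assumptions of my own (the names resourceState, markForDeletion, and evictConfirmedDeletions are hypothetical and do not match the actual types in the linked commit):

```go
// Rough sketch under assumed names: keep the cache entry around when the
// last watcher goes away, and only evict it once a server response confirms
// the state change.
package sketch

type resourceState struct {
	cached        any  // last received resource; retained after the last watcher is gone
	pendingDelete bool // set on unwatch; evicted only after the server confirms
}

// Called when the last watcher for a resource is removed. Instead of
// deleting the entry immediately, mark it so the version/nonce bookkeeping
// stays intact until the unsubscription has actually taken effect.
func markForDeletion(cache map[string]*resourceState, name string) {
	if st, ok := cache[name]; ok {
		st.pendingDelete = true
	}
}

// Called at the end of processing a response from the management server for
// this resource type (handleResourceUpdate / handleResourceDoesNotExist in
// the description above). Only now are the marked entries evicted.
func evictConfirmedDeletions(cache map[string]*resourceState) {
	for name, st := range cache {
		if st.pendingDelete {
			delete(cache, name)
		}
	}
}
```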
We are seeing a race with the xDS client when the application creates and shuts down multiple gRPC channels at the same time, repeatedly. One possible way to trigger this race is for an application to open multiple gRPC channels to the same target (around the same time), make one RPC on each channel, shut it down, and repeat this process over and over again.
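For illustration, a reproducer could look roughly like the following sketch. It assumes a valid xDS bootstrap configuration is in place and that an ordinary unary RPC (the helloworld Greeter here, as an example) can be made against the xDS target; the target name xds:///foo, the credentials, and the iteration count are made up:

```go
// Rough repro sketch: repeatedly create two channels to the same xDS target
// around the same time, make one RPC on each, then shut both down.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "google.golang.org/grpc/examples/helloworld/helloworld"
	_ "google.golang.org/grpc/xds" // registers the "xds" resolver and balancers
)

func oneRPCAndClose(target string) {
	cc, err := grpc.NewClient(target, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Printf("NewClient: %v", err)
		return
	}
	defer cc.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if _, err := pb.NewGreeterClient(cc).SayHello(ctx, &pb.HelloRequest{Name: "ping"}); err != nil {
		log.Printf("RPC failed: %v", err)
	}
}

func main() {
	const target = "xds:///foo" // same xDS target for both channels
	for i := 0; i < 100; i++ {
		done := make(chan struct{})
		go func() { oneRPCAndClose(target); close(done) }()
		oneRPCAndClose(target)
		<-done
	}
}
```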
The sequence of events that trigger the race is as follows:
- Channel 1 is created to target foo. It goes through the whole xDS flow successfully and is then shut down.
- Channel 2 is created to the same target foo.
- As part of channel 1's shutdown, it starts unsubscribing from the xDS resources. See:
  - grpc-go/xds/internal/xdsclient/authority.go, line 680 at aa629e0
  - grpc-go/xds/internal/xdsclient/transport/ads/ads_stream.go, line 217 at aa629e0
  - grpc-go/xds/internal/xdsclient/authority.go, line 715 at aa629e0
- Channel 2 starts out and gets a reference to the existing xDS client (when it attempts to create a new one). It registers a watch for the same LDS resource as channel 1. The resource is no longer present in the xDS client's cache (it was deleted as part of the unsubscription by channel 1), so the client updates the cache and creates a placeholder for this resource. See:
  - grpc-go/xds/internal/xdsclient/authority.go, line 625 at aa629e0
- When the ADS stream implementation gets around to processing these changes (the unsubscription by channel 1 and the subscription by channel 2), it sees that there is a single LDS resource to be requested, and it also sees a nonce and a version for that resource type. These are the same nonce and version that were received when the same resource was requested on behalf of channel 1. Such a request is sent out, and the management server does not send a response, because it has already responded for the same resource name with the same version and nonce on the same ADS stream.

At this point channel 2 is stuck until the LDS resource-not-found timer fires after 15s, and from that point on RPCs on the channel start to fail.

The fundamental reason behind this race is that the resource is deleted from the xDS client cache even before the corresponding unsubscription request is sent out on the ADS stream. Also important to note is that some resource state is maintained in the ADS stream implementation while some is maintained in the xDS client, and the two can go out of sync.
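For illustration only, the LDS request that ends up being sent after the race would carry roughly the following fields (sketched with go-control-plane's DiscoveryRequest type; the version and nonce values are made up). Because it repeats the version and nonce the server already sent on this stream, the server treats it as an ACK rather than a new subscription:

```go
// Illustrative only: the shape of the LDS request sent after the race, with
// made-up version and nonce values. It reuses the version/nonce already
// ACKed on behalf of channel 1 on the same ADS stream.
package main

import (
	"fmt"

	v3discoverypb "github.com/envoyproxy/go-control-plane/envoy/service/discovery/v3"
)

func main() {
	req := &v3discoverypb.DiscoveryRequest{
		TypeUrl:       "type.googleapis.com/envoy.config.listener.v3.Listener",
		ResourceNames: []string{"foo"}, // the LDS resource both channels want
		VersionInfo:   "1",             // stale: version already ACKed on this stream
		ResponseNonce: "A1",            // stale: nonce from the response already sent on this stream
	}
	fmt.Println(req)
	// A state-of-the-world server that already sent version "1" of "foo" on
	// this stream treats this as an ACK, not a new subscription, and stays
	// silent; channel 2 only fails after the 15s resource-not-found timer.
}
```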