The Kubernetes secrets provider should regenerate the agent configuration when a secrets change is detected #4168
Comments
Pinging @elastic/elastic-agent (Team:Elastic-Agent)
Hey @cmacknz, is this issue still ongoing? Is this still a priority?
@constanca-m @cmacknz We're still running into this issue.
The ContextProvider has

But I am having trouble understanding what this mapping is. Would this be our secret cache? I tried to check the other mappings using the same function, but they did not make it any clearer. Is there a way to see the output of the context, like using
You would want one mapping for each secret the provider is populating in the policy. The leader election provider seems like the simplest example:

Lines 113 to 115 in 4b404b1
This is dynamically setting
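To illustrate the pattern being referenced, here is a minimal sketch of a provider pushing a mapping through `comm.Set` when the leadership state changes. The type names (`contextProviderComm`, `memComm`, `setLeadership`) are hypothetical stand-ins, not the actual elastic-agent types:

```go
package main

import "fmt"

// contextProviderComm is a hypothetical stand-in for the agent's comm
// interface; the real one in the elastic-agent repo has more methods.
type contextProviderComm interface {
	Set(mapping map[string]interface{}) error
}

// memComm records the last mapping set, for illustration only.
type memComm struct {
	current map[string]interface{}
}

func (m *memComm) Set(mapping map[string]interface{}) error {
	m.current = mapping
	return nil
}

// setLeadership mirrors the pattern in the referenced lines: push a fresh
// mapping through comm.Set whenever the leadership state changes.
func setLeadership(comm contextProviderComm, leader bool) error {
	return comm.Set(map[string]interface{}{"leader": leader})
}

func main() {
	comm := &memComm{}
	if err := setLeadership(comm, true); err != nil {
		panic(err)
	}
	fmt.Println(comm.current["leader"]) // true
}
```

The key point is that the mapping is pushed to the agent on change, rather than waiting to be pulled during an unrelated configuration regeneration.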
Thank you @cmacknz. I tried to do it like this:

```go
if updatedCache := p.updateCache(); updatedCache {
	mapping := map[string]interface{}{}
	p.secretsCacheMx.RLock()
	for key, data := range p.secretsCache {
		mapping[key] = data.value
	}
	p.secretsCacheMx.RUnlock()
	err := comm.Set(mapping)
	...
```

All this does is add an entry with the secret's key and value (just the string, not the full value with the last-accessed timestamp), but it ends up breaking something in the agent, and when I look at Discover nothing is there. I have a warning in the logs that was not present in the version I tested before, but it does not help track down what is wrong in the mapping:
You'll need to turn on debug logging to see what is wrong. The event can't be accepted by Elasticsearch for some reason (a mapping conflict is the most common reason).
OK, I will do that. I have another question, though. When we use the They are not part of the config.
I think that is likely part of the reason why the I think what you probably want is to implement both Then as the cache is refreshed you can call @blakerouse might have a better way to deal with this, he's more familiar with the provider implementations than I am.
I think a change needs to be made to the interface to support this. This should work:
That should trigger an update when something changes, without having to call Set which this provider should not do. |
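A rough sketch of the idea under discussion: the comm interface gains a way to signal that a change occurred, without pushing a mapping via `Set`. The method name follows the discussion below, but the surrounding types here are hypothetical, not the interface actually defined in the agent:

```go
package main

import "fmt"

// signalComm sketches a comm interface extended with a Signal method; the
// real change is made in the agent repo, and only the method name here is
// taken from the discussion.
type signalComm interface {
	Set(map[string]interface{}) error
	Signal()
}

// countingComm counts Signal calls so the effect is observable here.
type countingComm struct {
	signals int
}

func (c *countingComm) Set(map[string]interface{}) error { return nil }
func (c *countingComm) Signal()                          { c.signals++ }

func main() {
	var comm signalComm = &countingComm{}
	// A fetch-style provider that notices a changed secret would simply call:
	comm.Signal()
	fmt.Println(comm.(*countingComm).signals) // 1
}
```

This keeps the fetch-on-demand behavior intact while giving the provider a way to force the agent to re-resolve variables when a cached secret changes.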
I went ahead and created PR #4368 to add the Signal method. That will allow the Kubernetes secrets provider to just call `Signal`.
Thank you @blakerouse for such quick action. I will have a look and test if it works, but at first look it seems to be exactly what is needed.
The Kubernetes secrets provider today implements the Fetch context provider interface, which provides the secret value whenever the agent configuration is being regenerated for another reason.
elastic-agent/internal/pkg/core/composable/providers.go
Lines 9 to 17 in 4b404b1
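As an approximation of the interface described above (paraphrased for illustration; the names and exact signature in `providers.go` are authoritative, and `staticSecrets` is a toy implementation, not the real provider):

```go
package main

import "fmt"

// fetchContextProvider approximates the fetching interface described above;
// treat the names here as assumptions, not the authoritative definition.
type fetchContextProvider interface {
	// Fetch returns the value for a key such as
	// "kubernetes_secrets.namespace.secret_name.key" and whether it exists.
	Fetch(key string) (string, bool)
}

// staticSecrets is a toy implementation backed by a fixed map.
type staticSecrets map[string]string

func (s staticSecrets) Fetch(key string) (string, bool) {
	v, ok := s[key]
	return v, ok
}

func main() {
	var p fetchContextProvider = staticSecrets{
		"kubernetes_secrets.default.mysecret.token": "abc123",
	}
	v, ok := p.Fetch("kubernetes_secrets.default.mysecret.token")
	fmt.Println(v, ok) // abc123 true
}
```

The limitation follows directly from this shape: `Fetch` is only invoked when the agent is already resolving variables, so a rotated secret is not picked up until something else triggers regeneration.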
In cases where secrets are short-lived and rotated frequently, this can result in the agent policy using stale secret values. In general, there is always a risk that a secret remains stale if the agent configuration is not regenerated because of another detected change in the system, for example a policy change.
The Kubernetes secrets provider should instead implement the Comm (or communicating) context provider interface, which informs the agent when a change is detected so the running configuration can be regenerated.
elastic-agent/internal/pkg/core/composable/providers.go
Lines 18 to 25 in 4b404b1
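The communicating pattern can be sketched as follows. This is illustrative only: the type names are stand-ins for the real interfaces in `providers.go`, and `runWithChanges` simulates a provider's `Run` loop over an observed stream of values:

```go
package main

import "fmt"

// providerComm is an illustrative stand-in for the comm handle the agent
// passes to a communicating provider.
type providerComm interface {
	Set(map[string]interface{}) error
}

// recordingComm counts Set calls so change-driven pushes are observable.
type recordingComm struct{ sets int }

func (r *recordingComm) Set(map[string]interface{}) error {
	r.sets++
	return nil
}

// runWithChanges simulates a provider Run loop over an observed stream of
// values, calling Set only when the value differs from the last one pushed.
func runWithChanges(comm providerComm, values []string) error {
	var last string
	for _, v := range values {
		if v != last {
			if err := comm.Set(map[string]interface{}{"value": v}); err != nil {
				return err
			}
			last = v
		}
	}
	return nil
}

func main() {
	comm := &recordingComm{}
	_ = runWithChanges(comm, []string{"a", "a", "b"})
	fmt.Println(comm.sets) // 2
}
```

The design difference from the fetch interface is push versus pull: the provider notifies the agent of changes as they happen instead of waiting to be asked.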
For an example, this is the way the Kubernetes leader election provider is implemented:
elastic-agent/internal/pkg/composable/providers/kubernetesleaderelection/kubernetes_leaderelection.go
Lines 48 to 49 in 4b404b1
elastic-agent/internal/pkg/composable/providers/kubernetesleaderelection/kubernetes_leaderelection.go
Lines 112 to 121 in 4b404b1
Since the secrets provider now implements an optional cache, whenever the cache refreshes we could detect if a value has changed and notify the agent.
elastic-agent/internal/pkg/composable/providers/kubernetessecrets/kubernetes_secrets.go
Lines 104 to 117 in 4b404b1
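The change-detection step on cache refresh could look roughly like the following. The helper name and map shapes are hypothetical, chosen to illustrate the idea rather than the provider's real identifiers:

```go
package main

import "fmt"

// cacheChanged is a hypothetical helper: when the secrets cache is
// refreshed, diff the old and new values and report whether anything
// changed, so the agent can be told to regenerate its configuration.
func cacheChanged(old, refreshed map[string]string) bool {
	if len(old) != len(refreshed) {
		return true
	}
	for k, v := range refreshed {
		if prev, ok := old[k]; !ok || prev != v {
			return true
		}
	}
	return false
}

func main() {
	old := map[string]string{"default/mysecret/token": "v1"}
	rotated := map[string]string{"default/mysecret/token": "v2"}
	fmt.Println(cacheChanged(old, old))     // false
	fmt.Println(cacheChanged(old, rotated)) // true
}
```

When the diff reports a change, the provider would notify the agent (for example via the push-style comm interface discussed above) instead of silently serving the new value on the next fetch.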
Ideally we would use a watcher on the secret values, but this has been found to have performance and memory usage concerns.