This section summarizes the changes in each release.
Also see:
- {kibana-ref}/release-notes.html[{kib} release notes]
- {beats-ref}/release-notes.html[{beats} release notes]
Review important information about the {fleet} and {agent} 8.10.4 release.
Breaking changes can prevent your application from optimal operation and performance. Before you upgrade, review the breaking changes, then mitigate the impact to your application.
- elastic-agent

The `elastic-agent-autodiscover` library has been updated to version 0.6.4, disabling metadata for the `kubernetes.deployment` and `kubernetes.cronjob` fields.

Details
The `elastic-agent-autodiscover` Kubernetes library now defaults to `add_resource_metadata.deployment=false` and `add_resource_metadata.cronjob=false`.

Impact
Pods created from deployments or cronjobs will no longer have the extra metadata field for `kubernetes.deployment` or `kubernetes.cronjob`, respectively. This change was made to avoid the memory impact of keeping the feature enabled in large Kubernetes clusters.
For more information, refer to #3591.
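If you depend on these fields, the previous behavior can be restored through the agent's Kubernetes provider configuration. The fragment below is a sketch for a standalone {agent} configuration; the `add_resource_metadata` keys follow the autodiscover library's naming, so verify them against the provider documentation for your version before relying on them:

```yaml
providers:
  kubernetes:
    add_resource_metadata:
      deployment: true   # re-enable the kubernetes.deployment metadata field
      cronjob: true      # re-enable the kubernetes.cronjob metadata field
```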
- {agent}
- {fleet}
- Fix validation errors in KQL queries. (#168329)
Review important information about the {fleet} and {agent} 8.10.3 release.
-
Fleet Server Insertion of Sensitive Information into Log File (ESA-2023-20)
An issue was discovered in Fleet Server versions >= 8.10.0 and < 8.10.3 where agent enrollment tokens were inserted into the Fleet Server’s log file in plain text.
These enrollment tokens could allow someone to enroll an agent into an agent policy, and potentially use that to retrieve other secrets in the policy, including those for Elasticsearch and third-party services. Alternatively, a threat actor could potentially enroll agents into the clusters and send arbitrary events to Elasticsearch.
The issue is resolved in 8.10.3.
For more information, see our related security announcement.
Important
The known issue that prevents successful upgrades in an air-gapped environment for {agent} versions 8.9.0 to 8.10.2 has been resolved in this release. If you’re using an air-gapped environment, we recommend installing version 8.10.3 or any higher version to avoid being unable to upgrade.
- {agent}
- {fleet}
- {agent}
Review important information about the {fleet} and {agent} 8.10.2 release.
PGP key download fails in an air-gapped environment
Details
Important
If you’re using an air-gapped environment, we recommend installing version 8.10.3 or any higher version to avoid being unable to upgrade.
Starting from version 8.9.0, when {agent} tries to perform an upgrade, it first verifies the binary signature with the key bundled in the agent.
This process has a fallback mechanism that uses the key from https://artifacts.elastic.co/GPG-KEY-elastic-agent instead of the one it already has.
In an air-gapped environment, the agent won’t be able to download the remote key and therefore cannot be upgraded.
Impact
For the upgrade to succeed, the agent needs to download the remote key from a server accessible from the air-gapped environment. Two workarounds are available.
Option 1
If an HTTP proxy is available to the {agents} in your {fleet}, add the proxy settings using environment variables as explained in Proxy Server connectivity using default host variables.
Note that you need to enable HTTP proxy usage for artifacts.elastic.co to bypass this problem, so you can craft the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables to be used exclusively for it.
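As a sketch of Option 1, the variables below route the agent's artifact downloads through an internal proxy while exempting local traffic. The proxy and Fleet Server host names are placeholders for illustration, not real endpoints:

```shell
#!/bin/sh
# Hypothetical proxy endpoint; replace with your own.
HTTP_PROXY="http://proxy.internal:3128"
HTTPS_PROXY="http://proxy.internal:3128"
# Hosts that must NOT go through the proxy (e.g. your Fleet Server).
NO_PROXY="localhost,127.0.0.1,fleet-server.internal"
export HTTP_PROXY HTTPS_PROXY NO_PROXY
echo "proxying artifact downloads via $HTTPS_PROXY"
```

Set these in the environment of the {agent} service so the PGP key download to artifacts.elastic.co goes through the proxy while Fleet Server traffic stays direct.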
Option 2
Because the upgrade URL is not customizable, you have to "trick" the system by pointing https://artifacts.elastic.co/ to another host that serves the file.
The following examples require a server in your air-gapped environment that exposes the key you have downloaded from https://artifacts.elastic.co/GPG-KEY-elastic-agent.
Example 1: Manual
Edit the {agent} server hosts file to add the following content:
<YOUR_HOST_IP> artifacts.elastic.co
On Linux, the hosts file path is /etc/hosts. On Windows, it is C:\Windows\System32\drivers\etc\hosts.
Example 2: Puppet
host { 'elastic-artifacts':
  ensure  => 'present',
  comment => 'Workaround for PGP check',
  ip      => '<YOUR_HOST_IP>',
}
Example 3: Ansible
- name: 'elastic-artifacts'
  hosts: 'all'
  become: 'yes'
  tasks:
    - name: 'Add entry to /etc/hosts'
      lineinfile:
        path: '/etc/hosts'
        line: '<YOUR_HOST_IP> artifacts.elastic.co'
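The manual, Puppet, and Ansible examples above all add the same line. A minimal shell equivalent is sketched below; it runs against a temporary file so it is safe to try, and 192.0.2.10 is a placeholder IP. In production, point it at /etc/hosts and run it as root:

```shell
#!/bin/sh
# Idempotently add an artifacts.elastic.co override to a hosts file:
# append the entry only if no line mentioning the host exists yet.
add_hosts_entry() {
  hosts_file="$1"
  ip="$2"
  grep -q 'artifacts\.elastic\.co' "$hosts_file" 2>/dev/null ||
    printf '%s artifacts.elastic.co\n' "$ip" >> "$hosts_file"
}

demo_hosts="$(mktemp)"
add_hosts_entry "$demo_hosts" "192.0.2.10"
add_hosts_entry "$demo_hosts" "192.0.2.10"  # second call is a no-op
cat "$demo_hosts"
```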
- {agent}
-
- Updated Go version to 1.20.8. #3393
- {fleet}
- Fixed the force delete package API: fixed the validation check to reject the request if the package is used by agents. (#166623)
Review important information about the {fleet} and {agent} 8.10.1 release.
PGP key download fails in an air-gapped environment
Details
Important
If you’re using an air-gapped environment, we recommend installing version 8.10.3 or any higher version to avoid being unable to upgrade.
Starting from version 8.9.0, when {agent} tries to perform an upgrade, it first verifies the binary signature with the key bundled in the agent.
This process has a fallback mechanism that uses the key from https://artifacts.elastic.co/GPG-KEY-elastic-agent instead of the one it already has.
In an air-gapped environment, the agent won’t be able to download the remote key and therefore cannot be upgraded.
Impact
For the upgrade to succeed, the agent needs to download the remote key from a server accessible from the air-gapped environment. Two workarounds are available.
Option 1
If an HTTP proxy is available to the {agents} in your {fleet}, add the proxy settings using environment variables as explained in Proxy Server connectivity using default host variables.
Note that you need to enable HTTP proxy usage for artifacts.elastic.co to bypass this problem, so you can craft the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables to be used exclusively for it.
Option 2
Because the upgrade URL is not customizable, you have to "trick" the system by pointing https://artifacts.elastic.co/ to another host that serves the file.
The following examples require a server in your air-gapped environment that exposes the key you have downloaded from https://artifacts.elastic.co/GPG-KEY-elastic-agent.
Example 1: Manual
Edit the {agent} server hosts file to add the following content:
<YOUR_HOST_IP> artifacts.elastic.co
On Linux, the hosts file path is /etc/hosts. On Windows, it is C:\Windows\System32\drivers\etc\hosts.
Example 2: Puppet
host { 'elastic-artifacts':
  ensure  => 'present',
  comment => 'Workaround for PGP check',
  ip      => '<YOUR_HOST_IP>',
}
Example 3: Ansible
- name: 'elastic-artifacts'
  hosts: 'all'
  become: 'yes'
  tasks:
    - name: 'Add entry to /etc/hosts'
      lineinfile:
        path: '/etc/hosts'
        line: '<YOUR_HOST_IP> artifacts.elastic.co'
- {agent}
- Improve logging during {agent} upgrades. #3382
Review important information about the {fleet} and {agent} 8.10.0 release.
Breaking changes can prevent your application from optimal operation and performance. Before you upgrade, review the breaking changes, then mitigate the impact to your application.
{agent} diagnostics unavailable with {fleet-server} below 8.10.0.
Details
The mechanism that {fleet} uses to generate diagnostic bundles has been updated. To collect {agent} diagnostics, {fleet-server} needs to be at version 8.10.0 or higher.
Impact
If you need to access a diagnostic bundle for an agent, ensure that {fleet-server} is at the required version.
PGP key download fails in an air-gapped environment
Details
Important
If you’re using an air-gapped environment, we recommend installing version 8.10.3 or any higher version to avoid being unable to upgrade.
Starting from version 8.9.0, when {agent} tries to perform an upgrade, it first verifies the binary signature with the key bundled in the agent.
This process has a fallback mechanism that uses the key from https://artifacts.elastic.co/GPG-KEY-elastic-agent instead of the one it already has.
In an air-gapped environment, the agent won’t be able to download the remote key and therefore cannot be upgraded.
Impact
For the upgrade to succeed, the agent needs to download the remote key from a server accessible from the air-gapped environment. Two workarounds are available.
Option 1
If an HTTP proxy is available to the {agents} in your {fleet}, add the proxy settings using environment variables as explained in Proxy Server connectivity using default host variables.
Note that you need to enable HTTP proxy usage for artifacts.elastic.co to bypass this problem, so you can craft the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables to be used exclusively for it.
Option 2
Because the upgrade URL is not customizable, you have to "trick" the system by pointing https://artifacts.elastic.co/ to another host that serves the file.
The following examples require a server in your air-gapped environment that exposes the key you have downloaded from https://artifacts.elastic.co/GPG-KEY-elastic-agent.
Example 1: Manual
Edit the {agent} server hosts file to add the following content:
<YOUR_HOST_IP> artifacts.elastic.co
On Linux, the hosts file path is /etc/hosts. On Windows, it is C:\Windows\System32\drivers\etc\hosts.
Example 2: Puppet
host { 'elastic-artifacts':
  ensure  => 'present',
  comment => 'Workaround for PGP check',
  ip      => '<YOUR_HOST_IP>',
}
Example 3: Ansible
- name: 'elastic-artifacts'
  hosts: 'all'
  become: 'yes'
  tasks:
    - name: 'Add entry to /etc/hosts'
      lineinfile:
        path: '/etc/hosts'
        line: '<YOUR_HOST_IP> artifacts.elastic.co'
Filtering Elastic Agents in Kibana generates an "Error fetching agents" message
Details
A {kibana-ref}/kuery-query.html[KQL query] in a Fleet search field now returns a 400 error when the query is not valid.
Previously, the search fields would accept any type of query, but with the merge of #161064, any KQL query sent to {fleet} must have a valid field name; otherwise it returns an error.
Cause
Entering an invalid KQL query on one of the {fleet} KQL search fields or through the API produces the error.
Affected search fields in the {fleet} UI:
- Agent list
- Agent policies
- Enrollment Keys
Affected endpoints in the [fleet-api-docs] (these are the endpoints that accept the ListWithKuery parameter):

- GET api/fleet/agents
- GET api/fleet/agent_status
- GET api/fleet/agent_policies
- GET api/fleet/package_policies
- GET api/fleet/enrollment_api_keys
Impact
To avoid getting the 400 error, the queries must be valid.
For instance, entering the query 8.10.0 results in an error. The correct query is local_metadata.agent.version="8.10.0".
As another example, when viewing the Agents tab in Fleet, typing a hostname such as a0c8c88ef2f5 in the search field results in an error. The correct query needs a valid field name, taken from among the allowed ones, for example local_metadata.host.hostname: a0c8c88ef2f5.
The list of available field names is visible by clicking on any of the search fields.
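To illustrate, the snippet below contrasts an invalid and a valid kuery value and shows (commented out) one way such a query could be passed to the agents endpoint; the Kibana URL, credentials, and call shape are illustrative assumptions, not a prescribed invocation:

```shell
#!/bin/sh
# A bare value like '8.10.0' now yields HTTP 400 because it names no field.
BAD_KUERY='8.10.0'
# Prefixing the value with an allowed field name makes the query valid.
GOOD_KUERY='local_metadata.agent.version="8.10.0"'

# Hypothetical call shape; curl -G with --data-urlencode handles the
# URL encoding of the query string:
#   curl -G -u "$USER:$PASS" -H 'kbn-xsrf: true' \
#     --data-urlencode "kuery=$GOOD_KUERY" \
#     "$KIBANA_URL/api/fleet/agents"
echo "kuery=$GOOD_KUERY"
```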
The 8.10.0 release added the following new and notable features.
- {fleet}
- {fleet-server}
- {agent}
- Report the version from the {agent} package instead of the agent binary to enhance the release process. #2908
- Implement tamper protection for {elastic-endpoint} uninstall use cases. #2781
- Add component-level diagnostics and CPU profiling. #3118
- Improve the upgrade process to use an upgraded version of the Watcher to ensure a successful upgrade. #3140 #2873
- {fleet}
- Add support for runtime fields. #161129
- {fleet-server}
- {agent}
- Redundant calls to /api/fleet/setup were removed in favor of {kib}-initiated calls. #2985 #2910
- Updated Go version to 1.20.7. #3177
- Add runtime prevention to prevent {elastic-defend} from running if {agent} is not installed in the default location. #3114
- Add a new flag complete to agent metadata to signal that the running instance is synthetics-capable. #3190 #1754
- Add support for setting GOMAXPROCS to limit CPU usage through the agent policy. #3179
- Add logging to the restart step of the {agent} upgrade rollback process. #3245 #3305
- {fleet}
- {agent}
- Don’t trigger an Indicator of Compromise (IoC) alert on Windows uninstall. #3014 #2970
- Fix credential redaction in diagnostic bundle collection. #3165
- Ensure that {agent} upgrades are rolled back even when the upgraded agent crashes immediately and repeatedly. #3220 #3123
- Ensure that {agent} is restarted during rollback. #3268
- Fix how the diagnostics command handles the custom path to save the diagnostics. #3340 #3339