Release notes

This section summarizes the changes in each release.

Also see:

  • {kibana-ref}/release-notes.html[{kib} release notes]

  • {beats-ref}/release-notes.html[{beats} release notes]

{fleet} and {agent} 8.6.2

Review important information about the {fleet} and {agent} 8.6.2 release.

Known issues

Osquery live query results can take up to five minutes to show up in {kib}.

Details
A known issue in {agent} may prevent live query results from being available in the {kib} UI even though the results have been successfully sent to {es}. For more information, refer to #2066.

Impact
Be aware that the live query results shown in {kib} may be delayed by up to 5 minutes.

Adding a {fleet-server} integration to an agent results in panic if the agent was not bootstrapped with a {fleet-server}.

Details

A panic occurs because the {agent} does not have a fleet.server entry in the fleet.enc configuration file. When this happens, the agent fails with a message like:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x557b8eeafc1d]
goroutine 86 [running]:
github.com/elastic/elastic-agent/internal/pkg/agent/application.FleetServerComponentModifier.func1({0xc000652f00, 0xa, 0x10}, 0x557b8fa8eb92?)
...

For more information, refer to #2170.

Impact

To work around this problem, uninstall the {agent} and install it again with {fleet-server} enabled during the bootstrap process.
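
For example, on Linux the reinstall might look like the following. This is a minimal sketch: the {es} URL, service token, and policy ID are placeholders to replace with your own values.

sudo elastic-agent uninstall
sudo ./elastic-agent install \
  --fleet-server-es=https://elasticsearch.example.com:9200 \
  --fleet-server-service-token=<service-token> \
  --fleet-server-policy=<fleet-server-policy-id>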

Installing {agent} on MacOS Ventura may fail if Full Disk Access has not been granted to the application used for installation.

Details
This issue occurs on MacOS Ventura when Full Disk Access is not granted to the application that runs the installation command. This could be either the Terminal or a custom package that a user has built to distribute {agent}.

For more information, refer to #2103.

Impact
{agent} fails to install and produces the message "Error: failed to fix permissions: chown elastic-agent.app: operation not permitted". Ensure that the application used to install {agent} (for example, the Terminal or a custom package) has Full Disk Access before running sudo ./elastic-agent install.

{agent} upgrades scheduled for a future time do not run.

Details
A known issue in {agent} may prevent upgrades scheduled to execute at a later time from running. For more information, refer to #2343.

Impact
{kib} may show an agent as being stuck with the Updating status. If the scheduled start time has passed, you may force the agent to run by sending it any action (excluding an upgrade action), such as a change to the policy or the log level.
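
As a sketch, one way to send such an action is to change the agent's log level through the {kib} {fleet} agent actions API. The endpoint shape and host below are assumptions; substitute your own {kib} URL, credentials, and agent ID.

curl -X POST "https://kibana.example.com:5601/api/fleet/agents/<agent-id>/actions" \
  -u elastic:<password> \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"action": {"type": "SETTINGS", "data": {"log_level": "debug"}}}'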

{fleet} ignores custom server.* attributes provided through integration settings.

Details
{fleet} will ignore any custom server.* attributes provided through the custom configurations YAML block of the integration. For more information, refer to #2303.

Impact
Custom YAML settings are silently ignored by {fleet}. Settings with input blocks, such as Max agents, are still effective.
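
For example, a block like the following placed in the custom configurations YAML area is silently dropped in 8.6.x, while input-block settings such as Max agents continue to apply. The key shown here is illustrative only.

server.timeouts:
  checkin_long_poll: 5m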

{agent} does not load {fleet} configuration when upgrading to {agent} version 8.6.x from version 8.2.x or earlier.

Details
{agent} will not load the fleet.yml information after upgrading to {agent} version 8.6.x from version 8.2.x or earlier. This is because the unencrypted configuration stored in fleet.yml is not migrated correctly to the encrypted configuration store fleet.enc. For a managed agent, the symptom manifests as an inability to connect to {fleet} after the upgrade, with a log like:

Error: fleet configuration is invalid: empty access token
...

For more information, refer to #2249.

Impact
{agent} loses the ability to connect back to {fleet} after the upgrade. A fix for this issue is available in version 8.7.0 and higher; on version 8.6, the agent must be re-enrolled to connect back to {fleet}.
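
To re-enroll on 8.6, run something like the following on the affected host, substituting your own {fleet-server} URL and enrollment token.

sudo elastic-agent enroll \
  --url=https://fleet-server.example.com:8220 \
  --enrollment-token=<enrollment-token>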

Enhancements

{fleet}
  • Adds the ability to run the agent policy schema upgrade in batches during {fleet} setup. Also adds the xpack.fleet.setup.agentPolicySchemaUpgradeBatchSize config setting #150688

Bug fixes

{fleet}
  • Fix the {fleet} API returning a maximum of only 20 installed integrations #150780

  • Fix the display of available updates when beta integrations are turned off #149515 #149486

{fleet-server}
  • Prevent {fleet-server} from crashing by allowing the Warn log level to be specified as either "warning" or "warn" #2328 #2331

{agent}
  • Ignore {fleet} connectivity state when considering whether an upgrade should be rolled back. Avoids unnecessary upgrade failures due to transient network errors #2239

  • Preserve persistent process state between upgrades. The {filebeat} registry is now correctly preserved during {agent} upgrades. #2136 #2207

  • Enable Node.js engine validation when bundling synthetics #2249 #2256 #2225

  • Guarantee that services are stopped before they are started. Fixes occasional upgrade failures when Elastic Defend is installed #2226

  • Fix an issue where inputs in {beats} started by {agent} can be incorrectly disabled. This primarily occurs when changing the log level. #2232 #34504

{fleet} and {agent} 8.6.1

Review important information about the {fleet} and {agent} 8.6.1 release.

Known issues

Osquery live query results can take up to five minutes to show up in {kib}.

Details
A known issue in {agent} may prevent live query results from being available in the {kib} UI even though the results have been successfully sent to {es}. For more information, refer to #2066.

Impact
Be aware that the live query results shown in {kib} may be delayed by up to 5 minutes.

Adding a {fleet-server} integration to an agent results in panic if the agent was not bootstrapped with a {fleet-server}.

Details

A panic occurs because the {agent} does not have a fleet.server entry in the fleet.enc configuration file. When this happens, the agent fails with a message like:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x557b8eeafc1d]
goroutine 86 [running]:
github.com/elastic/elastic-agent/internal/pkg/agent/application.FleetServerComponentModifier.func1({0xc000652f00, 0xa, 0x10}, 0x557b8fa8eb92?)
...

For more information, refer to #2170.

Impact

To work around this problem, uninstall the {agent} and install it again with {fleet-server} enabled during the bootstrap process.

Changing the {agent} log level can incorrectly disable inputs in supervised {beats}.

Details

Data collection may be disabled when the {agent} log level is changed. Avoid changing the {agent} log level.

Upgrade to 8.6.2 to fix the problem. For more information, refer to #2232.

{fleet-server} will crash when configured to use the Warning log level.

Details

{fleet-server} will crash when configured to use the Warning log level. Do not use the Warning log level. Affected {fleet-server} instances must be reinstalled to fix the problem.

Upgrade to 8.6.2 to fix the problem. For more information, refer to #2328.

The initial release of {agent} 8.6.1 for MacOS contained an unsigned elastic-agent executable. On MacOS Ventura, upgrading from 8.6.1 to any other version will fail and can disable Elastic Defend protections.

Details
The initial release of {agent} version 8.6.1 for MacOS contained an unsigned elastic-agent executable and a correctly signed endpoint-security executable. The endpoint-security executable implements the endpoint protection functionality of the Elastic Defend integration.

New Gatekeeper functionality in MacOS Ventura prevents the unsigned elastic-agent executable from modifying the installation of the signed endpoint-security executable, causing upgrades from affected 8.6.1 versions to fail. The failed upgrade can leave {agent} in an unhealthy state with the endpoint-security executable disabled. Note that MacOS Gatekeeper implements a signature cache, so the upgrade is only likely to fail on MacOS Ventura machines that have been rebooted since the first upgrade to version 8.6.1.

As of February 27, 2023, the {agent} 8.6.1 artifacts for MacOS have been updated with a correctly signed elastic-agent executable. To verify that the signature of the elastic-agent executable is correct, run the commands below and ensure that Elasticsearch, Inc appears in the Authority field.

tar xvfz elastic-agent-8.6.1-darwin-aarch64.tar.gz
cd elastic-agent-8.6.1-darwin-aarch64
codesign -dvvvv ./elastic-agent
...
Signature size=9068
Authority=Developer ID Application: Elasticsearch, Inc (2BT3HPN62Z)
Authority=Developer ID Certification Authority
Authority=Apple Root CA
Timestamp=Feb 24, 2023 at 4:33:02 AM
...

Impact
Any {agent} deployed to MacOS Ventura that was upgraded to version 8.6.1 prior to February 27, 2023 must be reinstalled using a version with correctly signed executables. Upgrades to any other version will fail and lead to broken functionality, including disabling the protections from Elastic Defend.

The specific steps to follow to correct this problem are:

  1. Download a version of {agent} with correctly signed executables.

  2. Unenroll the affected agents, either from the command line or the Fleet UI. A new agent ID will be generated when reinstalling.

  3. Run the elastic-agent uninstall command to remove the incorrectly signed version of {agent}.

  4. From the directory containing the new, correctly signed {agent} artifacts, run the elastic-agent install command, as sketched below. The agent may be re-enrolled at install time or separately with the elastic-agent enroll command.
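
Put together, steps 3 and 4 might look like the following on an Apple silicon machine. The directory name, {fleet-server} URL, and enrollment token are placeholders.

sudo elastic-agent uninstall
cd elastic-agent-8.6.1-darwin-aarch64
sudo ./elastic-agent install \
  --url=https://fleet-server.example.com:8220 \
  --enrollment-token=<enrollment-token>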

Installing {agent} on MacOS Ventura may fail if Full Disk Access has not been granted to the application used for installation.

Details
This issue occurs on MacOS Ventura when Full Disk Access is not granted to the application that runs the installation command. This could be either the Terminal or a custom package that a user has built to distribute {agent}.

For more information, refer to #2103.

Impact
{agent} fails to install and produces the message "Error: failed to fix permissions: chown elastic-agent.app: operation not permitted". Ensure that the application used to install {agent} (for example, the Terminal or a custom package) has Full Disk Access before running sudo ./elastic-agent install.

{agent} upgrades scheduled for a future time do not run.

Details
A known issue in {agent} may prevent upgrades scheduled to execute at a later time from running. For more information, refer to #2343.

Impact
{kib} may show an agent as being stuck with the Updating status. If the scheduled start time has passed, you may force the agent to run by sending it any action (excluding an upgrade action), such as a change to the policy or the log level.

{fleet} ignores custom server.* attributes provided through integration settings.

Details
{fleet} will ignore any custom server.* attributes provided through the custom configurations YAML block of the integration. For more information, refer to #2303.

Impact
Custom YAML settings are silently ignored by {fleet}. Settings with input blocks, such as Max agents, are still effective.

{agent} does not load {fleet} configuration when upgrading to {agent} version 8.6.x from version 8.2.x or earlier.

Details
{agent} will not load the fleet.yml information after upgrading to {agent} version 8.6.x from version 8.2.x or earlier. This is because the unencrypted configuration stored in fleet.yml is not migrated correctly to the encrypted configuration store fleet.enc. For a managed agent, the symptom manifests as an inability to connect to {fleet} after the upgrade, with a log like:

Error: fleet configuration is invalid: empty access token
...

For more information, refer to #2249.

Impact
{agent} loses the ability to connect back to {fleet} after the upgrade. A fix for this issue is available in version 8.7.0 and higher; on version 8.6, the agent must be re-enrolled to connect back to {fleet}.

Bug fixes

{fleet}
  • Fix missing policy ID in installation URL for cloud integrations #149243

  • Fix package installation APIs to install packages without a version #149193

  • Fix issue where the latest GA version could not be installed if there was a newer prerelease version in the registry #149133 #149104

{fleet-server}
  • Update the .fleet-agent index when acknowledging policy changes while {ls} is the configured output. Fixes agents always showing as Updating when using the {ls} output #2119

{agent}
  • Fix issue where {beats} started by {agent} may fail with an output unit has no config error #2138 #2086

  • Restore the ability to set custom HTTP headers at enrollment time. Fixes traffic filters in Integrations Server cloud deployments #2158 #32993

  • Make it easier to filter agent logs from the combined agent log file #2044 #1810

{fleet} and {agent} 8.6.0

Review important information about the {fleet} and {agent} 8.6.0 release.

Breaking changes

Breaking changes can prevent your application from optimal operation and performance. Before you upgrade, review the breaking changes, then mitigate the impact to your application.

Each input in an agent policy must have a unique ID

Details
Each input in an agent policy must have a unique ID, like id: my-unique-input-id. This change only affects standalone agents. Unique IDs are automatically generated in agent policies managed by {fleet}. For more information, refer to #1994.

Impact
Make sure that each input in your standalone agent policies has a unique ID.
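
A minimal standalone policy fragment with unique IDs might look like the following; the input type, stream, and paths are illustrative only.

inputs:
  - id: my-unique-input-id
    type: filestream
    streams:
      - id: my-unique-stream-id
        paths:
          - /var/log/*.log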

Diagnostics --pprof argument has been removed and is now always provided

Details
The diagnostics command gathers diagnostic information about the {agent} and each component/unit it runs. Starting in 8.6.0, the --pprof argument is no longer available because pprof information is now always provided. For more information, refer to #1140.

Impact
Remove the --pprof argument from any scripts or commands you use.
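
For example, a diagnostics script only needs the flag removed. The command shapes below follow the 8.x CLI and are shown as a sketch; subcommand naming has varied across versions.

# Before 8.6.0: pprof output had to be requested explicitly
elastic-agent diagnostics collect --pprof
# 8.6.0 and later: pprof information is always included
elastic-agent diagnostics collect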

Kubernetes pod labels used in autodiscover conditions are no longer dedoted

Details
Kubernetes pod labels used in autodiscover conditions are no longer dedoted. This means that dots (.) are no longer replaced with underscores (_) in labels like app.kubernetes.io/component=controller, which follows the same approach as Kubernetes annotations. For more information, refer to [kubernetes-provider].

Impact
Any template used by a standalone {agent}, or any installed integration, that relies on dedoted Kubernetes labels inside conditions must be updated.
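
As a sketch, a condition that matched a dedoted label before 8.6.0 must now reference the original dotted key. The exact quoting of dotted keys may vary with your configuration.

# Before 8.6.0 (dedoted: dots replaced with underscores)
condition: ${kubernetes.labels.app_kubernetes_io/component} == "controller"
# 8.6.0 and later (label keys keep their dots)
condition: ${kubernetes.labels.app.kubernetes.io/component} == "controller"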

Known issues

Osquery live query results can take up to five minutes to show up in {kib}.

Details
A known issue in {agent} may prevent live query results from being available in the {kib} UI even though the results have been successfully sent to {es}. For more information, refer to #2066.

Impact
Be aware that the live query results shown in {kib} may be delayed by up to 5 minutes.

Installing {agent} on MacOS Ventura may fail if Full Disk Access has not been granted to the application used for installation.

Details
This issue occurs on MacOS Ventura when Full Disk Access is not granted to the application that runs the installation command. This could be either the Terminal or a custom package that a user has built to distribute {agent}.

For more information, refer to #2103.

Impact
{agent} fails to install and produces the message "Error: failed to fix permissions: chown elastic-agent.app: operation not permitted". Ensure that the application used to install {agent} (for example, the Terminal or a custom package) has Full Disk Access before running sudo ./elastic-agent install.

{beats} started by {agent} may fail with an output unit has no config error.

Details
A known issue in {agent} may lead to {beats} processes being started without a valid output. To correct the problem, trigger a restart of {agent} or the affected {beats}. For {beats} managed by {agent}, you can trigger a restart by changing the {agent} log level or the output section of the {agent} policy. For more information, refer to #2086.

Impact
{agent} will appear unhealthy, and the affected {beats} will not be able to write event data to {es} or {ls}.
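
On hosts where {agent} was installed as a service, a restart can also be triggered directly from the host. The systemd unit name below is the default created by elastic-agent install and is shown here as an assumption; other platforms use their own service managers.

sudo systemctl restart elastic-agent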

{agent} upgrades scheduled for a future time do not run.

Details
A known issue in {agent} may prevent upgrades scheduled to execute at a later time from running. For more information, refer to #2343.

Impact
{kib} may show an agent as being stuck with the Updating status. If the scheduled start time has passed, you may force the agent to run by sending it any action (excluding an upgrade action), such as a change to the policy or the log level.

{fleet} ignores custom server.* attributes provided through integration settings.

Details
{fleet} will ignore any custom server.* attributes provided through the custom configurations YAML block of the integration. For more information, refer to #2303.

Impact
Custom YAML settings are silently ignored by {fleet}. Settings with input blocks, such as Max agents, are still effective.

{agent} does not load {fleet} configuration when upgrading to {agent} version 8.6.x from version 8.2.x or earlier.

Details
{agent} will not load the fleet.yml information after upgrading to {agent} version 8.6.x from version 8.2.x or earlier. This is because the unencrypted configuration stored in fleet.yml is not migrated correctly to the encrypted configuration store fleet.enc. For a managed agent, the symptom manifests as an inability to connect to {fleet} after the upgrade, with a log like:

Error: fleet configuration is invalid: empty access token
...

For more information, refer to #2249.

Impact
{agent} loses the ability to connect back to {fleet} after the upgrade. A fix for this issue is available in version 8.7.0 and higher; on version 8.6, the agent must be re-enrolled to connect back to {fleet}.

New features

The 8.6.0 release adds the following new and notable features.

{fleet}
  • Differentiate the Kubernetes integration multi-page experience #145224

  • Add prerelease toggle to Integrations list #143853

  • Add link to allow users to skip multistep add integration workflow #143279

{agent}
  • Upgrade Node to version 18.12.0 #1657

  • Add experimental support for running the elastic-agent-shipper #1527 #219

  • Collect logs from sub-processes via stdout and stderr and write them to a single, unified Elastic Agent log file #1702 #221

  • Remove inputs when all streams are removed #1869 #1868

  • No longer restart {agent} on log level change #1914 #1896

  • Add inspect components command to inspect the computed components/units model of the current configuration (for example, elastic-agent inspect components) #1701 #836

  • Add support for the Common Expression Language (CEL) {filebeat} input type #1719

  • Only support {es} as an output for the beta synthetics integration #1491

  • New control protocol between the {agent} and its subprocesses enables per integration health reporting and simplifies new input development #836 #1701

  • All binaries for every supported integration are now bundled in the {agent} by default #836 #126

Enhancements

{fleet}
  • Add a ?full option to the get package info endpoint that returns all package fields #144343

{agent}
  • Health Status: {agent} now indicates detailed status information for each sub-process and input type #1747 #100

  • Change internal directory structure: add a components directory to contain binaries and associated artifacts, and remove the downloads directory #836 #1701

Bug fixes

{fleet}
  • Only show {fleet}-managed data streams on data streams list page #143300

  • Fix a synchronization bug in {fleet-server} that can lead to {es} being flooded by requests to /.fleet-actions/_fleet/_fleet_search #2205

{agent}
  • {agent} now uses the locally bound port (8221) when running {fleet-server} instead of the external port (8220) #1867