The {es} output sends events directly to {es} by using the {es} HTTP API.
Compatibility: This output works with all compatible versions of {es}. See the Elastic Support Matrix.
This example configures an {es} output called `default` in the `elastic-agent.yml` file:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: [127.0.0.1:9200]
    username: elastic
    password: changeme
```
This example is similar to the previous one, except that it uses the recommended token-based (API key) authentication:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: [127.0.0.1:9200]
    api_key: "my_api_key"
```

Note: Token-based authentication is required in an {serverless-full} environment.
The `elasticsearch` output type supports the following settings, grouped by category. Many of these settings have sensible defaults that allow you to run {agent} with minimal configuration.
Setting | Description |
---|---|
`hosts` | (list) The list of {es} nodes to connect to. The events are distributed to these nodes in round robin order. If one node becomes unreachable, the event is automatically sent to another node. Each {es} node can be defined as a `URL` or `IP:PORT`. Note that {es} nodes in the {serverless-full} environment are exposed on port 443. |
`protocol` | (string) The name of the protocol {es} is reachable on. The options are: `http` or `https`. If you specify a URL for `hosts`, the value of `protocol` is overridden by whatever scheme you specify in the URL. |
`proxy_disable` | (boolean) If set to `true`, all proxy settings, including the `proxy_url`, are ignored. Default: `false`. |
`proxy_headers` | (string) Additional headers to send to proxies during CONNECT requests. |
`proxy_url` | (string) The URL of the proxy to use when connecting to the {es} servers. The value may be either a complete URL or a `host[:port]`, in which case the `http` scheme is assumed. |

For example, each `hosts` entry can include a protocol and path:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["10.45.3.2:9220", "10.45.3.1:9230"]
    protocol: https
    path: /elasticsearch
```

In this example, the {es} nodes are available at `https://10.45.3.2:9220/elasticsearch` and `https://10.45.3.1:9230/elasticsearch`.
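As a sketch of the proxy settings described above, a proxied output might look like the following. The proxy address and the authorization header value are hypothetical placeholders, not recommendations:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    # Hypothetical proxy address; replace with your own.
    proxy_url: "http://proxy.example.com:3128"
    proxy_headers:
      # Hypothetical credentials sent to the proxy during CONNECT.
      Proxy-Authorization: "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="
```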
When sending data to a secured cluster through the `elasticsearch` output, {agent} can use any of the following authentication methods.

Basic authentication credentials:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    username: "your-username"
    password: "your-password"
```

Setting | Description |
---|---|
`password` | (string) The basic authentication password for connecting to {es}. |
`username` | (string) The basic authentication username for connecting to {es}. This user needs the privileges required to publish events to {es}. Note that in an {serverless-full} environment you need to use token-based (API key) authentication. |
Token-based (API key) authentication:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    api_key: "KnR6yE41RrSowb0kQ0HWoA"
```

Setting | Description |
---|---|
`api_key` | (string) Instead of using a username and password, you can use {kibana-ref}/api-keys.html[API keys] to secure communication with {es}. The value must be the ID of the API key and the API key joined by a colon: `id:api_key`. |
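For example, if creating an API key returned an `id` of `TiNAGG4BaaMd` and an `api_key` value of `KnR6yE41RrSo` (both values hypothetical), the setting would be:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    # Hypothetical ID and key joined by a colon: <id>:<api_key>
    api_key: "TiNAGG4BaaMd:KnR6yE41RrSo"
```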
To authenticate using Public Key Infrastructure (PKI) certificates:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    ssl.certificate: "/etc/pki/client/cert.pem"
    ssl.key: "/etc/pki/client/cert.key"
```

For a list of available settings, refer to [elastic-agent-ssl-configuration], specifically the settings under [common-ssl-options] and [client-ssl-options].
{agent} can also authenticate to {es} by using Kerberos. The following encryption types are supported:

- aes128-cts-hmac-sha1-96
- aes128-cts-hmac-sha256-128
- aes256-cts-hmac-sha1-96
- aes256-cts-hmac-sha384-192
- des3-cbc-sha1-kd
- rc4-hmac
Example output config with Kerberos password-based authentication:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["http://my-elasticsearch.elastic.co:9200"]
    kerberos.auth_type: password
    kerberos.username: "elastic"
    kerberos.password: "changeme"
    kerberos.config_path: "/etc/krb5.conf"
    kerberos.realm: "ELASTIC.CO"
```
The service principal name for the {es} instance is constructed from these options. Based on this configuration, the name would be `HTTP/my-elasticsearch.elastic.co@ELASTIC.CO`.
Setting | Description |
---|---|
`allow_older_versions` | Allow {agent} to connect and send output to an {es} instance that is running an earlier version than the agent version. Note that this setting does not affect {agent}'s ability to connect to {fleet-server}. {fleet-server} will not accept a connection from an agent at a later major or minor version, but will accept a connection from an agent at a later patch version. For example, an {agent} at version 8.14.3 can connect to a {fleet-server} on version 8.14.0, but an agent at version 8.15.0 or later is not able to connect. Default: `true`. |
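A minimal sketch of setting this option explicitly (the host value is a placeholder):

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    allow_older_versions: true
```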
Settings used to parse, filter, and transform data.

Setting | Description |
---|---|
`pipeline` | (string) A format string value that specifies the {ref}/ingest.html[ingest pipeline] to write events to. |

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["http://localhost:9200"]
    pipeline: my_pipeline_id
```

You can set the ingest pipeline dynamically by using a format string to access any event field. For example, this configuration uses a custom field, `fields.log_type`, to set the pipeline for each event:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["http://localhost:9200"]
    pipeline: "%{[fields.log_type]}_pipeline"
```

With this configuration, all events with `log_type: normal` are sent to a pipeline named `normal_pipeline`, and all events with `log_type: error` are sent to a pipeline named `error_pipeline`. See the `pipelines` setting for more examples.
Setting | Description |
---|---|
`pipelines` | An array of pipeline selector rules. Each rule specifies the {ref}/ingest.html[ingest pipeline] to use for events that match the rule. During publishing, {agent} uses the first matching rule in the array. Rules can contain conditionals, format string-based fields, and name mappings. If the `pipelines` setting is missing or no rule matches, the `pipeline` setting is used. |

Rule settings:

- `pipeline`: The pipeline format string to use. If this string contains field references, such as `%{[fields.name]}`, the fields must exist, or the rule fails.
- `mappings`: A dictionary that takes the value returned by `pipeline` and maps it to a new name.
- `default`: The default string value to use if `mappings` does not find a match.
- `when`: A condition that must succeed in order to execute the current rule.

All the conditions supported by processors are also supported here.

The following example sends events to a specific pipeline based on whether the `message` field contains the given string:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["http://localhost:9200"]
    pipelines:
      - pipeline: "warning_pipeline"
        when.contains:
          message: "WARN"
      - pipeline: "error_pipeline"
        when.contains:
          message: "ERR"
```

The following example sets the pipeline by taking the name returned by the `pipeline` format string and mapping it to a new name:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["http://localhost:9200"]
    pipelines:
      - pipeline: "%{[fields.log_type]}"
        mappings:
          critical: "sev1_pipeline"
          normal: "sev2_pipeline"
        default: "sev3_pipeline"
```

With this configuration, all events with `log_type: critical` are sent to `sev1_pipeline`, all events with `log_type: normal` are sent to `sev2_pipeline`, and all other events are sent to `sev3_pipeline`.
Settings that modify the HTTP requests sent to {es}.

Setting | Description |
---|---|
`headers` | Custom HTTP headers to add to each request created by the {es} output. Specify multiple header values for the same header name by separating them with a comma. |
`parameters` | Dictionary of HTTP parameters to pass within the URL with index operations. |
`path` | (string) An HTTP path prefix that is prepended to the HTTP API calls. This is useful for the cases where {es} listens behind an HTTP reverse proxy that exports the API under a custom prefix. |

Example:

```yaml
outputs:
  default:
    type: elasticsearch
    headers:
      X-My-Header: Header contents
```
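A minimal sketch combining the `path` and `parameters` settings, assuming {es} sits behind a reverse proxy under a hypothetical `/elasticsearch` prefix (the parameter shown is illustrative):

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    # Prefix prepended to every HTTP API call (hypothetical reverse-proxy layout).
    path: "/elasticsearch"
    parameters:
      # Hypothetical URL parameter passed with index operations.
      pipeline: "my_pipeline_id"
```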
The memory queue keeps all events in memory.

The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new events can be inserted into the memory queue. Only after the signal from the output will the queue free up space for more events to be accepted.

The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`. `flush.min_events` gives a limit on the number of events that can be included in a single batch, and `flush.timeout` specifies how long the queue should wait to completely fill an event request. If the output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of `bulk_max_size` and `flush.min_events`.

`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.

In synchronous mode, an event request is always filled as soon as events are available, even if there are not enough events to fill the requested batch. This is useful when latency must be minimized. To use synchronous mode, set `flush.timeout` to 0.

For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to 0 or 1. In this case, batch size will be capped at 1/2 the queue capacity.

In asynchronous mode, an event request will wait up to the specified timeout to try and fill the requested batch completely. If the timeout expires, the queue returns a partial batch with all available events. To use asynchronous mode, set `flush.timeout` to a positive duration, for example `5s`.

This sample configuration forwards events to the output when there are enough events to fill the output's request (usually controlled by `bulk_max_size`, and limited to at most 512 events by `flush.min_events`), or when events have been waiting for 5 seconds without filling the requested size:

```yaml
queue.mem.events: 4096
queue.mem.flush.min_events: 512
queue.mem.flush.timeout: 5s
```
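By contrast, a latency-sensitive configuration can use synchronous mode by zeroing the flush timeout, so each request is filled as soon as any events are available (the queue size shown is illustrative):

```yaml
queue.mem.events: 4096
# A timeout of 0 activates synchronous mode: requests are
# forwarded immediately instead of waiting to fill a batch.
queue.mem.flush.timeout: 0s
```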
Settings that may affect performance when sending data through the {es} output.

Use the `preset` option to automatically configure the group of performance tuning settings to optimize for `throughput`, `scale`, or `latency`, or you can select a `balanced` set of performance specifications.

The performance tuning `preset` values take precedence over any settings that may be defined separately. If you want to change any setting, set `preset` to `custom` and specify the performance tuning settings individually.
Setting | Description |
---|---|
`backoff.init` | (string) The number of seconds to wait before trying to reconnect to {es} after a network error. After waiting `backoff.init` seconds, {agent} tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. Default: `1s`. |
`backoff.max` | (string) The maximum number of seconds to wait before attempting to connect to {es} after a network error. Default: `60s`. |
`bulk_max_size` | (int) The maximum number of events to bulk in a single {es} bulk API index request. Events can be collected into batches. {agent} will split batches larger than `bulk_max_size` into multiple batches. Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput. Setting `bulk_max_size` to a value less than or equal to 0 turns off the splitting of batches. Default: `1600`. |
`max_retries` | (int) The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. Set `max_retries` to a value less than 0 to retry until all events are published. Default: `3`. |
`preset` | Configures the full group of performance tuning settings to optimize your {agent} performance when sending data to an {es} output. Refer to [es-output-settings-performance-tuning-settings] for a table showing the group of values associated with any preset, and another table showing EPS (events per second) results from testing the different preset options. Performance tuning preset settings: `balanced`, `throughput`, `scale`, `latency`, `custom`. Default: `balanced`. |
`timeout` | (string) The HTTP request timeout in seconds for the {es} request. Default: `90`. |
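For example, to override individual tuning values, set `preset: custom` and supply the settings directly. The values shown here are illustrative placeholders, not tuning recommendations:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    # custom disables the preset groups so individual settings apply.
    preset: custom
    bulk_max_size: 1600
    timeout: 90
    backoff.init: 1s
    backoff.max: 60s
```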