You can run {agent} inside a container — either with {fleet-server} or standalone. Docker images for all versions of {agent} are available from the Elastic Docker registry. If you are running in Kubernetes, refer to {eck-ref}/k8s-elastic-agent.html[run {agent} on ECK].
Note that running {elastic-agent} in a container is supported only in Linux environments. For this reason we don’t currently provide {agent} container images for Windows.
Considerations:

- When {agent} runs inside a container, it cannot be upgraded through {fleet}; it expects that the container itself is upgraded.
- Enrolling and running an {agent} is usually a two-step process. However, this doesn't work in a container, so a special subcommand, container, is called. This command allows environment variables to configure all properties, and runs the enroll and run commands as a single command.
- You need {es} for storing and searching your data, and {kib} for visualizing and managing it.
There are two images for {agent}, elastic-agent and elastic-agent-complete. The elastic-agent image contains all the binaries for running {beats}, while the elastic-agent-complete image contains these binaries plus additional dependencies to run browser monitors through Elastic Synthetics. Refer to {observability-guide}/uptime-set-up.html[Synthetic monitoring via {agent} and {fleet}] for more information.
Run the docker pull command against the Elastic Docker registry:
docker pull docker.elastic.co/elastic-agent/elastic-agent:{version}
Alternatively, you can use the hardened Wolfi image. Using Wolfi images requires Docker version 20.10.10 or later. For details about why the Wolfi images have been introduced, refer to our article Reducing CVEs in Elastic container images.
docker pull docker.elastic.co/elastic-agent/elastic-agent-wolfi:{version}
If you want to run Synthetics tests, run the docker pull command to fetch the elastic-agent-complete image:
docker pull docker.elastic.co/elastic-agent/elastic-agent-complete:{version}
To run Synthetics tests using the hardened Wolfi image, run:
docker pull docker.elastic.co/elastic-agent/elastic-agent-complete-wolfi:{version}
Although it’s optional, we highly recommend verifying the signatures included with your downloaded Docker images to ensure that the images are valid.
Elastic images are signed with Cosign, which is part of the Sigstore project. Cosign supports container signing, verification, and storage in an OCI registry. Install the appropriate Cosign application for your operating system.
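For example, on a Linux x86_64 host, one way to install Cosign is to download the binary from the Sigstore GitHub releases. This is a sketch; the release URL and asset name (cosign-linux-amd64) are assumptions based on the Sigstore project's published releases, so check the Sigstore documentation for the method that fits your platform:

curl -L -o cosign https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64
chmod +x cosign
sudo mv cosign /usr/local/bin/cosign
cosign version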
Run the following commands to verify the elastic-agent container image signature for {agent} v{version}:
wget https://artifacts.elastic.co/cosign.pub <1>
cosign verify --key cosign.pub docker.elastic.co/elastic-agent/elastic-agent:{version} <2>
<1> Download the Elastic public key to verify the container signature.
<2> Verify the container against the Elastic public key.
If you’re using the elastic-agent-complete image, run the commands as follows:
wget https://artifacts.elastic.co/cosign.pub <1>
cosign verify --key cosign.pub docker.elastic.co/elastic-agent/elastic-agent-complete:{version} <2>
The command prints the check results and the signature payload in JSON format, for example:
Verification for docker.elastic.co/elastic-agent/elastic-agent-complete:{version} --
The following checks were performed on each of these signatures:
- The cosign claims were validated
- Existence of the claims in the transparency log was verified offline
- The signatures were verified against the specified public key
The {agent} container command offers a wide variety of options. To see the full list, run:
docker run --rm docker.elastic.co/elastic-agent/elastic-agent:{version} elastic-agent container -h
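For example, here is a minimal sketch of enrolling a containerized {agent} into an existing {fleet} deployment. The FLEET_ENROLL, FLEET_URL, and FLEET_ENROLLMENT_TOKEN variables are the same ones used in the Compose example later on this page; the angle-bracket values are placeholders you need to replace:

docker run \
  --env FLEET_ENROLL=1 \
  --env FLEET_URL=<fleet-server-host-url> \
  --env FLEET_ENROLLMENT_TOKEN=<enrollment-token> \
  --rm docker.elastic.co/elastic-agent/elastic-agent:{version}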
If you need to run {fleet-server} as well, adjust the docker run command above by adding these environment variables:
--env FLEET_SERVER_ENABLE=true \ (1)
--env FLEET_SERVER_ELASTICSEARCH_HOST=<elasticsearch-host> \ (2)
--env FLEET_SERVER_SERVICE_TOKEN=<service-token> (3)
(1) Set to true to bootstrap {fleet-server} on this {agent}. This automatically forces {fleet} enrollment as well.
(2) The Elasticsearch host for Fleet Server to communicate with, for example http://elasticsearch:9200.
(3) Service token to use for communication with {es} and {kib}.
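Putting this together, here is a sketch of a docker run command that bootstraps {fleet-server} on the containerized {agent}. The values shown are placeholders, and the port mapping assumes {fleet-server}'s default port of 8220, which is not stated on this page:

docker run \
  --env FLEET_SERVER_ENABLE=true \
  --env FLEET_SERVER_ELASTICSEARCH_HOST=http://elasticsearch:9200 \
  --env FLEET_SERVER_SERVICE_TOKEN=<service-token> \
  --publish 8220:8220 \
  --rm docker.elastic.co/elastic-agent/elastic-agent:{version}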
Tip: Running {agent} on a read-only file system

If you'd like to run {agent} in a Docker container on a read-only file system, you can do so by specifying the --read-only option. {agent} requires a stateful directory where application data can be stored, so you also need to mount a writable host path, {STATE_PATH}, into the container for that data.
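A minimal sketch of such a command, assuming the container keeps its state under /usr/share/elastic-agent/state (the same directory referenced in the logs section below) and that {STATE_PATH} is a writable directory on the host:

docker run --rm \
  --mount type=bind,source={STATE_PATH},destination=/usr/share/elastic-agent/state \
  --read-only \
  docker.elastic.co/elastic-agent/elastic-agent:{version}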
You can run {agent} with docker-compose. The example below shows how to enroll an {agent}:
version: "3"
services:
  elastic-agent:
    image: docker.elastic.co/elastic-agent/elastic-agent:{version} <1>
    container_name: elastic-agent
    restart: always
    user: root # note, synthetic browser monitors require this set to `elastic-agent`
    environment:
      - FLEET_ENROLLMENT_TOKEN=
      - FLEET_ENROLL=1
      - FLEET_URL=
<1> Switch elastic-agent to elastic-agent-complete if you intend to use the complete version. Use the elastic-agent user instead of root to run Synthetics Browser tests. Synthetic tests cannot run under the root user. Refer to {observability-guide}/uptime-set-up.html[Synthetics {fleet} Quickstart] for more information.
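To start the service defined above, you can use the standard Docker Compose workflow, assuming the file is saved as docker-compose.yml in the current directory:

docker compose up -d                    # start the elastic-agent container in the background
docker compose logs -f elastic-agent    # follow the container logs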
If you need to run {fleet-server} as well, adjust the docker-compose file above by adding these environment variables:
- FLEET_SERVER_ENABLE=true
- FLEET_SERVER_ELASTICSEARCH_HOST=<elasticsearch-host>
- FLEET_SERVER_SERVICE_TOKEN=<service-token>
Refer to [agent-environment-variables] for all available options.
Since a container supports only a single version of {agent}, logs and state are stored a bit differently than when running an {agent} outside of a container. The logs can be found under /usr/share/elastic-agent/state/data/logs/*.

It's important to note that only the logs from the {agent} process itself are logged to stdout. Subprocess logs are not. Each subprocess writes its own logs to the default directory inside the logs directory:

/usr/share/elastic-agent/state/data/logs/default/*
Tip: Running into errors with {fleet-server}? Check the fleet-server subprocess logs for more information.
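For example, with the container named elastic-agent as in the Compose example above, you can inspect both kinds of logs from the host with standard Docker commands:

# {agent} process logs (written to stdout)
docker logs -f elastic-agent

# subprocess log files written inside the container
docker exec elastic-agent ls /usr/share/elastic-agent/state/data/logs/default/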
A monitoring endpoint can be enabled to expose resource usage and event processing data. The endpoint is compatible with {agent}s running in both {fleet} mode and Standalone mode.
Enable the monitoring endpoint in elastic-agent.yml on the host where the {agent} is installed.
A sample configuration looks like this:
agent.monitoring:
  enabled: true (1)
  logs: true (2)
  metrics: true (3)
  http:
    enabled: true (4)
    host: localhost (5)
    port: 6791 (6)
(1) Enable monitoring of running processes.
(2) Enable log monitoring.
(3) Enable metrics monitoring.
(4) Expose {agent} metrics over HTTP. By default, sockets and named pipes are used.
(5) The hostname, IP address, Unix socket, or named pipe that the HTTP endpoint will bind to. When using IP addresses, we recommend only using localhost.
(6) The port that the HTTP endpoint will bind to.
The above configuration exposes a monitoring endpoint at http://localhost:6791/processes.
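For example, you can query the endpoint with curl. Note that with the sample configuration the endpoint binds to localhost, so when {agent} runs in a container the request has to be made from inside the container's network namespace, or you need to change the host setting and publish the port:

curl -s http://localhost:6791/processes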
Example http://localhost:6791/processes output:
{
  "processes":[
    {
      "id":"metricbeat-default",
      "pid":"36923",
      "binary":"metricbeat",
      "source":{
        "kind":"configured",
        "outputs":[
          "default"
        ]
      }
    },
    {
      "id":"filebeat-default-monitoring",
      "pid":"36924",
      "binary":"filebeat",
      "source":{
        "kind":"internal",
        "outputs":[
          "default"
        ]
      }
    },
    {
      "id":"metricbeat-default-monitoring",
      "pid":"36925",
      "binary":"metricbeat",
      "source":{
        "kind":"internal",
        "outputs":[
          "default"
        ]
      }
    }
  ]
}
Each process ID in the /processes output can be accessed for more details.
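For example, using the metricbeat-default process ID from the sample output above:

curl -s http://localhost:6791/processes/metricbeat-default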
Example http://localhost:6791/processes/{process-name} output:
{
  "beat":{
    "cpu":{
      "system":{
        "ticks":537,
        "time":{
          "ms":537
        }
      },
      "total":{
        "ticks":795,
        "time":{
          "ms":796
        },
        "value":795
      },
      "user":{
        "ticks":258,
        "time":{
          "ms":259
        }
      }
    },
    "info":{
      "ephemeral_id":"eb7e8025-7496-403f-9f9a-42b20439c737",
      "uptime":{
        "ms":75332
      },
      "version":"7.14.0"
    },
    "memstats":{
      "gc_next":23920624,
      "memory_alloc":20046048,
      "memory_sys":76104712,
      "memory_total":60823368,
      "rss":83165184
    },
    "runtime":{
      "goroutines":58
    }
  },
  "libbeat":{
    "config":{
      "module":{
        "running":4,
        "starts":4,
        "stops":0
      },
      "reloads":1,
      "scans":1
    },
    "output":{
      "events":{
        "acked":0,
        "active":0,
        "batches":0,
        "dropped":0,
        "duplicates":0,
        "failed":0,
        "toomany":0,
        "total":0
      },
      "read":{
        "bytes":0,
        "errors":0
      },
      "type":"elasticsearch",
      "write":{
        "bytes":0,
        "errors":0
      }
    },
    "pipeline":{
      "clients":4,
      "events":{
        "active":231,
        "dropped":0,
        "failed":0,
        "filtered":0,
        "published":231,
        "retry":112,
        "total":231
      },
      "queue":{
        "acked":0,
        "max_events":4096
      }
    }
  },
  "metricbeat":{
    "system":{
      "cpu":{
        "events":8,
        "failures":0,
        "success":8
      },
      "filesystem":{
        "events":80,
        "failures":0,
        "success":80
      },
      "memory":{
        "events":8,
        "failures":0,
        "success":8
      },
      "network":{
        "events":135,
        "failures":0,
        "success":135
      }
    }
  },
  "system":{
    "cpu":{
      "cores":8
    },
    "load":{
      "1":2.5957,
      "15":5.415,
      "5":3.5815,
      "norm":{
        "1":0.3245,
        "15":0.6769,
        "5":0.4477
      }
    }
  }
}