IMPORTANT: Restrictions

Note the following restrictions when installing {agent} on your system:
You have a few options for installing and managing an {agent}:

- Install a {fleet}-managed {agent} (recommended). With this approach, you install {agent} and use {fleet} in {kib} to define, configure, and manage your agents in a central location. We recommend {fleet} management because it makes managing and upgrading your agents considerably easier. Refer to [install-fleet-managed-elastic-agent].
- Install {agent} in standalone mode (advanced users). With this approach, you install {agent} and manually configure the agent locally on the system where it's installed. You are responsible for managing and upgrading the agents. This approach is reserved for advanced users only. Refer to [install-standalone-elastic-agent].
- Install {agent} in a containerized environment. You can run {agent} inside a container, either with {fleet-server} or standalone. Docker images for all versions of {agent} are available from the Elastic Docker registry, and we provide deployment manifests for running on Kubernetes. Refer to:
  - {eck-ref}/k8s-elastic-agent.html[Run {agent} on ECK] (for {eck} users)
IMPORTANT: Restrictions in {serverless-short}

If you are using {agent} with {serverless-full}, note these differences from use with {ess} and self-managed {es}:
Minimum requirements have been determined by running the {agent} on a GCP e2-micro
instance (2vCPU/1GB).
The {agent} used the default policy, running the system integration and self-monitoring.
During upgrades, double the disk space is required to store the new {agent} binary. After the upgrade completes, the original {agent} is removed from disk to free up the space.
| Resource | Minimum requirement |
|---|---|
| CPU | Under 2% total, including all monitoring processes |
| Disk | 1.7 GB |
| RSS Mem Size | 400 MB |
Adding integrations will increase the memory used by the agent and its processes.
If you need to limit the amount of CPU consumption, you can use the `agent.limits.go_max_procs` configuration option. This parameter limits the number of operating system threads that can execute Go code simultaneously in each Go process. The `agent.limits.go_max_procs` option accepts an integer value not less than 0, where 0 is the default and stands for "all available CPUs".

The `agent.limits.go_max_procs` limit applies independently to the agent and each underlying Go process that it supervises. For example, if {agent} is configured to supervise two {beats} with `agent.limits.go_max_procs: 2` in the policy, then the total CPU limit is six, where each of the three processes (one {agent} and two {beats}) may execute independently on up to two CPUs.
This setting is similar to the {beats} {filebeat-ref}/configuration-general-options.html#_max_procs[`max_procs`] setting. For more detail, refer to the `GOMAXPROCS` function in the Go runtime documentation.
To enable `agent.limits.go_max_procs`, run a {fleet} API request from the {kib} {kibana-ref}/console-kibana.html[Dev Tools console] to override your current {agent} policy and add the `go_max_procs` parameter. For example, to limit Go processes supervised by {agent} to two operating system threads each, run:
PUT kbn:/api/fleet/agent_policies/<policy-id>
{
"name": "<policy-name>",
"namespace": "default",
"overrides": {
"agent": {
"limits": {
"go_max_procs": 2
}
}
}
}
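If you run {agent} in standalone mode instead, there is no {fleet} policy to override; the limit lives in the agent's local configuration file. This is a sketch assuming the standalone `elastic-agent.yml` accepts the same `agent.limits.go_max_procs` key as the policy override above:

```yaml
# elastic-agent.yml (standalone mode; key name assumed to match the
# Fleet policy override shown above)
agent:
  limits:
    # Limit each Go process to two OS threads executing Go code.
    go_max_procs: 2
```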