{agent} uses data streams to store time series data across multiple indices while giving you a single named resource for requests. Data streams are well-suited for logs, metrics, traces, and other continuously generated data. They offer a host of benefits over other indexing strategies:
- Reduced number of fields per index: Indices only need to store a specific subset of your data, meaning no more indices with hundreds of thousands of fields. This leads to better space efficiency and faster queries. As an added bonus, only relevant fields are shown in Discover.
- More granular data control: For example, file system, load, CPU, network, and process metrics are sent to different indices, each potentially with its own rollover, retention, and security permissions.
- Flexible: Use the custom namespace component to divide and organize data in a way that makes sense to your use case or company.
- Fewer ingest permissions required: Data ingestion only requires permissions to append data.
{agent} uses the Elastic data stream naming scheme to name data streams. The naming scheme splits data into different streams based on the following components:
- type: A generic type describing the data, such as logs, metrics, traces, or synthetics.
- dataset: The dataset is defined by the integration and describes the ingested data and its structure for each index. For example, you might have a dataset for process metrics with a field describing whether the process is running or not, and another dataset for disk I/O metrics with a field describing the number of bytes read.
- namespace: A user-configurable arbitrary grouping, such as an environment (dev, prod, or qa), a team, or a strategic business unit. A namespace can be up to 100 bytes in length (multibyte characters count toward this limit faster). Using a namespace makes it easier to search data from a given source by using a matching pattern. You can also use matching patterns to give users access to data when creating user roles.

By default, the namespace defined for an {agent} policy is propagated to all integrations in that policy. If you'd like to define a more granular namespace for a policy:
- In {kib}, go to Integrations.
- On the Installed integrations tab, select the integration that you'd like to update.
- Open the Integration policies tab.
- From the Actions menu next to the integration, select Edit integration.
- Open the advanced options and update the Namespace field. Data streams from the integration will now use the specified namespace rather than the default namespace inherited from the {agent} policy.
The naming scheme separates each component with a - character:

<type>-<dataset>-<namespace>
For example, if you've set up the Nginx integration with a namespace of prod, {agent} uses the logs type, nginx.access dataset, and prod namespace to store data in the following data stream:

logs-nginx.access-prod

Alternatively, if you use the APM integration with a namespace of dev, {agent} stores data in the following data stream:

traces-apm-dev
All data streams, and the pre-built dashboards that they ship with, are viewable on the {fleet} Data Streams page:
Tip: If you're familiar with the concept of indices, you can think of each data stream as a separate index in {es}. Under the hood, though, things are a bit more complex. All of the juicy details are available in {ref}/data-streams.html[{es} Data streams].
When searching your data in {kib}, you can use a {kibana-ref}/data-views.html[{data-source}] to search across all or some of your data streams.
An index template is a way to tell {es} how to configure an index when it is created. For data streams, the index template configures the stream’s backing indices as they are created.
{es} provides the following built-in, ECS-based index templates: logs-*-*, metrics-*-*, and synthetics-*-*.
{agent} integrations can also provide dataset-specific index templates, like logs-nginx.access-*. These templates are loaded when the integration is installed and are used to configure the integration's data streams.
Warning: Custom index mappings may conflict with the mappings defined by the integration and may break the integration in {kib}. Do not change or customize any default mappings.
When you install an integration, {fleet} creates two default @custom component templates:

- A @custom component template allowing customization across all documents of a given data stream type, named following the pattern: <data_stream_type>@custom.
- A @custom component template for each data stream, named following the pattern: <name_of_data_stream>@custom.

The @custom component template specific to a data stream takes precedence over the data stream type @custom component template.
You can edit a @custom component template to customize your {es} indices:

- Open {kib} and navigate to {stack-manage-app} > Index Management > Data Streams.
- Find and click the name of the integration data stream, such as logs-cisco_ise.log-default.
- Click the index template link for the data stream to see the list of associated component templates.
- Navigate to {stack-manage-app} > Index Management > Component Templates.
- Search for the name of the data stream's custom component template and click the edit icon.
- Add any custom index settings, metadata, or mappings. For example, you may want to:
  - Customize the index lifecycle policy applied to a data stream. See {apm-guide-ref}/ilm-how-to.html#data-streams-custom-policy[Configure a custom index lifecycle policy] in the APM Guide for a walk-through. Specify the lifecycle name in the index settings:

{ "index": { "lifecycle": { "name": "my_policy" } } }

  - Change the number of {ref}/docs-replication.html[replicas] per index. Specify the number of replica shards in the index settings:

{ "index": { "number_of_replicas": "2" } }
Changes to component templates are not applied retroactively to existing indices. For changes to take effect, you must create a new write index for the data stream. You can do this with the {es} {ref}/indices-rollover-index.html[Rollover API].
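For example, if you customized the logs-cisco_ise.log-default data stream mentioned above, a single rollover request in {kib} Dev Tools creates the new write index so your changes apply to incoming data:

POST logs-cisco_ise.log-default/_rollover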
Use the {ref}/index-lifecycle-management.html[index lifecycle management] ({ilm-init}) feature in {es} to manage your {agent} data stream indices as they age. For example, create a new index after a certain period of time, or delete stale indices to enforce data retention standards.
Installed integrations may have one or many associated data streams—each with an associated {ilm-init} policy.
By default, these data streams use an {ilm-init} policy that matches their data type.
For example, the data stream metrics-system.logs-* uses the metrics {ilm-init} policy as defined in the metrics-system.logs index template.
Want to customize your index lifecycle management? See Tutorials: Customize data retention policies.
{agent} integration data streams ship with a default {ref}/ingest.html[ingest pipeline] that preprocesses and enriches data before indexing. The default pipeline should not be directly edited as changes can easily break the functionality of the integration.
Starting in version 8.4, all default ingest pipelines call a non-existent and non-versioned @custom ingest pipeline. If left uncreated, this pipeline has no effect on your data. However, if added to a data stream and customized, this pipeline can be used for custom data processing, adding fields, sanitizing data, and more.
Starting in version 8.12, ingest pipelines can be configured to process events at various levels of customization.
Note: If you create a custom ingest pipeline, Elastic is not responsible for ensuring that it indexes and behaves as expected. Creating a custom pipeline involves custom processing of the incoming data, which should be done with caution and tested carefully.
global@custom
Apply processing to all events. For example, the following {ref}/put-pipeline-api.html[pipeline API] request adds a new field my-global-field for all events:

PUT _ingest/pipeline/global@custom
{
  "processors": [
    {
      "set": {
        "description": "Process all events",
        "field": "my-global-field",
        "value": "foo"
      }
    }
  ]
}

${type}
Apply processing to all events of a given data type. For example, the following request adds a new field my-logs-field for all log events:

PUT _ingest/pipeline/logs@custom
{
  "processors": [
    {
      "set": {
        "description": "Process all log events",
        "field": "my-logs-field",
        "value": "foo"
      }
    }
  ]
}

${type}-${package}.integration
Apply processing to all events of a given type in an integration. For example, the following request creates a logs-nginx.integration@custom pipeline that adds a new field my-nginx-field for all log events in the Nginx integration:

PUT _ingest/pipeline/logs-nginx.integration@custom
{
  "processors": [
    {
      "set": {
        "description": "Process all nginx events",
        "field": "my-nginx-field",
        "value": "foo"
      }
    }
  ]
}

Note that .integration is included in the pipeline pattern to avoid possible collision with existing dataset pipelines.

${type}-${dataset}
Apply processing to a specific dataset. For example, the following request creates a metrics-system.cpu@custom pipeline that adds a new field my-system.cpu-field for all CPU metrics events in the System integration:

PUT _ingest/pipeline/metrics-system.cpu@custom
{
  "processors": [
    {
      "set": {
        "description": "Process all events in the system.cpu dataset",
        "field": "my-system.cpu-field",
        "value": "foo"
      }
    }
  ]
}
Custom pipelines can directly contain processors, or you can use the pipeline processor to call other pipelines that can be shared across multiple data streams or integrations, as shown in the sketch below. These pipelines will persist across all version upgrades.
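For instance, a shared pipeline can hold the processing logic once, and each @custom pipeline can delegate to it with a {ref}/pipeline-processor.html[pipeline processor]. This is a minimal sketch; the shared pipeline name my-shared-enrichment is a hypothetical example:

PUT _ingest/pipeline/logs-nginx.integration@custom
{
  "processors": [
    {
      "pipeline": {
        "name": "my-shared-enrichment",
        "description": "Delegate to a pipeline shared across data streams"
      }
    }
  ]
}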
Warning: If you have a custom pipeline defined whose name matches the naming scheme used for any of the {fleet} custom ingest pipelines described above, this can produce unintended results: the pipeline may be unexpectedly called for other data streams in other integrations. To avoid this problem, avoid those naming schemes when naming your own custom pipelines. Refer to the breaking change in the 8.12.0 Release Notes for more detail and workaround options.
See Tutorial: Transform data with custom ingest pipelines to get started.
These tutorials explain how to apply a custom {ilm-init} policy to an integration’s data stream.
For certain features you’ll need to use a slightly different procedure to manage the index lifecycle:
- APM: For versions 8.15 and later, refer to {observability-guide}/apm-ilm-how-to.html[Index lifecycle management].
- Synthetic monitoring: Refer to {observability-guide}/synthetics-manage-retention.html[Manage data retention].
- Universal Profiling: Refer to {observability-guide}/profiling-index-lifecycle-management.html[Universal Profiling index life cycle management].
How you apply an ILM policy depends on your use case. Choose a scenario for the detailed steps.
- Scenario 1: You want to apply an ILM policy to all logs or metrics data streams across all namespaces.
- Scenario 2: You want to apply an ILM policy to selected data streams in an integration.
- Scenario 3: You want to apply an ILM policy to data streams in a selected namespace in an integration.
Scenario 1: Apply an ILM policy to all data streams generated from Fleet integrations across all namespaces
Note: This tutorial uses a logs@custom and a metrics@custom component template, which are available in versions 8.13 and later. For versions later than 8.4 and earlier than 8.13, you instead need to use the <integration prefix>@custom component template and add the ILM policy to that template. This needs to be done for every newly added integration.
Mappings and settings for data streams can be customized through the creation of *@custom component templates, which are referenced by the index templates created by each integration. The easiest way to configure a custom index lifecycle policy per data stream is to edit these templates.

As an example, this tutorial explains how to apply a custom index lifecycle policy to all of the data streams associated with the System integration. Similar steps can be used for any other integration. Setting a custom index lifecycle policy must be done separately for all logs and for all metrics, as described in the following steps.
- To open Lifecycle Policies, find Stack Management in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field].
- Click Create policy.
- Name your new policy. For this tutorial, you can use my-ilm-policy.
- Customize the policy to your liking, and when you're done, click Save policy. If you prefer to work with the API instead, see the sketch after these steps.
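An equivalent policy can be created in {kib} Dev Tools with the {ref}/ilm-put-lifecycle.html[create lifecycle policy API]. This is a minimal sketch; the rollover and retention values are placeholder assumptions to adapt to your needs:

PUT _ilm/policy/my-ilm-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "30d",
            "max_primary_shard_size": "50gb"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}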
The Index Templates view in {kib} shows you all of the index templates available to automatically apply settings, mappings, and aliases to indices:
- To open Index Management, find Stack Management in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field].
- Select Index Templates.
- Search for system to see all index templates associated with the System integration.
- Select any logs-* index template to view the associated component templates. For example, you can select the logs-system.application index template.
- Select logs@custom in the list to view the component template properties.
- For a newly added integration, the component template won't exist yet. Select Create component template to create it. If the component template already exists, click Manage to update it.
- On the Logistics page, keep all defaults and click Next.
- On the Index settings page, in the Index settings field, specify the ILM policy that you created. For example:

{ "index": { "lifecycle": { "name": "my-ilm-policy" } } }

- Click Next.
- For both the Mappings and Aliases pages, keep all defaults and click Next.
- Finally, on the Review page, review the summary and request. If everything looks good, select Create component template.
To confirm that the index template is using the logs@custom component template with your custom ILM policy:

- Reopen the Index Management page and open the Component Templates tab.
- Search for logs@ and select the logs@custom component template.
- The Summary shows the list of all data streams that use the component template, and the Settings view shows your newly configured ILM policy.
New ILM policies only take effect when new indices are created, so you either must wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB), or force a rollover of each data stream using the {ref}/indices-rollover-index.html[{es} rollover API].
For example:
POST /logs-system.auth/_rollover/
You've now applied a custom index lifecycle policy to all of the logs-* data streams in the System integration. For the metrics data streams, you can repeat steps 2 and 3, using a metrics-* index template and the metrics@custom component template.
Scenario 2: Apply an ILM policy to specific data streams generated from Fleet integrations across all namespaces
Mappings and settings for data streams can be customized through the creation of *@custom component templates, which are referenced by the index templates created by each integration. The easiest way to configure a custom index lifecycle policy per data stream is to edit this template.

This tutorial explains how to apply a custom index lifecycle policy to the logs-system.auth data stream.
- To open Lifecycle Policies, find Stack Management in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field].
- Click Create policy.
- Name your new policy. For this tutorial, you can use my-ilm-policy.
- Customize the policy to your liking, and when you're done, click Save policy.
The Index Templates view in {kib} shows you all of the index templates available to automatically apply settings, mappings, and aliases to indices:
- To open Index Management, find Stack Management in the main menu or use the {kibana-ref}/introduction.html#kibana-navigation-search[global search field].
- Select Index Templates.
- Search for system to see all index templates associated with the System integration.
- Select the index template that matches the data stream for which you want to set up an ILM policy. For this example, you can select the logs-system.auth index template.
- In the Summary, select logs-system.auth@custom from the list to view the component template properties.
- For a newly added integration, the component template won't exist yet. Select Create component template to create it. If the component template already exists, click Manage to update it.
- On the Logistics page, keep all defaults and click Next.
- On the Index settings page, in the Index settings field, specify the ILM policy that you created. For example:

{ "index": { "lifecycle": { "name": "my-ilm-policy" } } }

- Click Next.
- For both the Mappings and Aliases pages, keep all defaults and click Next.
- Finally, on the Review page, review the summary and request. If everything looks good, select Create component template.
To confirm that the index template is using the logs-system.auth@custom component template with your custom ILM policy:

- Reopen the Index Management page and open the Component Templates tab.
- Search for system and select the logs-system.auth@custom component template.
- The Summary shows the list of all data streams that use the component template, and the Settings view shows your newly configured ILM policy.
New ILM policies only take effect when new indices are created, so you either must wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB), or force a rollover of the data stream using the {ref}/indices-rollover-index.html[{es} rollover API]:
POST /logs-system.auth/_rollover/
You've now applied a custom index lifecycle policy to the logs-system.auth data stream in the System integration. Repeat these steps for any other data streams for which you'd like to configure a custom ILM policy.
In this scenario, you have {agent}s collecting system metrics with the System integration in two environments: one with the namespace development, and one with production.

Goal: Customize the {ilm-init} policy for the system.network data stream in the production namespace. Specifically, apply the built-in 90-days-default {ilm-init} policy so that data is deleted after 90 days.
The Data Streams view in {kib} shows you the data streams, index templates, and {ilm-init} policies associated with a given integration.
- Navigate to {stack-manage-app} > Index Management > Data Streams.
- Search for system to see all data streams associated with the System integration.
- Select the metrics-system.network-{namespace} data stream to view its associated index template and {ilm-init} policy. As you can see, the data stream follows the Data stream naming scheme and starts with its type, metrics-.
For your changes to continue to be applied in future versions, you must put all custom index settings into a component template. The component template must follow the data stream naming scheme and end with @custom:

<type>-<dataset>-<namespace>@custom

For example, to create custom index settings for the system.network data stream with a namespace of production, the component template name would be:

metrics-system.network-production@custom
- Navigate to {stack-manage-app} > Index Management > Component Templates.
- Click Create component template.
- Use the template above to set the name, in this case metrics-system.network-production@custom. Click Next.
- Under Index settings, set the {ilm-init} policy name under the lifecycle.name key:

{ "lifecycle": { "name": "90-days-default" } }

- Continue to Review and check the summary and request. If everything looks good, click Create component template. An equivalent API request is sketched below.
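Equivalently, you can create this component template in {kib} Dev Tools with the {ref}/indices-component-template.html[create component template API]. A minimal sketch of the same settings:

PUT _component_template/metrics-system.network-production@custom
{
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "90-days-default"
        }
      }
    }
  }
}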
Now that you’ve created a component template, you need to create an index template to apply the changes to the correct data stream. The easiest way to do this is to duplicate and modify the integration’s existing index template.
Warning: Please note the following:

- When duplicating the index template, do not change or remove any managed properties. This may result in problems when upgrading.
- These steps assume that you want a namespace-specific ILM policy, which requires index template cloning. Cloning the index template of an integration package involves some risk, because any changes made to the original index template as part of package upgrades are not propagated to the cloned version. See [assets-restrictions-cloning-index-template] for details.
- If you want to change the ILM policy, the number of shards, or other settings for the data streams of one or more integrations, but the changes do not need to be specific to a given namespace, it's strongly recommended to use a @custom component template instead, as described in the previous scenarios.
- Navigate to {stack-manage-app} > Index Management > Index Templates.
- Find the index template you want to clone. The index template will have the <type> and <dataset> in its name, but not the <namespace>. In this case, it's metrics-system.network.
- Select Actions > Clone.
- Set the name of the new index template to metrics-system.network-production.
- Change the index pattern to include a namespace, in this case metrics-system.network-production*. This ensures the previously created component template is only applied to the production namespace.
- Set the priority to 250. This ensures that the new index template takes precedence over other index templates that match the index pattern.
- Under Component templates, search for and add the component template created in the previous step. To ensure your namespace-specific settings are applied over other custom settings, the new template should be added below the existing @custom template.
- Create the index template. A rough API equivalent is sketched after these steps.
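If you script this step instead, the cloned template would look roughly like the following sketch. The composed_of list here is an assumption for illustration: copy the real list from the original metrics-system.network index template, and append the namespace-specific @custom template at the end:

PUT _index_template/metrics-system.network-production
{
  "index_patterns": [
    "metrics-system.network-production*"
  ],
  "priority": 250,
  "composed_of": [
    "metrics-system.network@package",
    "metrics-system.network@custom",
    ".fleet_globals-1",
    ".fleet_agent_id_verification-1",
    "metrics-system.network-production@custom"
  ],
  "data_stream": {}
}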
To confirm that the data stream is now using the new index template and {ilm-init} policy, you can either repeat Step 1, or navigate to {dev-tools-app} and run the following:

GET /_data_stream/metrics-system.network-production (1)

(1) The name of the data stream we've been hacking on.

The result should include the following:

{
  "data_streams" : [
    {
      ...
      "template" : "metrics-system.network-production", (1)
      "ilm_policy" : "90-days-default", (2)
      ...
    }
  ]
}

(1) The name of the custom index template created in step three.
(2) The name of the {ilm-init} policy applied to the new component template in step two.
New {ilm-init} policies only take effect when new indices are created, so you either must wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB), or force a rollover using the {ref}/indices-rollover-index.html[{es} rollover API]:
POST /metrics-system.network-production/_rollover/
If you cloned an index template to customize the data retention policy on an {es} version prior to 8.13, you must update the cloned index template on {es} version 8.13 or later to add the ecs@mappings component template.
To update the cloned index template:

- Navigate to {stack-manage-app} > Index Management > Index Templates.
- Find the index template you cloned. The index template will have the <type> and <dataset> in its name.
- Select Manage > Edit.
- Select (2) Component templates.
- In the Search component templates field, search for ecs@mappings.
- Click on the + (plus) icon to add the ecs@mappings component template.
- Move the ecs@mappings component template right below the @package component template.
- Save the index template.
Roll over the data stream to apply the changes.
This tutorial explains how to add a custom ingest pipeline to an Elastic integration. Custom pipelines can be used for custom data processing, like adding fields, obfuscating sensitive information, and more.
Scenario: You have {agent}s collecting system metrics with the System integration.
Goal: Add a custom ingest pipeline that adds a new field to each {es} document before it is indexed.
Create a custom ingest pipeline that will be called by the default integration pipeline. In this tutorial, we’ll create a pipeline that adds a new field to our documents.
- In {kib}, navigate to Stack Management → Ingest Pipelines → Create pipeline → New pipeline.
- Name your pipeline. We'll call this one add_field.
- Select Add a processor. Fill out the following information:
  - Processor: "Set"
  - Field: test
  - Value: true

  The {ref}/set-processor.html[Set processor] sets a document field and associates it with the specified value.
- Click Add.
- Click Create pipeline. The equivalent API request is shown below.
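As a sketch of what this UI workflow produces, the same pipeline can be created in {kib} Dev Tools with a single request:

PUT _ingest/pipeline/add_field
{
  "processors": [
    {
      "set": {
        "field": "test",
        "value": true
      }
    }
  ]
}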
Add a custom pipeline to an integration by calling it from the default ingest pipeline. The custom pipeline will run after the default pipeline but before the final pipeline.
Add a custom pipeline to an integration from the Edit integration workflow. The integration must already be configured and installed before a custom pipeline can be added. To enter this workflow, do the following:
- Navigate to {fleet}.
- Select the relevant {agent} policy.
- Search for the integration you want to edit.
- Select Actions → Edit integration.
Most integrations write to multiple data streams. You’ll need to add the custom pipeline to each data stream individually.
- Find the first data stream you wish to edit and select Change defaults. For this tutorial, find the data stream configuration titled Collect metrics from System instances.
- Scroll to System CPU metrics and under Advanced options select Add custom pipeline.

This will take you to the Create pipeline workflow in Stack Management. Add the pipeline you created in step one:
- Select Add a processor. Fill out the following information:
  - Processor: "Pipeline"
  - Pipeline name: "add_field"
- Click Create pipeline to return to the Edit integration page. The resulting @custom pipeline is sketched below.
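Under the hood, this workflow creates a @custom pipeline for the data stream that delegates to your pipeline. A minimal sketch of the equivalent Dev Tools request, assuming the names used in this tutorial:

PUT _ingest/pipeline/metrics-system.cpu@custom
{
  "processors": [
    {
      "pipeline": {
        "name": "add_field"
      }
    }
  ]
}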
For pipeline changes to take effect immediately, you must roll over the data stream. If you do not, the changes will not take effect until the next scheduled roll over. Select Apply now and rollover.

After the data stream rolls over, note the name of the custom ingest pipeline. In this tutorial, it's metrics-system.cpu@custom. The name follows the pattern <type>-<dataset>@custom:
- type: metrics
- dataset: system.cpu
- Custom ingest pipeline designation: @custom
Add the custom ingest pipeline to any other data streams you wish to update.
Allow time for new data to be ingested before testing your pipeline. In a new window, open {kib} and navigate to {kib} Dev tools.
Use an {ref}/query-dsl-exists-query.html[exists query] to ensure that the new field, "test", is being applied to documents:
GET metrics-system.cpu-default/_search (1)
{
"query": {
"exists": {
"field": "test" (2)
}
}
}
(1) The data stream to search. In this tutorial, we've edited the metrics-system.cpu type and dataset. default is the default namespace. Combining all three of these gives us a data stream name of metrics-system.cpu-default.
(2) The name of the field set in step one.
If your custom pipeline is working correctly, this query will return at least one document.
Now that a new field is being set in your {es} documents, you’ll want to assign a new mapping for that field.
Use the @custom component template to apply custom mappings to an integration data stream.
In the Edit integration workflow, do the following:

- Under Advanced options select the pencil icon to edit the @custom component template.
- Define the new field for your indexed documents. Select Add field and add the following information:
  - Field name: test
  - Field type: Boolean
- Click Add field.
- Click Review to fast-forward to the review step and click Save component template to return to the Edit integration workflow.
- For changes to take effect immediately, select Apply now and rollover. The mapping this workflow adds to the @custom template is sketched below.
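For reference, the mapping that this workflow writes into the @custom component template looks roughly like the following sketch. Note that a PUT request like this replaces the template body, so merge it with any existing customizations rather than overwriting them:

PUT _component_template/metrics-system.cpu@custom
{
  "template": {
    "mappings": {
      "properties": {
        "test": {
          "type": "boolean"
        }
      }
    }
  }
}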
Allow time for new data to be ingested before testing your mappings. In a new window, open {kib} and navigate to {kib} Dev tools.
Use the {ref}/indices-get-field-mapping.html[Get field mapping API] to ensure that the custom mapping has been applied.
GET metrics-system.cpu-default/_mapping/field/test (1)
(1) The data stream to search. In this tutorial, we've edited the metrics-system.cpu type and dataset. default is the default namespace. Combining all three of these gives us a data stream name of metrics-system.cpu-default.

The result should include "type": "boolean" for the specified field:
".ds-metrics-system.cpu-default-2022.08.10-000002": {
"mappings": {
"test": {
"full_name": "test",
"mapping": {
"test": {
"type": "boolean"
}
}
}
}
}
The previous steps demonstrated how to create a custom ingest pipeline that adds a new field to each {es} document generated for the System integration CPU metrics (system.cpu) dataset.
You can create an ingest pipeline to process data at various levels of customization. An ingest pipeline processor can be applied:
- Globally to all events
- To all events of a certain type (for example, logs or metrics)
- To all events of a certain type in an integration
- To all events in a specific dataset
Let's create a new custom ingest pipeline logs@custom that processes all log events.

- Open {kib} and navigate to {kib} Dev tools.
- Run a {ref}/put-pipeline-api.html[pipeline API] request to add a new field my-logs-field:

PUT _ingest/pipeline/logs@custom
{
  "processors": [
    {
      "set": {
        "description": "Custom field for all log events",
        "field": "my-logs-field",
        "value": "true"
      }
    }
  ]
}

- Allow some time for new data to be ingested, then use a new {ref}/query-dsl-exists-query.html[exists query] to confirm that the new field "my-logs-field" is being applied to log event documents. For this example, we'll check the System integration system.syslog dataset:

GET /logs-system.syslog-default/_search?pretty
{
  "query": {
    "exists": {
      "field": "my-logs-field"
    }
  }
}
With the new pipeline applied, this query should return at least one document.
You can modify your pipeline API request as needed to apply custom processing at various levels. Refer to Ingest pipelines to learn more.
{fleet} provides support for several advanced features around its data streams, including time series data streams (TSDS) and synthetic _source. These features can be enabled and disabled for {fleet}-managed data streams by using the index template API and a few key settings.

Note that in versions 8.17.0 and later, synthetic _source requires an Enterprise license.
Note: If you are already making use of @custom component templates for ingest or retention customization (as shown, for example, in Tutorial: Customize data retention policies), exercise care to ensure you don't overwrite your customizations when making these requests.
We recommend using {kib} Dev Tools to run the following requests. Replace <NAME> with the name of a given integration data stream. For example, specifying metrics-nginx.stubstatus results in making a PUT request to _component_template/metrics-nginx.stubstatus@custom. Use the index management interface to explore what integration data streams are available to you.
Once you’ve executed a given request below, you also need to execute a data stream rollover to ensure any incoming data is ingested with your new settings immediately. For example:
POST metrics-nginx.stubstatus-default/_rollover
Refer to the following steps to enable or disable advanced data stream features:
Note: TSDS uses synthetic _source, so if you want to trial both features you need to enable only TSDS.
Due to restrictions in the {es} API, TSDS must be enabled at the index template level. So, you’ll need to make some sequential requests to enable or disable TSDS.
- Send a GET request to retrieve the index template:

GET _index_template/<NAME>

- Use the JSON payload returned from the GET request to populate a PUT request, for example:

PUT _index_template/<NAME>
{
  # You can copy & paste this directly from the GET request above
  "index_patterns": [
    "<index pattern from GET request>"
  ],
  # Make sure this is added
  "template": {
    "settings": {
      "index": {
        "mode": "time_series"
      }
    }
  },
  # You can copy & paste this directly from the GET request above
  "composed_of": [
    "<NAME>@package",
    "<NAME>@custom",
    ".fleet_globals-1",
    ".fleet_agent_id_verification-1"
  ],
  # You can copy & paste this directly from the GET request above
  "priority": 200,
  # Make sure this is added
  "data_stream": {
    "allow_custom_routing": false
  }
}
To disable TSDS, follow the same procedure as to enable TSDS, but specify null for index.mode instead of time_series. Follow the steps below, or copy the NGINX example.

- Send a GET request to retrieve the index template:

GET _index_template/<NAME>

- Use the JSON payload returned from the GET request to populate a PUT request, for example:

PUT _index_template/<NAME>
{
  # You can copy/paste this directly from the GET request above
  "index_patterns": [
    "<index pattern from GET request>"
  ],
  # Make sure this is added
  "template": {
    "settings": {
      "index": {
        "mode": null
      }
    }
  },
  # You can copy/paste this directly from the GET request above
  "composed_of": [
    "<NAME>@package",
    "<NAME>@custom",
    ".fleet_globals-1",
    ".fleet_agent_id_verification-1"
  ],
  # You can copy/paste this directly from the GET request above
  "priority": 200,
  # Make sure this is added
  "data_stream": {
    "allow_custom_routing": false
  }
}

For example, the following payload disables TSDS on nginx.stubstatus:

{
  "index_patterns": [
    "metrics-nginx.stubstatus-*"
  ],
  "template": {
    "settings": {
      "index": {
        "mode": null
      }
    }
  },
  "composed_of": [
    "metrics-nginx.stubstatus@package",
    "metrics-nginx.stubstatus@custom",
    ".fleet_globals-1",
    ".fleet_agent_id_verification-1"
  ],
  "priority": 200,
  "data_stream": {
    "allow_custom_routing": false
  }
}
To enable synthetic _source, update the data stream's @custom component template:

PUT _component_template/<NAME>@custom
{
  "template": {
    "settings": {
      "index": {
        "mapping": {
          "source": {
            "mode": "synthetic"
          }
        }
      }
    }
  }
}
To disable synthetic _source and revert to stored _source, set the mode back to stored:

PUT _component_template/<NAME>@custom
{
  "template": {
    "settings": {
      "index": {
        "mapping": {
          "source": {
            "mode": "stored"
          }
        }
      }
    }
  }
}