diff --git a/redirects.yml b/redirects.yml index 697d4ef67..d2deca412 100644 --- a/redirects.yml +++ b/redirects.yml @@ -3,4 +3,5 @@ redirects: 'deploy-manage/security/manually-configure-security-in-self-managed-cluster.md': '!deploy-manage/security/self-setup.md' 'deploy-manage/security/security-certificates-keys.md': '!deploy-manage/security/self-auto-setup.md' 'deploy-manage/security/ece-traffic-filtering-through-the-api.md': 'deploy-manage/security/ec-traffic-filtering-through-the-api.md' - 'deploy-manage/security/install-stack-demo-secure.md': '!deploy-manage/security/self-setup.md' \ No newline at end of file + 'deploy-manage/security/install-stack-demo-secure.md': '!deploy-manage/security/self-setup.md' + 'reference/observability/fields-and-object-schemas/logs-app-fields.md': '!reference/observability/fields-and-object-schemas.md' \ No newline at end of file diff --git a/reference/fleet/monitor-elastic-agent.md b/reference/fleet/monitor-elastic-agent.md index 4d853fa0b..7c151defe 100644 --- a/reference/fleet/monitor-elastic-agent.md +++ b/reference/fleet/monitor-elastic-agent.md @@ -135,7 +135,6 @@ On the **Logs** tab you can filter, search, and explore the agent logs: * Change the log level to filter the view by log levels. Want to see debugging logs? Refer to [Change the logging level](#change-logging-level). * Change the time range to view historical logs. -* Click **Open in Logs** to tail agent log files in real time. For more information about logging, refer to [Tail log files](/solutions/observability/logs/logs-stream.md). ## Change the logging level [change-logging-level] diff --git a/reference/observability/fields-and-object-schemas.md b/reference/observability/fields-and-object-schemas.md index 8213c3a80..097995269 100644 --- a/reference/observability/fields-and-object-schemas.md +++ b/reference/observability/fields-and-object-schemas.md @@ -5,16 +5,13 @@ mapped_pages: # Fields and object schemas [fields-reference] -This section lists Elastic Common Schema (ECS) fields the Logs and Infrastructure apps use to display data. +This section lists Elastic Common Schema (ECS) fields the Infrastructure app uses to display data. ECS is an open source specification that defines a standard set of fields to use when storing event data in {{es}}, such as logs and metrics. -Beat modules (for example, [{{filebeat}} modules](beats://reference/filebeat/filebeat-modules.md)) are ECS-compliant, so manual field mapping is not required, and all data is populated automatically in the Logs and Infrastructure apps. If you cannot use {{beats}}, map your data to [ECS fields](ecs://reference/ecs-converting.md)). You can also try using the experimental [ECS Mapper](https://github.com/elastic/ecs-mapper) tool. +Beat modules (for example, [{{filebeat}} modules](beats://reference/filebeat/filebeat-modules.md)) are ECS-compliant, so manual field mapping is not required, and all data is populated automatically in the Infrastructure app. If you cannot use {{beats}}, map your data to [ECS fields](ecs://reference/ecs-converting.md). You can also try using the experimental [ECS Mapper](https://github.com/elastic/ecs-mapper) tool. -This reference covers: - -* [Logs Explorer fields](/reference/observability/fields-and-object-schemas/logs-app-fields.md) -* [{{infrastructure-app}} fields](/reference/observability/fields-and-object-schemas/metrics-app-fields.md) +This reference covers [{{infrastructure-app}} fields](/reference/observability/fields-and-object-schemas/metrics-app-fields.md).
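+
+For example, if you cannot use {{beats}} and need to map custom fields to ECS as noted above, a minimal ingest pipeline sketch like the following renames a custom field to its ECS equivalent before indexing. The `src_ip` field is a hypothetical custom field used only for illustration:
+
+```console
+PUT _ingest/pipeline/map-custom-fields-to-ecs
+{
+  "description": "Example: rename a custom field to its ECS equivalent",
+  "processors": [
+    {
+      "rename": {
+        "field": "src_ip",
+        "target_field": "source.ip",
+        "ignore_missing": true
+      }
+    }
+  ]
+}
+```
+
+You can then set this pipeline as the `default_pipeline` index setting on your index or data stream so incoming documents are mapped on ingest.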
diff --git a/reference/observability/fields-and-object-schemas/logs-app-fields.md b/reference/observability/fields-and-object-schemas/logs-app-fields.md deleted file mode 100644 index 8ad247b1a..000000000 --- a/reference/observability/fields-and-object-schemas/logs-app-fields.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/observability/current/logs-app-fields.html ---- - -# Logs Explorer fields [logs-app-fields] - -This section lists the required fields the **Logs Explorer** uses to display data. Please note that some of the fields listed are not [ECS fields](ecs://reference/index.md#_what_is_ecs). - -`@timestamp` -: Date/time when the event originated. - - This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events. - - type: date - - required: True - - ECS field: True - - example: `May 27, 2020 @ 15:22:27.982` - - -`_doc` -: This field is used to break ties between two entries with the same timestamp. - - required: True - - ECS field: False - - -`container.id` -: Unique container id. - - type: keyword - - required: True - - ECS field: True - - example: `data` - - -`event.dataset` -: Name of the dataset. - - If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. - - It’s recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name. - - type: keyword - - required: True, if you want to use the {{ml-features}}. - - ECS field: True - - example: `apache.access` - - -`host.hostname` -: Name of the host. - - It normally contains what the `hostname` command returns on the host machine. - - type: keyword - - required: True, if you want to enable and use the **View in Context** feature. - - ECS field: True - - example: `Elastic.local` - - -`host.name` -: Name of the host. - - It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use. - - type: keyword - - required: True - - ECS field: True - - example: `MacBook-Elastic.local` - - -`kubernetes.pod.uid` -: Kubernetes Pod UID. - - type: keyword - - required: True - - ECS field: False - - example: `8454328b-673d-11ea-7d80-21010a840123` - - -`log.file.path` -: Full path to the log file this event came from, including the file name. It should include the drive letter, when appropriate. - - If the event wasn’t read from a log file, do not populate this field. - - type: keyword - - required: True, if you want to use the **View in Context** feature. - - ECS field: True - - example: `/var/log/demo.log` - - -`message` -: For log events the message field contains the log message, optimized for viewing in a log viewer. - - For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event. - - If multiple messages exist, they can be combined into one message. 
- - type: text - - required: True - - ECS field: True - - example: `Hello World` diff --git a/reference/observability/index.md b/reference/observability/index.md index 692a02763..f78173e66 100644 --- a/reference/observability/index.md +++ b/reference/observability/index.md @@ -13,7 +13,7 @@ mapped_pages: This section contains reference information for Elastic Observability features, including: * Fields reference - * Logs Explorer fields + * Logs Discover fields * Infrastructure app fields * Elastic Entity Model diff --git a/reference/observability/toc.yml b/reference/observability/toc.yml index 167c31621..f93a5019c 100644 --- a/reference/observability/toc.yml +++ b/reference/observability/toc.yml @@ -2,7 +2,6 @@ toc: - file: index.md - file: fields-and-object-schemas.md children: - - file: fields-and-object-schemas/logs-app-fields.md - file: fields-and-object-schemas/metrics-app-fields.md - file: elastic-entity-model.md - file: serverless/infrastructure-app-fields.md \ No newline at end of file diff --git a/solutions/images/logs-discover.png b/solutions/images/logs-discover.png new file mode 100644 index 000000000..bc766e9d6 Binary files /dev/null and b/solutions/images/logs-discover.png differ diff --git a/solutions/observability.md b/solutions/observability.md index a02d5a7d5..6fd686fcb 100644 --- a/solutions/observability.md +++ b/solutions/observability.md @@ -25,7 +25,7 @@ applies_to: ## How to [_how_to] -* [**Explore log data**](observability/logs/logs-explorer.md): Use Discover to explore your log data. +* [**Explore log data**](observability/logs/discover-logs.md): Use Discover to explore your log data. * [**Trigger alerts and triage problems**](../solutions/observability/incident-management/create-manage-rules.md): Create rules to detect complex conditions and trigger alerts. * [**Track and deliver on your SLOs**](observability/incident-management/service-level-objectives-slos.md): Measure key metrics important to the business. * [**Detect anomalies and spikes**](../explore-analyze/machine-learning/anomaly-detection.md): Find unusual behavior in time series data. diff --git a/solutions/observability/apps/collect-metrics.md b/solutions/observability/apps/collect-metrics.md index 679114b50..4063bf920 100644 --- a/solutions/observability/apps/collect-metrics.md +++ b/solutions/observability/apps/collect-metrics.md @@ -38,7 +38,7 @@ See the [Open Telemetry Metrics API](https://github.com/open-telemetry/opentelem Use **Discover** to validate that metrics are successfully reported to {{kib}}. 1. Open your Observability instance. -2. Find **Discover** in the main menu or use the [global search field](../../../get-started/the-stack.md#kibana-navigation-search), and select the **Logs Explorer** tab. +2. Find **Discover** in the main menu or use the [global search field](../../../get-started/the-stack.md#kibana-navigation-search). 3. Click **All logs** → **Data Views** then select **APM**. 4. Filter the data to only show documents with metrics: `processor.name :"metric"` 5. Narrow your search with a known OpenTelemetry field. For example, if you have an `order_value` field, add `order_value: *` to your search to return only OpenTelemetry metrics documents. 
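+
+For example, steps 4 and 5 can be combined into a single KQL query in the search bar (reusing the hypothetical `order_value` field from step 5):
+
+```text
+processor.name : "metric" and order_value : *
+```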
diff --git a/solutions/observability/cloud/monitor-cloudtrail-logs.md b/solutions/observability/cloud/monitor-cloudtrail-logs.md index 71463e925..a05745aea 100644 --- a/solutions/observability/cloud/monitor-cloudtrail-logs.md +++ b/solutions/observability/cloud/monitor-cloudtrail-logs.md @@ -209,11 +209,6 @@ Navigate to {{kib}} and choose among the following monitoring options: :alt: Visualize CloudTrail logs with Disocver ::: -* **Visualize your logs with Logs explorer** - - :::{image} /solutions/images/observability-firehose-cloudtrail-logsexplorer.png - :alt: Visualize CloudTrail logs with Logs explorer - ::: * **Visualize your logs with the CloudTrail Dashboard** diff --git a/solutions/observability/cloud/monitor-microsoft-azure-openai.md b/solutions/observability/cloud/monitor-microsoft-azure-openai.md index 386da8074..dbe4ce9e7 100644 --- a/solutions/observability/cloud/monitor-microsoft-azure-openai.md +++ b/solutions/observability/cloud/monitor-microsoft-azure-openai.md @@ -246,7 +246,6 @@ Now that your log and metric data is streaming to {{es}}, you can view them in { * [View logs and metrics with the overview dashboard](#azure-openai-overview-dashboard): Use the built-in overview dashboard for insight into your Azure OpenAI service like total requests and token usage. * [View logs and metrics with Discover](#azure-openai-discover): Use Discover to find and filter your log and metric data based on specific fields. -* [View logs with Logs Explorer](#azure-openai-logs-explorer): Use Logs Explorer for an in-depth view into your logs. ### View logs and metrics with the overview dashboard [azure-openai-overview-dashboard] @@ -279,28 +278,6 @@ From here, filter your data and dive deeper into individual logs to find informa For more on using Discover and creating data views, refer to the [Discover](../../../explore-analyze/discover.md) documentation. - -### View logs with Logs Explorer [azure-openai-logs-explorer] - -To view Azure OpenAI logs, open {{kib}} and go to **Logs Explorer** (find `Logs Explorer` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md)). With **Logs Explorer**, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. - -:::{image} /solutions/images/observability-log-explorer.png -:alt: screenshot of the logs explorer main page -:screenshot: -::: - -From **Logs Explorer**, you can select the Azure OpenAI integration from the data selector to view your Kubernetes data. - -![screenshot of the logs explorer data selector](/solutions/images/observability-azure-open-ai-data-selector.png "") - -From here, filter your log data and dive deeper into individual logs to find information and troubleshoot issues. For a list of Azure OpenAI fields you may want to filter by, refer to the [Azure OpenAI integration](https://docs.elastic.co/en/integrations/azure_openai#settings) documentation. - -For more on Logs Explorer, refer to: - -* [Logs Explorer](../logs/logs-explorer.md) for an overview of Logs Explorer. -* [Filter logs in Logs Explorer](../logs/filter-aggregate-logs.md#logs-filter-logs-explorer) for more on filtering logs in Logs Explorer. - - ## Step 6: Monitor Microsoft Azure OpenAI APM with OpenTelemetry [azure-openai-apm] The Azure OpenAI API provides useful data to help monitor and understand your code. Using OpenTelemetry, you can ingest this data into Elastic {{observability}}. 
From there, you can view and analyze your data to monitor the cost and performance of your applications. diff --git a/solutions/observability/cloud/monitor-microsoft-azure-with-azure-native-isv-service.md b/solutions/observability/cloud/monitor-microsoft-azure-with-azure-native-isv-service.md index 9c40cd942..9b5eef3c4 100644 --- a/solutions/observability/cloud/monitor-microsoft-azure-with-azure-native-isv-service.md +++ b/solutions/observability/cloud/monitor-microsoft-azure-with-azure-native-isv-service.md @@ -98,7 +98,7 @@ To ingest Azure subscription and resource logs into Elastic, you use the Azure N :::: 3. In {{kib}}, under **{{observability}}**, find **Overview** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Refresh the page until you see some data. This may take a few minutes. -4. To analyze your subscription and resource logs, click **Show Logs Explorer**. +4. To analyze your subscription and resource logs, click **Show Logs**. ## Step 3: Ingest logs and metrics from your virtual machines (VMs) [azure-ingest-VM-logs-metrics] @@ -112,7 +112,7 @@ To ingest Azure subscription and resource logs into Elastic, you use the Azure N ::: 3. Wait until the extension is installed and sending data (if the list does not update, click **Refresh** ). -4. Back in {{kib}}, view the **Logs Explorer** again. Notice that you can filter the view to show logs for a specific instance, for example `cloud.instance.name : "ingest-tutorial-linux"`. +4. Back in {{kib}}, return to **Discover**. Notice that you can filter the view to show logs for a specific instance, for example `cloud.instance.name : "ingest-tutorial-linux"`. 5. To view VM metrics, go to **Infrastructure inventory** and then select a VM. (To open **Infrastructure inventory**, find **Infrastructure** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).) To explore the data further, click **Open as page**. diff --git a/solutions/observability/cloud/monitor-microsoft-azure-with-beats.md b/solutions/observability/cloud/monitor-microsoft-azure-with-beats.md index 4e8d40c27..8afc45fad 100644 --- a/solutions/observability/cloud/monitor-microsoft-azure-with-beats.md +++ b/solutions/observability/cloud/monitor-microsoft-azure-with-beats.md @@ -91,7 +91,7 @@ To ingest Azure subscription and resource logs into Elastic using the Microsoft :::: 3. In {{kib}}, find the {{observability}} **Overview** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Refresh the page until you see some data. This may take a few minutes. -4. To analyze your subscription and resource logs, click **Show Logs Explorer**. +4. To analyze your subscription and resource logs, click **Show Logs**. ## Step 3: Ingest logs and metrics from your virtual machines. [azure-step-three] @@ -104,7 +104,7 @@ To ingest Azure subscription and resource logs into Elastic using the Microsoft ![Select VMs to collect logs and metrics from](/solutions/images/observability-monitor-azure-elastic-vms.png "") -3. Wait until it is installed and sending data (if the list does not update, click **Refresh** ). To see the logs from the VM, open **Logs Explorer** (find `Logs Explorer` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md)). +3. Wait until it is installed and sending data (if the list does not update, click **Refresh** ).
To see the logs from the VM, open **Discover** (find `Discover` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md)). To view VM metrics, go to **Infrastructure inventory** and then select a VM. (To open **Infrastructure inventory**, find **Infrastructure** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).) diff --git a/solutions/observability/data-set-quality-monitoring.md b/solutions/observability/data-set-quality-monitoring.md index 89dde8bc7..9f776c7be 100644 --- a/solutions/observability/data-set-quality-monitoring.md +++ b/solutions/observability/data-set-quality-monitoring.md @@ -2,10 +2,10 @@ mapped_pages: - https://www.elastic.co/guide/en/observability/current/monitor-datasets.html - https://www.elastic.co/guide/en/serverless/current/observability-monitor-datasets.html +navigation_title: "Data set quality" applies_to: stack: beta serverless: beta -navigation_title: "Data set quality" --- # Data set quality monitoring [observability-monitor-datasets] @@ -35,7 +35,7 @@ Opening the details of a specific data set shows the degraded documents history, ## Investigate issues [observability-monitor-datasets-investigate-issues] -The Data Set Quality page has a couple of different ways to help you find ignored fields and investigate issues. From the data set table, you can open the data set’s details page, and view commonly ignored fields and information about those fields. Open a logs data set in Logs Explorer or other data set types in Discover to find ignored fields in individual documents. +The Data Set Quality page has a couple of different ways to help you find ignored fields and investigate issues. From the data set table, you can open the data set’s details page, and view commonly ignored fields and information about those fields. Open a data set in Discover to find ignored fields in individual documents. ### Find ignored fields in data sets [observability-monitor-datasets-find-ignored-fields-in-data-sets] @@ -50,12 +50,12 @@ The **Quality issues** section shows fields that have been ignored, the number o ### Find ignored fields in individual logs [observability-monitor-datasets-find-ignored-fields-in-individual-logs] -To use Logs Explorer or Discover to find ignored fields in individual logs: +To use Discover to find ignored fields in individual logs: 1. Find data sets with degraded documents using the **Degraded Docs** column of the data sets table. -2. Click the percentage in the **Degraded Docs** column to open the data set in Logs Explorer or Discover. +2. Click the percentage in the **Degraded Docs** column to open the data set in Discover. -The **Documents** table in Logs Explorer or Discover is automatically filtered to show documents that were not parsed correctly. Under the **actions** column, you’ll find the degraded document icon (![degraded document icon](/solutions/images/serverless-indexClose.svg "")). +The **Documents** table in Discover is automatically filtered to show documents that were not parsed correctly. Under the **actions** column, you’ll find the degraded document icon (![degraded document icon](../images/serverless-indexClose.svg "")).
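+
+If you prefer to query for degraded documents directly, a minimal Query DSL sketch that matches documents with ignored fields (using the built-in `_ignored` metadata field) might look like the following; adjust the `logs-*` target to the data set you are investigating:
+
+```console
+GET logs-*/_search
+{
+  "query": {
+    "exists": {
+      "field": "_ignored"
+    }
+  }
+}
+```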
Now that you know which documents contain ignored fields, examine them more closely to find the origin of the issue: diff --git a/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md b/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md index ed6374657..7e5938401 100644 --- a/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md +++ b/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md @@ -134,7 +134,7 @@ After installation is complete and all relevant data is flowing into Elastic, th | Integration asset | Description | | --- | --- | | **Apache** | Prebuilt dashboard for monitoring Apache HTTP server health using error and access log data. | -| **Custom .log files** | Logs Explorer for analyzing custom logs. | +| **Custom .log files** | Discover for analyzing custom logs. | | **Docker** | Prebuilt dashboard for monitoring the status and health of Docker containers. | | **MySQL** | Prebuilt dashboard for monitoring MySQl server health using error and access log data. | | **Nginx** | Prebuilt dashboard for monitoring Nginx server health using error and access log data. | @@ -160,7 +160,7 @@ For host monitoring, the following capabilities and features are recommended: * [Detect anomalies](../../../solutions/observability/infra-and-hosts/detect-metric-anomalies.md) for memory usage and network traffic on hosts. * [Create alerts](../../../solutions/observability/incident-management/alerting.md) that notify you when an anomaly is detected or a metric exceeds a given value. -* In the [Logs Explorer](../../../solutions/observability/logs/logs-explorer.md), search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also: +* In [Discover](../../../solutions/observability/logs/discover-logs.md), search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also: * [Monitor log data set quality](../../../solutions/observability/data-set-quality-monitoring.md) to find degraded documents. * [Run a pattern analysis](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-pattern-analysis) to find patterns in unstructured log messages. diff --git a/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md b/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md index c60c2a023..5c968affb 100644 --- a/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md +++ b/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md @@ -118,7 +118,7 @@ Logs are collected from setup onward, so you won’t see logs that occurred befo ::::: -Under **Visualize your data**, you’ll see links to **Logs Explorer** to view your logs and **Hosts** to view your host metrics. +Under **Visualize your data**, you’ll see links to **Discover** to view your logs and **Hosts** to view your host metrics. ## Gain deeper insight into your host data [_get_value_out_of_your_data] @@ -130,7 +130,7 @@ After using the Hosts page and Discover to confirm you’ve ingested all the hos * [Detect anomalies](../../../solutions/observability/infra-and-hosts/detect-metric-anomalies.md) for memory usage and network traffic on hosts. 
* [Create alerts](../../../solutions/observability/incident-management/create-manage-rules.md) that notify you when an anomaly is detected or a metric exceeds a given value. -* In the [Logs Explorer](../../../solutions/observability/logs/logs-explorer.md), search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also: +* In [Discover](../../../solutions/observability/logs/discover-logs.md), search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also: * [Monitor log data set quality](../../../solutions/observability/data-set-quality-monitoring.md) to find degraded documents. * [Run a pattern analysis](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-pattern-analysis) to find patterns in unstructured log messages. diff --git a/solutions/observability/get-started/what-is-elastic-observability.md b/solutions/observability/get-started/what-is-elastic-observability.md index b3eb74ae9..d3ca45dd5 100644 --- a/solutions/observability/get-started/what-is-elastic-observability.md +++ b/solutions/observability/get-started/what-is-elastic-observability.md @@ -19,11 +19,11 @@ mapped_pages: Analyze log data from your hosts, services, Kubernetes, Apache, and many more. -In **Logs Explorer** (powered by Discover), you can quickly search and filter your log data, get information about the structure of the fields, and display your findings in a visualization. +In **Discover**, you can quickly search and filter your log data, get information about the structure of the fields, and display your findings in a visualization. -:::{image} /solutions/images/serverless-log-explorer-overview.png -:alt: Logs Explorer showing log events -:screenshot: +:::{image} ../../images/logs-discover.png +:alt: Discover showing log events +:class: screenshot ::: [Learn more about log monitoring →](../../../solutions/observability/logs.md) diff --git a/solutions/observability/incident-management/create-log-threshold-rule.md b/solutions/observability/incident-management/create-log-threshold-rule.md index 29dae6932..859266926 100644 --- a/solutions/observability/incident-management/create-log-threshold-rule.md +++ b/solutions/observability/incident-management/create-log-threshold-rule.md @@ -332,7 +332,7 @@ When a rule check is performed, a query is built based on the configuration of t ## Settings [settings] -With log threshold rules, it’s not possible to set an explicit index pattern as part of the configuration. The index pattern is instead inferred from **Log indices** on the [Settings](../logs/configure-data-sources.md) page of the {{logs-app}}. +With log threshold rules, it’s not possible to set an explicit index pattern as part of the configuration. The index pattern is instead inferred from **Log sources** at **Stack Management** → **Advanced settings** under **Observability**. With each execution of the rule check, the **Log indices** setting is checked, but it is not stored when the rule is created. 
diff --git a/solutions/observability/infra-and-hosts/configure-settings.md b/solutions/observability/infra-and-hosts/configure-settings.md index 80d0ac25e..55fe61038 100644 --- a/solutions/observability/infra-and-hosts/configure-settings.md +++ b/solutions/observability/infra-and-hosts/configure-settings.md @@ -25,7 +25,7 @@ From the main menu, go to **Infrastructure** → **Infrastructure inventory** or Click **Apply** to save your changes. ::::{note} -The patterns used to match log sources are configured in the Logs app. The default setting is `logs-*,filebeat-*,kibana_sample_data_logs*`. To change the default, refer to [Configure data sources](../../../solutions/observability/logs/configure-data-sources.md). +The patterns used to match log sources are configured in {{kib}} advanced settings. The default setting is `logs-*-*,logs-*,filebeat-*`. To change the default, go to **Log sources** at **Stack Management** → **Advanced settings** under **Observability**. :::: diff --git a/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md b/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md index 28cae541c..d7e993ec3 100644 --- a/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md +++ b/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md @@ -519,23 +519,12 @@ For more on using the **Metrics Explorer** page, refer to [Explore infrastructur ### View Kubernetes logs [monitor-k8s-explore-logs] -Find `Logs Explorer` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). +Find `Discover` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -With **Logs Explorer**, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. +From the **Data view** menu, select `All logs`. From here, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. Then, you can filter your log data and dive deeper into individual logs to find and troubleshoot issues. For more information, refer to: -:::{image} /solutions/images/observability-log-explorer.png -:alt: screenshot of the logs explorer main page -:screenshot: -::: - -From **Logs Explorer**, you can select the Kubernetes integration from the data selector to view your Kubernetes data. - -![screenshot of the logs explorer main page](/solutions/images/observability-logs-explorer-applications.png "") - -From here, you can filter your log data and dive deeper into individual logs to find and troubleshoot issues. For more information, refer to: - -* [Logs Explorer](../logs/logs-explorer.md) for an over view of Logs Explorer. -* [Filter logs in Logs Explorer](../logs/filter-aggregate-logs.md#logs-filter-logs-explorer) for more on filtering logs in Logs Explorer. +* [Explore logs in Discover](../logs/discover-logs.md) for an overview of viewing your logs in Discover. +* [Filter logs in Discover](../logs/filter-aggregate-logs.md#logs-filter-discover) for more on filtering logs in Discover. 
## Part 4: Monitor application performance [monitor-kubernetes-application-performance] diff --git a/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md b/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md index 738b047c6..dcd669d98 100644 --- a/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md +++ b/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md @@ -13,7 +13,7 @@ applies_to: :::: -Use the [nginx Elastic integration](https://docs.elastic.co/en/integrations/nginx) and the {{agent}} to collect valuable metrics and logs from your nginx instances. Then, use built-in dashboards and tools like Logs Explorer in {{kib}} to visualize and monitor your nginx data from one place. This data provides valuable insight into your nginx instances—for example: +Use the [nginx Elastic integration](https://docs.elastic.co/en/integrations/nginx) and the {{agent}} to collect valuable metrics and logs from your nginx instances. Then, use built-in dashboards and tools like Discover to visualize and monitor your nginx data from one place. This data provides valuable insight into your nginx instances—for example: * A spike in error logs for a certain resource may mean you have a deleted resource that is still needed. * Access logs can show when a service’s peak times are, and, from this, when it might be best to perform things like maintenance. @@ -197,33 +197,20 @@ The **Metrics Nginx overview** shows visual representations of total requests, p ### View logs [monitor-nginx-explore-logs] -After your nginx logs are ingested, view and explore your logs using [Logs Explorer](#monitor-nginx-logs-explorer) or the [nginx logs dashboards](#monitor-nginx-logs-dashboard). +After your nginx logs are ingested, view and explore your logs using [Discover](#monitor-nginx-discover) or the [nginx logs dashboards](#monitor-nginx-logs-dashboard). -#### Logs Explorer [monitor-nginx-logs-explorer] +#### Discover [monitor-nginx-discover] -With Logs Explorer, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. +With Discover, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. -To open **Logs Explorer**, find `Logs Explorer` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). +Find `Discover` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Filter your results to see logs from the nginx integration from the data selector: -1. Under **Integrations**, select **Nginx**. - - :::{image} /solutions/images/observability-nginx-data-selector.png - :alt: nginx integration in the data selector - :screenshot: - ::: - -2. Select either **access** logs or **error** logs to view the logs you’re looking for. - -The **Documents** table now shows your nginx logs: - -:::{image} /solutions/images/observability-nginx-logs-explorer.png -:alt: Logs Explorer showing nginx error logs -:screenshot: -::: +1. From the **Data view** menu, select all logs. +2. Filter the log results using the KQL search bar. Enter `data_stream.dataset : "nginx.error"` to show nginx error logs or `data_stream.dataset : "nginx.access"` to show nginx access logs. 
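+
+For example, a single KQL query that narrows the view to nginx access logs with error responses might look like the following (assuming the integration populates the ECS `http.response.status_code` field):
+
+```text
+data_stream.dataset : "nginx.access" and http.response.status_code >= 400
+```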
#### nginx logs dashboards [monitor-nginx-logs-dashboard] diff --git a/solutions/observability/logs.md b/solutions/observability/logs.md index bca93dd18..52acfec0f 100644 --- a/solutions/observability/logs.md +++ b/solutions/observability/logs.md @@ -14,7 +14,7 @@ Elastic Observability allows you to deploy and manage logs at a petabyte scale, * [Stream any log file](../../solutions/observability/logs/stream-any-log-file.md): Send log files to your Observability project using a standalone {{agent}}. * [Parse and route logs](../../solutions/observability/logs/parse-route-logs.md): Parse your log data and extract structured fields that you can use to analyze your data. * [Filter and aggregate logs](../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter): Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. -* [Explore logs](../../solutions/observability/logs/logs-explorer.md): Find information on visualizing and analyzing logs. +* [Explore logs](../../solutions/observability/logs/discover-logs.md): Find information on visualizing and analyzing logs. * [Run pattern analysis on log data](../../solutions/observability/logs/run-pattern-analysis-on-log-data.md): Find patterns in unstructured log messages and make it easier to examine your data. * [Troubleshoot logs](../../troubleshoot/observability/troubleshoot-logs.md): Find solutions for errors you might encounter while onboarding your logs. @@ -77,11 +77,11 @@ The following resources provide information on configuring your logs: ## View and monitor logs [observability-log-monitoring-view-and-monitor-logs] -Use **Logs Explorer** to search, filter, and tail all your logs ingested into your project in one place. +Use **Discover** to search, filter, and tail all your logs ingested into your project in one place. The following resources provide information on viewing and monitoring your logs: -* [Discover and explore](../../solutions/observability/logs/logs-explorer.md): Discover and explore all of the log events flowing in from your servers, virtual machines, and containers in a centralized view. +* [Discover and explore](../../solutions/observability/logs/discover-logs.md): Discover and explore all of the log events flowing in from your servers, virtual machines, and containers in a centralized view. * [Detect log anomalies](../../explore-analyze/machine-learning/anomaly-detection.md): Use {{ml}} to detect log anomalies automatically. diff --git a/solutions/observability/logs/configure-data-sources.md b/solutions/observability/logs/configure-data-sources.md deleted file mode 100644 index e4e7e87b6..000000000 --- a/solutions/observability/logs/configure-data-sources.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/observability/current/configure-data-sources.html -applies_to: - stack: all ---- - -# Configure data sources [configure-data-sources] - -::::{Note} - -**There’s a new, better way to explore your logs!** - -These settings only apply to the Logs Stream app. The Logs Stream app and dashboard panel are deactivated by default. We recommend viewing and inspecting your logs with [Logs Explorer](logs-explorer.md) as it provides more features, better performance, and more intuitive navigation. - -To activate the Logs Stream app, refer to [Activate Logs Stream](logs-stream.md#activate-logs-stream). 
- -:::: - - -Specify the source configuration for logs in the [Logs settings](kibana://reference/configuration-reference/logs-settings.md) in the [{{kib}} configuration file](kibana://reference/configuration-reference/general-settings.md). By default, the configuration uses the index patterns stored in the {{kib}} log sources advanced setting to query the data. The configuration also defines the default columns displayed in the logs stream. - -If your logs have custom index patterns, use non-default field settings, or contain parsed fields that you want to expose as individual columns, you can override the default configuration settings. - - -## Edit configuration settings [edit-config-settings] - -1. Find `Logs / Settings` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). - - | | | - | --- | --- | - | **Name** | Name of the source configuration. | - | **{{kib}} log sources advanced setting** | Use index patterns stored in the {{kib}} **log sources** advanced setting, which provides a centralized place to store and query log index patterns.To open **Advanced settings**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). | - | **{{data-source-cap}} (deprecated)** | The Logs UI integrates with {{data-sources}} toconfigure the used indices by clicking **Use {{data-sources}}**. | - | **Log indices (deprecated)** | {{kib}} index patterns or index name patterns in the {{es}} indicesto read log data from. | - | **Log columns** | Columns that are displayed in the logs **Stream** page. | - -2. When you have completed your changes, click **Apply**. - - -## Customize Stream page [customize-stream-page] - -::::{tip} -If [Spaces](../../../deploy-manage/manage-spaces.md) are enabled in your {{kib}} instance, any configuration changes you make here are specific to the current space. You can make different subsets of data available by creating multiple spaces with other data source configurations. - -:::: - - -By default, the **Stream** page within the {{logs-app}} displays the following columns. - -| | | -| --- | --- | -| **Timestamp** | The timestamp of the log entry from the `timestamp` field. | -| **Message** | The message extracted from the document.The content of this field depends on the type of log message.If no special log message type is detected, the [Elastic Common Schema (ECS)](ecs://reference/ecs-base.md)base field, `message`, is used. | - -1. To add a new column to the logs stream, select **Settings > Add column**. -2. In the list of available fields, select the field you want to add. To filter the field list by that name, you can start typing a field name in the search box. -3. To remove an existing column, click the **Remove this column** icon. -4. When you have completed your changes, click **Apply**. - -If the fields are grayed out and cannot be edited, you may not have sufficient privileges to modify the source configuration. For more information, see [Granting access to {{kib}}](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md). 
diff --git a/solutions/observability/logs/discover-logs.md b/solutions/observability/logs/discover-logs.md new file mode 100644 index 000000000..1b3e848b6 --- /dev/null +++ b/solutions/observability/logs/discover-logs.md @@ -0,0 +1,75 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/observability/current/explore-logs.html + - https://www.elastic.co/guide/en/serverless/current/observability-discover-and-explore-logs.html +applies_to: + stack: ga + serverless: ga +--- + +# Explore logs in Discover [explore-logs] + +From the `logs-*` or `All logs` data view in Discover, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also customize and save your searches and place them on a dashboard. Instead of having to log into different servers, change directories, and view individual files, all your logs are available in a single view. + +To open **Discover**, find `Discover` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Select the `logs-*` or `All logs` data view from the **Data view** menu. + +:::{note} +For a contextual logs experience, set the **Solution view** for your space to **Observability**. Refer to [Managing spaces](../../../deploy-manage/manage-spaces.md) for more information. +::: + +:::{image} ../../images/observability-log-explorer.png +:alt: Screen capture of Discover +:screenshot: +::: + +## Required {{kib}} privileges [logs-explorer-privileges] + +Viewing data in the logs data views in Discover requires `read` privileges for **Discover**, **Index**, and **Logs**. For more on assigning {{kib}} privileges, refer to the [{{kib}} privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) docs. + + +## Find your logs [find-your-logs] + +By default, the **All logs** data view shows all of your logs, according to the index patterns set in the **logs sources** advanced setting. To open **Advanced settings**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). + +To focus on logs from a specific source or sources, create a data view using the index patterns of those sources. For more information on creating data views, refer to [Create a data view](../../../explore-analyze/find-and-organize/data-views.md#settings-create-pattern). + +Once you have the logs you want to focus on displayed, you can drill down further to find the information you need. For more on filtering your data in Discover, refer to [Filter logs in Discover](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-discover). + + +## Review log data in the documents table [review-log-data-in-the-documents-table] + +The documents table lets you add fields, order table columns, sort fields, and update the row height. + +Refer to the [Discover](../../../explore-analyze/discover.md) documentation for more information on updating the table. + + +### Actions column [actions-column] + +The actions column provides additional information about your logs. + +**Expand:** ![The icon to expand log details](/solutions/images/observability-expand-icon.png "") Open the log details to get an in-depth look at an individual log file.
+ +**Degraded document indicator:** ![The icon that shows ignored fields](../../images/observability-pagesSelect-icon.png "") This indicator shows if any of the document’s fields were ignored when it was indexed. Ignored fields could indicate malformed fields or other issues with your document. Use this information to investigate and determine why fields are being ignored. + +**Stacktrace indicator:** ![The icon that shows if a document contains stack traces](../../images/observability-apmTrace-icon.png "") This indicator makes it easier to find documents that contain additional information in the form of stacktraces. + + +## View log details [view-log-details] + +Click the expand icon ![icon to open log details](/solutions/images/observability-expand-icon.png "") to get an in-depth look at an individual log file. + +These details provide immediate feedback and context for what’s happening and where it’s happening for each log. From here, you can quickly debug errors and investigate the services where errors have occurred. + +The following actions help you filter and focus on specific fields in the log details: + +* **Filter for value (![filter for value icon](../../images/observability-plusInCircle.png "")):** Show logs that contain the specific field value. +* **Filter out value (![filter out value icon](../../images/observability-minusInCircle.png "")):** Show logs that do **not** contain the specific field value. +* **Filter for field present (![filter for present icon](../../images/observability-filter.png "")):** Show logs that contain the specific field. +* **Toggle column in table (![toggle column in table icon](../../images/observability-listAdd.png "")):** Add or remove a column for the field to the main Discover table. + + +## View log data set details [view-log-data-set-details] + +Go to **Data Sets** to view more details about your data sets and monitor their overall quality. To open **Data Set Quality**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). + +Refer to [*Data set quality*](../../../solutions/observability/data-set-quality-monitoring.md) for more information. \ No newline at end of file diff --git a/solutions/observability/logs/ecs-formatted-application-logs.md b/solutions/observability/logs/ecs-formatted-application-logs.md index ea60cb36f..4bfad8a2d 100644 --- a/solutions/observability/logs/ecs-formatted-application-logs.md +++ b/solutions/observability/logs/ecs-formatted-application-logs.md @@ -43,7 +43,7 @@ To set up log ECS reformatting: 1. [Enable {{apm-agent}} reformatting](../../../solutions/observability/logs/ecs-formatted-application-logs.md#enable-log-ecs-reformatting) 2. [Ingest logs with {{filebeat}} or {{agent}}](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs) -3. [View logs in Logs Explorer](../../../solutions/observability/logs/ecs-formatted-application-logs.md#view-ecs-logs) +3. 
[View logs in Discover](../../../solutions/observability/logs/ecs-formatted-application-logs.md#view-ecs-logs) ### Enable log ECS reformatting [enable-log-ecs-reformatting] diff --git a/solutions/observability/logs/explore-logs.md b/solutions/observability/logs/explore-logs.md index 1bd183385..b6cd9a768 100644 --- a/solutions/observability/logs/explore-logs.md +++ b/solutions/observability/logs/explore-logs.md @@ -5,16 +5,8 @@ mapped_pages: # Explore logs [monitor-logs] -Logs Explorer in {{kib}} enables you to search, filter, and tail all your logs ingested into {{es}}. Instead of having to log into different servers, change directories, and tail individual files, all your logs are available in Logs Explorer. - -Logs Explorer allows you to quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. Refer to the [Logs Explorer](logs-explorer.md) documentation for more on using Logs Explorer. - -Logs Explorer also provides {{ml}} to detect specific [log anomalies](inspect-log-anomalies.md) automatically and [categorize log messages](categorize-log-entries.md) to quickly identify patterns in your log events. - -To view Logs Explorer, find **Logs Explorer** in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md)) - - - - +From Discover in {{kib}} or your Observability Serverless project, you can search, filter, and tail all your logs ingested into {{es}}. Instead of having to log into different servers, change directories, and tail individual files, all your logs are available in Discover. +Discover allows you to quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. Refer to [Explore logs in Discover](discover-logs.md) for more. +Elastic also provides {{ml}} to detect specific [log anomalies](inspect-log-anomalies.md) automatically and [categorize log messages](categorize-log-entries.md) to quickly identify patterns in your log events. \ No newline at end of file diff --git a/solutions/observability/logs/filter-aggregate-logs.md b/solutions/observability/logs/filter-aggregate-logs.md index 5adfb521a..cb17a27b2 100644 --- a/solutions/observability/logs/filter-aggregate-logs.md +++ b/solutions/observability/logs/filter-aggregate-logs.md @@ -73,15 +73,15 @@ PUT _index_template/logs-example-default-template Filter your data using the fields you’ve extracted so you can focus on log data with specific log levels, timestamp ranges, or host IPs. You can filter your log data in different ways: -* [Filter logs in Logs Explorer](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-logs-explorer): Filter and visualize log data in Logs Explorer. +* [Filter logs in Discover](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-discover): Filter and visualize log data in Discover. * [Filter logs with Query DSL](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-qdsl): Filter log data from Developer Tools using Query DSL. -### Filter logs in Logs Explorer [logs-filter-logs-explorer] +### Filter logs in Discover [logs-filter-discover] -Logs Explorer is a tool that automatically provides views of your log data based on integrations and data streams. To open **Logs Explorer**, find `Logs Explorer` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). 
+Discover is a tool that provides views of your log data based on data views and index patterns. To open **Discover**, find `Discover` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -From Logs Explorer, you can use the [{{kib}} Query Language (KQL)](../../../explore-analyze/query-filter/languages/kql.md) in the search bar to narrow down the log data that’s displayed. For example, you might want to look into an event that occurred within a specific time range. +From Discover, open the `logs-*` or `All logs` data views from the **Data views** menu. From here, you can use the [{{kib}} Query Language (KQL)](../../../explore-analyze/query-filter/languages/kql.md) in the search bar to narrow down the log data that’s displayed. For example, you might want to look into an event that occurred within a specific time range. Add some logs with varying timestamps and log levels to your data stream: @@ -100,19 +100,19 @@ POST logs-example-default/_bulk { "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." } ``` -For this example, let’s look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Logs Explorer: +For this example, let’s look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Discover: +1. Make sure **All logs** is selected in the **Data views** menu. 1. Add the following KQL query in the search bar to filter for logs with log levels of `WARN` or `ERROR`: ```text log.level: ("ERROR" or "WARN") ``` +1. Click the current time range, select **Absolute**, and set the **Start date** to `Sep 14, 2023 @ 00:00:00.000`. -2. Click the current time range, select **Absolute**, and set the **Start date** to `Sep 14, 2023 @ 00:00:00.000`. + ![Set the time range start date](../../images/serverless-logs-start-date.png "") - ![Set the time range start date](/solutions/images/serverless-logs-start-date.png "") - -3. Click the end of the current time range, select **Absolute**, and set the **End date** to `Sep 15, 2023 @ 23:59:59.999`. +1. Click the end of the current time range, select **Absolute**, and set the **End date** to `Sep 15, 2023 @ 23:59:59.999`. ![Set the time range end date](/solutions/images/serverless-logs-end-date.png "") @@ -124,7 +124,7 @@ Under the **Documents** tab, you’ll see the filtered log data matching your qu :screenshot: ::: -For more on using Logs Explorer, refer to the [Discover](../../../explore-analyze/discover.md) documentation. +For more on using Discover, refer to the [Discover](../../../explore-analyze/discover.md) documentation. ### Filter logs with Query DSL [logs-filter-qdsl] diff --git a/solutions/observability/logs/get-started-with-system-logs.md b/solutions/observability/logs/get-started-with-system-logs.md index 6d10d2921..58bc52446 100644 --- a/solutions/observability/logs/get-started-with-system-logs.md +++ b/solutions/observability/logs/get-started-with-system-logs.md @@ -15,7 +15,7 @@ applies_to: :::: -In this guide you’ll learn how to onboard system log data from a machine or server, then observe the data in **Logs Explorer**. +In this guide you’ll learn how to onboard system log data from a machine or server, then observe the data in **Discover**. To onboard system log data: @@ -26,18 +26,13 @@ To onboard system log data: After the agent is installed and successfully streaming log data, you can view the data in the UI: -1. From the navigation menu, go to **Discover** and select the **Logs Explorer** tab. 
The view shows all log datasets. Notice you can add fields, change the view, expand a document to see details, and perform other actions to explore your data. -2. Click **All log datasets** and select **System** → **syslog** to show syslog logs. - -:::{image} /solutions/images/serverless-log-explorer-select-syslogs.png -:alt: Screen capture of the Logs Explorer showing syslog dataset selected -:screenshot: -::: +1. From the navigation menu, go to **Discover**. +1. Select **All logs** from the **Data views** menu. The view shows all log datasets. Notice you can add fields, change the view, expand a document to see details, and perform other actions to explore your data. ## Next steps [observability-get-started-with-logs-next-steps] -Now that you’ve added system logs and explored your data, learn how to onboard other types of data: +Now that you’ve added logs and explored your data, learn how to onboard other types of data: * [Stream any log file](stream-any-log-file.md) * [Get started with traces and APM](../apps/get-started-with-apm.md) diff --git a/solutions/observability/logs/logs-explorer.md b/solutions/observability/logs/logs-explorer.md deleted file mode 100644 index 096171824..000000000 --- a/solutions/observability/logs/logs-explorer.md +++ /dev/null @@ -1,82 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/observability/current/explore-logs.html - - https://www.elastic.co/guide/en/serverless/current/observability-discover-and-explore-logs.html -applies_to: - stack: all - serverless: all ---- - -# Logs Explorer [explore-logs] - -::::{warning} -This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. -:::: - - -With **Logs Explorer**, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also customize and save your searches and place them on a dashboard. Instead of having to log into different servers, change directories, and view individual files, all your logs are available in a single view. - -To open **Logs Explorer**, find `Logs Explorer` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). - -:::{image} /solutions/images/observability-log-explorer.png -:alt: Screen capture of the Logs Explorer -:screenshot: -::: - - -## Required {{kib}} privileges [logs-explorer-privileges] - -Viewing data in Logs Explorer requires `read` privileges for **Discover**, **Index**, **Logs**, and **Integrations**. For more on assigning {{kib}} privileges, refer to the [{{kib}} privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) docs. - - -## Find your logs [find-your-logs] - -By default, Logs Explorer shows all of your logs, according to the index patterns set in the **logs sources** advanced setting. To open **Advanced settings**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). - -If you need to focus on logs from a specific integration, select the integration from the logs menu: - -:::{image} /solutions/images/observability-log-menu.png -:alt: Screen capture of log menu -:screenshot: -::: - -Once you have the logs you want to focus on displayed, you can drill down further to find the information you need. 
For more on filtering your data in Logs Explorer, refer to [Filter logs in Logs Explorer](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-logs-explorer). - - -## Review log data in the documents table [review-log-data-in-the-documents-table] - -The documents table in Logs Explorer functions similarly to the table in Discover. You can add fields, order table columns, sort fields, and update the row height in the same way you would in Discover. - -Refer to the [Discover](../../../explore-analyze/discover.md) documentation for more information on updating the table. - - -### Actions column [actions-column] - -The actions column provides access to additional information about your logs. - -**Expand:** ![The icon to expand log details](/solutions/images/observability-expand-icon.png "") Open the log details to get an in-depth look at an individual log file. - -**Degraded document indicator:** ![The icon that shows ignored fields](/solutions/images/observability-pagesSelect-icon.png "") Shows if any of the document’s fields were ignored when it was indexed. Ignored fields could indicate malformed fields or other issues with your document. Use this information to investigate and determine why fields are being ignored. - -**Stacktrace indicator:** ![The icon that shows if a document contains stack traces](/solutions/images/observability-apmTrace-icon.png "") Shows if the document contains stack traces. This indicator makes it easier to navigate through your documents and know if they contain additional information in the form of stack traces. - - -## View log details [view-log-details] - -Click the expand icon ![icon to open log details](/solutions/images/observability-expand-icon.png "") to get an in-depth look at an individual log file. - -These details provide immediate feedback and context for what’s happening and where it’s happening for each log. From here, you can quickly debug errors and investigate the services where errors have occurred. - -The following actions help you filter and focus on specific fields in the log details: - -* **Filter for value (![filter for value icon](/solutions/images/observability-plusInCircle.png "")):** Show logs that contain the specific field value. -* **Filter out value (![filter out value icon](/solutions/images/observability-minusInCircle.png "")):** Show logs that do **not** contain the specific field value. -* **Filter for field present (![filter for present icon](/solutions/images/observability-filter.png "")):** Show logs that contain the specific field. -* **Toggle column in table (![toggle column in table icon](/solutions/images/observability-listAdd.png "")):** Add or remove a column for the field to the main Logs Explorer table. - - -## View log data set details [view-log-data-set-details] - -Go to **Data Set Quality** to view more details about your data sets and monitor their overall quality. To open **Data Set Quality**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). - -Refer to [*Data set quality*](../../../solutions/observability/data-set-quality-monitoring.md) for more information. 
\ No newline at end of file diff --git a/solutions/observability/logs/logs-stream.md b/solutions/observability/logs/logs-stream.md deleted file mode 100644 index cef060272..000000000 --- a/solutions/observability/logs/logs-stream.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/observability/current/tail-logs.html ---- - -# Logs Stream [tail-logs] - -::::{note} -**There’s a new, better way to explore your logs!** - -The Logs Stream app and dashboard panel are deactivated by default. We recommend viewing and inspecting your logs with [Logs Explorer](logs-explorer.md) as it provides more features, better performance, and more intuitive navigation. - -To activate the Logs Stream app, refer to [Activate Logs Stream](#activate-logs-stream). - -:::: - - -Within the {{logs-app}}, the **Stream** page enables you to monitor all of the log events flowing in from your servers, virtual machines, and containers in a centralized view. You can consider this as a `tail -f` in your browser, along with the power of search. - -Click **Stream Live** to view a continuous flow of log messages in real time, or click **Stop streaming** to view historical logs from a specified time range. - - -## Activate Logs Stream [activate-logs-stream] - -Because [Logs Explorer](logs-explorer.md) is replacing Logs Stream, Logs Stream and the Logs Stream dashboard panel are disabled by default. To activate Logs Stream and the Logs Stream dashboard panel complete the following steps: - -1. To open **Advanced Settings**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. In **Advanced Settings**, enter *Logs Stream* in the search bar. -3. Turn on **Logs Stream**. - -After saving your settings, you’ll see Logs Stream in the Observability navigation, and the Logs Stream dashboard panel will be available. - - -## Filter logs [filter-logs] - -To help you get started with your analysis faster and extract fields from your logs, use the search bar to create structured queries using [{{kib}} Query Language](../../../explore-analyze/query-filter/languages/kql.md). For example, enter `host.hostname : "host1"` to see only the information for `host1`. - -Additionally, click **Highlights** and enter a term you would like to locate within the log events. The Logs histogram, located to the right, highlights the number of discovered terms and when the log event was ingested. This helps you quickly jump between potential areas of interest in large amounts of logs, or from a high level, view when a large number of events occurred. - - -## Inspect log event details [inspect-log-event] - -When you have searched and filtered your logs for a specific log event, you may want to examine the metadata and the structured fields associated with that event. To view the **Log event document details** fly-out, hover over the log event, click **View actions for line**, and then select **View details**. To further enhance the workflow of monitoring logs, the icons next to each field value enable you to filter the logs per that value. - -:::{image} /solutions/images/observability-log-event-details.png -:alt: Log event details -:screenshot: -::: - - -## View contextual logs [view-contextual-logs] - -Once your logs are filtered, and you find an interesting log line, the real context you are looking for is what happened before and after that log line within that data source. 
For example, you are running containerized applications on a Kubernetes cluster, you filter the logs for the term `error`, and you find an interesting error log line. The context you want is what happened before and after the error line within the logs of this container and application. - -Hover over the log event, click **View actions for line**, and then select **View in context**. The context is preserved and helps you find the root cause as soon as possible. - -:::{image} /solutions/images/observability-contextual-logs.png -:alt: Contextual log event -:screenshot: -::: - - -## Integrate with Uptime and APM [uptime-apm-integration] - -To see other actions related to a log event, click **Actions** in the **Log event document details** fly-out. Depending on the event and the features you have configured, you can: - -* Select **View status in Uptime** to [view related uptime information](../apps/view-monitor-status.md) in the {{uptime-app}}. -* Select **View in APM** to [view corresponding APM traces](../apps/traces-2.md) in the Applications UI. diff --git a/solutions/observability/logs/plaintext-application-logs.md b/solutions/observability/logs/plaintext-application-logs.md index aa28564fb..e37bf3927 100644 --- a/solutions/observability/logs/plaintext-application-logs.md +++ b/solutions/observability/logs/plaintext-application-logs.md @@ -20,7 +20,7 @@ To ingest, parse, and correlate plaintext logs: 1. Ingest plaintext logs with [{{filebeat}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-filebeat) or [{{agent}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-the-agent) and parse them before indexing with an ingest pipeline. 2. [Correlate plaintext logs with an {{apm-agent}}.](../../../solutions/observability/logs/plaintext-application-logs.md#correlate-plaintext-logs) -3. [View logs in Logs Explorer](../../../solutions/observability/logs/plaintext-application-logs.md#view-plaintext-logs) +3. [View logs in Discover](../../../solutions/observability/logs/plaintext-application-logs.md#view-plaintext-logs) ## Ingest logs [ingest-plaintext-logs] @@ -351,31 +351,4 @@ Learn about correlating plaintext logs in the agent-specific ingestion guides: ## View logs [view-plaintext-logs] -To view logs ingested by {{filebeat}}, go to **Discover** from the main menu and create a data view based on the `filebeat-*` index pattern. Refer to [Create a data view](../../../explore-analyze/find-and-organize/data-views.md) for more information. - -To view logs ingested by {{agent}}, go to Logs Explorer by clicking **Explorer** under **Logs** from the {{observability}} main menu. Refer to the [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md) documentation for more information on viewing and filtering your logs in {{kib}}. - - -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [ ] ./raw-migrated-files/docs-content/serverless/observability-plaintext-application-logs.md - -% Internal links rely on the following IDs being on this page (e.g. 
as a heading ID, paragraph ID, etc): -$$$correlate-plaintext-logs$$$ -$$$ingest-plaintext-logs-with-filebeat$$$ -$$$ingest-plaintext-logs-with-the-agent$$$ -$$$observability-plaintext-application-logs-correlate-logs$$$ -$$$observability-plaintext-application-logs-ingest-logs-with-agent$$$ -$$$observability-plaintext-application-logs-ingest-logs-with-filebeat$$$ -$$$observability-plaintext-application-logs-view-logs$$$ -$$$view-plaintext-logs$$$ \ No newline at end of file +To view logs ingested by {{filebeat}}, go to **Discover** from the main menu and create a data view based on the `filebeat-*` index pattern. You can also select **All logs** from the **Data views** menu because it includes the `filebeat-*` index pattern by default. Refer to [Create a data view](../../../explore-analyze/find-and-organize/data-views.md) for more information. \ No newline at end of file diff --git a/solutions/observability/logs/run-pattern-analysis-on-log-data.md b/solutions/observability/logs/run-pattern-analysis-on-log-data.md index 87f9eaca2..49f898ff6 100644 --- a/solutions/observability/logs/run-pattern-analysis-on-log-data.md +++ b/solutions/observability/logs/run-pattern-analysis-on-log-data.md @@ -18,8 +18,8 @@ Log pattern analysis works on every text field. To run a log pattern analysis: -1. In your {{obs-serverless}} project, go to **Discover** and select the **Logs Explorer** tab. -2. Select an integration, and apply any filters that you want. +1. In your {{obs-serverless}} project, go to **Discover** and select **All logs** from the **Data views** menu. +2. Apply any filters that you want. 3. If you don’t see any results, expand the time range, for example, to **Last 15 days**. 4. In the **Available fields** list, select the text field you want to analyze, then click **Run pattern analysis**. @@ -32,4 +32,4 @@ To run a log pattern analysis: :screenshot: ::: -5. (Optional) Select one or more patterns, then choose to filter for (or filter out) documents that match the selected patterns. **Logs Explorer** only displays documents that match (or don’t match) the selected patterns. The filter options enable you to remove unimportant messages and focus on the more important, actionable data during troubleshooting. +5. (Optional) Select one or more patterns, then choose to filter for (or filter out) documents that match the selected patterns. Discover only displays documents that match (or don’t match) the selected patterns. The filter options enable you to remove unimportant messages and focus on the more important, actionable data during troubleshooting. diff --git a/solutions/toc.yml b/solutions/toc.yml index b9d553184..f06750845 100644 --- a/solutions/toc.yml +++ b/solutions/toc.yml @@ -401,11 +401,9 @@ toc: - file: observability/logs/filter-aggregate-logs.md - file: observability/logs/explore-logs.md children: - - file: observability/logs/logs-explorer.md + - file: observability/logs/discover-logs.md - file: observability/logs/categorize-log-entries.md - file: observability/logs/inspect-log-anomalies.md - - file: observability/logs/configure-data-sources.md - - file: observability/logs/logs-stream.md - file: observability/logs/run-pattern-analysis-on-log-data.md - file: observability/logs/add-service-name-to-logs.md - file: observability/logs/logs-index-template-reference.md