[Acrolinx testing] Adds test branch and files #114

Open · wants to merge 6 commits into base: main
94 changes: 94 additions & 0 deletions docs/test-acrolinx/add-data-from-splunk-obs.md
@@ -0,0 +1,94 @@
# Add data from Splunk [splunk-get-started]

::::{warning}
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
::::


The Apache, AWS CloudTrail, Nginx, and Zeek integrations can seamlessly ingest data from a Splunk Enterprise instance. Data is automatically mapped to the Elastic Common Schema (ECS), making it available for rapid analysis in Elastic solutions, including Security and {{observability}}.

These integrations use the `httpjson` input in {{agent}} to run a Splunk search through the Splunk REST API and then extract the raw event from the results. The raw event is then processed by {{agent}}. Both the Splunk search and the interval between searches are customizable. These integrations retrieve only new data since the last query, not historical data.
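To illustrate what happens on each interval, here is a minimal Python sketch that runs the same kind of export search against the Splunk REST API. This is not how the integration is implemented; the host, credentials, and sourcetype are hypothetical placeholders.

```python
import requests

# Hypothetical Splunk Enterprise endpoint; 8089 is Splunk's default management port.
SPLUNK_URL = "https://splunk.example.com:8089/services/search/jobs/export"

# Run an export search and stream the results back, similar to what the
# `httpjson` input does on each interval. Credentials are placeholders.
response = requests.post(
    SPLUNK_URL,
    auth=("apiuser", "changeme"),
    data={
        "search": "search sourcetype=nginx:plus:access",
        "output_mode": "json",
    },
    stream=True,
    verify=False,  # only acceptable in a lab setup with self-signed certificates
)
response.raise_for_status()

# With output_mode=json, the export endpoint returns one JSON object per line;
# each line contains one raw Splunk event.
for line in response.iter_lines():
    if line:
        print(line.decode("utf-8"))
```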

:::{image} ../../../images/observability-elastic-agent-splunk.png
:alt: Splunk integration components
:screenshot:
:::

To ingest Nginx data from Splunk, perform the following steps. The options are the same for Apache, AWS CloudTrail, and Zeek.


## Prerequisites [splunk-prereqs]

To follow the steps in this guide, you need an {{stack}} deployment that includes:

* {{es}} for storing and searching data
* {{kib}} for visualizing and managing data
* A {{kib}} user with `All` privileges on {{fleet}} and Integrations. Because many Integrations assets are shared across spaces, users need these {{kib}} privileges in all spaces.
* Integrations Server (included by default in every {{ech}} deployment)

To get started quickly, create an {{ech}} deployment and host it on AWS, GCP, or Azure. [Try it out for free](https://cloud.elastic.co/registration?page=docs&placement=docs-body).


## Step 1: Add integration [splunk-step-one]

Find **Integrations** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Search for and add the nginx integration. Refer to [Get started with system metrics](../infra-and-hosts/get-started-with-system-metrics.md) for detailed steps about adding integrations.


## Step 2: Enable Collect logs from third-party REST API [splunk-step-two]

Enable "Collect logs from third-party REST API" and disable both "Collect logs from Nginx instances" and "Collect metrics from Nginx instances".

:::{image} ../../../images/observability-kibana-fleet-third-party-rest-api.png
:alt: {{fleet}} showing enabling third-party REST API
:screenshot:
:::


## Step 3: Enter connection information [splunk-step-three]

Enter the required information to connect to the Splunk Enterprise REST API.

The URL of the Splunk Enterprise Server must include the scheme (`http` or `https`), the IP address or hostname of the Splunk Enterprise Server, and the port the REST API is listening on.
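For example (the hostname is a placeholder; `8089` is Splunk's default management port):

```text
https://splunk.example.com:8089
```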

The Splunk username and password must belong to a user with a role or capability that allows access to REST API endpoints. Administrative users have these permissions by default.

SSL configuration options are available under "Advanced options". These may be necessary if the Splunk Enterprise server uses self-signed certificates. See [SSL Options](beats://reference/filebeat/configuration-ssl.md) for valid configuration options.
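As a rough illustration only, the SSL settings take the same YAML form as the linked reference options; a sketch that trusts the CA behind a self-signed certificate might look like this (the path is a placeholder):

```yaml
certificate_authorities:
  - /etc/pki/splunk/ca.pem   # CA that signed the Splunk server certificate (placeholder path)
verification_mode: full      # verify both the certificate and the hostname
```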

:::{image} ../../../images/observability-kibana-fleet-third-party-rest-settings.png
:alt: {{fleet}} showing enabling third-party REST API settings
:screenshot:
:::


## Step 4: Enter information to select data from Splunk [splunk-step-four]

For each type of log file, enter the interval and Splunk search string.

The interval is expressed as a [Go duration](https://golang.org/pkg/time/#ParseDuration) and sets the time between requests sent to the Splunk Enterprise REST API for new information. Intervals of less than one second are not recommended because Splunk only maintains one-second accuracy for index time. The interval should closely match the rate at which data arrives at the Splunk Enterprise Server; for example, an interval of `5s` for data that arrives only once an hour generates unnecessary load on the Splunk Enterprise Server.
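For illustration, a few valid Go duration strings (choose one that matches your data's arrival rate):

```text
30s   # every thirty seconds
10m   # every ten minutes
1h    # every hour, for data that arrives hourly
```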

The search string is the Splunk search that uniquely describes the events matching the type of log file you are configuring. For example, to uniquely describe Nginx access logs, you might use `search sourcetype=nginx:plus:access`. Note that the search string must begin with `search`; for details, refer to the Splunk REST API manual and the `search/jobs/export` endpoint.
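For example (the sourcetypes here are illustrative and depend on how your Splunk instance indexes Nginx data):

```text
search sourcetype=nginx:plus:access
search sourcetype=nginx:plus:error
```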

Be aware that each time {{agent}} connects to the Splunk Enterprise REST API, a Splunk search is performed. Because of this, make your search string as specific as possible to reduce the load on the Splunk Enterprise Server.

Tags may be added in the "Advanced options". For example, if you’d like to tag events coming from Splunk with a *Splunk* tag, you can add it here. By default, the *forward* tag is present to indicate that events are being forwarded through an intermediary, in this case Splunk.

:::{image} ../../../images/observability-kibana-fleet-third-party-rest-dataset-settings.png
:alt: {{fleet}} showing enabling third-party REST API settings
:screenshot:
:::


## Step 5: Save Integration [splunk-step-five]

Click **Save Integration**.

Data and dashboards are available just as if you had collected the data directly on the Nginx host from its log files.


### Considerations and questions [splunk-considerations]

The time on the host running {{agent}} and on the Splunk Enterprise Server should be synchronized to the same time source, with correct timezone information. Failure to do this could result in delays in transferring data or gaps in the received data.

**Does the Splunk data need to be in a specific format or mapped to Splunk’s Common Information Model?** No, because these integrations take the raw event from Splunk and process it. There is no dependency on any Splunk processing.

**Are events mapped to Elastic Common Schema (ECS)?** Yes, events from these integrations go through exactly the same processing as if {{agent}} had collected the event from the original source, so the same level of mapping to ECS occurs.
93 changes: 93 additions & 0 deletions docs/test-acrolinx/deploy-es-cluster.md
@@ -0,0 +1,93 @@
# Deploy an {{es}} cluster

This section explains how to set up {{es}} and get it running, including:

* [Configuring your system to support {{es}}](/deploy-manage/deploy/self-managed/important-system-configuration.md), and the [bootstrap checks](/deploy-manage/deploy/self-managed/bootstrap-checks.md) that are run at startup to verify these configurations
* Downloading, installing, and starting {{es}} using each [supported installation method](#installation-methods)

To quickly set up {{es}} and {{kib}} in Docker for local development or testing, jump to [](/deploy-manage/deploy/self-managed/local-development-installation-quickstart.md).

## Installation methods

If you want to install and manage {{es}} yourself, you can:

* Run {{es}} using a [Linux, macOS, or Windows install package](#elasticsearch-install-packages).
* Run {{es}} in a [Docker container](#elasticsearch-docker-images).

::::{tip}
To try out {{stack}} on your own machine, we recommend using Docker and running both {{es}} and {{kib}}. For more information, see [](/deploy-manage/deploy/self-managed/local-development-installation-quickstart.md). This setup is not suitable for production use.
::::

::::{admonition} Use dedicated hosts
$$$dedicated-host$$$
:::{include} _snippets/dedicated-hosts.md
:::
::::

### {{es}} install packages [elasticsearch-install-packages]

{{es}} is provided in the following package formats.

Each linked guide provides the following details:

* Download and installation instructions
* Information on enrolling a newly installed node in an existing cluster
* Instructions on starting {{es}} manually and, if applicable, as a service or daemon
* Instructions on connecting clients to your new cluster
* Archive or package contents information
* Security certificate and key information

Before you start, make sure that you [configure your system](/deploy-manage/deploy/self-managed/important-system-configuration.md).

| Format | Description | Instructions |
| --- | --- | --- |
| Linux and macOS `tar.gz` archives | The `tar.gz` archives are available for installation on any Linux distribution and macOS. | [Install {{es}} from archive on Linux or macOS](/deploy-manage/deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md) |
| Windows `.zip` archive | The `zip` archive is suitable for installation on Windows. | [Install {{es}} with `.zip` on Windows](/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows.md) |
| `deb` | The `deb` package is suitable for Debian, Ubuntu, and other Debian-based systems. Debian packages can be downloaded from the {{es}} website or from our Debian repository. | [Install {{es}} with Debian Package](/deploy-manage/deploy/self-managed/install-elasticsearch-with-debian-package.md) |
| `rpm` | The `rpm` package is suitable for installation on Red Hat, CentOS, SLES, openSUSE, and other RPM-based systems. RPM packages can be downloaded from the {{es}} website or from our RPM repository. | [Install {{es}} with RPM](/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md) |

### {{es}} container images [elasticsearch-docker-images]

You can also run {{es}} inside a Docker container. Container images are available from the Elastic Docker Registry.

You can [use Docker Compose](/deploy-manage/deploy/self-managed/install-elasticsearch-docker-compose.md) to deploy multiple nodes at once.

[Install {{es}} with Docker](/deploy-manage/deploy/self-managed/install-elasticsearch-with-docker.md)
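As a rough sketch, a minimal single-node development setup with Docker Compose might look like the following. The version tag and password are placeholders, and this is not a production configuration:

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:9.0.0  # placeholder version tag
    environment:
      - discovery.type=single-node   # form a one-node cluster for local testing
      - ELASTIC_PASSWORD=changeme    # placeholder password for the elastic user
    ports:
      - "9200:9200"                  # expose the REST API on localhost
```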

## Version compatibility

:::{include} /deploy-manage/deploy/_snippets/stack-version-compatibility.md
:::

## Installation order

:::{include} /deploy-manage/deploy/_snippets/installation-order.md
:::

## Supported operating systems and JVMs [supported-platforms]

The matrix of officially supported operating systems and JVMs is available in the [Elastic Support Matrix](https://elastic.co/support/matrix). {{es}} is tested on the listed platforms, but it may also work on other platforms.

### Java (JVM) Version [jvm-version]

{{es}} is built using Java, and includes a bundled version of [OpenJDK](https://openjdk.java.net) within each distribution. We strongly recommend using the bundled JVM in all installations of {{es}}.

The bundled JVM is treated the same as any other dependency of {{es}} in terms of support and maintenance. This means that Elastic takes responsibility for keeping it up to date, and reacts to security issues and bug reports as needed to address vulnerabilities and other bugs in {{es}}. Elastic’s support of the bundled JVM is subject to Elastic’s [support policy](https://www.elastic.co/support_policy) and [end-of-life schedule](https://www.elastic.co/support/eol) and is independent of the support policy and end-of-life schedule offered by the original supplier of the JVM. Elastic does not support using the bundled JVM for purposes other than running {{es}}.

::::{tip}
{{es}} uses only a subset of the features offered by the JVM. Bugs and security issues in the bundled JVM often relate to features that {{es}} does not use. Such issues do not apply to {{es}}. Elastic analyzes reports of security vulnerabilities in all its dependencies, including in the bundled JVM, and will issue an [Elastic Security Advisory](https://www.elastic.co/community/security) if such an advisory is needed.
::::


If you decide to run {{es}} using a version of Java that is different from the bundled one, use the latest release of an [LTS version of Java](https://www.oracle.com/technetwork/java/eol-135779.html) that is [listed in the support matrix](https://elastic.co/support/matrix). Although such a configuration is supported, if you encounter a security issue or other bug in your chosen JVM, Elastic may not be able to help unless the issue is also present in the bundled JVM. Instead, you must seek assistance directly from the supplier of your chosen JVM. You must also take responsibility for reacting to security and bug announcements from the supplier of your chosen JVM. {{es}} may not perform optimally if using a JVM other than the bundled one. {{es}} is closely coupled to certain OpenJDK-specific features, so it may not work correctly with JVMs that are not OpenJDK. {{es}} will refuse to start if you attempt to use a known-bad JVM version.

To use your own version of Java, set the `ES_JAVA_HOME` environment variable to the path to your own JVM installation. The bundled JVM is located within the `jdk` subdirectory of the {{es}} home directory. You may remove this directory if using your own JVM.
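For example, on Linux (the JDK path is a placeholder):

```sh
# Use your own JVM instead of the bundled one in the jdk/ subdirectory,
# then start Elasticsearch from its home directory.
export ES_JAVA_HOME=/usr/lib/jvm/jdk-21
./bin/elasticsearch
```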

:::{warning}
Don’t use third-party Java agents that attach to the JVM. These agents can reduce {{es}} performance, including freezing or crashing nodes.
:::

## Third-party dependencies [dependencies-versions]

:::{include} /deploy-manage/deploy/self-managed/_snippets/third-party-dependencies.md
:::