Wazuh Indexer Technical Documentation
This folder contains the technical documentation for the Wazuh Indexer. The documentation is organized into the following guides:
- Development Guide: Instructions for building, testing, and packaging the Indexer.
- Reference Manual: Detailed information on the Indexer’s architecture, configuration, and usage.
Requirements
To work with this documentation, you need mdBook installed.
- Get the latest `cargo` (hit Enter when prompted for the default install):

  ```bash
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  ```

- Install `mdbook` and `mdbook-mermaid`:

  ```bash
  cargo install mdbook
  cargo install mdbook-mermaid
  ```
Usage
- To build the documentation, run:

  ```bash
  ./build.sh
  ```

  The output will be generated in the `book` directory.

- To serve the documentation locally for preview, run:

  ```bash
  ./server.sh
  ```

  The documentation will be available at http://127.0.0.1:3000.
Development documentation
Under this section, you will find the development documentation of the Wazuh Indexer. This documentation contains instructions to compile, run, test, and package the source code. Moreover, you will find instructions to set up a development environment to get started developing the Wazuh Indexer.
This documentation assumes basic knowledge of certain tools and technologies, such as Docker, Bash (Linux) or Git.
Set up the development environment
1. Git
Install and configure Git (SSH keys, commit and tag signing, user name and email).
- Set your username.
- Set your email address.
- Generate an SSH key.
- Add the public key to your GitHub account for authentication and signing.
- Configure Git to sign commits with your SSH key.
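As a reference, a minimal sketch of the configuration described above, assuming SSH-based commit signing (the name, email, and key path are placeholders):

```bash
# Identity used for commits
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Generate an SSH key; add the public key to GitHub for authentication and signing
ssh-keygen -t ed25519 -C "you@example.com"

# Sign commits and tags with the SSH key
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true
git config --global tag.gpgsign true
```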
2. Repositories
Before you start, you need to properly configure your working repository to have `origin` and `upstream` remotes.

- Clone the `wazuh-indexer` fork:

  ```bash
  git clone git@github.com:wazuh/wazuh-indexer.git
  git remote add upstream git@github.com:opensearch-project/opensearch.git
  ```

- Clone the `wazuh-indexer-reporting` fork:

  ```bash
  git clone git@github.com:wazuh/wazuh-indexer-reporting.git
  git remote add upstream git@github.com:opensearch-project/reporting.git
  ```

- Clone the `wazuh-indexer-plugins` repository:

  ```bash
  git clone git@github.com:wazuh/wazuh-indexer-plugins.git
  ```
3. IntelliJ IDEA
Prepare your IDE:
- Install IDEA Community Edition as per the official documentation.
- Set a global SDK to Eclipse Temurin following this guide.
You can find the JDK version to use in the `wazuh-indexer/gradle/libs.versions.toml` file. IntelliJ IDEA includes some JDKs by default. If you need to change it, or if you want to use a different distribution, follow the instructions in the next section.
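For a quick check of the expected JDK version, you can grep the version catalog; a hedged one-liner (the exact key name inside the file may differ):

```bash
grep -iE 'jdk|java' wazuh-indexer/gradle/libs.versions.toml
```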
4. JDK (Optional)
We use the Eclipse Temurin JDK. To use a different JDK installed on your machine, run

```bash
sudo update-alternatives --config java
```

and select the JDK of your preference.

Set the `JAVA_HOME` and `PATH` environment variables by adding these lines to your shell RC file (`.bashrc`, `.zshrc`, …):

```bash
export JAVA_HOME=/usr/lib/jvm/temurin-21-jdk-amd64
export PATH=$PATH:/usr/lib/jvm/temurin-21-jdk-amd64/bin
```

After that, restart your shell or run `source ~/.zshrc` (or similar) to apply the changes. Finally, check that Java is installed correctly by running `java --version`.
How to generate a package
This guide includes instructions to generate distribution packages locally using Docker.
Wazuh Indexer supports any of these combinations:
- distributions: `['tar', 'deb', 'rpm']`
- architectures: `['x64', 'arm64']`

Windows is currently not supported.
For more information, navigate to the compatibility section.
Before you get started, make sure to clean your environment by running `./gradlew clean` at the root level of the `wazuh-indexer` repository.
Pre-requisites
The process to build packages requires Docker and Docker Compose.
Your workstation must meet the minimum hardware requirements (the more resources the better ☺):
- 8 GB of RAM (minimum)
- 4 cores
The tools and source code to generate a package of Wazuh Indexer are hosted in the wazuh-indexer repository, so clone it if you haven't done so already.
Building `wazuh-indexer` packages
The Docker environment under `wazuh-indexer/build-scripts/builder` automates the build and assembly process for the Wazuh Indexer and its plugins, making it easy to create packages on any system.
Use the `builder.sh` script to build a package.

```bash
./builder.sh -h
Usage: ./builder.sh [args]
Arguments:
  -p INDEXER_PLUGINS_BRANCH    [Optional] wazuh-indexer-plugins repo branch, default is 'main'.
  -r INDEXER_REPORTING_BRANCH  [Optional] wazuh-indexer-reporting repo branch, default is 'main'.
  -R REVISION                  [Optional] Package revision, default is '0'.
  -s STAGE                     [Optional] Staging build, default is 'false'.
  -d DISTRIBUTION              [Optional] Distribution, default is 'rpm'.
  -a ARCHITECTURE              [Optional] Architecture, default is 'x64'.
  -D                           Destroy the docker environment
  -h                           Print help
```
The example below generates a wazuh-indexer package for Debian-based systems, for the x64 architecture, using `1` as the revision number and the production naming convention.

```bash
# Within wazuh-indexer/build-scripts/builder
bash builder.sh -d deb -a x64 -R 1 -s true
```

The resulting package will be stored at `wazuh-indexer/artifacts/dist`.
The `STAGE` option defines the naming of the package. When set to `false`, the package will be unequivocally named with the commit SHAs of the `wazuh-indexer`, `wazuh-indexer-plugins`, and `wazuh-indexer-reporting` repositories, in that order. For example: `wazuh-indexer_5.0.0-0_x86_64_aff30960363-846f143-494d125.rpm`.
How to generate a container image
This guide includes instructions to generate container images locally using Docker.
Wazuh Indexer supports any of these combinations:
- distributions: `['tar', 'deb', 'rpm']`
- architectures: `['x64', 'arm64']`

Windows is currently not supported.
For more information, navigate to the compatibility section.
Before you get started, make sure to clean your environment by running `./gradlew clean` at the root level of the `wazuh-indexer` repository.
Pre-requisites
The process to build packages requires Docker and Docker Compose.
Your workstation must meet the minimum hardware requirements (the more resources the better ☺):
- 8 GB of RAM (minimum)
- 4 cores
The tools and source code to generate a package of Wazuh Indexer are hosted in the wazuh-indexer repository, so clone it if you haven't done so already.
Building `wazuh-indexer` Docker images
The `wazuh-indexer/build-scripts/docker` folder contains the code to build Docker images. Below is an example of the command needed to build the image. Set the build arguments and the image tag accordingly.
The Docker image is built from a wazuh-indexer tarball (`.tar.gz`), which must be present in the same folder as the Dockerfile in `wazuh-indexer/build-scripts/docker`.
```bash
docker build \
  --build-arg="VERSION=5.0.0" \
  --build-arg="INDEXER_TAR_NAME=wazuh-indexer_5.0.0-0_linux-x64.tar.gz" \
  --tag=wazuh-indexer:5.0.0-0 \
  --progress=plain \
  --no-cache .
```

Then, start a container with:

```bash
docker run -p 9200:9200 -it --rm wazuh-indexer:5.0.0-0
```
The `build-and-push-docker-image.sh` script automates the process to build and push Wazuh Indexer Docker images to our repository in quay.io. The script takes several parameters. Use the `-h` option to display them.
To push images, credentials must be set at the environment level:

- `QUAY_USERNAME`
- `QUAY_TOKEN`

```
Usage: build-scripts/build-and-push-docker-image.sh [args]
Arguments:
  -n NAME      [required] Tarball name.
  -r REVISION  [Optional] Revision qualifier, default is 0.
  -h           help
```
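For illustration, a local invocation might look like the hedged sketch below (the tarball name is an example; check `-h` for the authoritative list of parameters):

```bash
# Credentials required by the script; it stops if they are unset
export QUAY_USERNAME="<your-quay-username>"
export QUAY_TOKEN="<your-quay-token>"

# Tarball name is illustrative; use the artifact produced by your build
bash build-scripts/build-and-push-docker-image.sh \
  -n wazuh-indexer_5.0.0-0_linux-x64.tar.gz \
  -r 1
```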
The script will stop if the credentials are not set, or if any of the required parameters are not provided.
This script is used in the `5_builderpackage_docker.yml` GitHub Workflow, which automates the process even further. When possible, prefer this method.
How to build from sources
Every Wazuh Indexer repository includes one or more Gradle projects with predefined tasks to run and build the source code.
In this case, to build (compile and zip) a distribution of Wazuh Indexer, run the `./gradlew build` command at the root level of the repository. When completed, the distribution artifacts will be located in the `build/distributions` directory.
How to run from sources
Every Wazuh Indexer repository includes one or more Gradle projects with predefined tasks to run and build the source code.
In this case, to run a Gradle project from source code, run the `./gradlew run` command.
For Wazuh Indexer, additional plugins may be installed by passing the `-PinstalledPlugins` flag:

```bash
./gradlew run -PinstalledPlugins="['plugin1', 'plugin2']"
```

The `./gradlew run` command will build and start the project, writing its log above Gradle's status message. A lot of output is logged on startup; specifically, the following lines tell you that OpenSearch is ready.

```
[2020-05-29T14:50:35,167][INFO ][o.e.h.AbstractHttpServerTransport] [runTask-0] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2020-05-29T14:50:35,169][INFO ][o.e.n.Node               ] [runTask-0] started
```
It's typically easier to wait until the console stops scrolling, and then run `curl` in another window to check if the OpenSearch instance is running.

```bash
curl localhost:9200
```

```json
{
  "name" : "runTask-0",
  "cluster_name" : "runTask",
  "cluster_uuid" : "oX_S6cxGSgOr_mNnUxO6yQ",
  "version" : {
    "number" : "1.0.0-SNAPSHOT",
    "build_type" : "tar",
    "build_hash" : "0ba0e7cc26060f964fcbf6ee45bae53b3a9941d0",
    "build_date" : "2021-04-16T19:45:44.248303Z",
    "build_snapshot" : true,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  }
}
```
Use `-Dtests.opensearch.` to pass additional settings to the running instance. For example, to enable OpenSearch to listen on an external IP address, pass `-Dtests.opensearch.http.host`. Make sure your firewall or security policy allows external connections for this to work.

```bash
./gradlew run -Dtests.opensearch.http.host=0.0.0.0
```
Artifact dependencies between plugins
Some plugins may need other plugins to work properly, such as any plugin extending the Job Scheduler plugin or the Content Manager sending HTTP requests to the Command Manager plugin's API.
Under these cases, the Gradle project of the plugin can be modified to include these other plugins as dependencies, forming a development environment with all these plugins installed.
All our Gradle projects are already configured to require no manual steps, except for the Content Manager plugin. Unlike OpenSearch, we do not publish the Zip archives of our plugins to a Maven repository. As a result, any dependency between Wazuh Indexer plugins needs some manual steps to build and publish the dependency plugins to the local Maven repository (`~/.m2` by default) before starting up the dependent plugin project.
These are the steps required to start the Content Manager plugin development environment together with the Command Manager plugin.
- Build and publish a Zip archive of the Command Manager.

  ```bash
  ./gradlew :wazuh-indexer-command-manager:build
  ./gradlew :wazuh-indexer-command-manager:publishToMavenLocal
  ```

- In the Content Manager's `build.gradle` file, uncomment this line:

  ```groovy
  zipArchive group: 'com.wazuh', name: 'wazuh-indexer-command-manager', version: "${wazuh_version}.${revision}"
  ```

- Start the Content Manager plugin by running `./gradlew run`.
How to run the tests
This section explains how to run the Wazuh Indexer tests.
Full set of tests
To execute all kinds of tests, use the `./gradlew check` command. This command does not only run tests, but also tasks that check the quality of the code, such as documentation and linter checks.
Unit tests
To run unit tests, use the `./gradlew test` command.
Integration tests
To run integration tests, use the `./gradlew integTest` and `./gradlew yamlRestTest` commands.
Package testing
For package testing, we conduct smoke tests on the packages using GitHub Actions workflows. These tests consist of installing the packages on a supported operating system: DEB packages are installed on the Ubuntu 24.04 runner executing the workflow, while RPM packages are installed in a Red Hat 9 Docker container, as there is no RPM-compatible runner available in GitHub Actions.
As a last note, there are also a Vagrantfile and testing scripts in the repository to perform some tests on a real wazuh-indexer service running on a virtual machine. Refer to its README.md for more information on how to run these tests.
Description
The Wazuh indexer is a highly scalable, full-text search and analytics engine. This Wazuh central component indexes and stores alerts generated by the Wazuh server and provides near real-time data search and analytics capabilities. The Wazuh indexer can be configured as a single-node or multi-node cluster, providing scalability and high availability.
The Wazuh indexer stores data as JSON documents. Each document correlates a set of keys, field names or properties, with their corresponding values which can be strings, numbers, booleans, dates, arrays of values, geolocations, or other types of data.
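For illustration only, a hypothetical document showing these value types (the field names are made up and do not reflect a real Wazuh schema):

```json
{
  "agent": { "id": "001", "name": "web-server-01" },
  "timestamp": "2025-03-12T16:58:51Z",
  "active": true,
  "cpu_cores": 4,
  "tags": ["linux", "production"],
  "location": { "lat": 37.77, "lon": -122.42 }
}
```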
An index is a collection of documents that are related to each other. The documents stored in the Wazuh indexer are distributed across different containers known as shards. By distributing the documents across multiple shards, and distributing those shards across multiple nodes, the Wazuh indexer can ensure redundancy. This protects your system against hardware failures and increases query capacity as nodes are added to a cluster.
The Wazuh indexer stores the data collected by the Wazuh agents in separate indices. Each index contains documents with specific inventory information. In this section, you can find a description of the information in each index.
| Index | Description |
|---|---|
| wazuh-agents | Stores information about the agents, such as name, IP address, ID, and groups. |
| wazuh-alerts | Stores alerts generated by the Wazuh server. These are created each time an event trips a rule with a high enough priority (this threshold is configurable). |
| wazuh-commands | Commands are used as a communication mechanism between the different Wazuh central components. This index stores detailed information about these commands, such as their status, destination, origin, and issue time. |
| wazuh-states-fim | File Integrity Monitoring registries. |
| wazuh-states-inventory-hardware | Basic information about the hardware components of an endpoint. |
| wazuh-states-inventory-hotfixes | Contains information about the updates installed on Windows endpoints. This information is used by the vulnerability detector module to discover which vulnerabilities have been patched on Windows endpoints. |
| wazuh-states-inventory-networks | Network information, such as network interfaces, protocols, and traffic summary. |
| wazuh-states-inventory-packages | Stores information about the software currently installed on the endpoint. |
| wazuh-states-inventory-ports | Basic information about open network ports on the endpoint. |
| wazuh-states-inventory-processes | Stores the detected running processes on the endpoints. |
| wazuh-states-inventory-system | Operating system information, hostname, and architecture. |
| wazuh-states-sca | Stores Security Configuration Assessment (SCA) results. |
| wazuh-states-vulnerabilities | Active vulnerabilities on the endpoint and their details. |
| wazuh-archives | Stores all events (archive data) received by the Wazuh server, whether or not they trip a rule. |
| wazuh-internal-users | Stores information about internal users, including authentication details and role-based access control (RBAC) permissions. |
| wazuh-custom-users | Stores information about custom users defined by administrators, including user-specific roles and permissions. |
| wazuh-cve | Stores information about Common Vulnerabilities and Exposures (CVEs) and their details. |
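To inspect these indices on a running cluster, the standard `_cat` API can be used; for example (credentials and address are placeholders):

```bash
curl -k -u <USERNAME>:<PASSWORD> "https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cat/indices/wazuh-*?v"
```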
Architecture
Compatibility
Supported operating systems
We aim to support as many operating systems as OpenSearch does. Wazuh indexer should work on many Linux distributions, but we only test a handful. The following table lists the operating system versions that we currently support.
For 5.0.0 and above, we support the operating system versions and architectures included in the table below.
| Name | Version | Architecture |
|---|---|---|
| Red Hat | 8, 9 | x86_64, aarch64 |
| Ubuntu | 22.04, 24.04 | x86_64, aarch64 |
| Amazon Linux | 2, 2023 | x86_64, aarch64 |
| CentOS | 8 | x86_64, aarch64 |
OpenSearch
Currently, Wazuh indexer is using version `2.19.1` of OpenSearch.
Requirements
Hardware recommendations
The Wazuh indexer can be installed as a single-node or as a multi-node cluster.
Hardware recommendations for each node:

| Component | Minimum RAM (GB) | Minimum CPU (cores) | Recommended RAM (GB) | Recommended CPU (cores) |
|---|---|---|---|---|
| Wazuh indexer | 4 | 2 | 16 | 8 |
Disk space requirements
The amount of data depends on the generated alerts per second (APS). This table details the estimated disk space needed per agent to store 90 days of alerts on a Wazuh indexer server, depending on the type of monitored endpoints.
| Monitored endpoints | APS | Storage in Wazuh indexer (GB/90 days) |
|---|---|---|
| Servers | 0.25 | 3.7 |
| Workstations | 0.1 | 1.5 |
| Network devices | 0.5 | 7.4 |
For example, for an environment with 80 workstations, 10 servers, and 10 network devices, the storage needed on the Wazuh indexer server for 90 days of alerts is 230 GB.
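The figure follows from the per-endpoint estimates in the table above: 80 workstations × 1.5 GB + 10 servers × 3.7 GB + 10 network devices × 7.4 GB = 120 GB + 37 GB + 74 GB = 231 GB, or roughly 230 GB.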
Packages
Please refer to this section for information pertaining to compatibility.
Installation
Installing the Wazuh indexer step by step
Install and configure the Wazuh indexer as a single-node or multi-node cluster, following step-by-step instructions. The installation process is divided into three stages.
1. Certificates creation
2. Nodes installation
3. Cluster initialization

Note: You need root user privileges to run all the commands described below.
1. Certificates creation
Generating the SSL certificates
1. Download the `wazuh-certs-tool.sh` script and the `config.yml` configuration file. This creates the certificates that encrypt communications between the Wazuh central components.

   ```bash
   curl -sO https://packages-dev.wazuh.com/5.0/wazuh-certs-tool.sh
   curl -sO https://packages-dev.wazuh.com/5.0/config.yml
   ```

2. Edit `./config.yml` and replace the node names and IP values with the corresponding names and IP addresses. You need to do this for all Wazuh server, Wazuh indexer, and Wazuh dashboard nodes. Add as many node fields as needed.

   ```yaml
   nodes:
     # Wazuh indexer nodes
     indexer:
       - name: node-1
         ip: "<indexer-node-ip>"
       #- name: node-2
       #  ip: "<indexer-node-ip>"
       #- name: node-3
       #  ip: "<indexer-node-ip>"

     # Wazuh server nodes
     # If there is more than one Wazuh server
     # node, each one must have a node_type
     server:
       - name: wazuh-1
         ip: "<wazuh-manager-ip>"
       #  node_type: master
       #- name: wazuh-2
       #  ip: "<wazuh-manager-ip>"
       #  node_type: worker
       #- name: wazuh-3
       #  ip: "<wazuh-manager-ip>"
       #  node_type: worker

     # Wazuh dashboard nodes
     dashboard:
       - name: dashboard
         ip: "<dashboard-node-ip>"
   ```

   To learn more about how to create and configure the certificates, see the Certificates deployment section.

3. Run `./wazuh-certs-tool.sh` to create the certificates. For a multi-node cluster, these certificates need to be later deployed to all Wazuh instances in your cluster.

   ```bash
   ./wazuh-certs-tool.sh -A
   ```

4. Compress all the necessary files.

   ```bash
   tar -cvf ./wazuh-certificates.tar -C ./wazuh-certificates/ .
   rm -rf ./wazuh-certificates
   ```

5. Copy the `wazuh-certificates.tar` file to all the nodes, including the Wazuh indexer, Wazuh server, and Wazuh dashboard nodes. This can be done by using the `scp` utility.
2. Nodes installation
Installing package dependencies
Install the following packages if missing:
Yum

```bash
yum install coreutils
```

APT

```bash
apt-get install debconf adduser procps
```
Adding the Wazuh repository
Yum
1. Import the GPG key.

   ```bash
   rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
   ```

2. Add the repository.

   ```bash
   echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/5.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo
   ```

APT

1. Install the following packages if missing.

   ```bash
   apt-get install gnupg apt-transport-https
   ```

2. Install the GPG key.

   ```bash
   curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
   ```

3. Add the repository.

   ```bash
   echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/5.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
   ```

4. Update the packages information.

   ```bash
   apt-get update
   ```
Installing the Wazuh indexer package
Yum

```bash
yum -y install wazuh-indexer
```

APT

```bash
apt-get -y install wazuh-indexer
```
Configuring the Wazuh indexer
Edit the `/etc/wazuh-indexer/opensearch.yml` configuration file and replace the following values:

a. `network.host`: Sets the address of this node for both HTTP and transport traffic. The node will bind to this address and use it as its publish address. Accepts an IP address or a hostname.
Use the same node address set in `config.yml` to create the SSL certificates.
b. `node.name`: Name of the Wazuh indexer node as defined in the `config.yml` file. For example, `node-1`.
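For example, a minimal configuration of these two settings might look as follows (the address is a placeholder):

```yaml
network.host: "10.0.0.1"
node.name: "node-1"
```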
c. `cluster.initial_master_nodes`: List of the names of the master-eligible nodes. These names are defined in the `config.yml` file. Uncomment the `node-2` and `node-3` lines, change the names, or add more lines, according to your `config.yml` definitions.

```yaml
cluster.initial_master_nodes:
- "node-1"
- "node-2"
- "node-3"
```
d. `discovery.seed_hosts`: List of the addresses of the master-eligible nodes. Each element can be either an IP address or a hostname. You may leave this setting commented if you are configuring the Wazuh indexer as a single node. For multi-node configurations, uncomment this setting and set the IP addresses of each master-eligible node.

```yaml
discovery.seed_hosts:
- "10.0.0.1"
- "10.0.0.2"
- "10.0.0.3"
```
e. `plugins.security.nodes_dn`: List of the Distinguished Names of the certificates of all the Wazuh indexer cluster nodes. Uncomment the lines for `node-2` and `node-3` and change the common names (CN) and values according to your settings and your `config.yml` definitions.

```yaml
plugins.security.nodes_dn:
- "CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US"
- "CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US"
- "CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US"
```
Deploying certificates
Note: Make sure that a copy of the `wazuh-certificates.tar` file, created during the initial configuration step, is placed in your working directory.
Run the following commands, replacing `<INDEXER_NODE_NAME>` with the name of the Wazuh indexer node you are configuring, as defined in `config.yml`. For example, `node-1`. This deploys the SSL certificates to encrypt communications between the Wazuh central components.

```bash
NODE_NAME=<INDEXER_NODE_NAME>
mkdir /etc/wazuh-indexer/certs
tar -xf ./wazuh-certificates.tar -C /etc/wazuh-indexer/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./admin.pem ./admin-key.pem ./root-ca.pem
mv -n /etc/wazuh-indexer/certs/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem
mv -n /etc/wazuh-indexer/certs/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
chmod 500 /etc/wazuh-indexer/certs
chmod 400 /etc/wazuh-indexer/certs/*
chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs
```
Starting the service
Enable and start the Wazuh indexer service.
Systemd
```bash
systemctl daemon-reload
systemctl enable wazuh-indexer
systemctl start wazuh-indexer
```

SysV init

Choose one option according to the operating system used.

a. RPM-based operating system:

```bash
chkconfig --add wazuh-indexer
service wazuh-indexer start
```

b. Debian-based operating system:

```bash
update-rc.d wazuh-indexer defaults 95 10
service wazuh-indexer start
```
Repeat this stage of the installation process for every Wazuh indexer node in your cluster. Then proceed with initializing your single-node or multi-node cluster in the next stage.
3. Cluster initialization
Run the Wazuh indexer `indexer-security-init.sh` script on any Wazuh indexer node to load the new certificates information and start the single-node or multi-node cluster.

```bash
/usr/share/wazuh-indexer/bin/indexer-security-init.sh
```

Note: You only have to initialize the cluster once; there is no need to run this command on every node.
Testing the cluster installation
1. Replace `<WAZUH_INDEXER_IP_ADDRESS>` and run the following command to confirm that the installation is successful.

   ```bash
   curl -k -u admin:admin https://<WAZUH_INDEXER_IP_ADDRESS>:9200
   ```

   Output:

   ```json
   {
     "name" : "node-1",
     "cluster_name" : "wazuh-cluster",
     "cluster_uuid" : "095jEW-oRJSFKLz5wmo5PA",
     "version" : {
       "number" : "7.10.2",
       "build_type" : "rpm",
       "build_hash" : "db90a415ff2fd428b4f7b3f800a51dc229287cb4",
       "build_date" : "2023-06-03T06:24:25.112415503Z",
       "build_snapshot" : false,
       "lucene_version" : "9.6.0",
       "minimum_wire_compatibility_version" : "7.10.0",
       "minimum_index_compatibility_version" : "7.0.0"
     },
     "tagline" : "The OpenSearch Project: https://opensearch.org/"
   }
   ```

2. Replace `<WAZUH_INDEXER_IP_ADDRESS>` and run the following command to check if the single-node or multi-node cluster is working correctly.

   ```bash
   curl -k -u admin:admin https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cat/nodes?v
   ```
Configuration files
Most of our plugins allow you to configure various settings through the `opensearch.yml` file. Below is an overview of the available configuration options and their default values.
Setup plugin configuration
- Client Timeout
  - Key: `setup.client.timeout`
  - Type: Integer
  - Default: `30`
  - Minimum: `5`
  - Maximum: `120`
  - Description: Timeout in seconds for index and search operations.
- Job Schedule
  - Key: `setup.job.schedule`
  - Type: Integer
  - Default: `1`
  - Minimum: `1`
  - Maximum: `10`
  - Description: Job execution interval in minutes.
- Job Max Docs
  - Key: `setup.job.max_docs`
  - Type: Integer
  - Default: `1000`
  - Minimum: `5`
  - Maximum: `100000`
  - Description: Maximum number of documents to be returned by a search query.
Command Manager plugin configuration
- Client Timeout
  - Key: `command_manager.client.timeout`
  - Type: Integer
  - Default: `30`
  - Minimum: `5`
  - Maximum: `120`
  - Description: Timeout in seconds for index and search operations.
- Job Schedule
  - Key: `command_manager.job.schedule`
  - Type: Integer
  - Default: `1`
  - Minimum: `1`
  - Maximum: `10`
  - Description: Job execution interval in minutes.
- Job Max Docs
  - Key: `command_manager.job.max_docs`
  - Type: Integer
  - Default: `1000`
  - Minimum: `5`
  - Maximum: `100000`
  - Description: Maximum number of documents to be returned by a search query.
Example
Below is an example of custom values for these settings within the `opensearch.yml` file:

```yaml
setup:
  client:
    timeout: 60
  job:
    schedule: 5
    max_docs: 4000
command_manager:
  client:
    timeout: 60
  job:
    schedule: 5
    max_docs: 4000
```
Setup
The `wazuh-indexer-setup` plugin is a module of the Wazuh Indexer responsible for the initialization of the indices required by Wazuh to store all the data gathered and generated by other central components, such as the agents and the server (engine).
The Wazuh Indexer Setup Plugin is responsible for:
- Creating the index templates, to define the mappings and settings of the indices.
- Creating the initial indices. We distinguish between stateful, stateless, and RBAC indices. Stateful indices are unique and their data is updated over time (agents' inventory); stateless indices are rotated and static (alerts); RBAC indices store access control and authorization information for managing users, roles, and permissions.
- For stateless indices, creating the index aliases and lifecycle policies for rollover.
Key Features:
- The plugin extends the Job Scheduler plugin via its SPI. The job periodically searches for agents in "active" state whose last login was 15 minutes ago or more and changes their status to "disconnected".
Architecture
Design
The plugin implements the `ClusterPlugin` interface in order to hook into the node's lifecycle, overriding the `onNodeStarted()` method. The logic for the creation of the index templates and the indices is encapsulated in the `WazuhIndices` class. The `onNodeStarted()` method invokes the `WazuhIndices::initialize()` method, which handles everything.
By design, the plugin will overwrite any existing index template under the same name.
JavaDoc
The plugin is documented using JavaDoc. You can compile the documentation using the Gradle task for that purpose. The generated JavaDoc is in the `build/docs` folder.

```bash
./gradlew javadoc
```
Indices
Refer to the docs for complete definitions of the indices. The indices inherit the settings and mappings defined in the index templates.
Sequence diagram
Note: Calls to `Client` are asynchronous.

```mermaid
sequenceDiagram
    actor Node
    participant SetupPlugin
    participant WazuhIndices
    participant Client
    Node->>SetupPlugin: plugin.onNodeStarted()
    activate SetupPlugin
    Note over Node,SetupPlugin: Invoked on Node::start()
    activate WazuhIndices
    SetupPlugin->>WazuhIndices: initialize()
    Note over SetupPlugin,WazuhIndices: Create index templates and indices
    loop i..n templates
        WazuhIndices-)Client: templateExists(i)
        Client--)WazuhIndices: response
        alt template i does not exist
            WazuhIndices-)Client: putTemplate(i)
            Client--)WazuhIndices: response
        end
    end
    loop i..n indices
        WazuhIndices-)Client: indexExists(i)
        Client--)WazuhIndices: response
        alt index i does not exist
            WazuhIndices-)Client: putIndex(i)
            Client--)WazuhIndices: response
        end
    end
    deactivate WazuhIndices
    deactivate SetupPlugin
```
Class diagram
```mermaid
---
title: Wazuh Indexer setup plugin
---
classDiagram
    direction LR
    SetupPlugin "1" --> WazuhIndices
    WazuhIndices "1" --> Client
    <<service>> Client
    SetupPlugin : -WazuhIndices indices
    SetupPlugin : +createComponents()
    SetupPlugin : +onNodeStarted()
    WazuhIndices : -Client client
    WazuhIndices : -ClusterService clusterService
    WazuhIndices : +WazuhIndices(Client client, ClusterService clusterService)
    WazuhIndices : +putTemplate(String template) void
    WazuhIndices : +putIndex(String index) void
    WazuhIndices : +indexExists(String index) bool
    WazuhIndices : +templateExists(String template) bool
    WazuhIndices : +initialize() void
```
The Job Scheduler task
A periodic task performs an `updateByQuery` query to set the status of inactive agents to "disconnected".
Issue: https://github.com/wazuh/wazuh-indexer-plugins/issues/341
Command Manager
```mermaid
flowchart TD
    subgraph Agents
        Endpoints
        Clouds
        Other_sources
    end
    subgraph Indexer["Indexer cluster"]
        subgraph Data_states["Data streams"]
            commands_stream["Orders stream"]
        end
        subgraph indexer_modules["Indexer modules"]
            commands_manager["Commands manager"]
            content_manager["Content manager"]
        end
    end
    subgraph Wazuh1["Server 1"]
        comms_api["Comms API"]
        engine["Engine"]
        management_api["Management API"]
        server["Server"]
    end
    subgraph Dashboard
        subgraph Dashboard1["Dashboard"]
        end
    end
    subgraph lb["Load Balancer"]
        lb_node["Per request"]
    end
    Agents -- 3.a) /poll_commands --> lb
    lb -- 3.a) /poll_commands --> comms_api
    content_manager -- 1.a) /send_commands --> commands_manager
    management_api -- 1.a) /send_commands --> commands_manager
    commands_manager -- 1.b) /index --> commands_stream
    server -- 2.a) /get_commands --> commands_stream
    server -- 2.b) /send_commands --> comms_api
    server -- 2.b) /send_commands --> engine
    users["Wazuh users"] --> Dashboard
    Dashboard -- HTTP --> Indexer
    style Data_states fill:#abc2eb
    style indexer_modules fill:#abc2eb
```
This plugin is one of the pillars of the new communication mechanism used across the different components of Wazuh: the commands. The commands are used to deliver specific actions to other components. For example, a command can order a group of agents to restart, update their configuration, change groups, or run an active response action. The Command Manager plugin receives these commands through its HTTP REST API, validates them, and stores them in an index. The Wazuh Server periodically queries the index looking for new commands and sends them to the final destination, which can be an agent or a server (engine).
The Command Manager generates a unique ID for each order received. This ID is required for updating the result of the order, so it's sent together with the order details to the target. Orders are expected to be executed before a given amount of time. The Command Manager periodically searches for past due commands and updates their status to "failed".
Key Concepts:
- Command: the raw command as received by the `POST /_plugins/_command_manager/commands` endpoint.
- Order: a processed command, as stored in the index. A subset of this information is fetched by the Wazuh Server and sent to the order's target.
Key Features:
- The plugin exposes a Rest API with a single endpoint that listens for POST requests.
- The plugin extends the Job Scheduler plugin via its SPI. The job periodically looks for past due orders in “pending” state and changes their state to "failed".
The Command Manager plugin appears for the first time in Wazuh 5.0.0.
Architecture
Command manager context diagram
```mermaid
graph TD
    subgraph Command_Manager["Command Manager"]
        API["Commands API"]
        Controller["Commands Controller"]
        Processor["Commands Expansion"]
        Storage["Commands Index Storage"]
        CommandsIndex[(commands index)]
        AgentsIndex[(agents index)]
        Scheduler["Job Scheduler Task"]
    end
    Actor("Actor") -- POST /commands --> API
    API --> Controller
    Controller --> Processor
    Processor --> Storage
    Storage -- write --> CommandsIndex
    Processor -- read --> AgentsIndex
    Scheduler -- read-write --> CommandsIndex
    subgraph Server["Server"]
        direction TB
        ManagementAPI["Management API"]
    end
    ManagementAPI -- read --> CommandsIndex
```
Commands API
Issue: https://github.com/wazuh/wazuh-indexer-plugins/issues/69
The Command Manager API is described formally in OpenAPI format. Check it out here.
Important: The `action.name` attribute must always be provided before `action.args` in the JSON. Otherwise, the command is rejected. This is necessary for proper validation of the arguments, which depends on the command type, defined by `action.name`.
fetch-config
The `fetch-config` command is used to order an agent to update its remote configuration.
Accepted values for `target.type` are `agent` and `group`. The `target.id` represents the agent's ID or the group's name, respectively.
The command takes no arguments (`action.args`). Any provided argument is ignored.
```json
{
"commands": [
{
"action": {
"name": "fetch-config",
"args": {},
"version": "5.0.0"
},
"source": "Users/Services",
"user": "Management API",
"timeout": 100,
"target": {
"id": "d5b250c4-dfa1-4d94-827f-9f99210dbe6c",
"type": "agent"
}
}
]
}
```
set-group
The `set-group` command is used to change the groups of an agent.
Accepted values for `target.type` are `agent` and `group`. The `target.id` represents the agent's ID or the group's name, respectively.
The command takes the `groups` argument, an array of strings depicting the full list of groups the agent belongs to. Any value other than an array of strings is rejected. Additional arguments are ignored.
```json
{
"commands": [
{
"action": {
"name": "set-group",
"args": {
"groups": [
"group_1",
"group_2"
]
},
"version": "5.0.0"
},
"source": "Users/Services",
"user": "Management API",
"timeout": 100,
"target": {
"id": "d5b250c4-dfa1-4d94-827f-9f99210dbe6c",
"type": "agent"
}
}
]
}
```
update
The `update` command is used to notify about new content being available. By "content" we usually refer to the CVE and ruleset catalogs.
The only accepted value for `target.type` is `server`. The `target.id` represents the server's module that is interested in the new content.
The command takes the `index` and `offset` arguments, strings depicting the index where the new content is and its version, respectively. Any value other than a string is rejected. Additional arguments are ignored.
```json
{
"commands": [
{
"action": {
"name": "update",
"args": {
"index": "content-index",
"offset": "1111"
},
"version": "5.0.0"
},
"source": "Content Manager",
"timeout": 100,
"target": {
"id": "vulnerability-detector",
"type": "server"
}
}
]
}
```
refresh
The `refresh` command is created when the Wazuh RBAC resources (users, roles, policies, ...) are modified.
This command serves the Wazuh Server as a notification to update its local copy of these resources.
The expected values for `target.type` and `target.id` are `server` and `rbac`, respectively.
The command accepts an optional `index` argument, which must be an array of strings representing the RBAC indices that changed. Any value other than an array of strings is rejected. Additional arguments are ignored.
```json
{
"commands": [
{
"action": {
"name": "refresh",
"args": {
"index": ["index-a", "index-b"], // Optional
},
"version": "5.0.0"
},
"source": "Users/Services",
"timeout": 100,
"target": {
"id": "rbac",
"type": "server"
}
}
]
}
```
Commands expansion
Commands can be targeted to a group of agents, too. This is achieved by setting `group` as the target type and the name of the group as the target ID. For example:
```json
{
"commands": [
{
"action": {
"name": "fetch-config",
"args": {},
"version": "5.0.0"
},
"source": "Users/Services",
"user": "Management API",
"timeout": 100,
"target": {
"id": "group002",
"type": "group"
}
}
]
}
```
The command is processed by the Command Manager and expanded. We refer to expansion as the generation of analogous commands targeting the individual agents that belong to that group. For example, if the `windows-group-A` group contains 10 agents, 10 commands will be generated, one for each of the agents. The target type and ID for these commands are set to `agent` and the ID of the agent, respectively.
```json
[
{
"command": {
"source": "Users/Services",
"user": "Management API",
"target": {
"type": "agent",
"id": "agent82"
},
"action": {
"name": "fetch-config",
"args": {},
"version": "5.0.0"
},
"timeout": 100,
"status": "pending"
}
},
{
"command": {
"source": "Users/Services",
"user": "Management API",
"target": {
"type": "agent",
"id": "agent21"
},
"action": {
"name": "fetch-config",
"args": {},
"version": "5.0.0"
},
"timeout": 100,
"status": "pending"
}
},
{
"command": {
"source": "Users/Services",
"user": "Management API",
"target": {
"type": "agent",
"id": "agent28"
},
"action": {
"name": "fetch-config",
"args": {},
"version": "5.0.0"
},
"timeout": 100,
"status": "pending"
}
}
]
```
Issue: https://github.com/wazuh/wazuh-indexer-plugins/issues/88
Orders storage
The processed commands, the orders, are stored in the `wazuh-commands` index.

```
GET wazuh-commands/_search
{
"took": 4,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 1,
"relation": "eq"
},
"max_score": 1,
"hits": [
{
"_index": "wazuh-commands",
"_id": "yu5Li5UB5IfLPVSubqhW",
"_score": 1,
"_source": {
"agent": {
"groups": [
"group002"
]
},
"command": {
"source": "Users/Services",
"user": "Management API",
"target": {
"type": "agent",
"id": "agent28"
},
"action": {
"name": "set-group",
"args": {
"groups": [
"group_1",
"group_2"
]
},
"version": "5.0.0"
},
"timeout": 100,
"status": "pending",
"order_id": "ye5Li5UB5IfLPVSubqhW",
"request_id": "yO5Li5UB5IfLPVSubqhW"
},
"@timestamp": "2025-03-12T16:58:51Z",
"delivery_timestamp": "2025-03-12T17:00:31Z"
}
}
]
}
}
```
Issue: https://github.com/wazuh/wazuh-indexer-plugins/issues/42
The Job Scheduler task
A periodic task performs an `updateByQuery` query to set the status of past due orders to "failed".
Issue: https://github.com/wazuh/wazuh-indexer-plugins/issues/87
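For illustration, a hand-written equivalent of that query might look like the sketch below, following the field names visible in the sample document above; the actual query built by the plugin may differ:

```bash
curl -k -u <USERNAME>:<PASSWORD> -X POST \
  "https://<WAZUH_INDEXER_IP_ADDRESS>:9200/wazuh-commands/_update_by_query" \
  -H "Content-Type: application/json" -d '
{
  "query": {
    "bool": {
      "must": [
        { "term": { "command.status": "pending" } },
        { "range": { "delivery_timestamp": { "lte": "now" } } }
      ]
    }
  },
  "script": {
    "source": "ctx._source.command.status = \"failed\"",
    "lang": "painless"
  }
}'
```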
API reference
Check the OpenAPI spec here.
Content Manager
The Content Manager is a plugin for Wazuh 5.0 responsible for the management of the Wazuh Catalog within the Indexer. The catalog is structured into contexts. Each context contains a collection of resources. Each change made to these resources generates a new offset. A consumer is a customized view of a context, and it's used to consume the catalog within the CTI API.
The Content Manager manages multiple Contexts, each having a single Consumer. These are preconfigured in the plugin by default and are not configurable.
The Content Manager periodically looks for new content on the CTI API by comparing the offsets. On its first run, the content is initialized using a snapshot. From there on, the content is patched to match the latest offset available. Basic information about the context, the consumer, the current offset, and the snapshot URL is saved in an index.
The Content Manager also offers the possibility of offline content updates, from a snapshot file. The content is stored in indices.
On new content, the Content Manager generates a new command for the Command Manager.
- [ONLINE] For each context, the scheduled job checks if there is new content available on the CTI API.
  - If the offset is `0`, the context will be initialized from a snapshot:
    - The Content Manager gets the URL for the latest snapshot from `GET /api/v1/catalog/contexts/:context/consumers/:consumer` (see the example request after this list).
    - The Content Manager downloads the snapshot.
    - The Content Manager unzips the snapshot.
    - The Content Manager reads and indexes the content of the snapshot into an index using JSON Streaming.
    - The Content Manager generates a command for the Command Manager.
  - If the offset is the same as the offset fetched from the CTI API for that context and consumer, the content is up-to-date and nothing needs to be done.
  - If the offset is lower than the offset fetched from the CTI API for that context and consumer, the content needs to be updated:
    - Compute the difference in offsets: `difference = latest_offset - local_offset`.
    - While `difference > 0`:
      - Fetch changes in batches of 1000 elements at most.
      - Apply the JSON-patch to the content.
      - Generate a command for the Command Manager.
- [OFFLINE] The Content Manager exposes an API endpoint that accepts the URI to the snapshot file (e.g. `file:///tmp/snapshot.zip`).
  - From `1.1.2` to `1.1.5`
```mermaid
---
title: Content Manager - Content update
---
sequenceDiagram
    ContentUpdater->>IndexClient: getContextInformation()
    IndexClient-)ContentUpdater: contextInfo
    loop while last_offset > offset
        ContentUpdater->>CTIclient: getContextChanges()
        CTIclient-)ContentUpdater: changes
        ContentUpdater-->>IndexPatcher: changes
    end
    ContentUpdater-->>CommandManagerClient: postCommand()
```
Schema of the `wazuh-content` index
[ONLINE]
```json
[
  {
    "_index": "wazuh-content",
    "_id": "vd_1.0.0",
    "_source": {
      "vd_4.8.0": {
        "offset": 75019,
        "last_offset": 85729
      }
    }
  }
]
```
[OFFLINE] or [INITIALIZATION]
```json
[
  {
    "_index": "wazuh-content",
    "_id": "vd_1.0.0",
    "_source": {
      "vd_4.8.0": {
        "offset": 0,
        "snapshot": "uri-to-snapshot"
      }
    }
  }
]
```
Architecture
Use case: sync content from CTI to Indexer
Wazuh Indexer will store threat intelligence content such as CVE definitions or rules in indices for its distribution to the Servers (Engine).
CVEs context
In the case of CVEs, the new content is fetched periodically by the Content Manager from the CTI API (1). Following a successful update of the content (2), the Content Manager generates a command (3) (4) to notify about new content being available. Ultimately, the Server's periodic search for new commands reads the notification about the new content (5) and notifies the Engine (6), which updates its CVE content with the latest copy in the Indexer's CVE index (7).
```mermaid
flowchart TD
    subgraph cti["CTI"]
    end
    subgraph Indexer["Indexer cluster"]
        subgraph Data_streams["Data stream"]
            alerts_stream["Alerts stream"]
            commands_stream["Commands stream"]
        end
        subgraph Plugins["Modules"]
            content_manager["Content manager"]
            command_manager["Command manager"]
        end
        subgraph Data_states["Content"]
            states["CVE data"]
        end
    end
    subgraph Wazuh1["Server 1"]
        engine["Engine / VD"]
        server["Server"]
    end
    content_manager -- 1- /check_updates <--> cti
    content_manager -- 2- /update_content --> states
    content_manager -- 3- /process_updates --> command_manager
    command_manager -- 4- stores --> commands_stream
    server -- 5- /pulls --> commands_stream
    server -- 6- /update_content --> engine
    engine -- 7- /pulls --> states
    style Data_states fill:#abc2eb
    style Data_streams fill:#abc2eb
    style Plugins fill:#abc2eb
```
Ruleset context
In the case of the ruleset, the new content is fetched periodically by the Content Manager from the CTI API (1). Following a successful update of the content (2), the Content Manager generates a command (3) (4) to notify about new content being available. Ultimately, the Server's periodic search for new commands reads the notification about the new content (5) and notifies the Engine (6), which updates its ruleset content (7).
```mermaid
flowchart TD
    subgraph Indexer["Indexer cluster"]
        subgraph Data_streams["Data stream"]
            commands_stream["Commands stream"]
        end
        subgraph Data_states["Content"]
            states["Ruleset data"]
        end
        subgraph Plugins["Modules"]
            content_manager["Content manager"]
            command_manager["Command manager"]
        end
    end
    subgraph Wazuh1["Server 1"]
        engine["Engine"]
        server["Server"]
    end
    subgraph cti["CTI"]
    end
    content_manager -- 1- check_updates --> cti
    content_manager -- 2- /update_content --> states
    content_manager -- 3- /process_updates --> command_manager
    command_manager -- 4- stores --> commands_stream
    server -- 5- pulls --> commands_stream
    server -- 6- /update_content --> engine
    engine -- 7- requests_policy --> content_manager
    style Data_streams fill:#abc2eb
    style Data_states fill:#abc2eb
    style Plugins fill:#abc2eb
```
Use case: save user-made content to Indexer
Wazuh Indexer will store user-made content, such as custom rules, in indices for its distribution to the Servers (Engine).
Users may create new content by interacting with the Management API (1a) or the UI (1b). In either case, the new content arrives at the Content Manager API (2a) (2b). The Content Manager validates the data (3) and, if it is valid, stores it on the appropriate index (4). Ultimately, the Content Manager generates a command (5) (6) to notify about new content being available.
```mermaid
flowchart TD
    subgraph Dashboard["Dashboard"]
    end
    subgraph Indexer["Indexer cluster"]
        subgraph Data_states["Content"]
            states["Ruleset data"]
        end
        subgraph Plugins["Modules"]
            subgraph content_manager["Content manager"]
                subgraph indexer_engine["Engine"]
                end
                subgraph content_manager_api["Content manager API"]
                end
            end
            command_manager["Command manager"]
        end
        subgraph Data_streams["Data stream"]
            commands_stream["Commands stream"]
        end
    end
    subgraph Wazuh1["Server 1"]
        engine["Engine"]
        management_api["Management API"]
        server["Server"]
    end
    subgraph users["Users"]
    end
    users -- 1b- /test_policy --> Dashboard
    users -- 1a- /test_policy --> management_api
    management_api -- 2a- /update_test_policy --> content_manager
    Dashboard -- 2b- /update_test_policy --> content_manager
    content_manager_api -- 3- /validate_test_policy --> indexer_engine
    content_manager -- 4- /update_test_policy --> states
    content_manager -- 5- /process_updates --> command_manager
    command_manager -- 6- /stores --> commands_stream
    style Data_states fill:#abc2eb
    style Data_streams fill:#abc2eb
    style Plugins fill:#abc2eb
```
Content update process
The update process of the Content Manager compares the offset values for the consumer.
To update the content, the Content Manager uses the CTI client to fetch the changes. It then processes the data and transforms it into create, update, or delete operations on the content index. When the update is completed, it generates a command for the Command Manager using the API.
The Content Updater module is the orchestrator of the update process, delegating the fetching and indexing operations to other modules.
The update process is as follows:
1. The Content Updater module compares the "offsets" in the `wazuh-context` index. If these values differ, it means that the versions of the content in the Indexer and in CTI are different.
2. If the content is outdated, it requests the newest changes from the CTI API, which are in JSON patch format. For performance purposes, these changes are obtained in chunks.
3. Each of these chunks is applied to the content one by one. If the operation fails, the update process is interrupted and a recovery from a snapshot is required.
4. The update continues until the offsets are equal.
5. Once completed, the update is committed by updating the offset in the `wazuh-context` index and generating a command for the Command Manager notifying about the update's success.
```mermaid
---
title: Content Manager offset-based update mechanism
---
flowchart TD
    ContextIndex1@{ shape: lin-cyl, label: "Index storage" }
    ContextIndex2@{ shape: lin-cyl, label: "Index storage" }
    CTI_API@{ shape: lin-cyl, label: "CTI API" }
    CM_API@{ shape: lin-cyl, label: "Command Manager API" }

    subgraph ContentIndex["[apply change]"]
        direction LR
        OperationType --> Create
        OperationType --> Update
        OperationType --> Delete
        Create -.-> CVE_Index
        Delete -.-> CVE_Index
        Update -.-> CVE_Index
        OperationType@{ shape: hex, label: "Check operation" }
        Create["Create"]
        Delete["Delete"]
        Update["Update"]
        CVE_Index@{ shape: lin-cyl, label: "Index storage" }
    end

    subgraph ContentUpdater["Content update process"]
        Start@{ shape: circle, label: "Start" }
        End@{ shape: dbl-circ, label: "Stop" }
        GetConsumerInfo["Get consumer info"]
        CompareOffsets@{ shape: hex, label: "Compare offsets" }
        IsOutdated@{ shape: diamond, label: "Is outdated?" }
        GetChanges["Get changes"]
        ApplyChange@{ shape: subproc, label: "apply change" }
        IsLastOffset@{ shape: diamond, label: "Is last offset?"}
        UpdateOffset["Update offset"]
        GenerateCommand["Generate command"]
    end

    %% Flow
    Start --> GetConsumerInfo
    GetConsumerInfo --> CompareOffsets
    GetConsumerInfo -.read.-> ContextIndex1
    CompareOffsets --> IsOutdated
    IsOutdated -- No --> End
    IsOutdated -- Yes --> GetChanges
    GetChanges -.GET.-> CTI_API
    GetChanges --> ApplyChange
    ApplyChange --> IsLastOffset
    IsLastOffset -- No --> GetChanges
    IsLastOffset -- Yes --> UpdateOffset
    UpdateOffset --> GenerateCommand --> End
    UpdateOffset -.write.-> ContextIndex2
    GenerateCommand -.POST.-> CM_API

    style ContentUpdater fill:#abc2eb
    style ContentIndex fill:#abc2eb
```
Upgrade
This section guides you through the upgrade process of the Wazuh indexer.
Preparing the upgrade
In case Wazuh is installed in a multi-node cluster configuration, repeat the following steps for every node.
Ensure you have added the Wazuh repository to every Wazuh indexer node before proceeding to perform the upgrade actions.
Yum
1. Import the GPG key.

   ```bash
   rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
   ```

2. Add the repository.

   ```bash
   echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/5.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo
   ```

APT

1. Install the following packages if missing.

   ```bash
   apt-get install gnupg apt-transport-https
   ```

2. Install the GPG key.

   ```bash
   curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
   ```

3. Add the repository.

   ```bash
   echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/5.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
   ```

4. Update the packages information.

   ```bash
   apt-get update
   ```
Upgrading the Wazuh indexer
The Wazuh indexer cluster remains operational throughout the upgrade. The rolling upgrade process allows nodes to be updated one at a time, ensuring continuous service availability and minimizing disruptions. The steps detailed in the following sections apply to both single-node and multi-node Wazuh indexer clusters.
Preparing the Wazuh indexer cluster for upgrade
Perform the following steps on any of the Wazuh indexer nodes, replacing `<WAZUH_INDEXER_IP_ADDRESS>`, `<USERNAME>`, and `<PASSWORD>`.
1. Disable shard replication to prevent shard replicas from being created while Wazuh indexer nodes are being taken offline for the upgrade.

   ```bash
   curl -X PUT "https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cluster/settings" \
     -u <USERNAME>:<PASSWORD> -k -H "Content-Type: application/json" -d '
   {
     "persistent": {
       "cluster.routing.allocation.enable": "primaries"
     }
   }'
   ```

   Output:

   ```json
   {
     "acknowledged" : true,
     "persistent" : {
       "cluster" : {
         "routing" : {
           "allocation" : {
             "enable" : "primaries"
           }
         }
       }
     },
     "transient" : {}
   }
   ```
2. Perform a flush operation on the cluster to commit transaction log entries to the index.

   ```bash
   curl -X POST "https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_flush" -u <USERNAME>:<PASSWORD> -k
   ```

   Output:

   ```json
   {
     "_shards" : {
       "total" : 19,
       "successful" : 19,
       "failed" : 0
     }
   }
   ```
Upgrading the Wazuh indexer nodes
1. Stop the Wazuh indexer service.

   Systemd

   ```bash
   systemctl stop wazuh-indexer
   ```

   SysV init

   ```bash
   service wazuh-indexer stop
   ```

2. Upgrade the Wazuh indexer to the latest version.

   Yum

   ```bash
   yum upgrade wazuh-indexer
   ```

   APT

   ```bash
   apt-get install wazuh-indexer
   ```

3. Restart the Wazuh indexer service.

   Systemd

   ```bash
   systemctl daemon-reload
   systemctl enable wazuh-indexer
   systemctl start wazuh-indexer
   ```

   SysV init

   Choose one option according to the operating system used.

   a. RPM-based operating system:

   ```bash
   chkconfig --add wazuh-indexer
   service wazuh-indexer start
   ```

   b. Debian-based operating system:

   ```bash
   update-rc.d wazuh-indexer defaults 95 10
   service wazuh-indexer start
   ```
Repeat steps 1 to 3 above on all Wazuh indexer nodes before proceeding to the post-upgrade actions.
Post-upgrade actions
Perform the following steps on any of the Wazuh indexer nodes, replacing `<WAZUH_INDEXER_IP_ADDRESS>`, `<USERNAME>`, and `<PASSWORD>`.
1. Check that the newly upgraded Wazuh indexer nodes are in the cluster.

   ```bash
   curl -k -u <USERNAME>:<PASSWORD> https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cat/nodes?v
   ```

2. Re-enable shard allocation.

   ```bash
   curl -X PUT "https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cluster/settings" \
     -u <USERNAME>:<PASSWORD> -k -H "Content-Type: application/json" -d '
   {
     "persistent": {
       "cluster.routing.allocation.enable": "all"
     }
   }'
   ```

   Output:

   ```json
   {
     "acknowledged" : true,
     "persistent" : {
       "cluster" : {
         "routing" : {
           "allocation" : {
             "enable" : "all"
           }
         }
       }
     },
     "transient" : {}
   }
   ```

3. Check the status of the Wazuh indexer cluster again to see if the shard allocation has finished.

   ```bash
   curl -k -u <USERNAME>:<PASSWORD> https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cat/nodes?v
   ```

   Output:

   ```
   ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role node.roles                                        cluster_manager name
   172.18.0.3 34           86          32  6.67    5.30    2.53     dimr      cluster_manager,data,ingest,remote_cluster_client -               wazuh2.indexer
   172.18.0.4 21           86          32  6.67    5.30    2.53     dimr      cluster_manager,data,ingest,remote_cluster_client *               wazuh1.indexer
   172.18.0.2 16           86          32  6.67    5.30    2.53     dimr      cluster_manager,data,ingest,remote_cluster_client -               wazuh3.indexer
   ```
Uninstall
Note: You need root user privileges to run all the commands described below.
Yum

```bash
yum remove wazuh-indexer -y
rm -rf /var/lib/wazuh-indexer/
rm -rf /usr/share/wazuh-indexer/
rm -rf /etc/wazuh-indexer/
```

APT

```bash
apt-get remove --purge wazuh-indexer -y
```
Backup and restore
In this section, you can find instructions on how to create and restore a backup of your Wazuh Indexer key files, preserving file permissions, ownership, and paths. Later, you can move these folder contents back to the corresponding locations to restore your certificates and configurations. Backing up these files is useful in cases such as moving your Wazuh installation to another system.
Note: This backup only restores the configuration files, not the data. To backup data stored in the indexer, use snapshots.
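Snapshots use the standard OpenSearch snapshot API. As a minimal sketch (the repository name and location are placeholders, and the `location` path must be registered under `path.repo` in `opensearch.yml` beforehand):

```bash
# Register a filesystem snapshot repository
curl -k -u <USERNAME>:<PASSWORD> -X PUT \
  "https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_snapshot/wazuh_backup" \
  -H "Content-Type: application/json" -d '
{
  "type": "fs",
  "settings": { "location": "/mnt/snapshots" }
}'

# Take a snapshot of all indices
curl -k -u <USERNAME>:<PASSWORD> -X PUT \
  "https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_snapshot/wazuh_backup/snapshot-1"
```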
Creating a backup
To create a backup of the Wazuh indexer, follow these steps. Repeat them on every cluster node you want to back up.
Note: You need root user privileges to run all the commands described below.
Preparing the backup
1. Create the destination folder to store the files. For version control, add the date and time of the backup to the name of the folder.

   ```bash
   bkp_folder=~/wazuh_files_backup/$(date +%F_%H:%M)
   mkdir -p $bkp_folder && echo $bkp_folder
   ```

2. Save the host information.

   ```bash
   cat /etc/*release* > $bkp_folder/host-info.txt
   echo -e "\n$(hostname): $(hostname -I)" >> $bkp_folder/host-info.txt
   ```
Backing up the Wazuh indexer
Back up the Wazuh indexer certificates and configuration:

```bash
rsync -aREz \
  /etc/wazuh-indexer/certs/ \
  /etc/wazuh-indexer/jvm.options \
  /etc/wazuh-indexer/jvm.options.d \
  /etc/wazuh-indexer/log4j2.properties \
  /etc/wazuh-indexer/opensearch.yml \
  /etc/wazuh-indexer/opensearch.keystore \
  /etc/wazuh-indexer/opensearch-observability/ \
  /etc/wazuh-indexer/opensearch-reports-scheduler/ \
  /etc/wazuh-indexer/opensearch-security/ \
  /usr/lib/sysctl.d/wazuh-indexer.conf $bkp_folder
```
Compress the files and transfer them to the new server:
```bash
tar -cvzf wazuh_central_components.tar.gz ~/wazuh_files_backup/
```
Restoring Wazuh indexer from backup
This guide explains how to restore a backup of your configuration files.
Note: This guide is designed specifically for restoration from a backup of the same version.
Note: For a multi-node setup, there should be a backup file for each node within the cluster. You need root user privileges to execute the commands below.
Preparing the data restoration
1. On the new node, move the compressed backup file to the root `/` directory:

   ```bash
   mv wazuh_central_components.tar.gz /
   cd /
   ```

2. Decompress the backup files and change the current working directory to the directory based on the date and time of the backup files:

   ```bash
   tar -xzvf wazuh_central_components.tar.gz
   cd ~/wazuh_files_backup/<DATE_TIME>
   ```
Restoring Wazuh indexer files
Perform the following steps to restore the Wazuh indexer files on the new server.
1. Stop the Wazuh indexer to prevent any modifications to the Wazuh indexer files during the restoration process:

   ```bash
   systemctl stop wazuh-indexer
   ```

2. Restore the Wazuh indexer configuration files and change the file permissions and ownership accordingly:

   ```bash
   sudo cp etc/wazuh-indexer/jvm.options /etc/wazuh-indexer/jvm.options
   chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/jvm.options
   sudo cp -r etc/wazuh-indexer/jvm.options.d/* /etc/wazuh-indexer/jvm.options.d/
   chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/jvm.options.d
   sudo cp etc/wazuh-indexer/log4j2.properties /etc/wazuh-indexer/log4j2.properties
   chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/log4j2.properties
   sudo cp etc/wazuh-indexer/opensearch.keystore /etc/wazuh-indexer/opensearch.keystore
   chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/opensearch.keystore
   sudo cp -r etc/wazuh-indexer/opensearch-observability/* /etc/wazuh-indexer/opensearch-observability/
   chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/opensearch-observability/
   sudo cp -r etc/wazuh-indexer/opensearch-reports-scheduler/* /etc/wazuh-indexer/opensearch-reports-scheduler/
   chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/opensearch-reports-scheduler/
   sudo cp usr/lib/sysctl.d/wazuh-indexer.conf /usr/lib/sysctl.d/wazuh-indexer.conf
   ```

3. Start the Wazuh indexer service:

   ```bash
   systemctl start wazuh-indexer
   ```