Wazuh Indexer Technical Documentation
This folder contains the technical documentation for the Wazuh Indexer. The documentation is organized into the following guides:
- Development Guide: Instructions for building, testing, and packaging the Indexer.
- Reference Manual: Detailed information on the Indexer’s architecture, configuration, and usage.
Requirements
To work with this documentation, you need mdBook installed.
| Tool | Required Version |
|---|---|
| mdbook | 0.5.2 |
| mdbook-mermaid | 0.17.0 |
- Get the latest `cargo` (hit enter when prompted for a default install):

  ```bash
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -
  ```

- Install `mdbook` and `mdbook-mermaid`:

  ```bash
  cargo install mdbook --version 0.5.2 --locked
  cargo install mdbook-mermaid --version 0.17.0 --locked
  ```
Usage
- To build the documentation, run:

  ```bash
  ./build.sh
  ```

  The output will be generated in the `book` directory.

- To serve the documentation locally for preview, run:

  ```bash
  ./server.sh
  ```

  The documentation will be available at http://127.0.0.1:3000.
Development documentation
Under this section, you will find the development documentation of the Wazuh Indexer. It contains instructions to compile, run, test and package the source code, as well as instructions to set up a development environment to get started developing the Wazuh Indexer.
This documentation assumes basic knowledge of certain tools and technologies, such as Docker, Bash (Linux) or Git.
Set up the Development Environment
1. Git
Install and configure Git (SSH keys, commits and tags signing, user and email).
- Set your username.
- Set your email address.
- Generate an SSH key.
- Add the public key to your GitHub account for authentication and signing.
- Configure Git to sign commits with your SSH key.
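The bullets above can be sketched as shell commands. This is a minimal sketch: the name, email and key path are placeholders, and the `GIT_CONFIG_GLOBAL` line only sandboxes the example so it does not touch your real configuration.

```shell
# Sandbox the example in a throwaway global config file; drop this line for real use.
export GIT_CONFIG_GLOBAL="$(mktemp)"

# Identity (placeholders -- substitute your own).
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Generate an Ed25519 SSH key; for real use, keep it under ~/.ssh/.
KEY_DIR="$(mktemp -d)"
ssh-keygen -q -t ed25519 -C "you@example.com" -f "$KEY_DIR/id_ed25519" -N ""

# Tell Git to sign commits and tags with the SSH key (requires Git >= 2.34).
git config --global gpg.format ssh
git config --global user.signingkey "$KEY_DIR/id_ed25519.pub"
git config --global commit.gpgsign true
git config --global tag.gpgsign true
```

The public key (`id_ed25519.pub`) is the one you upload to GitHub, once as an authentication key and once as a signing key.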
2. Repositories
Clone the Wazuh Indexer repositories (use SSH). Before you start, you need to properly configure your working repositories to have origin and upstream remotes.
mkdir -p ~/wazuh && cd ~/wazuh
# Plugins (no upstream fork)
git clone git@github.com:wazuh/wazuh-indexer-plugins.git
# Indexer core (forked from OpenSearch)
git clone git@github.com:wazuh/wazuh-indexer.git
cd wazuh-indexer
git remote add upstream git@github.com:opensearch-project/opensearch.git
cd ..
# Reporting plugin (forked from OpenSearch)
git clone git@github.com:wazuh/wazuh-indexer-reporting.git
cd wazuh-indexer-reporting
git remote add upstream git@github.com:opensearch-project/reporting.git
cd ..
# Security Analytics (forked from OpenSearch)
git clone git@github.com:wazuh/wazuh-indexer-security-analytics.git
cd wazuh-indexer-security-analytics
git remote add upstream git@github.com:opensearch-project/security-analytics.git
cd ..
# Notifications plugin (forked from OpenSearch)
git clone git@github.com:wazuh/wazuh-indexer-notifications.git
cd wazuh-indexer-notifications
git remote add upstream git@github.com:opensearch-project/notifications.git
cd ..
# Common Utils plugin (forked from OpenSearch)
git clone git@github.com:wazuh/wazuh-indexer-common-utils.git
cd wazuh-indexer-common-utils
git remote add upstream git@github.com:opensearch-project/common-utils.git
cd ..
# Alerting plugin (forked from OpenSearch)
git clone git@github.com:wazuh/wazuh-indexer-alerting.git
cd wazuh-indexer-alerting
git remote add upstream git@github.com:opensearch-project/alerting.git
cd ..
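Since every fork follows the same clone-then-add-upstream pattern, the steps above can be scripted. This is an optional sketch, not part of the official tooling: `DRY_RUN=echo` only prints the commands so you can review them first; clear it (`DRY_RUN=`) to execute them for real.

```shell
# Print the commands instead of running them; set DRY_RUN= to actually clone.
DRY_RUN=echo

# Clone a Wazuh fork and, when given, wire up its OpenSearch upstream remote.
clone_with_upstream() {
  fork="$1"
  upstream="$2"
  $DRY_RUN git clone "git@github.com:wazuh/${fork}.git"
  if [ -n "$upstream" ]; then
    $DRY_RUN git -C "$fork" remote add upstream "git@github.com:opensearch-project/${upstream}.git"
  fi
}

clone_with_upstream wazuh-indexer-plugins ""            # no upstream fork
clone_with_upstream wazuh-indexer opensearch
clone_with_upstream wazuh-indexer-reporting reporting
clone_with_upstream wazuh-indexer-security-analytics security-analytics
clone_with_upstream wazuh-indexer-notifications notifications
clone_with_upstream wazuh-indexer-common-utils common-utils
clone_with_upstream wazuh-indexer-alerting alerting
```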
3. Vagrant
Install Vagrant with the Libvirt provider following the guide.
Then install the Vagrant SCP plugin:
vagrant plugin install vagrant-scp
4. IntelliJ IDEA
Prepare your IDE:
- Install IDEA Community Edition as per the official documentation.
- Set a global SDK to Eclipse Temurin following this guide.
You can find the JDK version to use in the `wazuh-indexer/gradle/libs.versions.toml` file. IntelliJ IDEA includes some JDKs by default. If you need to change it, or if you want to use a different distribution, follow the instructions in the next section.
5. Set up Java
When you open a Java project for the first time, IntelliJ will ask you to install the appropriate JDK for the project.
Using IDEA, install a JDK following this guide. The version to install must match the JDK version used by the Indexer (check wazuh-indexer/gradle/libs.versions.toml).
Once the JDK is installed, configure it as the default system-wide Java installation using update-alternatives:
sudo update-alternatives --install /usr/bin/java java /home/$USER/.jdks/temurin-21.0.9/bin/java 0
Check Java is correctly configured:
java --version
If you need to install or switch JDK versions, use sudo update-alternatives --config java to select the JDK of your preference.
Set the JAVA_HOME and PATH environment variables by adding these lines to your shell RC file (.bashrc, .zshrc, etc.):
export JAVA_HOME=/usr/lib/jvm/temurin-24-jdk-amd64
export PATH=$PATH:/usr/lib/jvm/temurin-24-jdk-amd64/bin
After that, restart your shell or run source ~/.zshrc (or similar) to apply the changes. Verify with java --version.
Tip: SDKMAN is a convenient tool for managing multiple JDK versions:
```bash
sdk install java 24-tem
sdk use java 24-tem
```
6. Docker (Optional)
Docker is useful for running integration tests and local test environments. Install Docker Engine following the official instructions.
Verify the installation:
docker --version
docker run hello-world
7. Test Cluster (Optional)
The repository includes a Vagrant-based test cluster at tools/test-cluster/ for end-to-end testing against a real Wazuh Indexer instance.
Prerequisites:
- Vagrant
- VirtualBox or another supported provider
Refer to the tools/test-cluster/README.md for provisioning and usage instructions.
8. Verify the Setup
After completing the setup, verify everything works:
cd wazuh-indexer-plugins
./gradlew :wazuh-indexer-content-manager:compileJava
For the Notifications plugin (Kotlin-based, separate repository):
cd wazuh-indexer-notifications
./gradlew build
For the Common Utils plugin (Shared Library):
cd wazuh-indexer-common-utils
./gradlew clean build publishToMavenLocal
For the Alerting plugin:
cd wazuh-indexer-alerting
./gradlew build
If compilation succeeds, your environment is ready. See Build from Sources for more build commands.
How to generate a package
This guide includes instructions to generate distribution packages locally using Docker.
Wazuh Indexer supports any of these combinations:

- distributions: `['tar', 'deb', 'rpm']`
- architectures: `['x64', 'arm64']`
Windows is currently not supported.
For more information navigate to the compatibility section.
Before you get started, make sure to clean your environment by running ./gradlew clean on the root level of the wazuh-indexer repository.
Pre-requisites
The process to build packages requires Docker and Docker Compose.
Your workstation must meet the minimum hardware requirements (the more resources the better ☺):
- 8 GB of RAM (minimum)
- 4 cores
The tools and source code to generate a package of Wazuh Indexer are hosted in the wazuh-indexer repository, so clone it if you haven’t already.
Building wazuh-indexer packages
The Docker environment under wazuh-indexer/build-scripts/builder automates the build and assemble process for the Wazuh Indexer and its plugins, making it easy to create packages on any system.
Use the builder.sh script to build a package.
./builder.sh -h
Usage: ./builder.sh [args]
Arguments:
-p INDEXER_PLUGINS_BRANCH [Optional] wazuh-indexer-plugins repo branch, default is 'main'.
-r INDEXER_REPORTING_BRANCH [Optional] wazuh-indexer-reporting repo branch, default is 'main'.
-s SECURITY_ANALYTICS_BRANCH [Optional] wazuh-indexer-security-analytics repo branch, default is 'main'.
-n NOTIFICATIONS_BRANCH [Optional] wazuh-indexer-notifications repo branch, default is 'main'.
-c COMMON_UTILS_BRANCH [Optional] wazuh-indexer-common-utils repo branch, default is 'main'.
-e ENGINE_TARBALL [Optional] Path to wazuh-engine tarball (.tar.gz) on the host.
-R REVISION [Optional] Package revision, default is '0'.
-S STAGE [Optional] Staging build, default is 'false'.
-d DISTRIBUTION [Optional] Distribution, default is 'rpm'.
-a ARCHITECTURE [Optional] Architecture, default is 'x64'.
-D Destroy the docker environment
-h Print help
The example below generates a wazuh-indexer package for Debian-based systems and the x64 architecture, using 0 as the revision number and the production naming convention.

# Within wazuh-indexer/build-scripts/builder
bash builder.sh -d deb -a x64 -R 0 -S true -e ./wazuh-engine-5.0.0-linux-amd64.tar.gz
The resulting package will be stored at wazuh-indexer/artifacts/dist.
The `STAGE` option defines the naming of the package. When set to `false`, the package is unequivocally named with the commit SHAs of the `wazuh-indexer`, `wazuh-indexer-plugins` and `wazuh-indexer-reporting` repositories, in that order. For example: `wazuh-indexer_5.0.0-0_x86_64_aff30960363-846f143-494d125.rpm`.
How to generate a container image
This guide includes instructions to generate container images locally using Docker.
Wazuh Indexer supports any of these combinations:

- distributions: `['tar', 'deb', 'rpm']`
- architectures: `['x64', 'arm64']`
Windows is currently not supported.
For more information navigate to the compatibility section.
Before you get started, make sure to clean your environment by running ./gradlew clean on the root level of the wazuh-indexer repository.
Pre-requisites
The process to build packages requires Docker and Docker Compose.
Your workstation must meet the minimum hardware requirements (the more resources the better ☺):
- 8 GB of RAM (minimum)
- 4 cores
The tools and source code to generate a package of Wazuh Indexer are hosted in the wazuh-indexer repository, so clone it if you haven’t already.
Building wazuh-indexer Docker images
The wazuh-indexer/build-scripts/docker folder contains the code to build Docker images. Below is an example of the command needed to build the image. Set the build arguments and the image tag accordingly.
The Docker image is built from a wazuh-indexer tarball (tar.gz), which must be present in the same folder as the Dockerfile in wazuh-indexer/build-scripts/docker.
docker build \
--build-arg="VERSION=<version>" \
--build-arg="INDEXER_TAR_NAME=wazuh-indexer_<version>-<revision>_linux-x64.tar.gz" \
--tag=wazuh-indexer:<version>-<revision> \
--progress=plain \
--no-cache .
Then, start a container with:
docker run -p 9200:9200 -it --rm wazuh-indexer:<version>-<revision>
The build-and-push-docker-image.sh script automates the process to build and push Wazuh Indexer Docker images to our repository in quay.io. The script takes several parameters. Use the -h option to display them.
To push images, the credentials must be set as environment variables:
- QUAY_USERNAME
- QUAY_TOKEN
Usage: build-scripts/build-and-push-docker-image.sh [args]
Arguments:
-n NAME [required] Tarball name.
-r REVISION [Optional] Revision qualifier, default is 0.
-h help
The script will stop if the credentials are not set, or if any of the required parameters are not provided.
This script is used in the 5_builderpackage_docker.yml GitHub Workflow, which further automates the process. When possible, prefer this method.
How to Build from Sources
The Wazuh Indexer Plugins repository uses Gradle as its build system. The root project contains multiple subprojects, one per plugin.
Building the Entire Project
To build all plugins (compile, test, and package):
./gradlew build
When completed, distribution artifacts for each plugin are located in their respective build/distributions/ directories.
Building a Specific Plugin
To build only the Content Manager plugin:
./gradlew :wazuh-indexer-content-manager:build
Other plugin targets follow the same pattern. To see all available projects:
./gradlew projects
Compile Only (No Tests)
For a faster feedback loop during development, compile without running tests:
./gradlew :wazuh-indexer-content-manager:compileJava
This is useful for checking that your code changes compile correctly before running the full test suite.
Output Locations
| Artifact | Location |
|---|---|
| Plugin ZIP distribution | plugins/<plugin-name>/build/distributions/ |
| Compiled classes | plugins/<plugin-name>/build/classes/ |
| Test reports | plugins/<plugin-name>/build/reports/tests/ |
| Generated JARs | plugins/<plugin-name>/build/libs/ |
Common Build Issues
JDK Version Mismatch
The project requires a specific JDK version (currently JDK 24, Eclipse Temurin). If you see compilation errors related to Java version, check:
java --version
Ensure JAVA_HOME points to the correct JDK. See Setup for details.
Dependency Resolution Failures
If Gradle cannot resolve dependencies:
- Check your network connection (dependencies are downloaded from Maven Central and other repositories).
- Try clearing the Gradle cache:

  ```bash
  rm -rf ~/.gradle/caches/
  ```

- Re-run with `--refresh-dependencies`:

  ```bash
  ./gradlew build --refresh-dependencies
  ```
Out of Memory
For large builds, increase Gradle’s heap size in gradle.properties:
org.gradle.jvmargs=-Xmx4g
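For reference, a tuned gradle.properties might look like the following sketch. The values are illustrative examples, not project defaults:

```properties
# Give the Gradle daemon more heap and cap metaspace growth.
org.gradle.jvmargs=-Xmx4g -XX:MaxMetaspaceSize=1g
# Optional: speed up large builds on multi-core machines.
org.gradle.parallel=true
org.gradle.caching=true
```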
Linting and Formatting Errors
The build includes code quality checks (Spotless, etc.). If formatting checks fail:
./gradlew spotlessApply
Then rebuild.
Useful Gradle Flags
| Flag | Description |
|---|---|
| `--info` | Verbose output |
| `--debug` | Debug-level output |
| `--stacktrace` | Print stack traces on failure |
| `--parallel` | Run tasks in parallel (faster on multi-core) |
| `-x test` | Skip tests: `./gradlew build -x test` |
| `--continuous` | Watch mode: rebuilds on file changes |
How to run from sources
Every Wazuh Indexer repository includes one or more Gradle projects with predefined tasks to run and build the source code.
In this case, to run a Gradle project from source code, run the ./gradlew run command.
For Wazuh Indexer, additional plugins may be installed by passing the -PinstalledPlugins flag:
./gradlew run -PinstalledPlugins="['plugin1', 'plugin2']"
The ./gradlew run command will build and start the project, writing its log output above Gradle’s status message. A lot of information is logged on startup; specifically, these lines tell you that OpenSearch is ready:
[2020-05-29T14:50:35,167][INFO ][o.e.h.AbstractHttpServerTransport] [runTask-0] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2020-05-29T14:50:35,169][INFO ][o.e.n.Node ] [runTask-0] started
It’s typically easier to wait until the console stops scrolling, and then run curl in another window to check if the OpenSearch instance is running:
curl localhost:9200
{
"name" : "runTask-0",
"cluster_name" : "runTask",
"cluster_uuid" : "oX_S6cxGSgOr_mNnUxO6yQ",
"version" : {
"number" : "1.0.0-SNAPSHOT",
"build_type" : "tar",
"build_hash" : "0ba0e7cc26060f964fcbf6ee45bae53b3a9941d0",
"build_date" : "2021-04-16T19:45:44.248303Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
}
}
Use -Dtests.opensearch. to pass additional settings to the running instance. For example, to enable OpenSearch to listen on an external IP address, pass -Dtests.opensearch.http.host. Make sure your firewall or security policy allows external connections for this to work.
./gradlew run -Dtests.opensearch.http.host=0.0.0.0
How to Run the Tests
This section explains how to run the Wazuh Indexer Plugins tests at various levels.
Full Suite
To execute all tests and code quality checks (linting, documentation, formatting):
./gradlew check
This runs unit tests, integration tests, and static analysis tasks.
Unit Tests
Run all unit tests across the entire project:
./gradlew test
Run unit tests for a specific plugin:
./gradlew :wazuh-indexer-content-manager:test
Integration Tests
Run integration tests for a specific plugin:
./gradlew :wazuh-indexer-content-manager:integTest
YAML REST Tests
Plugins can define REST API tests using YAML test specs. To run them:
./gradlew :wazuh-indexer-content-manager:yamlRestTest
Reproducible Test Runs
Tests use randomized seeds. When a test fails, the output includes the seed that was used. To reproduce the exact same run:
./gradlew :wazuh-indexer-content-manager:test -Dtests.seed=DEADBEEF
Replace DEADBEEF with the actual seed from the failure output.
Viewing Test Reports
After running tests, HTML reports are generated at:
plugins/<plugin-name>/build/reports/tests/test/index.html
Open this file in a browser to see detailed results with pass/fail status, stack traces, and timing.
For integration tests:
plugins/<plugin-name>/build/reports/tests/integTest/index.html
Running a Single Test Class
To run a specific test class:
./gradlew :wazuh-indexer-content-manager:test --tests "com.wazuh.contentmanager.rest.service.RestPostRuleActionTests"
Test Cluster (Vagrant)
For end-to-end testing on a real Wazuh Indexer service, the repository includes a Vagrant-based test cluster at tools/test-cluster/. This provisions a virtual machine with Wazuh Indexer installed and configured.
Refer to its README.md for setup and usage instructions.
Package Testing
Smoke tests on built packages are run via GitHub Actions Workflows. These install packages on supported operating systems:
- DEB packages — installed on the Ubuntu 24.04 GitHub Actions runner.
- RPM packages — installed in a Red Hat 9 Docker container.
Useful Test Flags
| Flag | Description |
|---|---|
| `-Dtests.seed=<seed>` | Reproduce a specific randomized test run |
| `-Dtests.verbose=true` | Print test output to stdout |
| `--tests "ClassName"` | Run a single test class |
| `--tests "ClassName.methodName"` | Run a single test method |
| `-x test` | Skip unit tests in a build |
Wazuh Indexer Setup Plugin — Development Guide
This document describes how to extend the Wazuh Indexer setup plugin to create new index templates and index management policies (ISM) for OpenSearch.
📦 Creating a New Index
1. Add a New Index Template
Create a new JSON file in the directory: /plugins/setup/src/main/resources
Follow the existing structure and naming convention. Example:
{
"index_patterns": ["<pattern>"],
"mappings": {
"date_detection": false,
"dynamic": "strict",
"properties": {
<custom mappings and fields>
}
},
"order": 1,
"settings": {
"index": {
"number_of_shards": 1,
"number_of_replicas": 1
}
}
}
2. Register the Index in the Code
Edit the constructor of the SetupPlugin class located at: /plugins/setup/src/main/java/com/wazuh/setup/SetupPlugin.java
Add the template and index entry to the indices map. There are two kinds of indices:

- Stream index: stream indices contain time-based events of any kind (alerts, statistics, logs, …).
- Stateful index: stateful indices represent the most recent information about a subject (active vulnerabilities, installed packages, open ports, …). They differ from stream indices in that they do not contain timestamps: the information is not time-based, as it always represents the most recent state.
```java
/**
 * Main class of the Indexer Setup plugin. This plugin is responsible for the creation of the
 * index templates and indices required by Wazuh to work properly.
 */
public class SetupPlugin extends Plugin implements ClusterPlugin {
    // ...
    public SetupPlugin() {
        // Stream indices
        this.indices.add(new StreamIndex("my-stream-index-000001", "my-index-template-1", "my-alias"));
        // State indices
        this.indices.add(new StateIndex("my-state-index", "my-index-template-2"));
        // ...
    }
}
```
✅ Verifying Template and Index Creation

After building the plugin and deploying the Wazuh Indexer with it, you can verify the index templates and indices using the following commands:

```bash
curl -X GET <indexer-IP>:9200/_index_template/
curl -X GET <indexer-IP>:9200/_cat/indices?v
```
Alternatively, use the Developer Tools console from the Wazuh Dashboard, or your browser.
🔁 Creating a New ISM (Index State Management) Policy
1. Add Rollover Alias to the Index Template
Edit the existing index template JSON file and add the following setting:
"plugins.index_state_management.rollover_alias": "<index-name>"
2. Define the ISM Policy
Refer to the OpenSearch ISM Policies documentation for more details.
Here is an example ISM policy:
{
"policy": {
"policy_id": "<index-name>-rollover-policy",
"description": "<policy-description>",
"last_updated_time": <unix-timestamp-in-milliseconds>,
"schema_version": 21,
"error_notification": null,
"default_state": "rollover",
"states": [
{
"name": "rollover",
"actions": [
{
"rollover": {
"min_doc_count": 200000000,
"min_index_age": "7d",
"min_primary_shard_size": "25gb"
}
}
],
"transitions": []
}
],
"ism_template": [
{
"index_patterns": [
"wazuh-<pattern1>-*"
// Optional additional patterns
// "wazuh-<pattern2>-*"
],
"priority": <priority-int>,
"last_updated_time": <unix-timestamp-in-milliseconds>
}
]
}
}
3. Register the ISM Policy in the Plugin Code
Edit the IndexStateManagement class located at: /plugins/setup/src/main/java/com/wazuh/setup/index/IndexStateManagement.java
Register the new policy constant and add it in the constructor:
// ISM policy name constant (filename without .json extension)
static final String MY_POLICY = "my-policy-filename";
...
/**
* Constructor
*
* @param index Index name
* @param template Index template name
*/
public IndexStateManagement(String index, String template) {
super(index, template);
this.policies = new ArrayList<>();
// Register the ISM policy to be created
this.policies.add(MY_POLICY);
}
📌 Additional Notes
Always follow existing naming conventions to maintain consistency.
Use epoch timestamps (in milliseconds) for last_updated_time fields.
ISM policies and templates must be properly deployed before the indices are created.
🚀 Event Stream Templates
Overview
All event data streams share a single base template: templates/streams/events.json. At deployment time, the plugin generates one index template per event category by dynamically setting the index_patterns and rollover_alias fields from the base template. This means:
- Source of truth: only `events.json` exists in the repository.
- At runtime: one index template is created for each category (e.g., `wazuh-events-v5-cloud-services-template`, `wazuh-events-v5-security-template`, etc.).
The StreamIndex class handles this: when constructed with only an index name (no explicit template path), it defaults to templates/streams/events and rewrites the index_patterns and rollover_alias to match the specific index.
How it works
// Single-arg constructor defaults to the shared events template
new StreamIndex("wazuh-events-v5-cloud-services")
// Equivalent to:
new StreamIndex("wazuh-events-v5-cloud-services", "templates/streams/events")
During createTemplate(), the plugin:
1. Reads `events.json` from the classpath
2. Overrides `index_patterns` to `["wazuh-events-v5-cloud-services*"]`
3. Overrides `rollover_alias` to `"wazuh-events-v5-cloud-services"`
4. Creates the composable index template in OpenSearch
Verifying deployed templates
To list all event templates in a running cluster:
GET /_index_template/wazuh-events-*
Specialized stream templates
Some data streams use their own dedicated templates instead of the shared events.json:
| Data Stream | Template | Notes |
|---|---|---|
| `wazuh-events-raw-v5` | `templates/streams/raw.json` | Stores original unprocessed events |
| `wazuh-events-v5-unclassified` | `templates/streams/unclassified.json` | Stores uncategorized events for investigation |
| `wazuh-active-responses` | `templates/streams/active-responses.json` | Active Response execution requests |
These are registered with the two-arg constructor:
new StreamIndex("wazuh-events-raw-v5", "templates/streams/raw")
new StreamIndex("wazuh-events-v5-unclassified", "templates/streams/unclassified")
new StreamIndex("wazuh-active-responses", "templates/streams/active-responses")
🚀 Unclassified Events Data Stream (wazuh-events-v5-unclassified)
Overview
The wazuh-events-v5-unclassified data stream is a specialized stream designed to capture and store events that do not match any predefined event categories. This provides visibility into edge cases, parsing failures, and events that may require new categorization rules.
Purpose
- Investigation and Troubleshooting: Analyze uncategorized events to identify patterns or issues
- Rule Development: Identify events that need new categorization rules
- System Monitoring: Track parsing failures and anomalies
Data Stream Configuration
Index Template
- Location: `plugins/setup/src/main/resources/templates/streams/unclassified.json`
- Index Pattern: `wazuh-events-v5-unclassified*`
- Rollover Alias: `wazuh-events-v5-unclassified`
- Priority: 1 (higher priority than standard event streams for proper template selection)
Fields Included
- @timestamp: Event timestamp
- event.original: Raw, unprocessed event data
- wazuh.agent.*: Agent metadata (id, name, version, type)
- wazuh.cluster.*: Cluster information (name, node)
- wazuh.space.name: Wazuh space/tenant information
- wazuh.schema.version: Schema version
- wazuh.integration.*: Integration metadata (category, name, decoders, rules)
Storage Settings
- Number of Shards: 3
- Number of Replicas: 0
- Auto-expand Replicas: 0-1
- Refresh Interval: 5 seconds
- Dynamic Mapping: Strict (prevents unintended field creation)
ISM Policy
Policy Details
- Policy Name: `stream-unclassified-events-policy`
- Location: `plugins/setup/src/main/resources/policies/stream-unclassified-events-policy.json`
- Retention Period: 7 days
- Priority: 100
Policy States
- Hot State
  - Actions: none (events are immediately indexed)
  - Transition Condition: transitions to `delete` after 7 days
- Delete State
  - Actions: deletes the index
  - Retry Policy: 3 attempts with exponential backoff (1-minute initial delay)
Use Cases
- Event Classification Issues
  - Events that failed to match any category
  - Malformed or unusual event formats
- Parsing Failures
  - Events that couldn’t be decoded properly
  - Invalid event structures
- Rule Development
  - Analyzing patterns that require new rules
  - Edge cases not covered by existing rules
- System Diagnostics
  - Understanding integration performance
  - Identifying missing integrations or decoders
Configuration
The data stream is created automatically during plugin initialization. Ensure:
- The template file `unclassified.json` exists in `templates/streams/`
- The ISM policy file `stream-unclassified-events-policy.json` exists in `policies/`
- Both are registered in `SetupPlugin.java` and `IndexStateManagement.java`
Indexing Unclassified Events
To index events into this data stream, use:
POST /wazuh-events-v5-unclassified/_doc
{
"@timestamp": "2024-02-19T10:00:00Z",
"event": {
"original": "raw uncategorized event data"
},
"wazuh": {
"agent": {
"id": "001",
"name": "agent-name"
},
"space": {
"name": "default"
}
}
}
Monitoring and Analysis
Query Unclassified Events
GET /wazuh-events-v5-unclassified/_search
{
"query": {
"match_all": {}
}
}
Count Events by Agent
GET /wazuh-events-v5-unclassified/_search
{
"size": 0,
"aggs": {
"events_by_agent": {
"terms": {
"field": "wazuh.agent.id",
"size": 100
}
}
}
}
Time-based Analysis
GET /wazuh-events-v5-unclassified/_search
{
"size": 0,
"aggs": {
"events_over_time": {
"date_histogram": {
"field": "@timestamp",
"interval": "1h"
}
}
}
}
Testing
Integration tests for the unclassified data stream are located at:
plugins/setup/src/test/java/com/wazuh/setup/UnclassifiedEventsIT.java
These tests verify:
- Data stream creation
- Template application
- ISM policy creation and application
- Document indexing capability
- Correct field mappings
🚀 Active Responses Data Stream (wazuh-active-responses)
Overview
The wazuh-active-responses data stream stores Active Response execution requests generated when monitor triggers match their conditions. This is part of the Active Response 5.0 integration with Wazuh XDR, using the Indexer Alerting and Notifications plugins as the foundation.
Purpose
- Active Response Pipeline: Structured and auditable execution pipeline for Active Response actions
- Manager Retrieval: The Wazuh manager retrieves documents from this index to distribute and execute Active Responses on agents
- Event Correlation: Each document references the source event (document ID and index) that triggered the response
Data Stream Configuration
Index Template
- Location: `plugins/setup/src/main/resources/templates/streams/active-responses.json`
- Index Pattern: `wazuh-active-responses*`
- Rollover Alias: `wazuh-active-responses`
- Priority: 1
Fields Included (WCS-compatible)
- @timestamp: When the document was inserted into the wazuh-active-responses index (indexing time)
- event.doc_id: Document ID of the matched alert that triggered the active response
- event.index: Source index of the matched alert
- wazuh.active_response.name: Name of the active response configured in the channel
- wazuh.active_response.executable: Executable configured in the active response channel
- wazuh.active_response.extra_arguments: Arguments configured in the channel
- wazuh.active_response.location: Where to execute (local, defined-agent, all)
- wazuh.active_response.agent_id: Agent configured in the channel
- wazuh.active_response.type: Response type (stateless, stateful)
- wazuh.active_response.stateful_timeout: Seconds configured in the channel (for stateful)
- wazuh.agent.*: Agent metadata
- wazuh.cluster.*: Cluster information
- wazuh.space.name: Wazuh space/tenant information
ISM Policy
Policy Details
- Policy Name: `stream-active-responses-policy`
- Location: `plugins/setup/src/main/resources/policies/stream-active-responses-policy.json`
- Retention Period: 3 days
- Priority: 100
Configuration
The data stream is created automatically during plugin initialization. Ensure:
- The template file `active-responses.json` exists in `templates/streams/`
- The ISM policy file `stream-active-responses-policy.json` exists in `policies/`
- Both are registered in `SetupPlugin.java` and `IndexStateManagement.java`
Testing
Integration tests for the active responses data stream are located at:
plugins/setup/src/test/java/com/wazuh/setup/ActiveResponsesIT.java
Defining default users and roles for Wazuh Indexer
The Wazuh Indexer packages include a set of default users and roles specially crafted for Wazuh’s use cases. This guide provides instructions to extend or modify these users and roles so they end up being included in the Wazuh Indexer package by default.
Note that access control and permissions management are handled by OpenSearch’s Security plugin. As a result, we provide configuration files for it. The data is applied during the cluster’s initialization, by running the indexer-security-init.sh script.
Considerations and conventions
As these configuration files are included in the Wazuh Indexer package, they are hosted in the wazuh-indexer repository. Be aware of that when reading this guide.
Any security-related resource (roles, action groups, users, …) created by us must be reserved (reserved: true). This ensures users cannot modify them, guaranteeing the correct operation of the Wazuh Central Components. They should also be visible (hidden: false) unless explicitly defined otherwise.
1. Adding a new user
Add the new user to the internal_users.wazuh.yml file located at: wazuh-indexer/distribution/src/config/security/.
new-user:
# Generate the hash using the tool at `plugins/opensearch-security/tools/hash.sh -p <new-password>`
hash: "<HASHED-PASSWORD>"
reserved: true
hidden: false
backend_roles: []
description: "New user description"
OpenSearch’s reference:
2. Adding a new role
Add the new role to the roles.wazuh.yml file located at: wazuh-indexer/distribution/src/config/security/.
- Under `index_permissions.index_patterns`, list the index patterns the role will have effect on.
- Under `index_permissions.allowed_actions`, list the allowed action groups or individual permissions granted to this role.

The default action groups for cluster_permissions and index_permissions are listed in the Default action groups documentation.
role-read:
reserved: true
hidden: false
cluster_permissions: []
index_permissions:
- index_patterns:
- "wazuh-*"
dls: ""
fls: []
masked_fields: []
allowed_actions:
- "read"
tenant_permissions: []
static: true
role-write:
reserved: true
hidden: false
cluster_permissions: []
index_permissions:
- index_patterns:
- "wazuh-*"
dls: ""
fls: []
masked_fields: []
allowed_actions:
- "index"
tenant_permissions: []
static: true
OpenSearch’s reference:
3. Adding a new role mapping
Add the new role mapping to the roles_mapping.wazuh.yml file located at: wazuh-indexer/distribution/src/config/security/. Note that the mapping name must match the role name.
- Under `users`, list the users the role will be mapped to.
role-read:
reserved: true
hidden: false
backend_roles: [ ]
hosts: [ ]
users:
- "new-user"
and_backend_roles: [ ]
role-write:
reserved: true
hidden: false
backend_roles: [ ]
hosts: [ ]
users:
- "new-user"
and_backend_roles: [ ]
OpenSearch’s reference:
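Once the cluster is initialized, the new resources can be inspected through the security plugin's REST API. A hedged sketch for a local test node (host, port, and credentials are assumptions; adjust to your deployment):

```shell
# Base URI of the OpenSearch security plugin API on a local test node.
BASE="https://127.0.0.1:9200/_plugins/_security/api"

# Uncomment on a running deployment to inspect the resources defined above:
# curl -sk -u admin:admin "$BASE/internalusers/new-user"
# curl -sk -u admin:admin "$BASE/roles/role-read"
# curl -sk -u admin:admin "$BASE/rolesmapping/role-read"
echo "$BASE"
```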
Testing the configuration
The new configuration must be validated on a running Wazuh Indexer deployment with the security plugin installed.
You can follow any of these paths:
A. Generating a new Wazuh Indexer package
- Apply your changes to the configuration files in `wazuh-indexer/distribution/src/config/security/`.
- Generate a new package (see Build Packages).
- Follow the official installation and configuration steps.
- Check that the new changes are applied (you can use the UI or the API).
B. Applying the new configuration to an existing Wazuh Indexer deployment (using the UI or API)
- Use the Wazuh Indexer API or the Wazuh Dashboard to create a new security resource. Follow the steps in Defining users and roles.
C. Applying the new configuration to an existing Wazuh Indexer deployment (using configuration files)
- Add the new configuration to the affected file within `/etc/wazuh-indexer/opensearch-security/`.
- Run the `/usr/share/wazuh-indexer/bin/indexer-security-init.sh` script to load the new configuration.
The `indexer-security-init.sh` script will overwrite your security configuration, including passwords. Use it at your own risk.
Alternatively, apply the new configuration using fine-grained options. See Applying changes to configuration files.
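Path C can be sketched as follows. The deployed file names (e.g. `roles.yml`) follow the security plugin's conventions and are assumptions here; only the two paths come from this guide:

```shell
# Paths from this guide (run on the Indexer node as root).
CFG_DIR="/etc/wazuh-indexer/opensearch-security"
INIT_SCRIPT="/usr/share/wazuh-indexer/bin/indexer-security-init.sh"

# Uncomment on the node. Note: the init script overwrites the current security config.
# $EDITOR "$CFG_DIR/roles.yml"
# bash "$INIT_SCRIPT"
echo "$CFG_DIR"
```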
Wazuh Indexer Reporting Plugin — Development Guide
This document describes how to build a Wazuh Reporting plugin development environment to create and test new features.
Working from a minimal environment
To deploy a minimal environment for developing and testing the reporting plugin, you need at least a running Wazuh Indexer and a Wazuh Dashboard. You can then set up your own SMTP server to test email notifications, following the Mailpit configuration below. To verify everything works correctly, try generating reports following the user’s guide.
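For the minimal setup, Mailpit can also be run as a container instead of a VM. A sketch assuming Docker is available and using Mailpit's published image (the port mapping, 8025 for the UI and 1025 for SMTP, matches Mailpit's defaults):

```shell
# Default Mailpit ports: web UI and SMTP.
UI_PORT=8025
SMTP_PORT=1025

# Uncomment on a host with Docker installed:
# docker run -d --name mailpit -p "$UI_PORT:8025" -p "$SMTP_PORT:1025" axllent/mailpit
echo "UI on :$UI_PORT, SMTP on :$SMTP_PORT"
```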
Working from real scenario packages
Preparing packages
- Wazuh Indexer package (Debian package based on OpenSearch 3.1.0), compiled locally using the Docker builder: `bash builder.sh -d deb -a x64`.
- Wazuh Dashboard package (Debian package based on OpenSearch 3.1.0), downloaded from wazuh-dashboard actions.

Note: To test using RPM packages, update the Vagrant configuration and provisioning scripts accordingly (for example, change `generic/ubuntu2204` to `generic/centos7` in the Vagrantfile and replace Debian-specific installation commands with RPM equivalents).
Preparing a development environment
Prepare a multi-VM Vagrant environment with the following components:
- Server
- Wazuh Indexer (including the reporting plugin).
- Wazuh Dashboard (including the reporting plugin).
- Mailpit
- Mailpit SMTP server.
File location should be:
working-dir/
├── Vagrantfile
├── data/
│ ├── wazuh-indexer_*.deb
│ ├── wazuh-dashboard_*.deb
│ ├── gencerts.sh
│ ├── mailpit.sh
│ └── server.sh
Vagrantfile
Details
# Workaround: treat any existing VirtualBox DHCP server as matching, to avoid
# DHCP conflict errors on "vagrant up".
class VagrantPlugins::ProviderVirtualBox::Action::Network
def dhcp_server_matches_config?(dhcp_server, config)
true
end
end
Vagrant.configure("2") do |config|
config.vm.define "server" do |server|
server.vm.box = "generic/ubuntu2204"
server.vm.provider "virtualbox" do |vb|
vb.memory = "8192"
end
# For Hyper-V provider
#server.vm.provider "hyperv" do |hv|
# hv.memory = 8192
#end
server.vm.network "private_network", type: "dhcp"
server.vm.hostname = "rhel-server"
server.vm.provision "file", source: "data", destination: "/tmp/vagrant_data"
server.vm.provision "shell", privileged: true, path: "data/server.sh"
end
config.vm.define "mailpit" do |mailpit|
mailpit.vm.box = "generic/ubuntu2204"
mailpit.vm.provider "virtualbox" do |vb|
vb.memory = "1024"
end
# For Hyper-V provider
#mailpit.vm.provider "hyperv" do |hv|
#  hv.memory = 1024
#end
mailpit.vm.network "private_network", type: "dhcp"
mailpit.vm.hostname = "mailpit"
mailpit.vm.provision "file", source: "data", destination: "/tmp/vagrant_data"
mailpit.vm.provision "shell", privileged: true, path: "data/mailpit.sh"
end
end
server.sh
Details
#!/bin/bash
# Install
dpkg -i /tmp/vagrant_data/wazuh-indexer*.deb
dpkg -i /tmp/vagrant_data/wazuh-dashboard*.deb
# Setup
## Create certs
mkdir certs
cd certs || exit 1
bash /tmp/vagrant_data/gencerts.sh .
mkdir -p /etc/wazuh-indexer/certs
cp admin.pem /etc/wazuh-indexer/certs/admin.pem
cp admin.key /etc/wazuh-indexer/certs/admin-key.pem
cp indexer.pem /etc/wazuh-indexer/certs/indexer.pem
cp indexer-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
cp ca.pem /etc/wazuh-indexer/certs/root-ca.pem
chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs/
mkdir -p /etc/wazuh-dashboard/certs
cp dashboard.pem /etc/wazuh-dashboard/certs/dashboard.pem
cp dashboard-key.pem /etc/wazuh-dashboard/certs/dashboard-key.pem
cp ca.pem /etc/wazuh-dashboard/certs/root-ca.pem
chown -R wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs/
systemctl daemon-reload
## set up Indexer
systemctl enable wazuh-indexer
systemctl start wazuh-indexer
/usr/share/wazuh-indexer/bin/indexer-security-init.sh
## set up Dashboard
systemctl enable wazuh-dashboard
systemctl start wazuh-dashboard
## enable IPv6
modprobe ipv6
sysctl -w net.ipv6.conf.all.disable_ipv6=0
## disable the firewall (ufw)
ufw disable
mailpit.sh
Details
#!/bin/bash
# Install
curl -sOL https://raw.githubusercontent.com/axllent/mailpit/develop/install.sh && INSTALL_PATH=/usr/bin sudo bash ./install.sh
# Setup
## set up Mailpit
groupadd -r mailpit
useradd -r -g mailpit -s /bin/false mailpit
### Create directories
mkdir -p /var/lib/mailpit
chown -R mailpit:mailpit /var/lib/mailpit
### Create password file
mkdir -p /etc/mailpit
echo "admin:$(openssl passwd -apr1 admin)" > /etc/mailpit/passwords
chown -R mailpit:mailpit /etc/mailpit
## Create certs
mkdir certs
cd certs || exit 1
bash /tmp/vagrant_data/gencerts.sh .
mkdir -p /etc/mailpit/certs
cp admin.pem /etc/mailpit/certs/admin.pem
cp admin.key /etc/mailpit/certs/admin-key.pem
cp mailpit.pem /etc/mailpit/certs/mailpit.pem
cp mailpit-key.pem /etc/mailpit/certs/mailpit-key.pem
cp ca.pem /etc/mailpit/certs/root-ca.pem
chown -R mailpit:mailpit /etc/mailpit/certs/
## enable IPv6
modprobe ipv6
sysctl -w net.ipv6.conf.all.disable_ipv6=0
## disable the firewall (ufw)
ufw disable
echo "======================================================"
echo "Start Mailpit with the following command:"
echo ""
echo "mailpit --listen 0.0.0.0:8025 --smtp 0.0.0.0:1025 --database /var/lib/mailpit/mailpit.db --ui-auth-file /etc/mailpit/passwords --ui-tls-cert /etc/mailpit/certs/admin.pem --ui-tls-key /etc/mailpit/certs/admin-key.pem --smtp-tls-cert /etc/mailpit/certs/mailpit.pem --smtp-tls-key /etc/mailpit/certs/mailpit-key.pem"
echo "======================================================"
# Adding HTTPS: https://mailpit.axllent.org/docs/configuration/http/
# mailpit --ui-tls-cert /path/to/cert.pem --ui-tls-key /path/to/key.pem
# Adding basic authentication: https://mailpit.axllent.org/docs/configuration/passwords/
# mailpit --ui-auth-file /path/to/password-file
gencerts.sh
Details
#!/bin/bash
if [[ $# -ne 1 ]]; then
fs=$(mktemp -d)
else
fs=$1
shift
fi
echo "Working directory: $fs"
cd "$fs" || exit 1
if [[ ! -e $fs/cfssl ]]; then
curl -s -L -o $fs/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -s -L -o $fs/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod 755 $fs/cfssl*
fi
cfssl=$fs/cfssl
cfssljson=$fs/cfssljson
if [[ ! -e $fs/ca.pem ]]; then
cat << EOF | $cfssl gencert -initca - | $cfssljson -bare ca -
{
"CN": "Wazuh",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "San Francisco",
"O": "Wazuh",
"OU": "Wazuh Root CA"
}
]
}
EOF
fi
if [[ ! -e $fs/ca-config.json ]]; then
$cfssl print-defaults config > ca-config.json
fi
gencert_rsa() {
name=$1
profile=$2
cat << EOF | $cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=$profile -hostname="$name,127.0.0.1,localhost" - | $cfssljson -bare $name -
{
"CN": "$name",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "California",
"O": "Wazuh",
"OU": "Wazuh"
}
],
"hosts": [
"$name",
"localhost"
]
}
EOF
openssl pkcs8 -topk8 -inform pem -in $name-key.pem -outform pem -nocrypt -out $name.key
}
gencert_ec() {
openssl ecparam -name secp256k1 -genkey -noout -out jwt-private.pem
openssl ec -in jwt-private.pem -pubout -out jwt-public.pem
}
hosts=(indexer dashboard mailpit)
for i in "${hosts[@]}"; do
gencert_rsa $i www
done
users=(admin)
for i in "${users[@]}"; do
gencert_rsa $i client
done
gencert_ec
- Bring up the environment with `vagrant up`.
- Use the command provided in the console to start Mailpit from within its VM. Mailpit is configured to use TLS and access credentials (`admin:admin`). Use `ip addr` to find the IP address assigned to the VM and use it to access the Mailpit UI (e.g. `https://172.28.128.136:8025/`).
- Add the username and password for Mailpit to the Wazuh Indexer keystore:

  ```bash
  echo "admin" | /usr/share/wazuh-indexer/bin/opensearch-keystore add opensearch.notifications.core.email.mailpit.username
  echo "admin" | /usr/share/wazuh-indexer/bin/opensearch-keystore add opensearch.notifications.core.email.mailpit.password
  chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/opensearch.keystore
  ```

- Ensure `mailpit` is reachable from the `server` VM (e.g. `curl https://172.28.128.136:8025 -k -u admin:admin` should return HTML). If not, add an entry to `/etc/hosts` (e.g. `echo "172.28.128.136 mailpit" >> /etc/hosts`).
Wazuh Indexer Content Manager Plugin — Development Guide
This document describes the architecture, components, and extension points of the Content Manager plugin, which manages security content synchronization from the Wazuh CTI API and provides REST endpoints for user-generated content management.
Overview
The Content Manager plugin handles:
- CTI Subscription: Manages subscriptions and tokens with the CTI Console.
- Job Scheduling: Periodically checks for updates using the OpenSearch Job Scheduler.
- Update Check Service: Sends a daily heartbeat to CTI so Wazuh can notify users when a newer version is available.
- Content Synchronization: Keeps local indices in sync with the Wazuh CTI Catalog via snapshots and incremental JSON Patch updates.
- Security Analytics Integration: Pushes rules, integrations, and detectors to the Security Analytics Plugin (SAP).
- User-Generated Content: Full CUD for rules, decoders, integrations, KVDBs, and policies in the Draft space.
- Engine Communication: Validates and promotes content via Unix Domain Socket to the Wazuh Engine.
- Space Management: Manages content lifecycle through Draft → Test → Custom promotion.
System Indices
The plugin manages the following indices:
| Index | Purpose |
|---|---|
| `.wazuh-cti-consumers` | Sync state (status, offsets, snapshot links) |
| `wazuh-threatintel-policies` | Policy documents |
| `wazuh-threatintel-integrations` | Integration definitions |
| `wazuh-threatintel-rules` | Detection rules |
| `wazuh-threatintel-decoders` | Decoder definitions |
| `wazuh-threatintel-kvdbs` | Key-value databases |
| `wazuh-threatintel-enrichments` | Indicators of Compromise |
| `wazuh-threatintel-filters` | Engine filter rules |
| `.wazuh-content-manager-jobs` | Job scheduler metadata |
Plugin Architecture
Entry Point
ContentManagerPlugin is the main class. It implements Plugin, ClusterPlugin, JobSchedulerExtension, and ActionPlugin. On startup it:
- Initializes `PluginSettings`, `ConsumersIndex`, `CtiConsole`, `CatalogSyncJob`, `EngineServiceImpl`, and `SpaceService`.
- Registers all REST handlers via `getRestHandlers()`.
- Creates the `.wazuh-cti-consumers` index on cluster manager nodes.
- Schedules the periodic `CatalogSyncJob` via the OpenSearch Job Scheduler.
- Optionally triggers an immediate sync on start.
- Registers and schedules `TelemetryPingJob` (`wazuh-telemetry-ping-job`) when `plugins.content_manager.telemetry.enabled` is true.
- Registers a dynamic settings consumer to enable/disable telemetry at runtime.
Update Check Service internals
The update check flow is split into two classes:
- `TelemetryPingJob` (`jobscheduler/jobs/TelemetryPingJob.java`)
  - Runs through the Job Scheduler once a day.
  - Reads the cluster UUID from `ClusterService` metadata.
  - Reads the Wazuh version through `ContentManagerPlugin.getVersion()`.
  - Prevents overlapping executions using a `Semaphore` (`tryAcquire()` guard).
- `TelemetryClient` (`cti/console/client/TelemetryClient.java`)
  - Sends an asynchronous GET request to CTI `/ping`.
  - Headers sent:
    - `wazuh-uid`: cluster UUID
    - `wazuh-tag`: `v<version>`
    - `user-agent`: `Wazuh Indexer <version>`
  - Fire-and-forget behavior: the callback logs success or failure without blocking scheduler threads.
Runtime toggle behavior:

- `plugins.content_manager.telemetry.enabled` is a dynamic setting.
- Enabling it schedules the job and triggers an immediate ping.
- Disabling it removes the telemetry job document from `.wazuh-content-manager-jobs`.
REST Handlers
The plugin registers the following REST handlers, grouped by domain:
| Domain | Handler | Method | URI |
|---|---|---|---|
| Subscription | RestGetSubscriptionAction | GET | /_plugins/_content_manager/subscription |
| | RestPostSubscriptionAction | POST | /_plugins/_content_manager/subscription |
| | RestDeleteSubscriptionAction | DELETE | /_plugins/_content_manager/subscription |
| Update | RestPostUpdateAction | POST | /_plugins/_content_manager/update |
| Logtest | RestPostLogtestAction | POST | /_plugins/_content_manager/logtest |
| Policy | RestPutPolicyAction | PUT | /_plugins/_content_manager/policy/{space} |
| Rules | RestPostRuleAction | POST | /_plugins/_content_manager/rules |
| | RestPutRuleAction | PUT | /_plugins/_content_manager/rules/{id} |
| | RestDeleteRuleAction | DELETE | /_plugins/_content_manager/rules/{id} |
| Decoders | RestPostDecoderAction | POST | /_plugins/_content_manager/decoders |
| | RestPutDecoderAction | PUT | /_plugins/_content_manager/decoders/{id} |
| | RestDeleteDecoderAction | DELETE | /_plugins/_content_manager/decoders/{id} |
| Integrations | RestPostIntegrationAction | POST | /_plugins/_content_manager/integrations |
| | RestPutIntegrationAction | PUT | /_plugins/_content_manager/integrations/{id} |
| | RestDeleteIntegrationAction | DELETE | /_plugins/_content_manager/integrations/{id} |
| KVDBs | RestPostKvdbAction | POST | /_plugins/_content_manager/kvdbs |
| | RestPutKvdbAction | PUT | /_plugins/_content_manager/kvdbs/{id} |
| | RestDeleteKvdbAction | DELETE | /_plugins/_content_manager/kvdbs/{id} |
| Filters | RestPostFilterAction | POST | /_plugins/_content_manager/filters |
| | RestPutFilterAction | PUT | /_plugins/_content_manager/filters/{id} |
| | RestDeleteFilterAction | DELETE | /_plugins/_content_manager/filters/{id} |
| Promote | RestPostPromoteAction | POST | /_plugins/_content_manager/promote |
| | RestGetPromoteAction | GET | /_plugins/_content_manager/promote |
| Spaces | RestDeleteSpaceAction | DELETE | /_plugins/_content_manager/space/{space} |
Class Hierarchy
The REST handlers follow a Template Method pattern through a three-level abstract class hierarchy. There are two parallel branches — one where the target space is always draft (AbstractCreateAction / AbstractUpdateAction / AbstractDeleteAction) and one where the target space is supplied at runtime from the request body (AbstractCreateActionSpaces / AbstractUpdateActionSpaces / AbstractDeleteActionSpaces). The latter is used for resources like Filters that can live in either draft or standard space.
BaseRestHandler
├── AbstractContentAction
│ ├── AbstractCreateAction # Target space always: draft
│ │ ├── RestPostRuleAction
│ │ ├── RestPostDecoderAction
│ │ ├── RestPostIntegrationAction
│ │ └── RestPostKvdbAction
│ ├── AbstractUpdateAction # Target space always: draft
│ │ ├── RestPutRuleAction
│ │ ├── RestPutDecoderAction
│ │ ├── RestPutIntegrationAction
│ │ └── RestPutKvdbAction
│ ├── AbstractDeleteAction # Target space always: draft
│ │ ├── RestDeleteRuleAction
│ │ ├── RestDeleteDecoderAction
│ │ ├── RestDeleteIntegrationAction
│ │ └── RestDeleteKvdbAction
│ ├── AbstractCreateActionSpaces # Target space from request body (draft|standard)
│ │ └── RestPostFilterAction
│ ├── AbstractUpdateActionSpaces # Target space from request body (draft|standard)
│ │ └── RestPutFilterAction
│ └── AbstractDeleteActionSpaces # Target space from request body (draft|standard)
│ └── RestDeleteFilterAction
├── RestPutPolicyAction
├── RestDeleteSpaceAction
├── RestGetSubscriptionAction
├── RestPostSubscriptionAction
├── RestDeleteSubscriptionAction
├── RestPostUpdateAction
├── RestPostLogtestAction
├── RestPostPromoteAction
└── RestGetPromoteAction
AbstractContentAction
Base class for all content CUD actions. It:
- Overrides `prepareRequest()` from `BaseRestHandler`.
- Initializes shared services: `SpaceService`, `SecurityAnalyticsService`, `IntegrationService`.
- Validates that a Draft policy exists before executing any content action.
- Delegates to the abstract `executeRequest()` method for concrete logic.
AbstractCreateAction / AbstractCreateActionSpaces
Handles POST requests to create new resources. AbstractCreateAction hard-codes the target space to draft. AbstractCreateActionSpaces reads the space from the request body instead, allowing draft or standard as the target.
The executeRequest() workflow:
- Validate request body — ensures the request has content and valid JSON.
- Validate payload structure — checks for the required `resource` key and optional `integration` key.
- Resource-specific validation — delegates to `validatePayload()` (abstract). Concrete handlers check required fields, duplicate titles, and parent integration existence.
- Generate ID and metadata — creates a UUID, sets `date` and `modified` timestamps, defaults `enabled` to `true`.
- External sync — delegates to `syncExternalServices()` (abstract). Typically upserts the resource in SAP or validates via the Engine.
- Index — wraps the resource in the CTI document structure and indexes it in the Draft space.
- Link to parent — delegates to `linkToParent()` (abstract). Usually adds the new resource ID to a parent integration’s resource list.
- Update hash — recalculates the Draft space policy hash via `SpaceService`.
Returns 201 Created with the new resource UUID on success.
AbstractUpdateAction / AbstractUpdateActionSpaces
Handles PUT requests to update existing resources. AbstractUpdateAction restricts updates to the draft space. AbstractUpdateActionSpaces accepts a space value (draft or standard) from the request body.
The executeRequest() workflow:
- Validate ID — checks that the path parameter is present and correctly formatted.
- Check existence and space — verifies the resource exists and belongs to the Draft space.
- Parse and validate payload — same structural checks as create.
- Resource-specific validation — delegates to `validatePayload()` (abstract).
- Update timestamps — sets the `modified` timestamp; preserves immutable fields (creation date, author) from the existing document.
- External sync — delegates to `syncExternalServices()` (abstract).
- Re-index — overwrites the document in the index.
- Update hash — recalculates the Draft space hash.
Returns 200 OK with the resource UUID on success.
AbstractDeleteAction / AbstractDeleteActionSpaces
Handles DELETE requests. AbstractDeleteAction restricts deletions to the draft space. AbstractDeleteActionSpaces resolves the target space from the stored document (allowing deletion from both draft and standard).
The executeRequest() workflow:
- Validate ID — checks format and presence.
- Check existence and space — the resource must exist in the Draft space.
- Pre-delete validation — delegates to `validateDelete()` (optional override). Can prevent deletion if dependent resources exist.
- External sync — delegates to `deleteExternalServices()` (abstract). Removes the resource from SAP; handles 404 gracefully.
- Unlink from parent — delegates to `unlinkFromParent()` (abstract). Removes the resource ID from the parent integration’s list.
- Delete from index — removes the document.
- Update hash — recalculates the Draft space hash.
Returns 200 OK with the resource UUID on success.
Engine Communication
The plugin communicates with the Wazuh Engine via a Unix Domain Socket for validation and promotion of content.
EngineSocketClient
Located at: engine/client/EngineSocketClient.java
- Connects to the socket at `/usr/share/wazuh-indexer/engine/sockets/engine-api.sock`.
- Sends HTTP-over-UDS requests: builds a standard HTTP/1.1 request string (method, headers, JSON body) and writes it to the socket channel.
- Each request opens a new `SocketChannel` (using `StandardProtocolFamily.UNIX`) that is closed after the response is read.
- Parses the HTTP response, extracting the status code and JSON body.
EngineService Interface
Defines the Engine operations:
| Method | Description |
|---|---|
| `logtest(JsonNode log)` | Forwards a log test payload to the Engine |
| `validate(JsonNode resource)` | Validates a resource payload |
| `promote(JsonNode policy)` | Validates a full policy for promotion |
| `validateResource(String type, JsonNode resource)` | Wraps a resource with its type and delegates to `validate()` |
EngineServiceImpl
Implementation using EngineSocketClient. Maps methods to Engine API endpoints:
| Method | Engine Endpoint | HTTP Method |
|---|---|---|
| `logtest()` | /logtest | POST |
| `validate()` | /content/validate/resource | POST |
| `promote()` | /content/validate/policy | POST |
Space Model
Resources live in spaces that represent their lifecycle stage. The Space enum defines four spaces:
| Space | Description |
|---|---|
| `STANDARD` | Production-ready CTI resources from the upstream catalog |
| `CUSTOM` | User-created resources that have been promoted to production |
| `DRAFT` | Resources under development — all user edits happen here |
| `TEST` | Intermediate space for validation before production |
Promotion Flow
Spaces promote in a fixed chain:
DRAFT → TEST → CUSTOM
The Space.promote() method returns the next space in the chain. STANDARD and CUSTOM spaces cannot be promoted further.
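The chain can be sketched in shell as follows. This mirrors the semantics of `Space.promote()` described above, not the actual Java implementation:

```shell
# Shell sketch of the promotion chain: draft -> test -> custom.
# STANDARD and CUSTOM are terminal and cannot be promoted further.
promote() {
  case "$1" in
    draft) echo "test" ;;
    test)  echo "custom" ;;
    *)     echo "cannot promote $1" ;;
  esac
}

promote draft    # prints "test"
promote test     # prints "custom"
promote custom   # prints "cannot promote custom"
```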
SpaceService
Located at: cti/catalog/service/SpaceService.java
Manages space-related operations:
- `getSpaceResources(spaceName)` — fetches all resources (document IDs and hashes) from all managed indices for a given space.
- `promoteSpace(indexName, resources, targetSpace)` — copies documents from one space to another via bulk indexing, updating the `space.name` field.
- `calculateAndUpdate(targetSpaces)` — recalculates the aggregate SHA-256 hash for each policy in the given spaces. The hash is computed by concatenating the hashes of the policy and all its linked resources (integrations, decoders, KVDBs, rules).
- `buildEnginePayload(...)` — assembles the full policy payload (policy plus all resources from the target space with modifications applied) for Engine validation during promotion.
- `deleteResources(indexName, ids, targetSpace)` — bulk-deletes resources from a target space.
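The aggregate-hash idea behind `calculateAndUpdate()` can be illustrated with standard tools. This is a sketch: the exact ordering and separator used by `SpaceService` are assumptions, only the "concatenate hashes, then re-hash with SHA-256" shape comes from the description above:

```shell
# Hypothetical individual hashes of a policy and its linked resources.
policy_hash="aaa111"
resource_hashes=("bbb222" "ccc333")

# Concatenate all hashes in a fixed order and hash the result.
aggregate=$(printf '%s' "$policy_hash" "${resource_hashes[@]}" | sha256sum | cut -d' ' -f1)
echo "$aggregate"
```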
Document Structure
Every resource document follows this envelope structure:
{
"document": {
"id": "<uuid>",
"title": "...",
"date": "2026-01-01T00:00:00Z",
"modified": "2026-01-15T00:00:00Z",
"enabled": true
},
"hash": {
"sha256": "abc123..."
},
"space": {
"name": "draft",
"hash": {
"sha256": "xyz789..."
}
}
}
Content Synchronization Pipeline
Overview
sequenceDiagram
participant Scheduler as JobScheduler/RestAction
participant SyncJob as CatalogSyncJob
participant Synchronizer as ConsumerRulesetService
participant ConsumerSvc as ConsumerService
participant CTI as External CTI API
participant Snapshot as SnapshotService
participant Update as UpdateService
participant Indices as Content Indices
participant SAP as SecurityAnalyticsServiceImpl
Scheduler->>SyncJob: Trigger Execution
activate SyncJob
SyncJob->>Synchronizer: synchronize()
Synchronizer->>ConsumerSvc: getLocalConsumer() / getRemoteConsumer()
ConsumerSvc->>CTI: Fetch Metadata
ConsumerSvc-->>Synchronizer: Offsets & Metadata
alt Local Offset == 0 (Initialization)
Synchronizer->>Snapshot: initialize(remoteConsumer)
Snapshot->>CTI: Download Snapshot ZIP
Snapshot->>Indices: Bulk Index Content (Rules/Integrations/etc.)
Snapshot-->>Synchronizer: Done
else Local Offset < Remote Offset (Update)
Synchronizer->>Update: update(localOffset, remoteOffset)
Update->>CTI: Fetch Changes
Update->>Indices: Apply JSON Patches
Update-->>Synchronizer: Done
end
opt Changes Applied (onSyncComplete)
Synchronizer->>Indices: Refresh Indices
Synchronizer->>SAP: upsertIntegration(doc)
loop For each Integration
SAP->>SAP: WIndexIntegrationAction
end
Synchronizer->>SAP: upsertRule(doc)
loop For each Rule
SAP->>SAP: WIndexRuleAction
end
Synchronizer->>SAP: upsertDetector(doc)
loop For each Integration
SAP->>SAP: WIndexDetectorAction
end
Synchronizer->>Synchronizer: calculatePolicyHash()
end
deactivate SyncJob
Initialization Phase
When local_offset = 0:
- Downloads a ZIP snapshot from the CTI API.
- Extracts and parses JSON files for each content type.
- Bulk-indexes content into respective indices.
- Registers all content with the Security Analytics Plugin via `SecurityAnalyticsServiceImpl`.
Update Phase
When local_offset > 0 and local_offset < remote_offset:
- Fetches the changes in batches from the CTI API.
- Applies JSON Patch operations (add, update, delete).
- Pushes the changes to the Security Analytics Plugin via `SecurityAnalyticsServiceImpl`.
- Updates the local offset.
Post-Synchronization Phase
- Refreshes all content indices.
- Upserts integrations, rules, and detectors into the Security Analytics Plugin via `SecurityAnalyticsServiceImpl`.
- Recalculates SHA-256 hashes for policy integrity verification.
- Sets the consumer `status` to `idle` in `.wazuh-cti-consumers`.
Error Handling
If a critical error or data corruption is detected, the system resets local_offset to 0, triggering a full snapshot re-initialization on the next run.
Configuration Settings
To register a new setting, follow the existing pattern in PluginSettings.java. That will make it available in opensearch.yml.
For existing settings, check the Settings Reference.
When registering a new setting, document it in the section linked above.
REST API URIs
All endpoints are under /_plugins/_content_manager. The URI constants are defined in PluginSettings:
| Constant | Value |
|---|---|
| `PLUGINS_BASE_URI` | /_plugins/_content_manager |
| `SUBSCRIPTION_URI` | /_plugins/_content_manager/subscription |
| `UPDATE_URI` | /_plugins/_content_manager/update |
| `LOGTEST_URI` | /_plugins/_content_manager/logtest |
| `RULES_URI` | /_plugins/_content_manager/rules |
| `DECODERS_URI` | /_plugins/_content_manager/decoders |
| `INTEGRATIONS_URI` | /_plugins/_content_manager/integrations |
| `KVDBS_URI` | /_plugins/_content_manager/kvdbs |
| `FILTERS_URI` | /_plugins/_content_manager/filters |
| `PROMOTE_URI` | /_plugins/_content_manager/promote |
| `POLICY_URI` | /_plugins/_content_manager/policy |
| `SPACE_URI` | /_plugins/_content_manager/space |
REST API Reference
The full API is defined in openapi.yml.
Logtest
The Indexer acts as a proxy between the UI and the Engine. POST /logtest accepts the payload and forwards it to the Engine via UDS. No validation is performed. If the Engine responds, its response is returned directly. If the Engine is unreachable, a 500 error is returned.
A testing policy must be loaded in the Engine for logtest to work. Load a policy via the policy promotion endpoint.
---
title: Logtest execution
---
sequenceDiagram
actor User
participant UI
participant Indexer
participant Engine
User->>UI: run logtest
UI->>Indexer: POST /logtest
Indexer->>Engine: POST /logtest (via UDS)
Engine-->>Indexer: response
Indexer-->>UI: response
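The proxy call above can be exercised directly against the Indexer. A hedged sketch (host, credentials, and event format are assumptions; a testing policy must already be loaded in the Engine):

```shell
# Logtest endpoint (URI from this guide).
LOGTEST_URI="/_plugins/_content_manager/logtest"

# Uncomment against a running Indexer:
# curl -sk -u admin:admin -X POST "https://127.0.0.1:9200$LOGTEST_URI" \
#   -H 'Content-Type: application/json' \
#   -d '{"event": "Dec 19 10:00:01 host sshd[1234]: Accepted password for user"}'
echo "$LOGTEST_URI"
```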
Content CUD (Rules, Decoders, Integrations, KVDBs)
All four resource types follow the same patterns via the abstract class hierarchy:
Create (POST):
sequenceDiagram
actor User
participant Indexer
participant Engine/SAP as Engine or SAP
participant ContentIndex
participant IntegrationIndex
User->>Indexer: POST /_plugins/_content_manager/{resource_type}
Indexer->>Indexer: Validate payload, generate UUID, timestamps
Indexer->>Engine/SAP: Sync (validate/upsert)
Engine/SAP-->>Indexer: OK
Indexer->>ContentIndex: Index in Draft space
Indexer->>IntegrationIndex: Link to parent integration
Indexer-->>User: 201 Created + UUID
Update (PUT):
sequenceDiagram
actor User
participant Indexer
participant ContentIndex
participant Engine/SAP as Engine or SAP
User->>Indexer: PUT /_plugins/_content_manager/{resource_type}/{id}
Indexer->>ContentIndex: Check exists + is in Draft space
Indexer->>Indexer: Validate, preserve metadata, update timestamps
Indexer->>Engine/SAP: Sync (validate/upsert)
Indexer->>ContentIndex: Re-index document
Indexer-->>User: 200 OK + UUID
Delete (DELETE):
sequenceDiagram
actor User
participant Indexer
participant ContentIndex
participant Engine/SAP as Engine or SAP
participant IntegrationIndex
User->>Indexer: DELETE /_plugins/_content_manager/{resource_type}/{id}
Indexer->>ContentIndex: Check exists + is in Draft space
Indexer->>Engine/SAP: Delete from external service
Indexer->>IntegrationIndex: Unlink from parent
Indexer->>ContentIndex: Delete document
Indexer-->>User: 200 OK + UUID
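As a concrete instance of the create flow, a hypothetical request for a new draft rule might look like the following. Host, credentials, and the body fields are assumptions based on the payload checks described above (`resource` required, `integration` optional); `<parent-integration-id>` is a placeholder:

```shell
# Rules endpoint (URI from this guide).
RULES_URI="/_plugins/_content_manager/rules"

# Uncomment against a running Indexer:
# curl -sk -u admin:admin -X POST "https://127.0.0.1:9200$RULES_URI" \
#   -H 'Content-Type: application/json' \
#   -d '{"resource": {"title": "My custom rule"}, "integration": "<parent-integration-id>"}'
echo "$RULES_URI"
```

On success, the handler returns `201 Created` with the generated UUID, as described above.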
Policy Update
The policy endpoint accepts a {space} path parameter (draft or standard), allowing the same handler to serve both spaces with different validation rules.
- Draft space — all policy fields are accepted. The `integrations` and `filters` arrays allow reordering but not adding or removing entries. `author`, `description`, `documentation`, and `references` are required in addition to the boolean fields.
- Standard space — only `enrichments`, `filters`, `enabled`, `index_unclassified_events`, and `index_discarded_events` can be modified. All other fields are preserved from the existing standard policy document. After a successful update, if the standard space hash changed, the updated policy is automatically loaded into the Engine.
flowchart TD
UI[UI] -->|"PUT /policy/{space}"| Indexer
Indexer -->|Validate space| SpaceCheck{is a valid space?}
SpaceCheck -->|No| Error400[400 Bad Request]
SpaceCheck -->|Yes| Parse[Parse & validate fields]
Parse --> SpaceBranch{Space?}
SpaceBranch -->|draft| StoreDraft[Update draft policy in wazuh-threatintel-policies]
SpaceBranch -->|standard| StoreStd[Merge allowed fields into standard policy]
StoreDraft --> Hash[Recalculate space hash]
StoreStd --> Hash
Hash --> EngineCheck{Standard hash changed?}
EngineCheck -->|Yes| Engine[Load standard space into Engine]
EngineCheck -->|No| OK[200 OK]
Engine --> OK
Policy Schema
The wazuh-threatintel-policies index stores policy configurations. See the Policy document structure above for the envelope format.
Policy document fields:
| Field | Type | Description | Editable in standard space |
|---|---|---|---|
| `id` | keyword | Unique identifier | No |
| `title` | keyword | Human-readable name | No |
| `date` | date | Creation timestamp | No |
| `modified` | date | Last modification timestamp | No |
| `root_decoder` | keyword | Root decoder for event processing | No |
| `integrations` | keyword[] | Active integration IDs | No |
| `author` | keyword | Policy author | No |
| `description` | text | Brief description | No |
| `documentation` | keyword | Documentation link | No |
| `references` | keyword[] | External reference URLs | No |
| `filters` | keyword[] | Filter UUIDs (reordering allowed, no add/remove) | Yes |
| `enrichments` | keyword[] | Enrichment types (file, domain-name, ip, url, geo) | Yes |
| `enabled` | boolean | Whether the policy is active | Yes |
| `index_unclassified_events` | boolean | Index events that match no rule | Yes |
| `index_discarded_events` | boolean | Index events explicitly discarded by rules | Yes |
Filters CUD (Engine Filters)
Filters follow the same CUD pattern as other resource types but use the AbstractCreateActionSpaces / AbstractUpdateActionSpaces / AbstractDeleteActionSpaces hierarchy. The key difference is that the target space is supplied in the request body rather than being fixed to draft. Both draft and standard are accepted.
Filters are linked directly to their space’s policy document (the filters array) rather than to a parent integration.
Create (POST):
```mermaid
sequenceDiagram
    actor User
    participant Indexer
    participant Engine
    participant FilterIndex as wazuh-threatintel-filters
    participant PoliciesIndex as wazuh-threatintel-policies
    User->>Indexer: POST /_plugins/_content_manager/filters
    Indexer->>Indexer: Validate payload + space (draft|standard)
    Indexer->>Engine: validateResource("filter", resource)
    Engine-->>Indexer: OK
    Indexer->>FilterIndex: Index in target space
    Indexer->>PoliciesIndex: Add filter ID to space policy filters[]
    Indexer-->>User: 201 Created + UUID
```
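A create request for this flow might carry a body like the following. This is a hedged sketch — apart from the documented `space` field, the payload field names are assumptions:

```
POST /_plugins/_content_manager/filters
{
  "space": "draft",
  "resource": {
    "title": "Drop noisy test events"
  }
}
```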
Update (PUT):
```mermaid
sequenceDiagram
    actor User
    participant Indexer
    participant Engine
    participant FilterIndex as wazuh-threatintel-filters
    User->>Indexer: PUT /_plugins/_content_manager/filters/{id}
    Indexer->>FilterIndex: Check exists + validate space (draft|standard)
    Indexer->>Indexer: Validate payload
    Indexer->>Engine: validateResource("filter", resource)
    Engine-->>Indexer: OK
    Indexer->>FilterIndex: Re-index document
    Indexer-->>User: 200 OK + UUID
```
Delete (DELETE):
```mermaid
sequenceDiagram
    actor User
    participant Indexer
    participant FilterIndex as wazuh-threatintel-filters
    participant PoliciesIndex as wazuh-threatintel-policies
    User->>Indexer: DELETE /_plugins/_content_manager/filters/{id}
    Indexer->>FilterIndex: Check exists + resolve space
    Indexer->>PoliciesIndex: Remove filter ID from space policy filters[]
    Indexer->>FilterIndex: Delete document
    Indexer-->>User: 200 OK + UUID
```
Space Reset
```mermaid
flowchart TD
    UI[UI] -->|"DELETE /space/{space}"| Indexer
    Indexer -->|Validate space| Check{space == draft?}
    Check -->|No| Error400[400 Bad Request]
    Check -->|Yes| DeleteSAP[Delete draft resources from SAP]
    DeleteSAP --> DeleteCTI[Delete all draft documents from wazuh-threatintel-* indices]
    DeleteCTI --> RegenPolicy[Re-generate default draft policy]
    RegenPolicy --> OK[200 OK]
```
Only the draft space can be reset. Attempting to reset any other space returns 400 Bad Request. Failures in SAP cleanup are logged but do not block the reset — the primary goal is clearing the content indices and regenerating the policy.
Debugging
Check Consumer Status
```
GET /.wazuh-cti-consumers/_search
{
  "query": { "match_all": {} }
}
```
The status field indicates the sync lifecycle state:
- `idle` — sync complete; content is safe to read.
- `updating` — sync in progress; content may be partially written.
To find consumers that are currently syncing or that failed mid-sync (status stuck at updating):
```
GET /.wazuh-cti-consumers/_search
{
  "query": { "term": { "status": "updating" } }
}
```
Check Content by Space
```
GET /wazuh-threatintel-rules/_search
{
  "query": { "term": { "space.name": "draft" } },
  "size": 10
}
```
Monitor Plugin Logs
```bash
tail -f var/log/wazuh-indexer/wazuh-cluster.log | grep -E "ContentManager|CatalogSyncJob|SnapshotServiceImpl|UpdateServiceImpl|AbstractContentAction"
```
Important Notes
- The plugin only runs on cluster manager nodes.
- CTI API must be accessible for content synchronization.
- All user content CUD operations require a Draft policy to exist.
- The Engine socket must be available at the configured path for logtest, validation, and promotion.
- Offset-based synchronization ensures no content is missed.
🧪 Testing
The plugin includes integration tests defined in the tests/content-manager directory. These tests cover various scenarios for managing integrations, decoders, rules, and KVDBs through the REST API.
01 - Integrations: Create Integration (9 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully create an integration |
| 2 | Create an integration with the same title as an existing integration |
| 3 | Create an integration with missing title |
| 4 | Create an integration with missing author |
| 5 | Create an integration with missing category |
| 6 | Create an integration with an explicit id in the resource |
| 7 | Create an integration with missing resource object |
| 8 | Create an integration with empty body |
| 9 | Create an integration without authentication |
01 - Integrations: Update Integration (8 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully update an integration |
| 2 | Update an integration changing its title to a title that already exists in draft space |
| 3 | Update an integration with missing required fields |
| 4 | Update an integration that does not exist |
| 5 | Update an integration with an invalid UUID |
| 6 | Update an integration with an id in the request body |
| 7 | Update an integration attempting to add/remove dependency lists |
| 8 | Update an integration without authentication |
01 - Integrations: Delete Integration (7 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully delete an integration with no attached resources |
| 2 | Delete an integration that has attached resources |
| 3 | Delete an integration that does not exist |
| 4 | Delete an integration with an invalid UUID |
| 5 | Delete an integration without providing an ID |
| 6 | Delete an integration not in draft space |
| 7 | Delete an integration without authentication |
02 - Decoders: Create Decoder (7 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully create a decoder |
| 2 | Create a decoder without an integration reference |
| 3 | Create a decoder with an explicit id in the resource |
| 4 | Create a decoder with an integration not in draft space |
| 5 | Create a decoder with missing resource object |
| 6 | Create a decoder with empty body |
| 7 | Create a decoder without authentication |
02 - Decoders: Update Decoder (7 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully update a decoder |
| 2 | Update a decoder that does not exist |
| 3 | Update a decoder with an invalid UUID |
| 4 | Update a decoder not in draft space |
| 5 | Update a decoder with missing resource object |
| 6 | Update a decoder with empty body |
| 7 | Update a decoder without authentication |
02 - Decoders: Delete Decoder (7 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully delete a decoder |
| 2 | Delete a decoder that does not exist |
| 3 | Delete a decoder with an invalid UUID |
| 4 | Delete a decoder not in draft space |
| 5 | Delete a decoder without providing an ID |
| 6 | Delete a decoder without authentication |
| 7 | Verify decoder is removed from index after deletion |
03 - Rules: Create Rule (7 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully create a rule |
| 2 | Create a rule with missing title |
| 3 | Create a rule without an integration reference |
| 4 | Create a rule with an explicit id in the resource |
| 5 | Create a rule with an integration not in draft space |
| 6 | Create a rule with empty body |
| 7 | Create a rule without authentication |
03 - Rules: Update Rule (7 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully update a rule |
| 2 | Update a rule with missing title |
| 3 | Update a rule that does not exist |
| 4 | Update a rule with an invalid UUID |
| 5 | Update a rule not in draft space |
| 6 | Update a rule with empty body |
| 7 | Update a rule without authentication |
03 - Rules: Delete Rule (7 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully delete a rule |
| 2 | Delete a rule that does not exist |
| 3 | Delete a rule with an invalid UUID |
| 4 | Delete a rule not in draft space |
| 5 | Delete a rule without providing an ID |
| 6 | Delete a rule without authentication |
| 7 | Verify rule is removed from index after deletion |
04 - KVDBs: Create KVDB (9 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully create a KVDB |
| 2 | Create a KVDB with missing title |
| 3 | Create a KVDB with missing author |
| 4 | Create a KVDB with missing content |
| 5 | Create a KVDB without an integration reference |
| 6 | Create a KVDB with an explicit id in the resource |
| 7 | Create a KVDB with an integration not in draft space |
| 8 | Create a KVDB with empty body |
| 9 | Create a KVDB without authentication |
04 - KVDBs: Update KVDB (7 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully update a KVDB |
| 2 | Update a KVDB with missing required fields |
| 3 | Update a KVDB that does not exist |
| 4 | Update a KVDB with an invalid UUID |
| 5 | Update a KVDB not in draft space |
| 6 | Update a KVDB with empty body |
| 7 | Update a KVDB without authentication |
04 - KVDBs: Delete KVDB (7 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully delete a KVDB |
| 2 | Delete a KVDB that does not exist |
| 3 | Delete a KVDB with an invalid UUID |
| 4 | Delete a KVDB not in draft space |
| 5 | Delete a KVDB without providing an ID |
| 6 | Delete a KVDB without authentication |
| 7 | Verify KVDB is removed from index after deletion |
05 - Policy: Policy Initialization (6 scenarios)
| # | Scenario |
|---|---|
| 1 | The “wazuh-threatintel-policies” index exists |
| 2 | Exactly four policy documents exist (one per space) |
| 3 | Standard policy has a different document ID than draft/test/custom |
| 4 | Draft, test, and custom policies start with empty integrations and root_decoder |
| 5 | Each policy document contains the expected structure |
| 6 | Each policy has a valid SHA-256 hash |
05 - Policy: Update Draft Policy (12 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully update the draft policy |
| 2 | Update policy with missing type field |
| 3 | Update policy with wrong type value |
| 4 | Update policy with missing resource object |
| 5 | Update policy with missing required fields in resource |
| 6 | Update policy attempting to add an integration to the list |
| 7 | Update policy attempting to remove an integration from the list |
| 8 | Update policy with reordered integrations list (allowed) |
| 9 | Update policy with empty body |
| 10 | Update policy without authentication |
| 11 | Verify policy changes are NOT reflected in test space until promotion |
| 12 | Verify policy changes are reflected in test space after promotion |
06 - Log Test (4 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully test a log event |
| 2 | Send log test with empty body |
| 3 | Send log test with invalid JSON |
| 4 | Send log test without authentication |
07 - Promote: Preview Promotion (7 scenarios)
| # | Scenario |
|---|---|
| 1 | Preview promotion from draft to test |
| 2 | Preview promotion from test to custom |
| 3 | Preview promotion with missing space parameter |
| 4 | Preview promotion with empty space parameter |
| 5 | Preview promotion with invalid space value |
| 6 | Preview promotion from custom (not allowed) |
| 7 | Preview promotion without authentication |
07 - Promote: Execute Promotion (18 scenarios)
| # | Scenario |
|---|---|
| 1 | Successfully promote from draft to test |
| 2 | Verify resources exist in test space after draft to test promotion |
| 3 | Verify promoted resources exist in both draft and test spaces |
| 4 | Verify test space hash is regenerated after draft to test promotion |
| 5 | Verify promoted resource hashes match between draft and test spaces |
| 6 | Verify deleting a decoder in draft does not affect promoted test space |
| 7 | Successfully promote from test to custom |
| 8 | Verify resources exist in custom space after test to custom promotion |
| 9 | Verify promoted resources exist in both test and custom spaces |
| 10 | Verify custom space hash is regenerated after test to custom promotion |
| 11 | Verify promoted resource hashes match between test and custom spaces |
| 12 | Promote from custom (not allowed) |
| 13 | Promote with invalid space |
| 14 | Promote with missing changes object |
| 15 | Promote with incomplete changes (missing required resource arrays) |
| 16 | Promote with non-update operation on policy |
| 17 | Promote with empty body |
| 18 | Promote without authentication |
Related Documentation
Tutorial: Adding a REST Endpoint to the Content Manager Plugin
This tutorial walks through adding a new REST endpoint to the Content Manager plugin, using a concrete example: a GET endpoint to retrieve a single rule by ID.
By the end, you will have a working GET /_plugins/_content_manager/rules/{id} endpoint that fetches a rule document from the wazuh-threatintel-rules index.
Prerequisites
- Development environment set up (see Setup)
- The project compiles:
```bash
./gradlew :wazuh-indexer-content-manager:compileJava
```
Step 1: Add the URI Constant
If your endpoint uses a new base URI, add it to PluginSettings. In this case, rules already have RULES_URI, and our GET endpoint uses the same base path with an {id} parameter, so no changes are needed.
The existing constant in PluginSettings.java:
```java
public static final String RULES_URI = PLUGINS_BASE_URI + "/rules";
```
Our endpoint will match /_plugins/_content_manager/rules/{id} using the same base URI.
Step 2: Create the Handler Class
Create a new file at:
```
plugins/content-manager/src/main/java/com/wazuh/contentmanager/rest/service/RestGetRuleAction.java
```
```java
package com.wazuh.contentmanager.rest.service;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.opensearch.core.rest.RestStatus;
import org.opensearch.rest.BaseRestHandler;
import org.opensearch.rest.BytesRestResponse;
import org.opensearch.rest.NamedRoute;
import org.opensearch.rest.RestRequest;
import org.opensearch.transport.client.node.NodeClient;

import java.util.List;

import com.wazuh.contentmanager.cti.catalog.index.ContentIndex;
import com.wazuh.contentmanager.settings.PluginSettings;
import com.wazuh.contentmanager.utils.Constants;

/**
 * GET /_plugins/_content_manager/rules/{id}
 *
 * Retrieves a single rule document by its ID from the wazuh-threatintel-rules index.
 */
public class RestGetRuleAction extends BaseRestHandler {

    private static final Logger log = LogManager.getLogger(RestGetRuleAction.class);
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // A short identifier for log output and debugging.
    private static final String ENDPOINT_NAME = "content_manager_rule_get";

    // A unique name used by OpenSearch's named route system for access control.
    private static final String ENDPOINT_UNIQUE_NAME = "plugin:content_manager/rule_get";

    @Override
    public String getName() {
        return ENDPOINT_NAME;
    }

    /**
     * Define the route. The {id} path parameter is automatically extracted
     * by OpenSearch and available via request.param("id").
     */
    @Override
    public List<Route> routes() {
        return List.of(
            new NamedRoute.Builder()
                .path(PluginSettings.RULES_URI + "/{id}")
                .method(RestRequest.Method.GET)
                .uniqueName(ENDPOINT_UNIQUE_NAME)
                .build());
    }

    /**
     * Prepare and execute the request. This method is called by the
     * OpenSearch REST framework for each incoming request.
     *
     * @param request the incoming REST request
     * @param client  the node client for index operations
     * @return a RestChannelConsumer that writes the response
     */
    @Override
    protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) {
        // Extract the {id} path parameter.
        String id = request.param(Constants.KEY_ID);
        return channel -> {
            try {
                // Validate the ID parameter is present.
                if (id == null || id.isBlank()) {
                    channel.sendResponse(new BytesRestResponse(
                        RestStatus.BAD_REQUEST,
                        "application/json",
                        "{\"error\": \"Missing required parameter: id\"}"));
                    return;
                }
                // Use ContentIndex to retrieve the document.
                ContentIndex index = new ContentIndex(client, Constants.INDEX_RULES, null);
                JsonNode document = index.getDocument(id);
                if (document == null) {
                    channel.sendResponse(new BytesRestResponse(
                        RestStatus.NOT_FOUND,
                        "application/json",
                        "{\"error\": \"Rule not found: " + id + "\"}"));
                    return;
                }
                // Return the document as JSON.
                String responseBody = MAPPER.writeValueAsString(document);
                channel.sendResponse(new BytesRestResponse(
                    RestStatus.OK,
                    "application/json",
                    responseBody));
            } catch (Exception e) {
                log.error("Failed to retrieve rule [{}]: {}", id, e.getMessage(), e);
                channel.sendResponse(new BytesRestResponse(
                    RestStatus.INTERNAL_SERVER_ERROR,
                    "application/json",
                    "{\"error\": \"Internal server error: " + e.getMessage() + "\"}"));
            }
        };
    }
}
```
Key Concepts
- `getName()` — Returns a short identifier used in logs and debugging.
- `routes()` — Defines the HTTP method and URI pattern. Uses `NamedRoute.Builder`, which requires a `uniqueName` for OpenSearch's access control system.
- `prepareRequest()` — The core method. Returns a `RestChannelConsumer` lambda that executes asynchronously and writes the response to the channel.
- Path parameters — `{id}` in the route path is automatically parsed. Access it with `request.param("id")`.
Step 3: Register the Handler
Open ContentManagerPlugin.java and add the new handler to getRestHandlers():
```java
@Override
public List<RestHandler> getRestHandlers(
        Settings settings,
        RestController restController,
        ClusterSettings clusterSettings,
        IndexScopedSettings indexScopedSettings,
        SettingsFilter settingsFilter,
        IndexNameExpressionResolver indexNameExpressionResolver,
        Supplier<DiscoveryNodes> nodesInCluster) {
    return List.of(
        // ... existing handlers ...
        // Rule endpoints
        new RestPostRuleAction(),
        new RestPutRuleAction(),
        new RestDeleteRuleAction(),
        new RestGetRuleAction(), // <-- Add the new handler
        // ... remaining handlers ...
    );
}
```
Make sure to add the import at the top of the file:
```java
import com.wazuh.contentmanager.rest.service.RestGetRuleAction;
```
Step 4: Build and Verify
Compile the plugin to check for errors:
```bash
./gradlew :wazuh-indexer-content-manager:compileJava
```
If compilation succeeds, run the full build (including tests):
```bash
./gradlew :wazuh-indexer-content-manager:build
```
Step 5: Test the Endpoint
Manual Testing
Start a local cluster (see tools/test-cluster) and test:
```bash
# Create a rule first (so there's something to fetch)
curl -X POST "https://localhost:9200/_plugins/_content_manager/rules" \
  -H "Content-Type: application/json" \
  -u admin:admin --insecure \
  -d '{
    "integration": "<integration-id>",
    "resource": {
      "title": "Test Rule"
    }
  }'

# The response returns the UUID. Use it to fetch:
curl -X GET "https://localhost:9200/_plugins/_content_manager/rules/<uuid>" \
  -u admin:admin --insecure
```
Writing a Unit Test
Create a test file at:
```
plugins/content-manager/src/test/java/com/wazuh/contentmanager/rest/service/RestGetRuleActionTests.java
```
At minimum, test that getName() and routes() return expected values:
```java
package com.wazuh.contentmanager.rest.service;

import org.opensearch.rest.RestRequest;
import org.opensearch.test.OpenSearchTestCase;

public class RestGetRuleActionTests extends OpenSearchTestCase {

    public void testGetName() {
        RestGetRuleAction action = new RestGetRuleAction();
        assertEquals("content_manager_rule_get", action.getName());
    }

    public void testRoutes() {
        RestGetRuleAction action = new RestGetRuleAction();
        assertEquals(1, action.routes().size());
        assertEquals(RestRequest.Method.GET, action.routes().get(0).getMethod());
        assertTrue(action.routes().get(0).getPath().contains("/rules/{id}"));
    }
}
```
Run:
```bash
./gradlew :wazuh-indexer-content-manager:test
```
Summary
To add a new REST endpoint to the Content Manager plugin:
- Create the handler class — Extend `BaseRestHandler` (for simple endpoints) or one of the abstract classes (`AbstractCreateAction`, `AbstractUpdateAction`, `AbstractDeleteAction`) for standard CUD.
- Define routes — Use `NamedRoute.Builder` with a unique name.
- Implement logic — Override `prepareRequest()` (or `executeRequest()` if extending the abstract hierarchy).
- Register — Add the instance to `ContentManagerPlugin.getRestHandlers()`.
- Build and test — `./gradlew :wazuh-indexer-content-manager:compileJava`, then `./gradlew :wazuh-indexer-content-manager:test`.
For content CUD endpoints that need Draft space validation, Engine sync, and hash updates, extend AbstractContentAction or one of its children instead of BaseRestHandler directly.
Logtest Architecture and Developer Guide
Component Overview
The logtest flow involves three layers:
```
RestPostLogtestAction              → LogtestService → EngineService + SecurityAnalyticsService
RestPostLogtestNormalizationAction →       ↑                ↑
RestPostLogtestDetectionAction     →       ↑                ↑
(REST handlers)                     (Orchestration)  (External services)
```
RestPostLogtestAction (combined)
Path: rest/service/RestPostLogtestAction.java
The REST handler for POST /_plugins/_content_manager/logtest. Responsibilities:
- Validates the request has content and is valid JSON.
- Validates the required field `space`.
- Validates that `space` is `"test"` or `"standard"`.
- Extracts the optional `integration` field (if present) and strips it from the Engine payload.
- Delegates to `LogtestService.executeLogtest(integrationId, space, enginePayload)`. If `integrationId` is `null`, only Engine normalization is performed.
The handler does not interact with indices or external services directly; all business logic lives in the service.
RestPostLogtestNormalizationAction
Path: rest/service/RestPostLogtestNormalizationAction.java
The REST handler for POST /_plugins/_content_manager/logtest/normalization. Responsibilities:
- Validates the request has content and is valid JSON.
- Validates the required field `space`.
- Validates that `space` is `"test"` or `"standard"`.
- Strips the `integration` field if present (not used for normalization).
- Delegates to `LogtestService.executeNormalization(enginePayload)`.
RestPostLogtestDetectionAction
Path: rest/service/RestPostLogtestDetectionAction.java
The REST handler for POST /_plugins/_content_manager/logtest/detection. Responsibilities:
- Validates the request has content and is valid JSON.
- Validates the required fields `space`, `integration`, and `input`.
- Validates that `space` is `"test"` or `"standard"`.
- Validates that `input` is a JSON object (not a string or array).
- Delegates to `LogtestService.executeDetection(integrationId, space, inputEvent)`.
LogtestService
Path: cti/catalog/service/LogtestService.java
The orchestrator. Provides three public entry points:
- `executeLogtest()` — Full combined flow (normalization + detection).
- `executeNormalization()` — Engine-only: forwards the payload to `EngineService.logtest()` and returns the response directly with `parseMessageAsJson()`.
- `executeDetection()` — SAP-only: looks up the integration, fetches rule IDs/bodies, evaluates via `SecurityAnalyticsService.evaluateRules()`, and returns the SAP result.
The full logtest flow:
1. No-integration shortcut — If `integrationId` is `null`, delegates to `executeEngineOnly()`: runs the Engine normalization and returns the result with `detection.status: "skipped"` and `reason: "No integration provided"`. Steps 2–5 below are skipped.
2. Integration lookup — Queries `wazuh-threatintel-integrations` for a document matching `document.id == integrationId` and `space.name == space`. Returns 400 if not found.
3. Engine processing — Sends the event payload to the Wazuh Engine via `EngineService.logtest()`. Extracts the normalized event from the `output` field. The Engine result fields (`output`, `asset_traces`, `validation`) are included directly in the response (no wrapper).
4. Rule fetching — Extracts rule IDs from the integration's `document.rules` array, then fetches rule bodies from `wazuh-threatintel-rules` by `document.id`, filtered by the same space.
5. SAP evaluation — Passes the normalized event JSON and rule bodies to `SecurityAnalyticsService.evaluateRules()`.
6. Response building — Combines Engine and SAP results into a single JSON response under the keys `normalization` and `detection`.
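Put together, a combined logtest request might look like the following. This is a hedged sketch — only `space` and the optional `integration` fields are documented here; the name and shape of the event payload field are assumptions:

```
POST /_plugins/_content_manager/logtest
{
  "space": "test",
  "integration": "<integration-id>",
  "event": "Jan  1 00:00:00 host sshd[123]: Failed password for root"
}
```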
Error handling:
- If the Engine fails (HTTP error or exception), SAP evaluation is skipped and the response includes `status: "skipped"` with the reason.
- If no integration is provided, detection is skipped (normalization-only mode).
- If the integration has no rules, SAP returns `rules_evaluated: 0, rules_matched: 0` with success status.
- If SAP evaluation returns unparseable JSON, the SAP result is `status: "error"`.
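For example, when the Engine fails, the combined response might take a shape like the following. This is an illustrative sketch of the structure described above, not a verbatim API response; the exact keys inside each object are assumptions:

```json
{
  "normalization": { "status": "error", "message": "Engine unavailable" },
  "detection": { "status": "skipped", "reason": "Engine processing failed" }
}
```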
SecurityAnalyticsService / EventMatcher
SAP evaluation happens in the security-analytics module:
- `SecurityAnalyticsServiceImpl.evaluateRules()` — Parses Sigma rule YAML strings into `SigmaRule` objects, then delegates to `EventMatcher`.
- `EventMatcher.evaluate()` — Flattens the normalized event JSON into dot-notation keys, then evaluates each rule's detection conditions against the flat map. Returns a JSON result string.
The EventMatcher handles:
- Field-equals-value conditions (exact match, case-insensitive)
- Keyword (value-only) conditions (searches all event fields)
- Wildcards (`*` for multi-char, `?` for single-char) via cached compiled regex patterns
- String modifiers: `contains`, `startswith`, `endswith`
- Explicit regex (`re` modifier)
- CIDR subnet matching (IPv4 and IPv6)
- Boolean, numeric (`gt`, `gte`, `lt`, `lte`), null, and string comparisons
- Composite conditions: AND, OR, NOT
- List values (any element matching counts as a match)
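The flattening step mentioned above can be sketched in plain Java. This is a simplified, hypothetical illustration operating on nested maps — the real `EventMatcher` works on parsed JSON nodes and handles more cases:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FlattenSketch {
    /**
     * Recursively flattens nested maps into dot-notation keys,
     * e.g. {"process": {"name": "cmd.exe"}} becomes {"process.name": "cmd.exe"}.
     */
    static Map<String, Object> flatten(String prefix, Map<String, Object> node, Map<String, Object> out) {
        for (Map.Entry<String, Object> e : node.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            Object value = e.getValue();
            if (value instanceof Map) {
                @SuppressWarnings("unchecked")
                Map<String, Object> child = (Map<String, Object>) value;
                flatten(key, child, out); // recurse into nested objects
            } else {
                out.put(key, value); // leaf values (strings, numbers, booleans, lists) stored as-is
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> event = new LinkedHashMap<>();
        Map<String, Object> process = new LinkedHashMap<>();
        process.put("name", "cmd.exe");
        event.put("process", process);
        event.put("severity", 5);
        Map<String, Object> flat = flatten("", event, new LinkedHashMap<>());
        System.out.println(flat); // {process.name=cmd.exe, severity=5}
    }
}
```

Sigma field references like `process.name` can then be resolved with a single map lookup against the flattened event.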
Match results use a nested rule object per match entry:
```json
{
  "rule": { "id": "...", "title": "...", "level": "...", "tags": [...] },
  "matched_conditions": [...]
}
```
Data Flow
```
Client request
      │
      ▼
RestPostLogtestAction (combined)
      │ validates request
      │ strips "integration" field
      ▼
LogtestService.executeLogtest(integrationId, space, payload)
      │
      ├──► [if integrationId == null]
      │      → executeEngineOnly(payload)
      │      → returns normalization + detection: { status: "skipped" }
      │
      ├──► client.prepareSearch("wazuh-threatintel-integrations")
      │      → finds integration in given space (test or standard)
      │      → extracts rule IDs from document.rules
      │
      ├──► engineService.logtest(payload)
      │      → sends to Wazuh Engine socket
      │      → receives normalized event
      │      → extracts "output" node as normalized event JSON
      │
      ├──► client.prepareSearch("wazuh-threatintel-rules")
      │      → fetches rule bodies by document.id + space filter
      │
      ├──► securityAnalytics.evaluateRules(normalizedEventJson, ruleBodies)
      │      → parses YAML → SigmaRule objects
      │      → EventMatcher flattens event + evaluates conditions
      │      → returns JSON result
      │
      └──► builds combined response
             { normalization: {...}, detection: {...} }
```
Split Endpoints
In addition to the combined flow, there are two dedicated endpoints that execute normalization and detection independently:
```
RestPostLogtestNormalizationAction         RestPostLogtestDetectionAction
  │ validates: space                         │ validates: space, integration, input
  │ strips integration field                 │
  ▼                                          ▼
LogtestService.executeNormalization(payload) LogtestService.executeDetection(id, space, input)
  │                                          │
  └──► engineService.logtest(payload)        ├──► client.prepareSearch(".cti-integrations")
         → returns engine response directly  │      → finds integration
                                             ├──► extractRuleIds() + fetchRuleBodies()
                                             │      → fetches rule content from .cti-rules
                                             └──► securityAnalytics.evaluateRules(inputJson, ruleBodies)
                                                    → returns SAP result directly
```
Key differences from the combined endpoint:
- Normalization returns the raw Engine response (no detection wrapper). The `integration` field is stripped if present but has no effect on behavior.
- Detection accepts a pre-normalized event as the `input` JSON object. It does not call the Engine — it goes straight to integration lookup → rule fetch → SAP evaluation.
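A detection request therefore supplies the already-normalized event directly. The payload below is hypothetical apart from the documented `space`, `integration`, and `input` fields:

```
POST /_plugins/_content_manager/logtest/detection
{
  "space": "test",
  "integration": "<integration-id>",
  "input": { "process": { "name": "cmd.exe" } }
}
```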
Index Dependencies
| Index | Usage | Query |
|---|---|---|
| `wazuh-threatintel-integrations` | Look up integration by ID in the given space | `document.id == X AND space.name == {space}` |
| `wazuh-threatintel-rules` | Fetch rule bodies by document IDs in the given space | `document.id IN [...] AND space.name == {space}` |
Both indices must exist and have document.id mapped as keyword for term queries to work.
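A search-API equivalent of the integration lookup might look like the following. This is a sketch for debugging purposes — the actual query is built programmatically inside the service:

```
GET /wazuh-threatintel-integrations/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "document.id": "<integration-id>" } },
        { "term": { "space.name": "test" } }
      ]
    }
  }
}
```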
Testing
Unit Tests
| Test class | Covers |
|---|---|
| `RestPostLogtestActionTests` | Request validation for the combined endpoint (empty body, invalid JSON, missing fields, wrong space, delegation to service) |
| `RestPostLogtestNormalizationActionTests` | Request validation for the normalization endpoint (empty body, invalid JSON, missing space, invalid space, delegation, integration stripping) |
| `RestPostLogtestDetectionActionTests` | Request validation for the detection endpoint (empty body, invalid JSON, missing fields, invalid space, non-object input, delegation) |
| `LogtestServiceTests` | Orchestration logic (integration lookup, Engine errors, rule fetching, SAP evaluation, response structure) |
| `EventMatcherTests` | Sigma rule evaluation (field matching, wildcards, numerics, booleans, nulls, AND/OR/NOT conditions) |
Integration Tests
| Test class | Covers |
|---|---|
| `LogtestIT` | End-to-end REST workflow against a live test cluster (request validation, integration lookup, promote + logtest, response structure) |
Integration tests extend ContentManagerRestTestCase and run against a real OpenSearch cluster. Since the Wazuh Engine is not available in the test environment, engine-dependent tests validate graceful error handling (engine error → SAP skipped).
Adding New Logtest Features
Supporting a new validation field
1. Add the field constant to `Constants.java`.
2. Add validation logic in the relevant handler(s): `RestPostLogtestAction`, `RestPostLogtestNormalizationAction`, and/or `RestPostLogtestDetectionAction`.
3. Add unit tests in the corresponding test classes.
4. Add an integration test in `LogtestIT`.
Supporting a new Engine response field
1. Update `LogtestService.executeEngine()` to extract the field.
2. Include it in the `normalization` map within `buildCombinedResponse()`.
3. Add unit test scenarios in `LogtestServiceTests`.
4. Update the API docs (`api.md`) response fields table.
Extending SAP evaluation
1. Modify `EventMatcher.matchValue()` to handle new `SigmaType` subclasses.
2. Add test cases in `EventMatcherTests`.
3. Update the Sigma rules doc (`sigma-rules.md`) if new detection modifiers are supported.
Wazuh Indexer Notifications Plugin — Development Guide
This document describes the architecture, components, and extension points of the Notifications plugin, which provides multi-channel notification capabilities to the Wazuh Indexer.
Overview
The Notifications plugin handles:
- Channel Management: CRUD operations for notification channels (Slack, Email, Chime, Microsoft Teams, Webhooks, SNS, SES).
- Message Delivery: Abstracts different communication protocols (SMTP, HTTP, AWS SES/SNS) into a unified transport layer.
- Test Notifications: Allows sending test messages to validate channel configuration.
- Plugin Features: Exposes dynamic feature discovery so other plugins can query supported notification types.
- Security Integration: Integrates with the Wazuh Indexer Security plugin for RBAC-based access control.
Project Structure
The plugin is organized into three Gradle subprojects:
| Subproject | Description |
|---|---|
| `notifications/core-spi` | Service Provider Interface. Defines destination models (`SlackDestination`, `SmtpDestination`, `ChimeDestination`, etc.) and the `NotificationCore` contract. |
| `notifications/core` | Core implementation. Contains HTTP/SMTP/SES/SNS clients, transport providers, and all configurable settings (`PluginSettings`). |
| `notifications/notifications` | Main plugin module. Registers REST handlers, transport actions, index operations, metrics, and security access management. |
Class Hierarchy
Destination Models (core-spi)
```
BaseDestination
├── SlackDestination
├── ChimeDestination
├── MicrosoftTeamsDestination
├── CustomWebhookDestination
├── WebhookDestination
├── SmtpDestination
├── SesDestination
└── SnsDestination
```
Transport Layer (core)
```
DestinationTransport (interface)
├── WebhookDestinationTransport (Slack, Chime, Teams, Webhooks)
├── SmtpDestinationTransport (SMTP Email)
├── SesDestinationTransport (AWS SES Email)
└── SnsDestinationTransport (AWS SNS)
```
REST Handlers (notifications)
| Handler | Method | URI |
|---|---|---|
| `NotificationConfigRestHandler` | POST | `/_plugins/_notifications/configs` |
| | PUT | `/_plugins/_notifications/configs/{config_id}` |
| | GET | `/_plugins/_notifications/configs/{config_id}` |
| | GET | `/_plugins/_notifications/configs` |
| | DELETE | `/_plugins/_notifications/configs/{config_id}` |
| | DELETE | `/_plugins/_notifications/configs` |
| `NotificationFeaturesRestHandler` | GET | `/_plugins/_notifications/features` |
| `NotificationChannelListRestHandler` | GET | `/_plugins/_notifications/channels` |
| `SendTestMessageRestHandler` | POST | `/_plugins/_notifications/feature/test/{config_id}` |
| `NotificationStatsRestHandler` | GET | `/_plugins/_notifications/_local/stats` |
Setup Environment
Requirements
- JDK: version 11 or 17 (depending on the target Wazuh Indexer version).
- Gradle: Use the included `./gradlew` wrapper (no separate install needed).
- IDE: IntelliJ IDEA with the Kotlin plugin is recommended.
Clone and Build
git clone <notifications-repo-url>
cd wazuh-indexer-notifications
./gradlew build
The distribution zip will be generated at:
notifications/notifications/build/distributions/
Build Packages
To create distribution packages:
# Full build (compile + test + assemble)
./gradlew build
# Assemble only (skip tests)
./gradlew assemble
The output zip can be installed on a running Wazuh Indexer using:
bin/opensearch-plugin install file:///path/to/notifications-<version>.zip
Run Tests
Unit Tests
./gradlew test
Integration Tests
The integration test suite is located at:
notifications/notifications/src/test/kotlin/org/opensearch/integtest/
To execute the full integration test suite:
./gradlew :notifications:notifications:integTest
Key integration test classes:
| Test Class | Description |
|---|---|
SlackNotificationConfigCrudIT | Full CRUD lifecycle for Slack channels. |
ChimeNotificationConfigCrudIT | Full CRUD lifecycle for Chime channels. |
EmailNotificationConfigCrudIT | Full CRUD lifecycle for Email channels (SMTP/SES). |
MicrosoftTeamsNotificationConfigCrudIT | Full CRUD lifecycle for Microsoft Teams channels. |
WebhookNotificationConfigCrudIT | Full CRUD lifecycle for custom webhooks. |
SnsNotificationConfigCrudIT | Full CRUD lifecycle for SNS channels. |
CreateNotificationConfigIT | Config creation edge cases and validation. |
DeleteNotificationConfigIT | Config deletion including bulk delete. |
QueryNotificationConfigIT | Filtering, sorting, and pagination queries. |
GetPluginFeaturesIT | Feature discovery endpoint tests. |
GetNotificationChannelListIT | Channel list endpoint tests. |
SendTestMessageRestHandlerIT | Test message delivery flow. |
SendTestMessageWithMockServerIT | Test message with mock destination. |
SecurityNotificationIT | RBAC and access control tests. |
MaxHTTPResponseSizeIT | HTTP response size limit enforcement. |
NotificationsBackwardsCompatibilityIT | Backwards compatibility between versions. |
Notification Flow
The data flow when sending a notification follows this sequence:
Monitor/Alerting Plugin
│
▼
Notification Plugin Interface (REST / Transport)
│
▼
Security Plugin (verify permissions)
│
▼
.notifications index (persist notification, status = pending)
│
▼
Transport Action (resolve destination type)
│
├──► WebhookDestinationTransport ──► Slack / Chime / Teams / Custom Webhook
├──► SmtpDestinationTransport ──► External SMTP Server
├──► SesDestinationTransport ──► AWS SES
└──► SnsDestinationTransport ──► AWS SNS
│
▼
Recipient
1. An internal plugin (Alerting, Reporting, ISM) or a user invokes the Notification plugin via Transport or REST API.
2. The Security plugin verifies the caller’s permissions.
3. The notification is persisted in the `.notifications` index with `pending` status.
4. The `DestinationTransportProvider` resolves the correct transport based on the channel type.
5. The transport client delivers the message to the external service.
6. On failure, retries are attempted up to the configured limit.
7. The notification status is updated to `sent` or `failed`.
Extending with a New Destination
To add a new notification destination:
1. Define the destination model in `core-spi`: create a new class extending `BaseDestination` in `notifications/core-spi/src/main/kotlin/.../destination/`.
2. Implement the transport in `core`: create a new class implementing `DestinationTransport` in `notifications/core/src/main/kotlin/.../transport/`, and register it in `DestinationTransportProvider`.
3. Add the config type to the `DEFAULT_ALLOWED_CONFIG_TYPES` list in `core/setting/PluginSettings.kt`.
4. Write tests: add integration tests in `notifications/notifications/src/test/kotlin/org/opensearch/integtest/config/`.
Description
The Wazuh Indexer is a highly scalable, full-text search and analytics engine built over OpenSearch. It serves as the central data store for the Wazuh platform, indexing and storing security alerts, events, vulnerability data, and system inventory generated by Wazuh Agents and the Wazuh Server. It provides near real-time search and analytics capabilities, enabling security teams to investigate threats, monitor compliance, and gain visibility into their infrastructure.
The Wazuh Indexer can be deployed as a single-node instance for development and small environments, or as a multi-node cluster for production workloads requiring high availability and horizontal scalability.
Core Concepts
The Wazuh Indexer stores data as JSON documents. Each document contains a set of fields (keys) mapped to values — strings, numbers, booleans, dates, arrays, nested objects, and more.
An index is a collection of related documents. For time-series data such as alerts and events, the Wazuh Indexer uses data streams backed by rolling indices with automatic lifecycle management.
Documents are distributed across shards, which are spread across cluster nodes. This distribution provides redundancy against hardware failures and allows query throughput to scale as nodes are added.

Bundled Plugins
The Wazuh Indexer ships with a set of purpose-built plugins that extend OpenSearch for security monitoring use cases:
Setup Plugin
The Setup plugin initializes the indexer environment on cluster startup. It creates all required index templates, Index State Management (ISM) policies, data streams, and internal state indices. This ensures the correct schema and lifecycle rules are in place before any data is ingested. The Setup plugin defines the Wazuh Common Schema — the standardized field mappings used across all Wazuh indices.
Content Manager Plugin
The Content Manager plugin is responsible for keeping the Wazuh detection content up to date. It synchronizes rules, decoders, integrations, key-value databases (KVDBs), and Indicators of Compromise (IoCs) from the Wazuh Cyber Threat Intelligence (CTI) API. It also provides a REST API for managing user-generated content — custom rules, decoders, and integrations that can be drafted, tested, and promoted to the active Wazuh Engine configuration.
The Content Manager communicates with the Wazuh Engine through a Unix socket to execute log tests, validate configurations, and reload content. See Content Manager for details.
Security Plugin
The Security plugin provides role-based access control (RBAC), user authentication, and TLS encryption for both the REST API and inter-node transport layers. It ships with predefined roles tailored to Wazuh operations, allowing administrators to control which users can access specific indices, APIs, and dashboards.
Reporting Plugin
The Reporting plugin enables the generation of PDF and CSV reports from Wazuh Dashboard visualizations and saved searches. Reports can be triggered on demand or scheduled for periodic delivery.
Security Analytics Plugin
The Security Analytics plugin provides advanced threat detection and analysis capabilities. It leverages rule-based threat detection analysis to identify anomalies, potential threats, and suspicious activities within the monitored environment through its security events.
Notifications Plugin
The Notifications plugin plays a principal role in Wazuh’s Active Response mechanism. It provides dedicated notification channels for active response commands to be executed on the agents.
Alerting Plugin
The Alerting plugin provides real-time alerting capabilities based on predefined rules and conditions (monitors). Monitors are the core component of Threat Detectors, used by the Security Analytics plugin.
Data Storage
The Wazuh Indexer organizes data into purpose-specific indices:
| Index pattern | Description |
|---|---|
wazuh-active-responses | Active response execution requests for the agents |
wazuh-events-v5* | Security events from monitored endpoints |
wazuh-findings-v5* | Findings from security events (triggered by rules) |
wazuh-states-v5* | Scan results, such as inventory data (vulnerabilities, packages, ports, etc.) |
wazuh-metrics* | General metrics |
wazuh-threatintel-* | Content Manager system indices for CTI content |
For a complete list of indices and their schemas, see the Setup Plugin documentation.
Integration with the Wazuh Platform
The Wazuh Indexer integrates with:
- Wazuh Server / Engine: Receives analyzed events and alerts; the Content Manager syncs detection content back to the Engine.
- Wazuh Dashboard: An OpenSearch Dashboards fork that provides the web UI for searching, visualizing, and managing Wazuh data.
- Wazuh Agents: Collect endpoint data that ultimately flows into the Indexer after processing by the Engine.
The Indexer exposes a standard REST API compatible with the OpenSearch API, so existing OpenSearch tools, clients, and integrations work with the Wazuh Indexer out of the box.
Architecture
The Wazuh Indexer is built on top of OpenSearch and extends it with a set of purpose-built plugins that provide security event indexing, content management, access control, and reporting capabilities.
Component Overview
┌─────────────────────────────────────────────────────────────────────┐
│ Wazuh Indexer │
│ │
│ ┌──────────────┐ ┌──────────────────┐ ┌──────────┐ ┌─────────┐ │
│ │ Setup Plugin │ │ Content Manager │ │ Security │ │Reporting│ │
│ │ │ │ Plugin │ │ Plugin │ │ Plugin │ │
│ └──────┬───────┘ └────────┬─────────┘ └────┬─────┘ └───┬─────┘ │
│ │ │ │ │ │
│ ┌──────┴────────┐ ┌──────┴───────────┐ ┌──┴───────┐ │ │
│ │Index Templates│ │ CTI API Client │ │ RBAC & │ │ │
│ │ISM Policies │ │ Engine Client │ │ Access │ │ │
│ │Stream Indices │ │ Job Scheduler │ │ Control │ │ │
│ │State Indices │ │ Space Service │ └──────────┘ │ │
│ └───────────────┘ └───────┬──────────┘ │ │
│ │ │ │
│ ┌─────────┴───────────────────────┐ │ │
│ │ System Indices │ │ │
│ │ .wazuh-cti-consumers │ │ │
│ │ wazuh-threatintel-rules │ │ │
│ │ wazuh-threatintel-decoders │ │ │
│ │ wazuh-threatintel-integrations │ │ │
│ │ wazuh-threatintel-kvdbs │ │ │
│ │ wazuh-threatintel-policies │ │ │
│ │ wazuh-threatintel-enrichments │ │ │
│ └─────────────────────────────────┘ │ │
└─────────────────────────────────┬──────────────────────────┼────────┘
│ Unix Socket │
┌───────┴────────┐ ┌──────┴───────┐
│ Wazuh Engine │ │ Wazuh │
│ (Analysis & │ │ Dashboard │
│ Detection) │ │ (UI) │
└────────────────┘ └──────────────┘
Plugins
Setup Plugin
The Setup plugin initializes the Wazuh Indexer environment when the cluster starts. It is responsible for:
- Index templates: Defines the mappings and settings for all Wazuh indices (alerts, events, statistics, vulnerabilities, etc.).
- ISM (Index State Management) policies: Configures lifecycle policies for automatic rollover, deletion, and retention of time-series indices.
- Data streams: Creates the initial data stream indices that receive incoming event data.
- State indices: Sets up internal indices used by other Wazuh components to track operational state.
The Setup plugin runs once during cluster initialization and ensures the required infrastructure is in place before other plugins begin operating.
Content Manager Plugin
The Content Manager is the most feature-rich plugin. It handles:
- CTI synchronization: Periodically fetches threat intelligence content (rules, decoders, integrations, KVDBs, IoCs) from the Wazuh CTI API. On first run, it downloads a full snapshot; subsequent runs apply incremental patches.
- User-generated content: Provides a REST API for creating, updating, and deleting custom decoders, rules, integrations, and KVDBs in a draft space.
- Promotion workflow: Changes made in the draft space can be previewed and promoted to the Wazuh Engine for activation.
- Engine communication: Communicates with the Wazuh Engine via a Unix socket for logtest execution, content validation, and configuration reload.
- Policy management: Manages the Engine routing policy that controls how events are processed.
See Content Manager for full details.
Security Plugin
The Security plugin extends OpenSearch’s security capabilities for Wazuh-specific needs:
- Role-based access control (RBAC): Defines predefined roles and permissions for Wazuh operations.
- User management: Provides APIs and configuration for managing users and their access levels.
- TLS/SSL: Handles transport and REST layer encryption.
Reporting Plugin
The Reporting plugin enables on-demand and scheduled report generation from the Wazuh Dashboard, producing PDF or CSV exports of dashboards and saved searches.
Data Flow
- Wazuh Agents collect security events from monitored endpoints and forward them to the Wazuh Server.
- The Wazuh Engine on the server analyzes events using rules and decoders, then forwards alerts and events to the Wazuh Indexer via the Indexer API.
- The Setup Plugin ensures the correct index templates, data streams, and lifecycle policies exist.
- The Content Manager Plugin keeps the Engine’s detection content up to date by synchronizing with the CTI API and managing user customizations.
- The Wazuh Dashboard queries the Indexer to visualize alerts, events, and security analytics.
Compatibility
Supported operating systems
We aim to support as many operating systems as OpenSearch does. The Wazuh indexer should work on many Linux distributions, but we only test a handful. For 5.0.0 and above, we support the operating system versions and architectures listed in the table below.
| Name | Version | Architecture |
|---|---|---|
| Red Hat | 9, 10 | x86_64, aarch64 |
| Ubuntu | 22.04, 24.04 | x86_64, aarch64 |
| Amazon Linux | 2023 | x86_64, aarch64 |
OpenSearch
Currently, the Wazuh indexer uses OpenSearch version 3.5.0.
Requirements
Hardware recommendations
The Wazuh indexer can be installed as a single-node or as a multi-node cluster.
Hardware recommendations for each node
| Component | Minimum RAM (GB) | Minimum CPU (cores) | Recommended RAM (GB) | Recommended CPU (cores) |
|---|---|---|---|---|
| Wazuh indexer | 8 | 4 | 32 | 8 |
Disk space requirements
The amount of data depends on the generated alerts per second (APS). This table details the estimated disk space needed per agent to store 90 days of alerts on a Wazuh indexer server, depending on the type of monitored endpoints.
| Monitored endpoints | APS | Storage in Wazuh indexer (GB/90 days) |
|---|---|---|
| Servers | 0.25 | 3.7 |
| Workstations | 0.1 | 1.5 |
| Network devices | 0.5 | 7.4 |
For example, for an environment with 80 workstations, 10 servers, and 10 network devices, the storage needed on the Wazuh indexer server for 90 days of alerts is 230 GB.
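The figure follows directly from the per-agent values in the table; a quick check (the exact sum is 231 GB, rounded in the text to 230 GB):

```shell
# 80 workstations + 10 servers + 10 network devices, 90 days of alerts (GB),
# using the per-agent figures from the table above.
awk 'BEGIN { print 80 * 1.5 + 10 * 3.7 + 10 * 7.4 }'   # prints 231
```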
Packages
Wazuh Indexer packages can be downloaded from the internal S3 buckets through the following links. Note that these links are placeholders: replace RELEASE_SERIES, VERSION, and REVISION with the appropriate values.
wazuh_indexer_aarch64_rpm: "https://packages-staging.xdrsiem.wazuh.info/pre-release/<RELEASE_SERIES>/yum/wazuh-indexer-<VERSION>-<REVISION>.aarch64.rpm"
wazuh_indexer_amd64_deb: "https://packages-staging.xdrsiem.wazuh.info/pre-release/<RELEASE_SERIES>/apt/pool/main/w/wazuh-indexer/wazuh-indexer_<VERSION>-<REVISION>_amd64.deb"
wazuh_indexer_arm64_deb: "https://packages-staging.xdrsiem.wazuh.info/pre-release/<RELEASE_SERIES>/apt/pool/main/w/wazuh-indexer/wazuh-indexer_<VERSION>-<REVISION>_arm64.deb"
wazuh_indexer_x86_64_rpm: "https://packages-staging.xdrsiem.wazuh.info/pre-release/<RELEASE_SERIES>/yum/wazuh-indexer-<VERSION>-<REVISION>.x86_64.rpm"
Examples
wazuh_indexer_aarch64_rpm: "https://packages-staging.xdrsiem.wazuh.info/pre-release/5.x/yum/wazuh-indexer-5.0.0-alpha99.aarch64.rpm"
wazuh_indexer_amd64_deb: "https://packages-staging.xdrsiem.wazuh.info/pre-release/5.x/apt/pool/main/w/wazuh-indexer/wazuh-indexer_5.0.0-alpha99_amd64.deb"
wazuh_indexer_arm64_deb: "https://packages-staging.xdrsiem.wazuh.info/pre-release/5.x/apt/pool/main/w/wazuh-indexer/wazuh-indexer_5.0.0-alpha99_arm64.deb"
wazuh_indexer_x86_64_rpm: "https://packages-staging.xdrsiem.wazuh.info/pre-release/5.x/yum/wazuh-indexer-5.0.0-alpha99.x86_64.rpm"
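The substitution can be scripted; this sketch rebuilds the first example URL above from the placeholder template:

```shell
# Fill in the placeholders from the template URLs above.
RELEASE_SERIES="5.x"
VERSION="5.0.0"
REVISION="alpha99"
BASE="https://packages-staging.xdrsiem.wazuh.info/pre-release"
url="$BASE/$RELEASE_SERIES/yum/wazuh-indexer-$VERSION-$REVISION.aarch64.rpm"
echo "$url"
```

which yields the aarch64 RPM link shown in the examples.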
Compatibility
Please refer to this section for information pertaining to compatibility.
Installation
Note: This documentation assumes you are already provisioned with a wazuh-indexer package through any of the possible methods:
- Local package generation (recommended).
- GH Workflows artifacts.
- Staging S3 buckets.
Installing the Wazuh indexer step by step
Install and configure the Wazuh indexer as a single-node or multi-node cluster, following step-by-step instructions. The installation process is divided into three stages.
1. Certificates creation
2. Nodes installation
3. Cluster initialization
Note: You need root user privileges to run all the commands described below.
1. Certificates creation
Generating the SSL certificates
1. Download the `wazuh-certs-tool.sh` script and the `config.yml` configuration file. This creates the certificates that encrypt communications between the Wazuh central components.

   ```bash
   curl -sO https://packages-dev.wazuh.com/5.0/wazuh-certs-tool.sh
   curl -sO https://packages-dev.wazuh.com/5.0/config.yml
   ```

2. Edit `./config.yml` and replace the node names and IP values with the corresponding names and IP addresses. You need to do this for all Wazuh server, Wazuh indexer, and Wazuh dashboard nodes. Add as many node fields as needed.

   ```yaml
   nodes:
     # Wazuh indexer nodes
     indexer:
       - name: node-1
         ip: "<indexer-node-ip>"
       #- name: node-2
       #  ip: "<indexer-node-ip>"
       #- name: node-3
       #  ip: "<indexer-node-ip>"

     # Wazuh manager nodes
     # If there is more than one Wazuh manager
     # node, each one must have a node_type
     manager:
       - name: wazuh-1
         ip: "<wazuh-manager-ip>"
       #  node_type: master
       #- name: wazuh-2
       #  ip: "<wazuh-manager-ip>"
       #  node_type: worker
       #- name: wazuh-3
       #  ip: "<wazuh-manager-ip>"
       #  node_type: worker

     # Wazuh dashboard nodes
     dashboard:
       - name: dashboard
         ip: "<dashboard-node-ip>"
   ```

   To learn more about how to create and configure the certificates, see the Certificates deployment section.

3. Run `./wazuh-certs-tool.sh` to create the certificates. For a multi-node cluster, these certificates need to be deployed later to all Wazuh instances in your cluster.

   ```bash
   ./wazuh-certs-tool.sh -A
   ```

4. Compress all the necessary files.

   ```bash
   tar -cvf ./wazuh-certificates.tar -C ./wazuh-certificates/ .
   rm -rf ./wazuh-certificates
   ```

5. Copy the `wazuh-certificates.tar` file to all the nodes, including the Wazuh indexer, Wazuh server, and Wazuh dashboard nodes. This can be done by using the `scp` utility.
2. Nodes installation
Installing package dependencies
Install the following packages if missing:
yum
yum install coreutils
apt
apt-get install debconf adduser procps
Installing the Wazuh indexer package
rpm
rpm -ivh --replacepkgs wazuh-indexer-<VERSION>.rpm
dpkg
dpkg -i wazuh-indexer-<VERSION>.deb
Configuring the Wazuh indexer
Edit the /etc/wazuh-indexer/opensearch.yml configuration file and replace the following values:
a. network.host: Sets the address of this node for both HTTP and transport traffic. The node will bind to this address and use it as its publish address. Accepts an IP address or a hostname.
Use the same node address set in config.yml to create the SSL certificates.
b. node.name: Name of the Wazuh indexer node as defined in the config.yml file. For example, node-1.
c. cluster.initial_cluster_manager_nodes: List of the names of the master-eligible nodes. These names are defined in the config.yml file. Uncomment the node-2 and node-3 lines, change the names, or add more lines, according to your config.yml definitions.
cluster.initial_cluster_manager_nodes:
- "node-1"
- "node-2"
- "node-3"
d. discovery.seed_hosts: List of the addresses of the master-eligible nodes. Each element can be either an IP address or a hostname. You may leave this setting commented if you are configuring the Wazuh indexer as a single node. For multi-node configurations, uncomment this setting and set the IP addresses of each master-eligible node.
discovery.seed_hosts:
- "10.0.0.1"
- "10.0.0.2"
- "10.0.0.3"
e. plugins.security.nodes_dn: List of the Distinguished Names of the certificates of all the Wazuh indexer cluster nodes. Uncomment the lines for node-2 and node-3 and change the common names (CN) and values according to your settings and your config.yml definitions.
plugins.security.nodes_dn:
- "CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US"
- "CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US"
- "CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US"
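Putting settings a–e together, the `opensearch.yml` of a multi-node `node-1` would contain entries along these lines (the addresses and Distinguished Names are the illustrative values used above; replace them with your own):

```yaml
# /etc/wazuh-indexer/opensearch.yml — illustrative node-1 excerpt
network.host: "10.0.0.1"
node.name: "node-1"
cluster.initial_cluster_manager_nodes:
  - "node-1"
  - "node-2"
  - "node-3"
discovery.seed_hosts:
  - "10.0.0.1"
  - "10.0.0.2"
  - "10.0.0.3"
plugins.security.nodes_dn:
  - "CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US"
  - "CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US"
  - "CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US"
```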
Deploying certificates
Note: Make sure that a copy of the `wazuh-certificates.tar` file, created during the initial configuration step, is placed in your working directory.
Run the following commands, replacing <INDEXER_NODE_NAME> with the name of the Wazuh indexer node you are configuring as defined in config.yml. For example, node-1. This deploys the SSL certificates to encrypt communications between the Wazuh central components.
NODE_NAME=<INDEXER_NODE_NAME>
mkdir -p /etc/wazuh-indexer/certs
tar -xf ./wazuh-certificates.tar -C /etc/wazuh-indexer/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./admin.pem ./admin-key.pem ./root-ca.pem
mv -n /etc/wazuh-indexer/certs/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem
mv -n /etc/wazuh-indexer/certs/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
chmod 500 /etc/wazuh-indexer/certs
chmod 400 /etc/wazuh-indexer/certs/*
chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs
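The `chmod`/`chown` calls above leave the certs directory accessible only to its owner (mode 500) and the certificate files read-only (mode 400). The following sandboxed check illustrates the scheme using a temporary directory instead of `/etc/wazuh-indexer/certs` (GNU `stat` assumed):

```shell
# Throwaway demonstration of the certificate permission scheme.
demo=$(mktemp -d)
touch "$demo/indexer.pem" "$demo/indexer-key.pem"
chmod 500 "$demo"        # owner: read + enter; no write for anyone
chmod 400 "$demo"/*.pem  # key material: owner read-only
stat -c '%a %n' "$demo" "$demo"/*.pem
chmod -R u+rwx "$demo" && rm -rf "$demo"   # clean up the sandbox
```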
Starting the service
Enable and start the Wazuh indexer service.
Systemd
systemctl daemon-reload
systemctl enable wazuh-indexer
systemctl start wazuh-indexer
SysV
Choose one option according to the operating system used.
a. RPM-based operating system:
chkconfig --add wazuh-indexer
service wazuh-indexer start
b. Debian-based operating system:
update-rc.d wazuh-indexer defaults 95 10
service wazuh-indexer start
Repeat this stage of the installation process for every Wazuh indexer node in your cluster. Then proceed with initializing your single-node or multi-node cluster in the next stage.
3. Cluster initialization
Run the Wazuh indexer indexer-security-init.sh script on any Wazuh indexer node to load the new certificates information and start the single-node or multi-node cluster.
/usr/share/wazuh-indexer/bin/indexer-security-init.sh
Note: You only have to initialize the cluster once, there is no need to run this command on every node.
Testing the cluster installation
1. Replace `$WAZUH_INDEXER_IP_ADDRESS` and run the following command to confirm that the installation is successful.

   ```bash
   curl -k -u admin:admin https://$WAZUH_INDEXER_IP_ADDRESS:9200
   ```

   Output:

   ```json
   {
     "name" : "node-1",
     "cluster_name" : "wazuh-cluster",
     "cluster_uuid" : "095jEW-oRJSFKLz5wmo5PA",
     "version" : {
       "number" : "7.10.2",
       "build_type" : "rpm",
       "build_hash" : "db90a415ff2fd428b4f7b3f800a51dc229287cb4",
       "build_date" : "2023-06-03T06:24:25.112415503Z",
       "build_snapshot" : false,
       "lucene_version" : "9.6.0",
       "minimum_wire_compatibility_version" : "7.10.0",
       "minimum_index_compatibility_version" : "7.0.0"
     },
     "tagline" : "The OpenSearch Project: https://opensearch.org/"
   }
   ```

2. Replace `$WAZUH_INDEXER_IP_ADDRESS` and run the following command to check if the single-node or multi-node cluster is working correctly.

   ```bash
   curl -k -u admin:admin https://$WAZUH_INDEXER_IP_ADDRESS:9200/_cat/nodes?v
   ```
Configuration Files
Initialization plugin settings
Timeout for the OpenSearch client
- Key: `plugins.setup.timeout`
- Type: Integer
- Default: `30`
- Minimum: `5`
- Maximum: `120`
- Description: Timeout in seconds for index and search operations.

Backoff (delay) for the retry mechanism

- Key: `plugins.setup.backoff`
- Type: Integer
- Default: `15`
- Minimum: `5`
- Maximum: `60`
- Description: Delay in seconds for the retry mechanism involving initialization tasks.
Example
Below, there is an example of custom values for these settings within the opensearch.yml file:
plugins.setup.timeout: 60
plugins.setup.backoff: 30
Security - Access Control
Wazuh Indexer uses the OpenSearch Security plugin to manage access control and security features.
The configuration files for the security plugin are located under the /etc/wazuh-indexer/opensearch-security/ directory by default.
Modifying these files directly is not recommended. Instead, use the Wazuh Dashboard Security plugin to create new security resources. See Define Users and Roles.
Among these files, Wazuh Indexer uses these particularly to add its own security resources:
- `internal_users.yml`: Defines the internal users for the Wazuh Indexer. Each user has a hashed password, reserved status, backend roles, and a description.
- `roles.yml`: Defines the roles and their permissions within the Wazuh Indexer. Each role specifies the cluster permissions, index permissions, and tenant permissions.
- `roles_mapping.yml`: Maps users and backend roles to the defined roles. This file specifies which users or backend roles have access to each role.
The Access Control section contains information about the security resources added to the Wazuh Indexer by default.
Wazuh Indexer Initialization plugin
The wazuh-indexer-setup plugin is a core module of the Wazuh Indexer, responsible for initializing the indices required by Wazuh to store all the data gathered and generated by other Central Components, such as the agents and the server (engine).
The Wazuh Indexer Setup Plugin is responsible for:
- Creating the index templates, which define the mappings and settings for the indices.
- Creating the initial indices. We distinguish between stateful and stream indices. While stream indices contain immutable time-series data and are rolled over periodically, stateful indices store dynamic data that can change over time and reside in a single index.
- Stream indices are created with a data stream configuration and an ISM rollover policy.
Indices
The following table lists the indices created by this plugin.
Stream indices
| Index | Description |
|---|---|
wazuh-events-raw-v5 | Stores original unprocessed events. |
wazuh-events-v5-unclassified | Stores uncategorized events for investigation. |
wazuh-active-responses | Stores active response execution requests. |
wazuh-events-v5-<category> | Stores events received by the Wazuh Server, categorized by their origin or type. Refer to Wazuh Common Schema for more information. |
wazuh-findings-v5-<category> | Stores security findings generated by the Threat Detectors. These are created each time an event trips a detection rule. |
wazuh-metrics-agents | Stores statistics about the Wazuh Agents state. |
wazuh-metrics-comms | Stores statistics about the Wazuh Server usage and performance. The information includes the number of events decoded, bytes received, and TCP sessions. |
Stateful indices
| Index | Description |
|---|---|
wazuh-states-sca | Security Configuration Assessment (SCA) scan results. |
wazuh-states-fim-files | File Integrity Monitoring: information about monitored files. |
wazuh-states-fim-registry-keys | File Integrity Monitoring: information about the Windows registry (keys). |
wazuh-states-fim-registry-values | File Integrity Monitoring: information about the Windows registry (values). |
wazuh-states-inventory-browser-extensions | Stores browser extensions/add-ons detected on the endpoint (Chromium-based browsers — Chrome/Edge/Brave/Opera —, Firefox, and Safari). |
wazuh-states-inventory-groups | Stores existing groups on the endpoint. |
wazuh-states-inventory-hardware | Basic information about the hardware components of the endpoint. |
wazuh-states-inventory-hotfixes | Contains information about the updates installed on Windows endpoints. This information is used by the vulnerability detector module to discover what vulnerabilities have been patched on Windows endpoints. |
wazuh-states-inventory-interfaces | Stores information (up and down interfaces) as well as packet transfer information about the interfaces on a monitored endpoint. |
wazuh-states-inventory-monitoring | Stores the connection status history of Wazuh agents (active, disconnected, pending, or never connected). The index is used by the Wazuh Dashboard to display agent status and historical trends. |
wazuh-states-inventory-networks | Stores the IPv4 and IPv6 addresses associated with each network interface, as referenced in the wazuh-states-inventory-interfaces index. |
wazuh-states-inventory-packages | Stores information about the currently installed software on the endpoint. |
wazuh-states-inventory-ports | Basic information about open network ports on the endpoint. |
wazuh-states-inventory-processes | Stores the detected running processes on the endpoints. |
wazuh-states-inventory-protocols | Stores routing configuration details for each network interface, as referenced in the wazuh-states-inventory-interfaces index. |
wazuh-states-inventory-services | Stores system services detected on the endpoint (Windows Services, Linux systemd units, and macOS launchd daemons/agents). |
wazuh-states-inventory-system | Operating system information, hostname and architecture. |
wazuh-states-inventory-users | Stores existing users on the endpoint. |
wazuh-states-vulnerabilities | Active vulnerabilities on the endpoint and its details. |
Install
The wazuh-indexer-setup plugin is part of the official Wazuh Indexer packages and is installed by default. However, to manually install the plugin, follow the next steps.
Note: You need to use the `wazuh-indexer` or `root` user to run these commands.
/usr/share/wazuh-indexer/bin/opensearch-plugin install file://[absolute-path-to-the-plugin-zip]
Once installed, restart the Wazuh Indexer service.
Uninstall
Note: You need to use the `wazuh-indexer` or `root` user to run these commands.
To list the installed plugins, run:
/usr/share/wazuh-indexer/bin/opensearch-plugin list
To remove a plugin, use its name as a parameter with the remove command:
/usr/share/wazuh-indexer/bin/opensearch-plugin remove <plugin-name>
/usr/share/wazuh-indexer/bin/opensearch-plugin remove wazuh-indexer-setup
Architecture
Design
The plugin implements the ClusterPlugin interface to hook into the node’s lifecycle by overriding the onNodeStarted() method.
The SetupPlugin class holds the list of indices to create. The logic for the creation of the index templates and the indices is encapsulated in the Index abstract class. Each subclass can override this logic if necessary. The SetupPlugin::onNodeStarted() method invokes the Index::initialize() method, effectively creating every index in the list.
By design, the plugin will overwrite any existing index template under the same name.
Retry mechanism
The plugin features a retry mechanism to handle transient faults. In case of a temporal failure (timeouts or similar) during the initialization of the indices, the task is retried after a given amount of time (backoff). If two consecutive faults occur during the initialization of the same index, the initialization process is halted, and the node is shut down. Proper logging is in place to notify administrators before the shutdown occurs.
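The policy can be sketched as follows; the `init_index` function, the simulated single fault, and the zero-second backoff are illustrative stand-ins, not the plugin's actual Java implementation:

```shell
# Sketch of the retry policy: one retry after a backoff; two consecutive
# faults halt initialization. init_index stands in for an index
# initialization task that fails once transiently.
backoff=0            # would be plugins.setup.backoff seconds; 0 for the demo
failures_left=1      # simulate one transient fault

init_index() {
  if [ "$failures_left" -gt 0 ]; then
    failures_left=$((failures_left - 1))
    return 1         # transient fault (e.g. timeout)
  fi
  return 0
}

faults=0
until init_index; do
  faults=$((faults + 1))
  if [ "$faults" -ge 2 ]; then
    echo "two consecutive faults: halting initialization"
    exit 1
  fi
  sleep "$backoff"
done
echo "initialized after $faults retry"   # prints: initialized after 1 retry
```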
The backoff time is configurable. Head to Configuration Files for more information.
Replica configuration
During the node initialization, the plugin checks for the presence of the cluster.default_number_of_replicas setting in the node configuration. If this setting is defined, the plugin automatically updates the cluster’s persistent settings with this value. This ensures that the default number of replicas is consistently applied across the cluster as defined in the configuration file.
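For example, to default every index to one replica, the node configuration would include (illustrative value):

```yaml
# /etc/wazuh-indexer/opensearch.yml
cluster.default_number_of_replicas: 1
```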
Class diagram
---
title: Wazuh Indexer setup plugin
---
classDiagram
%% Classes
class IndexInitializer
<<interface>> IndexInitializer
class Index
<<abstract>> Index
class IndexStateManagement
class WazuhIndex
<<abstract>> WazuhIndex
class StateIndex
class StreamIndex
%% Relations
IndexInitializer <|.. Index : implements
Index <|-- IndexStateManagement
Index <|-- WazuhIndex
WazuhIndex <|-- StateIndex
WazuhIndex <|-- StreamIndex
%% Schemas
class IndexInitializer {
+createIndex(String index) void
+createTemplate(String template) void
}
class Index {
Client client
ClusterService clusterService
IndexUtils utils
String index
String template
+Index(String index, String template)
+setClient(Client client) IndexInitializer
+setClusterService(ClusterService clusterService) IndexInitializer
+setIndexUtils(IndexUtils utils) IndexInitializer
+indexExists(String indexName) bool
+initialize() void
+createIndex(String index) void
+createTemplate(String template) void
%% initialize() podría reemplazarse por createIndex() y createTemplate()
}
class IndexStateManagement {
-List~String~ policies
+initialize() void
-createPolicies() void
-indexPolicy(String policy) void
}
class WazuhIndex {
}
class StreamIndex {
-String alias
+StreamIndex(String index, String template, String alias)
+createIndex(String index)
}
class StateIndex {
}
Sequence diagram
Note: Calls to `Client` are asynchronous.
```mermaid
sequenceDiagram
    actor Node
    participant SetupPlugin
    participant Index
    participant Client
    Node->>SetupPlugin: plugin.onNodeStarted()
    activate SetupPlugin
    Note over Node,SetupPlugin: Invoked on Node::start()
    activate Index
    loop i..n indices
        SetupPlugin->>Index: i.initialize()
        Index-)Client: createTemplate(i)
        Client--)Index: response
        Index-)Client: indexExists(i)
        Client--)Index: response
        alt index i does not exist
            Index-)Client: createIndex(i)
            Client--)Index: response
        end
    end
    deactivate Index
    deactivate SetupPlugin
```
Wazuh Common Schema
Refer to the docs for complete definitions of the indices. The indices inherit the settings and mappings defined in the index templates.
Event stream templates
All event categories share a single base template: templates/streams/events.json. The StreamIndex class dynamically generates one index template per category at deployment time by overriding index_patterns and rollover_alias from the base template. Specialized streams (raw, unclassified, active-responses) use their own dedicated template files.
The WCS field definitions are organized under wcs/stateless/events/:
```
wcs/stateless/events/
├── main/         # Shared fields for all event categories
├── raw/          # Fields for raw (unprocessed) events
└── unclassified/ # Fields for uncategorized events
```
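The dynamic template generation described above can be sketched as follows (illustrative Python; the base template content and the rollover-alias setting name are simplified assumptions modeled on the OpenSearch index-template format):

```python
import copy

# Hypothetical stand-in for templates/streams/events.json (assumption).
BASE_TEMPLATE = {
    "index_patterns": [],
    "template": {
        "settings": {"plugins.index_state_management.rollover_alias": ""}
    },
}

def template_for_category(key: str) -> dict:
    """Derive one index template per category by overriding
    index_patterns and rollover_alias on the shared base."""
    t = copy.deepcopy(BASE_TEMPLATE)  # never mutate the shared base
    alias = f"wazuh-events-v5-{key}"
    t["index_patterns"] = [f"{alias}*"]
    t["template"]["settings"]["plugins.index_state_management.rollover_alias"] = alias
    return t
```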
JavaDoc
The plugin is documented using JavaDoc. You can compile the documentation using the Gradle task for that purpose. The generated JavaDoc is in the build/docs folder.
```bash
./gradlew javadoc
```
API Reference
The Setup plugin exposes a REST API under /_plugins/_setup/. All endpoints require authentication.
Settings
Update Settings
Persists configuration settings to the .wazuh-settings index. Currently supports the engine.index_raw_events boolean flag, which controls whether the Engine indexes raw events into the wazuh-events-raw-v5 data stream.
Request
- Method: `PUT`
- Path: `/_plugins/_setup/settings`
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| `engine` | Object | Yes | Engine settings object |
| `engine.index_raw_events` | Boolean | Yes | Whether the Engine indexes raw events into the `wazuh-events-raw-v5` data stream |
Example Request
```bash
curl -sk -u admin:admin -X PUT \
  "https://192.168.56.6:9200/_plugins/_setup/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "engine": {
      "index_raw_events": true
    }
  }'
```
Example Response (success)
```json
{
  "message": "Settings updated successfully.",
  "status": 200
}
```
Example Response (missing field)
```json
{
  "message": "Missing required field: 'engine.index_raw_events'.",
  "status": 400
}
```
Example Response (invalid type)
```json
{
  "message": "Field 'engine.index_raw_events' must be of type boolean.",
  "status": 400
}
```
Status Codes
| Code | Description |
|---|---|
| 200 | Settings updated successfully |
| 400 | Invalid request body, missing required fields, or wrong field type |
| 500 | Internal server error (e.g., failed to persist settings to the index) |
Documentation Maintenance: modifications to the REST API must be reflected in both `openapi.yml` and this file.
Wazuh Common Schema
The Wazuh Common Schema (WCS) is a standardized structure for organizing and categorizing security event data collected by Wazuh. It is designed to facilitate data analysis, correlation, and reporting across different data sources and types.
Categorization
The Wazuh Common Schema categorizes events into several key areas to streamline data management and analysis.
All event categories share a single base index template (events.json). At deployment time, the setup plugin dynamically generates one index template per category from this shared base, setting the appropriate index_patterns and rollover_alias for each. This means only one template file exists in the repository, but each category gets its own index template in the cluster.
The index mappings and settings for subcategories take precedence over those from the main category. In OpenSearch, index templates are applied in order of their “priority” value: templates with a lower priority are applied first, and those with a higher priority are applied afterward, allowing them to override previous settings. This means the index template for the main category is applied first (priority=1), and then the subcategory template (priority=10) is applied on top of it, so subcategory-specific settings override the main category defaults.
To list all deployed event templates:
```
GET /_index_template/wazuh-events-*
```
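The priority behavior can be illustrated with a minimal sketch (Python; the setting names and values are made up for illustration and are not the actual template contents):

```python
def effective_settings(templates):
    """Merge settings from matching templates: lower-priority templates
    are applied first, higher-priority ones override them."""
    merged = {}
    for t in sorted(templates, key=lambda t: t["priority"]):
        merged.update(t["settings"])  # applied last == highest priority wins
    return merged

# Hypothetical main-category and subcategory templates (assumption).
main_category = {"priority": 1, "settings": {"number_of_replicas": 0, "refresh_interval": "5s"}}
subcategory = {"priority": 10, "settings": {"refresh_interval": "1s"}}
```

With these inputs, `effective_settings([main_category, subcategory])` keeps `number_of_replicas` from the main category but takes `refresh_interval` from the subcategory.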
Categories
The Key column is the canonical identifier used throughout the system — in data stream names, integrations, rules, decoders, and the Security Analytics plugin. Use it exactly as shown when creating or referencing any of these resources.
| Name | Key | Example log types |
|---|---|---|
| Access Management | access-management | ad_ldap, apache_access, okta |
| Applications | applications | github, gworkspace, m365 |
| Cloud Services | cloud-services | azure, cloudtrail, s3 |
| Network Activity | network-activity | dns, network, vpcflow |
| Security | security | waf |
| System Activity | system-activity | linux, windows, others_macos |
| Other | other | others_application, others_apt, others_web |
| Unclassified | unclassified | Events that could not be categorized |
Note: `unclassified` is a catch-all for events that could not be assigned to any other category. It is managed automatically by the pipeline and should not be used as a target category when creating new integrations or rules.
Data Streams
Each category maps to a dedicated data stream following the pattern `wazuh-events-v5-{key}`:
Events
```
wazuh-events-v5-access-management
wazuh-events-v5-applications
wazuh-events-v5-cloud-services
wazuh-events-v5-network-activity
wazuh-events-v5-other
wazuh-events-v5-security
wazuh-events-v5-system-activity
wazuh-events-v5-unclassified
```
Findings
```
wazuh-findings-v5-access-management
wazuh-findings-v5-applications
wazuh-findings-v5-cloud-services
wazuh-findings-v5-network-activity
wazuh-findings-v5-other
wazuh-findings-v5-security
wazuh-findings-v5-system-activity
wazuh-findings-v5-unclassified
```
Check Stream indices for details.
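Because every name follows the same pattern, the lists above can be generated mechanically from the category keys (illustrative sketch; the key list is taken from the Categories table):

```python
# Canonical category keys from the Categories table.
CATEGORY_KEYS = [
    "access-management", "applications", "cloud-services",
    "network-activity", "other", "security",
    "system-activity", "unclassified",
]

def stream_names(kind: str) -> list:
    """Build data stream names for a given kind ('events' or 'findings')."""
    return [f"wazuh-{kind}-v5-{key}" for key in CATEGORY_KEYS]
```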
Content Manager
The Content Manager is a Wazuh Indexer plugin responsible for managing detection content — rules, decoders, integrations, key-value databases (KVDBs), and Indicators of Compromise (IoCs). It synchronizes content from the Wazuh Cyber Threat Intelligence (CTI) API, provides a REST API for user-generated content, and communicates with the Wazuh Engine to activate changes.
It also includes the Update check system, which communicates with the CTI Update check API once per day to let Wazuh determine whether a newer Wazuh version is available for the deployment.
Update check components are:
- Update check API (CTI)
- Update check system (Wazuh Indexer)
- Update check UI (Wazuh Dashboard)
CTI Synchronization
The Content Manager periodically synchronizes content from the Wazuh CTI API. Three content contexts are managed:
- Catalog context: Contains detection rules, decoders, integrations, KVDBs, and the routing policy.
- IoC context: Contains Indicators of Compromise for threat detection.
- CVE context: Contains Common Vulnerabilities and Exposures data, stored in `wazuh-threatintel-vulnerabilities`. CVE documents do not have a space and are not subject to removals from CTI.
Each context has an associated consumer that tracks synchronization state (current offset, snapshot URL) in the `.wazuh-cti-consumers` index.
Snapshot Initialization
On first run (when the local offset is 0), the Content Manager performs a full snapshot initialization:
- Fetches the latest snapshot URL from the CTI API.
- Downloads and extracts the ZIP archive.
- Indexes the content into the appropriate system indices using bulk operations.
- Records the snapshot offset in `.wazuh-cti-consumers`.
Incremental Updates
When the local offset is behind the remote offset, the Content Manager fetches changes in batches (up to 1000 per request) and applies creation, update, and removal operations to the content indices. The local offset is updated after each successful batch.
If the local offset is ahead of the remote offset (e.g., consumer was changed), or if the update fails, the Content Manager resets to the latest snapshot to realign with the CTI API.
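The decision logic described above can be summarized in a small sketch (illustrative; the function name and return values are assumptions, not the plugin's actual API):

```python
def sync_action(local_offset: int, remote_offset: int) -> str:
    """Decide how to synchronize a consumer based on offset comparison."""
    if local_offset == 0:
        return "snapshot"      # first run: full snapshot initialization
    if local_offset < remote_offset:
        return "incremental"   # fetch change batches (up to 1000 per request)
    if local_offset > remote_offset:
        return "snapshot"      # local is ahead (e.g., consumer changed): reset
    return "noop"              # already up to date
```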
Sync Schedule
By default, synchronization runs:
- On plugin startup (`plugins.content_manager.catalog.update_on_start: true`)
- Periodically every 60 minutes (`plugins.content_manager.catalog.sync_interval: 60`)
The periodic job is registered with the OpenSearch Job Scheduler and tracked in the .wazuh-content-manager-jobs index.
Update Check Service
When plugins.content_manager.telemetry.enabled is true (default), the Content Manager schedules a daily update check heartbeat job.
- Frequency: every 24 hours
- Scheduler document ID: `wazuh-telemetry-ping-job`
- Endpoint: CTI `/ping`
- Data sent: cluster UUID and deployed Wazuh version (through headers)
This information is used to detect update availability and surface notifications through the Wazuh Dashboard.
User-Generated Content
The Content Manager provides a full CUD (create, update, delete) REST API for creating custom detection content:
- Rules: Custom detection rules associated with an integration.
- Decoders: Custom log decoders associated with an integration.
- Integrations: Logical groupings of related rules, decoders, and KVDBs.
- KVDBs: Key-value databases used by rules and decoders for lookups.
User-generated content is stored in the draft space and is separate from the CTI-managed standard space. This separation ensures that user customizations never conflict with upstream CTI content.
See the API Reference for endpoint details.
Content Spaces
The Content Manager organizes content into spaces:
| Space | Description |
|---|---|
| Standard | Read-only content synced from the CTI API. This is the baseline detection content. |
| Draft | Writable space for user-generated content. CUD operations target this space. |
| Test | Used for logtest operations and content validation before final promotion. |
| Custom | The final space for user content. Content promoted to this space is used by the Wazuh Engine (via the manager package) to actively decode and process logs. |
Content flows through spaces in a promotion chain: Draft → Test → Custom. The Standard space exists independently as the upstream CTI baseline. Each space maintains its own copies of rules, decoders, integrations, KVDBs, filters, and the routing policy within the system indices.
Policy Management
The routing policy defines how the Wazuh Engine processes incoming events — which integrations are active and in what order. The Content Manager provides an API to update the draft policy:
```bash
curl -sk -u admin:admin -X PUT \
  "https://192.168.56.6:9200/_plugins/_content_manager/policy" \
  -H 'Content-Type: application/json' \
  -d '{"resource": { ... }}'
```
Policy changes are applied to the draft space and take effect after promotion.
Promotion Workflow
The promotion workflow moves content through the space chain (Draft → Test → Custom):
- Preview changes: `GET /_plugins/_content_manager/promote?space=draft` returns a diff of what will change (additions, updates, and deletions for each content type).
- Execute promotion: `POST /_plugins/_content_manager/promote` promotes the content from the source space to the next space in the chain.
The promotion chain works as follows:
- Draft → Test: Content is promoted for validation and logtest operations.
- Test → Custom: Once validated, content is promoted to the Custom space where it becomes active — the Wazuh Engine (via the manager package) uses this space to decode and process logs in production.
During promotion, the Content Manager:
- Sends updated content to the Engine
- Validates the configuration
- Triggers a configuration reload
- Updates the target space to reflect the promoted content
Engine Communication
The Content Manager communicates with the Wazuh Engine through a Unix domain socket located at:
```
/usr/share/wazuh-indexer/engine/sockets/engine-api.sock
```
This socket is used for:
- Logtest: Sends a log event to the Engine for analysis and returns the decoded/matched result.
- Content validation: Validates rules and decoders before promotion.
- Configuration reload: Signals the Engine to reload its configuration after promotion.
System Indices
The Content Manager uses the following system indices:
| Index | Description |
|---|---|
| `.wazuh-cti-consumers` | Synchronization state for each CTI context/consumer pair (offsets, snapshot URLs) |
| `wazuh-threatintel-rules` | Detection rules (both CTI-synced and user-generated, across all spaces) |
| `wazuh-threatintel-decoders` | Log decoders |
| `wazuh-threatintel-integrations` | Integration definitions |
| `wazuh-threatintel-kvdbs` | Key-value databases |
| `wazuh-threatintel-policies` | Routing policies |
| `wazuh-threatintel-enrichments` | Indicators of Compromise |
| `wazuh-threatintel-vulnerabilities` | Common Vulnerabilities and Exposures (CVE data from CTI, no spaces, offset-tracked) |
| `wazuh-threatintel-filters` | Engine filters (routing filters for event classification) |
| `.wazuh-content-manager-jobs` | Job Scheduler metadata for periodic sync and update check jobs |
CTI Subscription
To synchronize content from the CTI API, the Wazuh Indexer requires a valid subscription token. The subscription is managed through the REST API:
- Register a subscription with a device code obtained from the Wazuh CTI Console.
- The Content Manager stores the token and uses it for all CTI API requests.
- Without a valid subscription, sync operations return a `Token not found` error.
See Subscription Management in the API Reference.
Architecture
The Content Manager plugin operates within the Wazuh Indexer environment. It is composed of several components that handle REST API requests, background job scheduling, content synchronization, user-generated content management, and Engine communication.
Components
REST Layer
Exposes HTTP endpoints under /_plugins/_content_manager/ for:
- Subscription management (register, get, delete CTI tokens)
- Manual content sync trigger
- CUD operations on rules, decoders, integrations, and KVDBs
- Policy management
- Promotion preview and execution
- Logtest execution
- Content validation and promotion
CTI Console
Manages authentication with the Wazuh CTI API. Stores subscription tokens used for all CTI requests. Without a valid token, sync operations are rejected.
Job Scheduler (CatalogSyncJob)
Implements the OpenSearch JobSchedulerExtension interface. Registers a periodic job (wazuh-catalog-sync-job) that triggers content synchronization at a configurable interval (default: 60 minutes). The job metadata is stored in .wazuh-content-manager-jobs.
Update Check Service (TelemetryPingJob)
Implements a daily heartbeat job (wazuh-telemetry-ping-job) that calls the CTI Update check API endpoint (/ping).
- Enabled by default through `plugins.content_manager.telemetry.enabled`.
- Can be toggled at runtime because it is a dynamic setting.
- Sends deployment metadata required for update checks (cluster UUID and deployed Wazuh version).
- Job metadata is stored in `.wazuh-content-manager-jobs`.
Consumer Service
Orchestrates synchronization for each context/consumer pair. Compares local offsets (from `.wazuh-cti-consumers`) with remote offsets from the CTI API, then delegates to either the Snapshot Service or Update Service. Tracks the sync lifecycle through the `status` field in `.wazuh-cti-consumers`: set to `updating` at the start of `synchronize()` and back to `idle` only once all post-sync work (hash recalculation, Security Analytics sync, Engine notification) is complete.
Snapshot Service
Handles initial content loading. Downloads a ZIP snapshot from the CTI API, extracts it, and bulk-indexes content into the appropriate system indices. Performs data enrichment (e.g., converting JSON payloads to YAML for decoders).
Update Service
Handles incremental updates. Fetches change batches from the CTI API based on offset differences and applies create, update, and delete operations to content indices.
Security Analytics Service
Interfaces with the OpenSearch Security Analytics plugin. Creates, updates, and deletes Security Analytics rules, integrations, and detectors to keep them in sync with CTI content.
Document ID model: SAP documents use their own auto-generated UUIDs as primary IDs, independent of the CTI document UUIDs. Each SAP document stores:
- `document.id`: the UUID of the original CTI document in the Content Manager.
- `source`: the space the document belongs to, with the first letter capitalized (e.g., "Draft", "Test", "Custom", or "Sigma" for standard).
This design allows the same CTI resource to exist across multiple spaces without ID collisions. Association and lookup between CTI and SAP documents is performed by querying document.id + source.
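A lookup under this model can be sketched as follows (illustrative Python over plain dicts; in practice the association is an OpenSearch query on `document.id` + `source`):

```python
def find_sap_document(sap_docs, cti_uuid: str, space: str):
    """Locate the SAP document for a CTI resource in a given space.

    The same CTI UUID may appear once per space, so both fields
    are needed to identify a unique SAP document.
    """
    # Standard-space content is labeled "Sigma" in SAP (per the doc above).
    source = "Sigma" if space == "standard" else space.capitalize()
    for doc in sap_docs:
        if doc["document.id"] == cti_uuid and doc["source"] == source:
            return doc
    return None
```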
Note: SAP enforces a maximum of 100 rules per detector. If an integration has more than 100 enabled rules, the detector creation or update request will be rejected. See Security Analytics — Detector constraints for details.
Space Service
Manages the four content spaces (standard, draft, test, custom). Routes CUD operations to the correct space partitions within system indices. Handles promotion by computing diffs between spaces in the promotion chain (Draft → Test → Custom).
Engine Client
Communicates with the Wazuh Engine via Unix domain socket at /usr/share/wazuh-indexer/engine/sockets/engine-api.sock. Used for logtest execution, content validation, and configuration reload.
Data Flows
CTI Sync (Snapshot)
```
Job Scheduler triggers
→ Consumer Service checks .wazuh-cti-consumers (offset = 0)
→ Snapshot Service downloads ZIP from CTI API
→ Extracts and bulk-indexes into wazuh-threatintel-rules, wazuh-threatintel-decoders, etc.
→ Updates .wazuh-cti-consumers with new offset
→ Security Analytics Service creates detectors (max 100 rules per detector)
```
CTI Sync (Incremental)
```
Job Scheduler triggers
→ Consumer Service checks .wazuh-cti-consumers (local_offset < remote_offset)
→ Update Service fetches change batches from CTI API
→ Applies CREATE/UPDATE/DELETE to content indices
→ Updates .wazuh-cti-consumers offset
→ Security Analytics Service syncs changes
```
Update Check Heartbeat
```
Job Scheduler triggers (every 24h)
→ TelemetryPingJob checks plugins.content_manager.telemetry.enabled
→ Reads cluster UUID and current Wazuh version
→ TelemetryClient sends GET /ping to CTI Update check API
→ Wazuh Dashboard can surface update availability to users
```
User-Generated Content (CUD)
```
REST request (POST/PUT/DELETE)
→ Space Service routes to draft space
→ Writes to wazuh-threatintel-rules / wazuh-threatintel-decoders / wazuh-threatintel-integrations / wazuh-threatintel-kvdbs
→ Returns created/updated/deleted resource
```
Standard Policy Engine Loading
The local Wazuh Engine must always reflect the latest version of the standard space policy. Whenever the standard space space.hash changes, the full policy — including all referenced integrations, decoders, kvdbs, filters, and rules — is built and sent to the Engine via EngineService.promote().
The space.hash is an aggregate SHA-256 computed from the individual hashes of the policy and every resource it references. Any change to the policy will trigger a reload. These changes include:
- New or updated integrations, decoders, rules, kvdbs, or filters (via CTI sync)
- Changes to policy settings (`enabled`, `index_unclassified_events`, `index_discarded_events`)
- Changes to the enrichment types list
- Reordering of the filters list
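A sketch of such an aggregate hash (illustrative; the exact ordering and separator conventions of the real `space.hash` computation are assumptions — the point is that any change to an individual hash, or to the order of resources, changes the aggregate):

```python
import hashlib

def aggregate_hash(policy_hash: str, resource_hashes: list) -> str:
    """Fold the policy hash and every referenced resource hash into
    one SHA-256 digest. Order-sensitive: reordering resources (e.g.,
    the filters list) produces a different aggregate."""
    h = hashlib.sha256()
    h.update(policy_hash.encode())
    for rh in resource_hashes:
        h.update(rh.encode())
    return h.hexdigest()
```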
The engine load is best-effort: if the Engine is unreachable, the error is logged but the operation (sync or REST update) still succeeds.
Promotion
```
GET /promote?space=draft
→ Space Service computes diff (draft vs test, or test vs custom)
→ Returns changes preview (adds, updates, deletes per content type)

POST /promote
→ Capture pre-promotion snapshots of target-space resources
→ Engine validates configuration (draft → test only)
→ Consolidate changes to CM indices (tracked for rollback)
→ Apply adds/updates: policy, integrations, kvdbs, decoders, filters, rules
→ Apply deletes: integrations, kvdbs, decoders, filters, rules
→ Sync integrations and rules to SAP:
  → ADDs use POST (new SAP document)
  → UPDATEs use PUT (existing SAP document)
  → Delete removed integrations/rules from SAP
```
Rollback on Failure
If any Content Manager index mutation fails during the consolidation phase, the promotion endpoint automatically performs a LIFO (Last-In, First-Out) rollback to restore the system to its pre-promotion state.
Pre-Promotion Snapshots
Before any writes, the system captures:
- Old versions (`captureOldVersions`): For each resource being added or updated, the current target-space version is fetched and stored. If the resource does not exist in the target space, `null` is stored.
- Delete snapshots (`captureDeleteSnapshots`): For each resource being deleted, the full document is fetched from the source space and stored.
CM Index Rollback
Each successful index mutation is recorded as a `RollbackStep(kind, resourceType)`. On failure, steps are replayed in strict reverse (LIFO) order:
| Forward operation | Old version | Rollback action |
|---|---|---|
| ADD (apply) | null | Delete the newly created document |
| UPDATE (apply) | non-null | Restore the previous version |
| DELETE | snapshot | Re-index the snapshotted document |
Individual rollback step failures are logged and skipped so remaining steps can proceed.
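The replay logic can be sketched as follows (illustrative Python over an in-memory index; `RollbackStep` is modeled here as a plain `(kind, doc_id, old_version)` tuple, not the plugin's actual class):

```python
def rollback(steps, index):
    """Replay recorded steps in strict reverse (LIFO) order.

    Individual step failures are swallowed (the real plugin logs them)
    so remaining steps can proceed.
    """
    for kind, doc_id, old in reversed(steps):
        try:
            if kind == "ADD":        # old is None: delete the newly created doc
                index.pop(doc_id, None)
            elif kind == "UPDATE":   # old is the previous version: restore it
                index[doc_id] = old
            elif kind == "DELETE":   # old is the full snapshot: re-index it
                index[doc_id] = old
        except Exception:
            pass  # logged and skipped in the real implementation
```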
SAP Reconciliation
After CM rollback completes, a best-effort SAP reconciliation runs in dependency order:
- Revert applied rules — ADDs are deleted from SAP; UPDATEs are restored to old version.
- Revert applied integrations — Same as above.
- Restore deleted integrations — Re-created from pre-deletion snapshots via POST.
- Restore deleted rules — Same as above.
SAP reconciliation failures are logged as warnings but do not cause the overall rollback to fail, since SAP sync is considered best-effort.
```
Consolidation fails at step N
→ LIFO rollback: undo step N-1, N-2, ..., 1
  → APPLY + null old version → delete from target index
  → APPLY + old version → restore old version to target index
  → DELETE → re-index snapshot to target index
→ SAP reconciliation (best-effort):
  → Delete rules that were added to SAP
  → Restore rules that were updated in SAP
  → Restore integrations that were added/updated in SAP
  → Re-create integrations/rules that were deleted from SAP
→ Return 500 with error message
```
Index Structure
Each content index (e.g., wazuh-threatintel-rules) stores documents from all three spaces. Documents are differentiated by internal metadata fields that indicate their space membership. The document _id is a UUID assigned at creation time.
Example document structure in wazuh-threatintel-rules:
```json
{
  "_index": "wazuh-threatintel-rules",
  "_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "_source": {
    "title": "SSH brute force attempt",
    "integration": "openssh",
    "space.name": "draft",
    ...
  }
}
```
The .wazuh-cti-consumers index stores one document per context/consumer pair:
```json
{
  "_index": ".wazuh-cti-consumers",
  "_id": "t1-ruleset-5_public-ruleset-5",
  "_source": {
    "name": "public-ruleset-5",
    "context": "t1-ruleset-5",
    "status": "idle",
    "local_offset": 3932,
    "remote_offset": 3932,
    "snapshot_link": "https://api.pre.cloud.wazuh.com/store/contexts/t1-ruleset-5/consumers/public-ruleset-5/168_1776070234.zip"
  }
}
```
The status field reflects the consumer’s synchronization lifecycle:
| Value | Meaning |
|---|---|
| `idle` | Sync is complete; content indices are up-to-date and safe to read. |
| `updating` | Sync is in progress; content may be partially written or inconsistent. |
The status is set to `updating` at the very start of a sync cycle and only transitions back to `idle` after all post-sync work finishes, including hash recalculation, Security Analytics Plugin synchronization, and Engine IoC notification. If a sync fails mid-cycle, the status remains `updating` as an observable failure signal.
Configuration
The Content Manager plugin is configured through settings in opensearch.yml. All settings use the plugins.content_manager prefix.
Settings Reference
| Setting | Type | Default | Description |
|---|---|---|---|
| `plugins.content_manager.cti.api` | String | `https://api.pre.cloud.wazuh.com/api/v1` | Base URL for the Wazuh CTI API |
| `plugins.content_manager.catalog.sync_interval` | Integer | 60 | Sync interval in minutes. Valid range: 1–1440 |
| `plugins.content_manager.max_items_per_bulk` | Integer | 999 | Maximum documents per bulk indexing request. Valid range: 10–999 |
| `plugins.content_manager.max_concurrent_bulks` | Integer | 5 | Maximum concurrent bulk operations. Valid range: 1–5 |
| `plugins.content_manager.client.timeout` | Long | 10 | HTTP client timeout in seconds for CTI API requests. Valid range: 10–50 |
| `plugins.content_manager.catalog.update_on_start` | Boolean | true | Trigger content sync when the plugin starts |
| `plugins.content_manager.catalog.update_on_schedule` | Boolean | true | Enable the periodic sync job |
| `plugins.content_manager.catalog.content.context` | String | `t1-ruleset-5` | CTI catalog content context identifier |
| `plugins.content_manager.catalog.content.consumer` | String | `public-ruleset-5` | CTI catalog content consumer identifier |
| `plugins.content_manager.ioc.content.context` | String | `t1-iocs-5` | IoC content context identifier |
| `plugins.content_manager.ioc.content.consumer` | String | `public-iocs-5` | IoC content consumer identifier |
| `plugins.content_manager.cve.content.context` | String | `t1-vulnerabilities-5` | CVE content context identifier |
| `plugins.content_manager.cve.content.consumer` | String | `public-vulnerabilities-5` | CVE content consumer identifier |
| `plugins.content_manager.catalog.create_detectors` | Boolean | true | Automatically create Security Analytics detectors from CTI content |
| `plugins.content_manager.telemetry.enabled` | Boolean | true | Enable or disable the daily Update check service ping. This setting is dynamic. |
Configuration Examples
Default Configuration
No configuration is required for default behavior. The Content Manager will sync content every 60 minutes using the pre-configured CTI contexts.
Custom Sync Interval
To sync content every 30 minutes:
```yaml
# opensearch.yml
plugins.content_manager.catalog.sync_interval: 30
```
Disable Automatic Sync
To disable all automatic synchronization and only sync manually via the API:
```yaml
# opensearch.yml
plugins.content_manager.catalog.update_on_start: false
plugins.content_manager.catalog.update_on_schedule: false
```
Content can still be synced on demand using:
```bash
curl -sk -u admin:admin -X POST \
  "https://192.168.56.6:9200/_plugins/_content_manager/update"
```
Custom CTI API Endpoint
To point to a different CTI API (e.g., production):
```yaml
# opensearch.yml
plugins.content_manager.cti.api: "https://cti.wazuh.com/api/v1"
```
Tune Bulk Operations
For environments with limited resources, reduce the bulk operation concurrency:
```yaml
# opensearch.yml
plugins.content_manager.max_items_per_bulk: 10
plugins.content_manager.max_concurrent_bulks: 2
plugins.content_manager.client.timeout: 30
```
Disable Security Analytics Detector Creation
If you do not use the OpenSearch Security Analytics plugin:
```yaml
# opensearch.yml
plugins.content_manager.catalog.create_detectors: false
```
Update check service behavior
The update check service is enabled by default and runs once per day.
- It is implemented by a scheduled job (`wazuh-telemetry-ping-job`) in `.wazuh-content-manager-jobs`.
- It sends a request to the CTI Update check API endpoint (`/ping`).
- The request includes:
  - Deployment identifier (`wazuh-uid`: cluster UUID)
  - Running version (`wazuh-tag`: `v<version>`)
  - User agent (`Wazuh Indexer <version>`)
This data allows Wazuh to determine if a newer version is available and notify users in the update check UI.
The service only sends deployment identification/version metadata required for update checks. It does not send rules, events, or log payloads.
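The request shape can be sketched with the Python standard library (illustrative only; the base URL is a placeholder, and the header names follow the list above):

```python
import urllib.request

def build_ping_request(base_url: str, cluster_uuid: str, version: str):
    """Construct (but do not send) the daily update-check request."""
    return urllib.request.Request(
        f"{base_url}/ping",
        headers={
            "wazuh-uid": cluster_uuid,            # deployment identifier
            "wazuh-tag": f"v{version}",           # running version
            "User-Agent": f"Wazuh Indexer {version}",
        },
        method="GET",
    )
```

Note that only these identification headers are attached; no event or rule payloads are included in the request body.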
Enable/Disable Update check service dynamically
The update check service can be enabled or disabled at runtime without restarting the node using the Cluster Settings API:
```bash
curl -sk -u admin:admin -X PUT "https://192.168.56.6:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "plugins.content_manager.telemetry.enabled": false
  }
}'
```
Notes
- Changes to `opensearch.yml` require a restart of the Wazuh Indexer to take effect, except for dynamic settings (like `plugins.content_manager.telemetry.enabled`), which can be updated at runtime via the OpenSearch API.
- The `context` and `consumer` settings should only be changed if instructed by Wazuh support or documentation, as they must match valid CTI API contexts.
- The sync interval is enforced by the OpenSearch Job Scheduler. The actual sync timing may vary slightly depending on cluster load.
- The update check service runs with a fixed interval of 1 day when enabled.
API Reference
The Content Manager plugin exposes a REST API under /_plugins/_content_manager/. All endpoints require authentication.
Subscription Management
Get CTI Subscription
Retrieves the current CTI subscription token.
Request
- Method: `GET`
- Path: `/_plugins/_content_manager/subscription`
Example Request
```bash
curl -sk -u admin:admin \
  "https://192.168.56.6:9200/_plugins/_content_manager/subscription"
```
Example Response (subscription exists)
```json
{
  "access_token": "AYjcyMzY3ZDhiNmJkNTY",
  "token_type": "Bearer"
}
```
Example Response (no subscription)
```json
{
  "message": "Token not found",
  "status": 404
}
```
Status Codes
| Code | Description |
|---|---|
| 200 | Subscription token returned |
| 404 | No subscription registered |
Register CTI Subscription
Registers a new CTI subscription using a device code obtained from the Wazuh CTI Console.
Request
- Method: `POST`
- Path: `/_plugins/_content_manager/subscription`
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| `device_code` | String | Yes | Device authorization code from CTI Console |
| `client_id` | String | Yes | OAuth client identifier |
| `expires_in` | Integer | Yes | Token expiration time in seconds |
| `interval` | Integer | Yes | Polling interval in seconds |
Example Request
```bash
curl -sk -u admin:admin -X POST \
  "https://192.168.56.6:9200/_plugins/_content_manager/subscription" \
  -H 'Content-Type: application/json' \
  -d '{
    "device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS",
    "client_id": "a17c21ed",
    "expires_in": 1800,
    "interval": 5
  }'
```
Example Response
```json
{
  "message": "Subscription created successfully",
  "status": 201
}
```
Status Codes
| Code | Description |
|---|---|
| 201 | Subscription registered successfully |
| 400 | Missing required fields (device_code, client_id, expires_in, interval) |
| 401 | Unauthorized — endpoint accessed by unexpected user |
| 500 | Internal error |
Delete CTI Subscription
Removes the current CTI subscription token and revokes all associated credentials.
Request
- Method: `DELETE`
- Path: `/_plugins/_content_manager/subscription`
Example Request
```bash
curl -sk -u admin:admin -X DELETE \
  "https://192.168.56.6:9200/_plugins/_content_manager/subscription"
```
Example Response (success)
```json
{
  "message": "Subscription deleted successfully",
  "status": 200
}
```
Example Response (no subscription)
```json
{
  "message": "Token not found",
  "status": 404
}
```
Status Codes
| Code | Description |
|---|---|
| 200 | Subscription deleted |
| 404 | No subscription to delete |
Content Updates
Trigger Manual Sync
Triggers an immediate content synchronization with the CTI API. Requires a valid subscription.
Request
- Method: `POST`
- Path: `/_plugins/_content_manager/update`
Example Request
curl -sk -u admin:admin -X POST \
"https://192.168.56.6:9200/_plugins/_content_manager/update"
Example Response (success)
{
"message": "Content update triggered successfully",
"status": 200
}
Example Response (no subscription)
{
"message": "Token not found. Please create a subscription before attempting to update.",
"status": 404
}
Status Codes
| Code | Description |
|---|---|
| 200 | Sync triggered successfully |
| 404 | No subscription token found |
| 409 | A content update is already in progress |
| 429 | Rate limit exceeded |
| 500 | Internal error during sync |
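Because the endpoint can answer 409 (a sync is already running) or 429 (rate limited), clients typically retry with backoff rather than fail immediately. An illustrative sketch, where `post_update` stands in for the HTTP call above and the backoff values are arbitrary:

```python
import time

RETRYABLE = {409, 429}  # update already in progress / rate limit exceeded

def trigger_update(post_update, retries=3, backoff=5, sleep=time.sleep):
    """Trigger a manual sync, backing off linearly on 409/429.
    Returns the final HTTP status code observed."""
    status = post_update()
    for attempt in range(retries):
        if status not in RETRYABLE:
            break
        sleep(backoff * (attempt + 1))  # wait longer on each retry
        status = post_update()
    return status
```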
Logtest
Execute Logtest
Sends a log event to the Wazuh Engine for analysis. If an integration ID is provided, the integration's Sigma rules are also evaluated against the normalized event via the Security Analytics Plugin (SAP). If `integration` is omitted, only the normalization step is performed and the `detection` section is returned with `status: "skipped"`.
Note: A testing policy must be loaded in the Engine for logtest to execute successfully. Load a policy via the policy promotion endpoint. When an integration is specified, it must exist in the specified space.
Request
- Method: `POST`
- Path: `/_plugins/_content_manager/logtest`
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
integration | String | No | ID of the integration to test against. If omitted, only normalization is performed. |
space | String | Yes | "test" or "standard" |
queue | Integer | Yes | Queue number for logtest execution |
location | String | Yes | Log file path or logical source location |
event | String | Yes | Raw log event to test |
metadata | Object | No | Optional metadata passed to the Engine |
trace_level | String | No | Trace verbosity: NONE, ASSET_ONLY, or ALL |
Example Request
curl -sk -u admin:admin -X POST \
"https://192.168.56.6:9200/_plugins/_content_manager/logtest" \
-H 'Content-Type: application/json' \
-d '{
"integration": "a0b448c8-3d3c-47d4-b7b9-cbc3c175f509",
"space": "test",
"queue": 1,
"location": "/var/log/cassandra/system.log",
"event": "INFO [main] 2026-03-31 10:00:00 StorageService.java:123 - Node is ready to serve",
"trace_level": "NONE"
}'
Example Response (success with rule match)
{
"status": 200,
"message": {
"normalization": {
"output": {
"event": {
"category": ["database"],
"kind": "event",
"original": "INFO [main] 2026-03-31 10:00:00 StorageService.java:123 - Node is ready to serve"
},
"wazuh": {
"integration": {
"name": "test-integ",
"category": "other",
"decoders": ["decoder/cassandra-default/0"]
}
},
"message": "Node is ready to serve"
},
"asset_traces": [],
"validation": {
"valid": true,
"errors": []
}
},
"detection": {
"status": "success",
"rules_evaluated": 2,
"rules_matched": 1,
"matches": [
{
"rule": {
"id": "85bba177-a2e9-4468-9d59-26f4798906c9",
"title": "Cassandra Database Event Detected",
"level": "low",
"tags": []
},
"matched_conditions": [
"event.category matched 'database'",
"event.kind matched 'event'"
]
}
]
}
}
}
Example Response (Engine error, SAP skipped)
{
"status": 200,
"message": {
"normalization": {
"status": "error",
"error": {
"message": "Failed to parse protobuff json request: invalid value",
"code": "ENGINE_ERROR"
}
},
"detection": {
"status": "skipped",
"reason": "Engine processing failed"
}
}
}
Example Response (no rules in integration)
{
"status": 200,
"message": {
"normalization": {
"output": { "..." : "..." },
"asset_traces": [],
"validation": { "valid": true, "errors": [] }
},
"detection": {
"status": "success",
"rules_evaluated": 0,
"rules_matched": 0,
"matches": []
}
}
}
Example Request (normalization only, no integration)
curl -sk -u admin:admin -X POST \
"https://192.168.56.6:9200/_plugins/_content_manager/logtest" \
-H 'Content-Type: application/json' \
-d '{
"space": "test",
"queue": 1,
"location": "/var/log/syslog",
"event": "Mar 31 10:00:00 myhost sshd[1234]: Accepted publickey for user from 192.168.1.1 port 22 ssh2",
"trace_level": "NONE"
}'
Example Response (normalization only)
{
"status": 200,
"message": {
"normalization": {
"output": {
"event": {
"original": "Mar 31 10:00:00 myhost sshd[1234]: Accepted publickey for user from 192.168.1.1 port 22 ssh2"
}
},
"asset_traces": [],
"validation": { "valid": true, "errors": [] }
},
"detection": {
"status": "skipped",
"reason": "No integration provided"
}
}
}
Response Fields
| Field | Type | Description |
|---|---|---|
normalization.output | Object | Engine normalized event output |
normalization.asset_traces | Array | List of decoders that processed the event |
normalization.validation | Object | Validation result (valid, errors) |
normalization.status | String | Present on error: "error" |
normalization.error | Object | Present on error: message and code |
detection.status | String | "success", "error", or "skipped" |
detection.reason | String | Present when status is "skipped" |
detection.rules_evaluated | Integer | Number of Sigma rules evaluated |
detection.rules_matched | Integer | Number of rules that matched |
detection.matches | Array | List of matched rules with details |
detection.matches[].rule | Object | Rule metadata: id, title, level, tags |
detection.matches[].matched_conditions | Array | Human-readable descriptions of conditions that matched |
Status Codes
| Code | Description |
|---|---|
| 200 | Logtest executed (check inner status fields) |
| 400 | Missing/invalid fields or integration not found |
| 500 | Engine socket communication error or internal error |
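Note that the endpoint returns HTTP 200 even when normalization fails or detection is skipped, so clients must inspect the nested status fields. A small helper sketch based on the example response shapes above (the outcome labels are illustrative, not part of the API):

```python
def summarize_logtest(response):
    """Reduce a logtest response body to a one-word outcome by inspecting
    the nested status fields rather than the outer HTTP status."""
    msg = response.get("message", {})
    norm = msg.get("normalization", {})
    det = msg.get("detection", {})
    if norm.get("status") == "error":
        return "engine_error"      # detection is skipped in this case
    if det.get("status") == "skipped":
        return "normalized_only"   # e.g., no integration provided
    if det.get("rules_matched", 0) > 0:
        return "matched"
    return "no_match"
```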
Normalization Only
Sends a log event to the Wazuh Engine for decoding and normalization without performing Sigma rule detection. Use this to validate that decoders correctly parse events before testing detection rules.
Note: A testing policy must be loaded in the Engine for normalization to execute successfully.
Request
- Method: `POST`
- Path: `/_plugins/_content_manager/logtest/normalization`
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
space | String | Yes | "test" or "standard" |
queue | Integer | No | Queue number for logtest execution |
location | String | No | Log file path or logical source location |
input | String | No | Raw log event to normalize |
metadata | Object | No | Optional metadata passed to the Engine |
trace_level | String | No | Trace verbosity: NONE, ASSET_ONLY, or ALL |
Example Request
curl -sk -u admin:admin -X POST \
"https://192.168.56.6:9200/_plugins/_content_manager/logtest/normalization" \
-H 'Content-Type: application/json' \
-d '{
"space": "test",
"queue": 1,
"location": "/var/log/cassandra/system.log",
"metadata": {},
"trace_level": "NONE",
"input": "INFO [CompactionExecutor-3] 2025-11-30 14:23:45 CassandraDaemon.java:250 - Some message - 7500 - 4"
}'
Example Response
{
"status": 200,
"message": {
"output": {
"log": {
"level": "INFO",
"origin": {
"file": {
"name": "CassandraDaemon.java",
"line": 250
}
}
},
"wazuh": {
"space": { "name": "test" },
"protocol": { "location": "/var/log/cassandra/system.log", "queue": 1 },
"integration": {
"decoders": ["decoder/cassandra-default/0"],
"name": "my-integration",
"category": "other"
}
},
"message": "Some message",
"event": {
"duration": 7500,
"category": ["database"],
"kind": "event",
"severity": 4
},
"source": { "ip": "10.42.3.15" },
"process": {
"thread": { "name": "CompactionExecutor-3" }
}
},
"asset_traces": [],
"validation": {
"valid": true,
"errors": []
}
}
}
Response Fields
| Field | Type | Description |
|---|---|---|
message.output | Object | Engine normalized event output |
message.asset_traces | Array | List of decoders that processed the event |
message.validation | Object | Validation result (valid, errors) |
Status Codes
| Code | Description |
|---|---|
| 200 | Normalization executed successfully |
| 400 | Missing/invalid fields |
| 500 | Engine socket communication error or internal error |
Detection Only
Evaluates an already-normalized event against the Sigma rules of a given integration via the Security Analytics Plugin (SAP). This endpoint does not call the Wazuh Engine; the normalized event must be provided directly in the `input` field.
Use this after obtaining a normalized event from the /logtest/normalization endpoint, or when you already have a normalized event and want to test different integrations’ rules against it.
Note: The integration must exist in the specified space. The `input` field must be a JSON object (the normalized event), not a raw log string.
Request
- Method: `POST`
- Path: `/_plugins/_content_manager/logtest/detection`
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
space | String | Yes | "test" or "standard" |
integration | String | Yes | UUID of the integration whose rules to evaluate |
input | Object | Yes | Normalized event object to evaluate rules against |
Example Request
curl -sk -u admin:admin -X POST \
"https://192.168.56.6:9200/_plugins/_content_manager/logtest/detection" \
-H 'Content-Type: application/json' \
-d '{
"space": "test",
"integration": "d3f3b0b8-4e25-4273-83ef-56a62003bcf7",
"input": {
"event": {
"duration": 7500,
"category": ["database"],
"kind": "event",
"severity": 4,
"type": ["info"]
},
"source": { "ip": "10.42.3.15" },
"process": {
"thread": { "name": "CompactionExecutor-3" },
"command_line": "/query tables"
},
"log": {
"origin": {
"file": { "name": "CassandraDaemon.java", "line": 250 }
}
}
}
}'
Example Response (matches found)
{
"status": 200,
"message": {
"status": "success",
"rules_evaluated": 12,
"rules_matched": 6,
"matches": [
{
"rule": {
"id": "4e52f215-bccc-4c0f-a37c-70606022be8e",
"title": "TEST: Numeric gte+lt only",
"level": "high",
"tags": ["attack.execution", "attack.t1059"]
},
"matched_conditions": [
"event.duration matched '>= 5000'",
"event.severity matched '< 10'"
]
},
{
"rule": {
"id": "1d489ded-7523-4329-8cd0-ebb21865a318",
"title": "TEST: Exact match event.kind=event",
"level": "low",
"tags": ["attack.execution", "attack.t1059"]
},
"matched_conditions": [
"event.kind matched 'event'"
]
}
]
}
}
Example Response (no rules in integration)
{
"status": 200,
"message": {
"status": "success",
"rules_evaluated": 0,
"rules_matched": 0,
"matches": []
}
}
Response Fields
| Field | Type | Description |
|---|---|---|
message.status | String | "success" or "error" |
message.rules_evaluated | Integer | Number of Sigma rules evaluated |
message.rules_matched | Integer | Number of rules that matched |
message.matches | Array | List of matched rules with details |
message.matches[].rule | Object | Rule metadata: id, title, level, tags |
message.matches[].matched_conditions | Array | Human-readable descriptions of matched conditions |
Status Codes
| Code | Description |
|---|---|
| 200 | Detection executed (check message.status) |
| 400 | Missing/invalid fields or integration not found |
| 500 | Internal error |
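The two endpoints above chain naturally: the `message.output` object returned by `/logtest/normalization` is exactly what `/logtest/detection` expects in its `input` field. A sketch of that payload hand-off (payload construction only; no HTTP client is shown, and the function name is hypothetical):

```python
def build_detection_request(normalization_response, integration_id, space="test"):
    """Turn a /logtest/normalization response body into a
    /logtest/detection request body."""
    normalized_event = normalization_response["message"]["output"]
    return {
        "space": space,
        "integration": integration_id,
        "input": normalized_event,  # must be the object, not a raw log string
    }
```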
Policy
Update Policy
Updates the routing policy in the specified space. The policy defines which integrations are active, the root decoder, enrichment types, and how events are routed through the Engine.
Note: The `integrations` and `filters` arrays allow reordering but do not allow adding or removing entries; membership is managed via their respective CRUD endpoints.
Space-specific behavior
- Draft space (`/policy/draft`): All policy fields are accepted. The metadata fields `author`, `description`, `documentation`, and `references` are required in addition to the boolean fields.
- Standard space (`/policy/standard`): Only `enrichments`, `filters`, `enabled`, `index_unclassified_events`, and `index_discarded_events` can be modified. All other fields are preserved from the existing standard policy document. If the update changes the space hash, the full standard policy is automatically loaded to the local Engine.
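A client updating the standard space can strip a full policy document down to the mutable keys before issuing the PUT, since everything else is preserved server-side anyway. An illustrative helper (the allow-list mirrors the field names above; the function name is hypothetical):

```python
# Fields the standard space accepts; all others are preserved server-side.
STANDARD_MUTABLE = {"enrichments", "filters", "enabled",
                    "index_unclassified_events", "index_discarded_events"}

def standard_policy_body(policy):
    """Build a PUT /policy/standard body containing only the mutable fields."""
    return {"resource": {k: v for k, v in policy.items()
                         if k in STANDARD_MUTABLE}}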
Request
- Method: `PUT`
- Path: `/_plugins/_content_manager/policy/{space}`
Path Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
space | String | Yes | Target space (draft or standard) |
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
resource | Object | Yes | The policy resource object |
Fields within resource:
| Field | Type | Required | Description |
|---|---|---|---|
metadata | Object | Yes (draft) | Policy metadata (see below) |
root_decoder | String | No | Identifier of the root decoder for event processing |
integrations | Array | No | List of integration IDs (reorder only, no add/remove) |
filters | Array | No | List of filter UUIDs (reorder only, no add/remove) |
enrichments | Array | No | Enrichment types (no duplicates; values depend on engine capabilities) |
enabled | Boolean | Yes | Whether the policy is active and synchronized by the Engine |
index_unclassified_events | Boolean | Yes | Whether uncategorized events are indexed |
index_discarded_events | Boolean | Yes | Whether discarded events are indexed |
Fields within resource.metadata:
| Field | Type | Required | Description |
|---|---|---|---|
title | String | No | Human-readable policy name |
author | String | Yes (draft) | Author of the policy |
description | String | Yes (draft) | Brief description |
documentation | String | Yes (draft) | Documentation text or URL |
references | Array | Yes (draft) | External reference URLs |
Example Request (draft space)
curl -sk -u admin:admin -X PUT \
"https://192.168.56.6:9200/_plugins/_content_manager/policy/draft" \
-H 'Content-Type: application/json' \
-d '{
"resource": {
"metadata": {
"title": "Draft policy",
"author": "Wazuh Inc.",
"description": "Custom policy",
"documentation": "",
"references": [
"https://wazuh.com"
]
},
"root_decoder": "",
"integrations": [
"f16f33ec-a5ea-4dc4-bf33-616b1562323a"
],
"filters": [],
"enrichments": [],
"enabled": true,
"index_unclassified_events": false,
"index_discarded_events": false
}
}'
Example Request (standard space)
curl -sk -u admin:admin -X PUT \
"https://192.168.56.6:9200/_plugins/_content_manager/policy/standard" \
-H 'Content-Type: application/json' \
-d '{
"resource": {
"enrichments": ["connection"],
"filters": [],
"enabled": true,
"index_unclassified_events": false,
"index_discarded_events": false
}
}'
Example Response
{
"message": "kQPmV5wBi_TgruUn97RT",
"status": 200
}
The message field contains the OpenSearch document ID of the updated policy.
Status Codes
| Code | Description |
|---|---|
| 200 | Policy updated |
| 400 | Invalid space, missing resource field, missing required fields, invalid enrichments, or disallowed modification of integrations/filters |
| 500 | Internal error |
Rules
Rules follow the Sigma format with Wazuh extensions. See Sigma Rules for the full format reference, including the mitre, compliance, and metadata blocks.
Validation notes:
- The `logsource.product` field must exactly match the `metadata.title` of the parent integration.
- Detection fields are validated against the Wazuh Common Schema (WCS); rules referencing unknown fields are rejected.
- IPv6 addresses are supported in detection conditions (standard, compressed, and CIDR notation).
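The first validation rule can be checked client-side before submitting a rule, avoiding a round-trip that would end in a 400. A sketch, assuming plain dict representations of the rule and its parent integration (the helper name is illustrative):

```python
def check_logsource_product(rule, integration):
    """Return a list of validation errors, mirroring the server-side rule
    that logsource.product must exactly match the parent integration's
    metadata.title."""
    errors = []
    product = rule.get("logsource", {}).get("product")
    title = integration.get("metadata", {}).get("title")
    if product != title:
        errors.append(f"logsource.product {product!r} does not match "
                      f"integration title {title!r}")
    return errors
```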
Create Rule
Creates a new detection rule in the draft space. The rule is linked to the specified parent integration and validated by the Security Analytics Plugin.
The rule is also synchronized to the SAP, where a separate document is created with its own auto-generated UUID. The SAP document stores the CTI document UUID in a `document.id` field and the space in a `source` field (e.g., "Draft") for cross-reference.
Request
- Method: `POST`
- Path: `/_plugins/_content_manager/rules`
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
integration | String | Yes | UUID of the parent integration (must be in draft space) |
resource | Object | Yes | The rule definition |
Fields within resource:
| Field | Type | Required | Description |
|---|---|---|---|
metadata | Object | Yes | Rule metadata (see below) |
sigma_id | String | No | Sigma rule ID |
enabled | Boolean | No | Whether the rule is enabled |
status | String | Yes | Rule status (e.g., experimental, stable) |
level | String | Yes | Alert level (e.g., low, medium, high, critical) |
logsource | Object | No | Log source definition (product, category) |
detection | Object | Yes | Sigma detection logic with condition and selection fields |
mitre | Object | No | MITRE ATT&CK mapping (see Sigma Rules) |
compliance | Object | No | Compliance framework mapping (see Sigma Rules) |
Fields within resource.metadata:
| Field | Type | Required | Description |
|---|---|---|---|
title | String | Yes | Rule title (must be unique within the draft space) |
author | String | No | Rule author |
description | String | No | Rule description |
references | Array | No | Reference URLs |
documentation | String | No | Documentation text or URL |
Example Request
curl -sk -u admin:admin -X POST \
"https://192.168.56.6:9200/_plugins/_content_manager/rules" \
-H 'Content-Type: application/json' \
-d '{
"integration": "6b7b7645-00da-44d0-a74b-cffa7911e89c",
"resource": {
"metadata": {
"title": "Test Rule",
"description": "A Test rule",
"author": "Tester",
"references": [
"https://wazuh.com"
]
},
"sigma_id": "19aefed0-ffd4-47dc-a7fc-f8b1425e84f9",
"enabled": true,
"status": "experimental",
"logsource": {
"product": "system",
"category": "system"
},
"detection": {
"condition": "selection",
"selection": {
"event.action": [
"hash_test_event"
]
}
},
"level": "low",
"mitre": {
"tactic": ["TA0001"],
"technique": ["T1190"],
"subtechnique": []
},
"compliance": {
"pci_dss": ["6.5.1"]
}
}
}'
Example Response
{
"message": "6e1c43f1-f09b-4cec-bb59-00e3a52b7930",
"status": 201
}
The message field contains the UUID of the created rule.
Status Codes
| Code | Description |
|---|---|
| 201 | Rule created |
| 400 | Missing fields, duplicate title, integration not in draft space, or validation failure |
| 500 | Internal error or SAP unavailable |
Update Rule
Updates an existing rule in the draft space.
Request
- Method: `PUT`
- Path: `/_plugins/_content_manager/rules/{id}`
Parameters
| Name | In | Type | Required | Description |
|---|---|---|---|---|
id | Path | String (UUID) | Yes | Rule document ID |
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
resource | Object | Yes | Updated rule definition (same fields as create) |
Note: On update, `enabled`, `metadata.title`, and `metadata.author` are required, as are the `detection` and `logsource` fields.
Example Request
curl -sk -u admin:admin -X PUT \
"https://192.168.56.6:9200/_plugins/_content_manager/rules/6e1c43f1-f09b-4cec-bb59-00e3a52b7930" \
-H 'Content-Type: application/json' \
-d '{
"resource": {
"metadata": {
"title": "Test Hash Generation Rule",
"description": "A rule to verify that SHA-256 hashes are calculated correctly upon creation.",
"author": "Tester"
},
"enabled": true,
"status": "experimental",
"logsource": {
"product": "system",
"category": "system"
},
"detection": {
"condition": "selection",
"selection": {
"event.action": [
"hash_test_event"
]
}
},
"level": "low"
}
}'
Example Response
{
"message": "6e1c43f1-f09b-4cec-bb59-00e3a52b7930",
"status": 200
}
Status Codes
| Code | Description |
|---|---|
| 200 | Rule updated |
| 400 | Invalid request, not in draft space, or validation failure |
| 404 | Rule not found |
| 500 | Internal error |
Delete Rule
Deletes a rule from the draft space. The rule is also removed from any integrations that reference it.
Request
- Method: `DELETE`
- Path: `/_plugins/_content_manager/rules/{id}`
Parameters
| Name | In | Type | Required | Description |
|---|---|---|---|---|
id | Path | String (UUID) | Yes | Rule document ID |
Example Request
curl -sk -u admin:admin -X DELETE \
"https://192.168.56.6:9200/_plugins/_content_manager/rules/6e1c43f1-f09b-4cec-bb59-00e3a52b7930"
Example Response
{
"message": "6e1c43f1-f09b-4cec-bb59-00e3a52b7930",
"status": 200
}
Status Codes
| Code | Description |
|---|---|
| 200 | Rule deleted |
| 404 | Rule not found |
| 500 | Internal error |
Decoders
Create Decoder
Creates a new log decoder in the draft space. The decoder is validated against the Wazuh Engine before being stored, and automatically linked to the specified integration.
Note: A testing policy must be loaded in the Engine for decoder validation to succeed.
Request
- Method: `POST`
- Path: `/_plugins/_content_manager/decoders`
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
integration | String | Yes | UUID of the parent integration (must be in draft space) |
resource | Object | Yes | The decoder definition |
Fields within resource:
| Field | Type | Description |
|---|---|---|
name | String | Decoder name identifier (e.g., decoder/core-wazuh-message/0) |
enabled | Boolean | Whether the decoder is enabled |
check | Array | Decoder check logic — array of condition objects |
normalize | Array | Normalization rules — array of mapping objects |
metadata | Object | Decoder metadata (see below) |
Fields within metadata:
| Field | Type | Description |
|---|---|---|
title | String | Human-readable decoder title |
description | String | Decoder description |
module | String | Module name |
compatibility | String | Compatibility description |
author | Object | Author info (name, email, url) |
references | Array | Reference URLs |
versions | Array | Supported versions |
Example Request
curl -sk -u admin:admin -X POST \
"https://192.168.56.6:9200/_plugins/_content_manager/decoders" \
-H 'Content-Type: application/json' \
-d '{
"integration": "0aa4fc6f-1cfd-4a7c-b30b-643f32950f1f",
"resource": {
"enabled": true,
"metadata": {
"author": {
"name": "Wazuh, Inc."
},
"compatibility": "All wazuh events.",
"description": "Base decoder to process Wazuh message format.",
"module": "wazuh",
"references": [
"https://documentation.wazuh.com/"
],
"title": "Wazuh message decoder",
"versions": [
"Wazuh 5.*"
]
},
"name": "decoder/core-wazuh-message/0",
"check": [
{
"tmp_json.event.action": "string_equal(\"netflow_flow\")"
}
],
"normalize": [
{
"map": [
{
"@timestamp": "get_date()"
}
]
}
]
}
}'
Example Response
{
"message": "d_0a6aaebe-dd0b-44cc-a787-ffefd4aac175",
"status": 201
}
The message field contains the UUID of the created decoder (prefixed with d_).
Status Codes
| Code | Description |
|---|---|
| 201 | Decoder created |
| 400 | Missing integration field, integration not in draft space, or Engine validation failure |
| 500 | Engine unavailable or internal error |
Update Decoder
Updates an existing decoder in the draft space. The decoder is re-validated against the Wazuh Engine.
Request
- Method: `PUT`
- Path: `/_plugins/_content_manager/decoders/{id}`
Parameters
| Name | In | Type | Required | Description |
|---|---|---|---|---|
id | Path | String | Yes | Decoder document ID |
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
resource | Object | Yes | Updated decoder definition (same fields as create) |
Example Request
curl -sk -u admin:admin -X PUT \
"https://192.168.56.6:9200/_plugins/_content_manager/decoders/bb6d0245-8c1d-42d1-8edb-4e0907cf45e0" \
-H 'Content-Type: application/json' \
-d '{
"resource": {
"name": "decoder/test-decoder/0",
"enabled": false,
"metadata": {
"title": "Test Decoder UPDATED",
"description": "Updated description",
"author": {
"name": "Hello there"
}
},
"check": [],
"normalize": []
}
}'
Example Response
{
"message": "bb6d0245-8c1d-42d1-8edb-4e0907cf45e0",
"status": 200
}
Status Codes
| Code | Description |
|---|---|
| 200 | Decoder updated |
| 400 | Invalid request, not in draft space, or Engine validation failure |
| 404 | Decoder not found |
| 500 | Internal error |
Delete Decoder
Deletes a decoder from the draft space. The decoder is also removed from any integrations that reference it. A decoder cannot be deleted if it is currently set as the root decoder in the draft policy.
Request
- Method: `DELETE`
- Path: `/_plugins/_content_manager/decoders/{id}`
Parameters
| Name | In | Type | Required | Description |
|---|---|---|---|---|
id | Path | String | Yes | Decoder document ID |
Example Request
curl -sk -u admin:admin -X DELETE \
"https://192.168.56.6:9200/_plugins/_content_manager/decoders/acbdba85-09c4-45a0-a487-61c8eeec58e6"
Example Response
{
"message": "acbdba85-09c4-45a0-a487-61c8eeec58e6",
"status": 200
}
Example Response (set as root decoder)
{
"message": "Cannot remove decoder [acbdba85-09c4-45a0-a487-61c8eeec58e6] as it is set as root decoder.",
"status": 400
}
Status Codes
| Code | Description |
|---|---|
| 200 | Decoder deleted |
| 400 | Decoder is set as root decoder |
| 404 | Decoder not found |
| 500 | Internal error |
Filters
Create Filter
Creates a new filter in the draft or standard space. The filter is validated against the Wazuh Engine before being stored and automatically linked to the specified space’s policy.
Request
- Method: `POST`
- Path: `/_plugins/_content_manager/filters`
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
space | String | Yes | Target space: draft or standard |
resource | Object | Yes | The filter definition |
Fields within resource:
| Field | Type | Description |
|---|---|---|
name | String | Filter name identifier (e.g., filter/prefilter/0) |
enabled | Boolean | Whether the filter is enabled |
check | String | Filter check expression |
type | String | Filter type (e.g., pre-filter) |
metadata | Object | Filter metadata (see below) |
Fields within metadata:
| Field | Type | Description |
|---|---|---|
description | String | Filter description |
author | Object | Author info (name, email, url) |
Example Request
curl -sk -u admin:admin -X POST \
"https://192.168.56.6:9200/_plugins/_content_manager/filters" \
-H 'Content-Type: application/json' \
-d '{
"space": "draft",
"resource": {
"name": "filter/prefilter/0",
"enabled": true,
"metadata": {
"description": "Default filter to allow all events (for default ruleset)",
"author": {
"email": "info@wazuh.com",
"name": "Wazuh, Inc.",
"url": "https://wazuh.com"
}
},
"check": "$host.os.platform == '\''ubuntu'\''",
"type": "pre-filter"
}
}'
Example Response
{
"message": "f_a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6",
"status": 201
}
The message field contains the UUID of the created filter (prefixed with f_).
Status Codes
| Code | Description |
|---|---|
| 201 | Filter created |
| 400 | Missing space field, invalid space, or Engine validation failure |
| 500 | Engine unavailable or internal error |
Update Filter
Updates an existing filter in the draft or standard space. The filter is re-validated against the Wazuh Engine.
Request
- Method: `PUT`
- Path: `/_plugins/_content_manager/filters/{id}`
Parameters
| Name | In | Type | Required | Description |
|---|---|---|---|---|
id | Path | String | Yes | Filter document ID |
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
space | String | Yes | Target space: draft or standard |
resource | Object | Yes | Updated filter definition (same fields as create) |
Example Request
curl -sk -u admin:admin -X PUT \
"https://192.168.56.6:9200/_plugins/_content_manager/filters/a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6" \
-H 'Content-Type: application/json' \
-d '{
"space": "draft",
"resource": {
"name": "filter/prefilter/0",
"enabled": true,
"metadata": {
"description": "Updated filter description",
"author": {
"email": "info@wazuh.com",
"name": "Wazuh, Inc.",
"url": "https://wazuh.com"
}
},
"check": "$host.os.platform == '\''ubuntu'\''",
"type": "pre-filter"
}
}'
Example Response
{
"message": "a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6",
"status": 200
}
Status Codes
| Code | Description |
|---|---|
| 200 | Filter updated |
| 400 | Invalid request, invalid space, or Engine validation failure |
| 404 | Filter not found |
| 500 | Internal error |
Delete Filter
Deletes a filter from the draft or standard space. The filter is also removed from the associated policy.
Request
- Method: `DELETE`
- Path: `/_plugins/_content_manager/filters/{id}`
Parameters
| Name | In | Type | Required | Description |
|---|---|---|---|---|
id | Path | String | Yes | Filter document ID |
Example Request
curl -sk -u admin:admin -X DELETE \
"https://192.168.56.6:9200/_plugins/_content_manager/filters/a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6"
Example Response
{
"message": "a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6",
"status": 200
}
Status Codes
| Code | Description |
|---|---|
| 200 | Filter deleted |
| 404 | Filter not found |
| 500 | Internal error |
Integrations
Create Integration
Creates a new integration in the draft space. An integration is a logical grouping of related rules, decoders, and KVDBs. The integration is validated against the Engine and registered in the Security Analytics Plugin.
The integration is also synchronized to the SAP, where a separate document is created with its own auto-generated UUID. The SAP document stores the CTI document UUID in a `document.id` field and the space in the `source` field (e.g., "Draft") for cross-reference.
Request
- Method: `POST`
- Path: `/_plugins/_content_manager/integrations`
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
resource | Object | Yes | The integration definition |
Fields within resource:
| Field | Type | Required | Description |
|---|---|---|---|
metadata | Object | Yes | Integration metadata (see below) |
category | String | Yes | Category (e.g., cloud-services, network-activity, security, system-activity) |
enabled | Boolean | No | Whether the integration is enabled |
Fields within resource.metadata:
| Field | Type | Required | Description |
|---|---|---|---|
title | String | Yes | Integration title (must be unique in draft space) |
author | String | Yes | Author of the integration |
description | String | No | Description |
documentation | String | No | Documentation text or URL |
references | Array | No | Reference URLs |
Note: Do not include the `id` field; it is auto-generated by the Indexer.
Example Request
curl -sk -u admin:admin -X POST \
"https://192.168.56.6:9200/_plugins/_content_manager/integrations" \
-H 'Content-Type: application/json' \
-d '{
"resource": {
"metadata": {
"title": "azure-functions",
"author": "Wazuh Inc.",
"description": "This integration supports Azure Functions app logs.",
"documentation": "https://docs.wazuh.com/integrations/azure-functions",
"references": [
"https://wazuh.com"
]
},
"category": "cloud-services",
"enabled": true
}
}'
Example Response
{
"message": "94e5a2af-505e-4164-ab62-576a71873308",
"status": 201
}
The message field contains the UUID of the created integration.
Status Codes
| Code | Description |
|---|---|
| 201 | Integration created |
| 400 | Missing required fields (title, author, category), duplicate title, or validation failure |
| 500 | Internal error or SAP/Engine unavailable |
Update Integration
Updates an existing integration in the draft space. Only integrations in the draft space can be updated.
Request
- Method: `PUT`
- Path: `/_plugins/_content_manager/integrations/{id}`
Parameters
| Name | In | Type | Required | Description |
|---|---|---|---|---|
id | Path | String (UUID) | Yes | Integration document ID |
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
resource | Object | Yes | Updated integration definition |
Fields within resource (all required for update):
| Field | Type | Required | Description |
|---|---|---|---|
metadata | Object | Yes | Integration metadata (see below) |
category | String | Yes | Category |
enabled | Boolean | Yes | Whether the integration is enabled |
rules | Array | Yes | Ordered list of rule IDs |
decoders | Array | Yes | Ordered list of decoder IDs |
kvdbs | Array | Yes | Ordered list of KVDB IDs |
Fields within resource.metadata:
| Field | Type | Required | Description |
|---|---|---|---|
title | String | Yes | Integration title |
author | String | Yes | Author |
description | String | Yes | Description |
documentation | String | Yes | Documentation text or URL |
references | Array | Yes | Reference URLs |
Note: The `rules`, `decoders`, and `kvdbs` arrays are mandatory on update to allow reordering. Pass empty arrays (`[]`) if the integration has none.
Example Request
curl -sk -u admin:admin -X PUT \
"https://192.168.56.6:9200/_plugins/_content_manager/integrations/94e5a2af-505e-4164-ab62-576a71873308" \
-H 'Content-Type: application/json' \
-d '{
"resource": {
"metadata": {
"title": "azure-functions-update",
"author": "Wazuh Inc.",
"description": "This integration supports Azure Functions app logs.",
"documentation": "updated documentation",
"references": []
},
"category": "cloud-services",
"enabled": true,
"rules": [],
"decoders": [],
"kvdbs": []
}
}'
Example Response
{
"message": "94e5a2af-505e-4164-ab62-576a71873308",
"status": 200
}
Status Codes
| Code | Description |
|---|---|
| 200 | Integration updated |
| 400 | Invalid request, missing required fields, not in draft space, or duplicate title |
| 404 | Integration not found |
| 500 | Internal error |
Delete Integration
Deletes an integration from the draft space. The integration must have no attached decoders, rules, or KVDBs — delete those first.
Request
- Method: `DELETE`
- Path: `/_plugins/_content_manager/integrations/{id}`
Parameters
| Name | In | Type | Required | Description |
|---|---|---|---|---|
id | Path | String (UUID) | Yes | Integration document ID |
Example Request
curl -sk -u admin:admin -X DELETE \
"https://192.168.56.6:9200/_plugins/_content_manager/integrations/94e5a2af-505e-4164-ab62-576a71873308"
Example Response
{
"message": "94e5a2af-505e-4164-ab62-576a71873308",
"status": 200
}
Example Response (has dependencies)
{
"message": "Cannot delete integration because it has decoders attached",
"status": 400
}
Status Codes
| Code | Description |
|---|---|
| 200 | Integration deleted |
| 400 | Integration has dependent resources (decoders/rules/kvdbs) |
| 404 | Integration not found |
| 500 | Internal error |
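Because the parent integration can only be deleted once its dependents are gone, a client has to order its DELETE calls. The sketch below is illustrative only (not part of the plugin); it assumes the caller already knows the attached resource IDs, and builds the calls so the integration itself comes last.

```python
# Illustrative helper: emit DELETE calls in dependency order so the
# integration is removed last. Paths follow this reference; the attachment
# lists are assumed to come from the integration document.
BASE = "/_plugins/_content_manager"

def deletion_plan(integration_id, attached):
    """`attached` maps resource kind ("decoders"/"rules"/"kvdbs") to ID lists."""
    calls = []
    for kind in ("decoders", "rules", "kvdbs"):
        for resource_id in attached.get(kind, []):
            calls.append(("DELETE", f"{BASE}/{kind}/{resource_id}"))
    # Parent last: deleting it first would fail with 400 (dependent resources).
    calls.append(("DELETE", f"{BASE}/integrations/{integration_id}"))
    return calls
```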
KVDBs
Create KVDB
Creates a new key-value database in the draft space, linked to the specified integration.
Request
- Method: `POST`
- Path: `/_plugins/_content_manager/kvdbs`
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
integration | String | Yes | UUID of the parent integration (must be in draft space) |
resource | Object | Yes | The KVDB definition |
Fields within resource:
| Field | Type | Required | Description |
|---|---|---|---|
metadata | Object | Yes | KVDB metadata (see below) |
content | Object | Yes | Key-value data (at least one entry required) |
name | String | No | KVDB identifier name |
enabled | Boolean | No | Whether the KVDB is enabled |
Fields within resource.metadata:
| Field | Type | Required | Description |
|---|---|---|---|
title | String | Yes | KVDB title |
author | String | Yes | Author |
description | String | No | Description |
documentation | String | No | Documentation |
references | Array | No | Reference URLs |
Example Request
curl -sk -u admin:admin -X POST \
"https://192.168.56.6:9200/_plugins/_content_manager/kvdbs" \
-H 'Content-Type: application/json' \
-d '{
"integration": "f16f33ec-a5ea-4dc4-bf33-616b1562323a",
"resource": {
"metadata": {
"title": "non_standard_timezones",
"author": "Wazuh Inc.",
"description": "",
"documentation": "",
"references": [
"https://wazuh.com"
]
},
"name": "non_standard_timezones",
"enabled": true,
"content": {
"non_standard_timezones": {
"AEST": "Australia/Sydney",
"CEST": "Europe/Berlin",
"CST": "America/Chicago",
"EDT": "America/New_York",
"EST": "America/New_York",
"IST": "Asia/Kolkata",
"MST": "America/Denver",
"PKT": "Asia/Karachi",
"SST": "Asia/Singapore",
"WEST": "Europe/London"
}
}
}
}'
Example Response
{
"message": "9d4ec6d5-8e30-4ea3-be05-957968c02dae",
"status": 201
}
The message field contains the UUID of the created KVDB.
Status Codes
| Code | Description |
|---|---|
| 201 | KVDB created |
| 400 | Missing integration or required resource fields, integration not in draft space |
| 500 | Internal error |
Update KVDB
Updates an existing KVDB in the draft space.
Request
- Method: `PUT`
- Path: `/_plugins/_content_manager/kvdbs/{id}`
Parameters
| Name | In | Type | Required | Description |
|---|---|---|---|---|
id | Path | String (UUID) | Yes | KVDB document ID |
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
resource | Object | Yes | Updated KVDB definition |
Fields within resource:
| Field | Type | Required | Description |
|---|---|---|---|
metadata | Object | Yes | KVDB metadata (see below) |
content | Object | Yes | Key-value data |
name | String | No | KVDB identifier name |
enabled | Boolean | No | Whether the KVDB is enabled |
Fields within resource.metadata:
| Field | Type | Required | Description |
|---|---|---|---|
title | String | Yes | KVDB title |
author | String | Yes | Author |
description | String | Yes | Description |
documentation | String | Yes | Documentation |
references | Array | Yes | Reference URLs |
Example Request
curl -sk -u admin:admin -X PUT \
"https://192.168.56.6:9200/_plugins/_content_manager/kvdbs/9d4ec6d5-8e30-4ea3-be05-957968c02dae" \
-H 'Content-Type: application/json' \
-d '{
"resource": {
"metadata": {
"title": "non_standard_timezones-2",
"author": "Wazuh.",
"description": "UPDATE",
"documentation": "UPDATE.doc",
"references": [
"https://wazuh.com"
]
},
"name": "test-UPDATED",
"enabled": true,
"content": {
"non_standard_timezones": {
"AEST": "Australia/Sydney",
"CEST": "Europe/Berlin",
"CST": "America/Chicago",
"EDT": "America/New_York",
"EST": "America/New_York",
"IST": "Asia/Kolkata",
"MST": "America/Denver",
"PKT": "Asia/Karachi",
"SST": "Asia/Singapore",
"WEST": "Europe/London"
}
}
}
}'
Example Response
{
"message": "9d4ec6d5-8e30-4ea3-be05-957968c02dae",
"status": 200
}
Status Codes
| Code | Description |
|---|---|
| 200 | KVDB updated |
| 400 | Invalid request, missing required fields, or not in draft space |
| 404 | KVDB not found |
| 500 | Internal error |
Delete KVDB
Deletes a KVDB from the draft space. The KVDB is also removed from any integrations that reference it.
Request
- Method: `DELETE`
- Path: `/_plugins/_content_manager/kvdbs/{id}`
Parameters
| Name | In | Type | Required | Description |
|---|---|---|---|---|
id | Path | String (UUID) | Yes | KVDB document ID |
Example Request
curl -sk -u admin:admin -X DELETE \
"https://192.168.56.6:9200/_plugins/_content_manager/kvdbs/9d4ec6d5-8e30-4ea3-be05-957968c02dae"
Example Response
{
"message": "9d4ec6d5-8e30-4ea3-be05-957968c02dae",
"status": 200
}
Status Codes
| Code | Description |
|---|---|
| 200 | KVDB deleted |
| 404 | KVDB not found |
| 500 | Internal error |
Promotion
Preview Promotion Changes
Returns a preview of changes that would be applied when promoting from the specified space. This is a dry-run operation that does not modify any content.
Request
- Method: `GET`
- Path: `/_plugins/_content_manager/promote`
Parameters
| Name | In | Type | Required | Description |
|---|---|---|---|---|
space | Query | String | Yes | Source space to preview: draft or test |
Example Request
curl -sk -u admin:admin \
"https://192.168.56.6:9200/_plugins/_content_manager/promote?space=draft"
Example Response
{
"changes": {
"kvdbs": [
{
"operation": "add",
"id": "4441d331-847a-43ed-acc6-4e09d8d6abb9"
}
],
"rules": [],
"decoders": [],
"filters": [],
"integrations": [
{
"operation": "add",
"id": "f16f33ec-a5ea-4dc4-bf33-616b1562323a"
}
],
"policy": [
{
"operation": "update",
"id": "f75bda3d-1926-4a8d-9c75-66382109ab04"
}
]
}
}
The response lists changes grouped by content type. Each change includes:
- `operation`: `add`, `update`, or `remove`
- `id`: Document ID of the affected resource
Status Codes
| Code | Description |
|---|---|
| 200 | Preview returned successfully |
| 400 | Invalid or missing space parameter |
| 500 | Internal error |
Execute Promotion
Promotes content from the source space to the next space in the promotion chain (Draft → Test → Custom). The request body must include the source space and the changes to apply (typically obtained from the preview endpoint).
In addition to copying documents across CTI indices, promotion also synchronizes integrations and rules with the Security Analytics Plugin (SAP). For each promoted resource, a new SAP document is created in the target space with:
- A newly generated UUID as the SAP document primary ID.
- A `document.id` field storing the original CTI document UUID for cross-reference.
- A `source` field indicating the target space (e.g., “Test”, “Custom”).
New resources (ADD operations) use POST to create SAP documents; existing resources (UPDATE operations) use PUT to update them in-place.
This ensures that the same CTI resource can exist in multiple spaces with independent SAP documents.
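The POST-versus-PUT rule above can be sketched as a small decision function. This is an assumed shape for illustration, not the plugin's code: the SAP document ID for an UPDATE and the body layout are taken from the description above.

```python
# Sketch of the promotion-to-SAP mapping: ADD creates a new SAP document
# (POST, fresh UUID); UPDATE modifies the existing one in place (PUT).
import uuid

def sap_sync_request(change, existing_sap_id, target_space):
    """Return (method, sap_doc_id, body) for one promoted rule/integration."""
    if change["operation"] == "add":
        method, sap_id = "POST", str(uuid.uuid4())   # new SAP primary ID
    else:
        method, sap_id = "PUT", existing_sap_id      # update in place
    body = {
        "document": {"id": change["id"]},  # cross-reference to the CTI document
        "source": target_space,            # e.g. "Test" or "Custom"
    }
    return method, sap_id, body
```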
Rollback on Failure
If any Content Manager index mutation fails during the consolidation phase, the endpoint automatically performs a LIFO rollback to restore the system to its pre-promotion state:
- Pre-promotion snapshots are captured before any writes — old versions for adds/updates, full documents for deletes.
- CM rollback: Each completed mutation is undone in reverse order. ADDs are deleted, UPDATEs are restored to their previous version, DELETEs are re-indexed from the snapshot.
- SAP reconciliation (best-effort): Rules and integrations synced to SAP during the forward pass are reverted — new SAP documents are deleted, updated ones are restored, and deleted ones are re-created from snapshots.
Individual rollback or SAP reconciliation step failures are logged but do not prevent remaining steps from executing. On rollback, the endpoint returns a 500 status.
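The snapshot-then-LIFO-undo pattern can be shown in miniature. This is a minimal sketch, not the plugin's implementation: a plain dict stands in for an index, and the SAP reconciliation step is omitted.

```python
# Minimal sketch of snapshot capture plus LIFO rollback: mutations are
# applied in order; on failure, completed mutations are undone in reverse.
def apply_with_rollback(store, mutations):
    """`mutations` is a list of (op, key, value) with op in add/update/delete."""
    completed = []
    try:
        for op, key, value in mutations:
            snapshot = store.get(key)          # captured before any write
            if op == "delete":
                del store[key]                 # raises KeyError if absent
            else:                              # "add" or "update"
                store[key] = value
            completed.append((key, snapshot))
    except Exception:
        for key, snapshot in reversed(completed):   # undo in LIFO order
            if snapshot is None:
                store.pop(key, None)           # ADD -> delete it again
            else:
                store[key] = snapshot          # UPDATE/DELETE -> restore
        raise
```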
Request
- Method: `POST`
- Path: `/_plugins/_content_manager/promote`
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
space | String | Yes | Source space: draft or test |
changes | Object | Yes | Changes to promote (from preview response) |
The changes object contains arrays for each content type (policy, integrations, kvdbs, decoders, rules, filters), each with operation and id fields.
Example Request
curl -sk -u admin:admin -X POST \
"https://192.168.56.6:9200/_plugins/_content_manager/promote" \
-H 'Content-Type: application/json' \
-d '{
"space": "draft",
"changes": {
"kvdbs": [],
"decoders": [
{
"operation": "add",
"id": "f56f3865-2827-464b-8335-30561b0f381b"
}
],
"rules": [],
"filters": [],
"integrations": [
{
"operation": "add",
"id": "0aa4fc6f-1cfd-4a7c-b30b-643f32950f1f"
}
],
"policy": [
{
"operation": "update",
"id": "baf9b03f-5872-4409-ab02-507b7f93d0c8"
}
]
}
}'
Example Response
{
"message": "Promotion completed successfully",
"status": 200
}
Status Codes
| Code | Description |
|---|---|
| 200 | Promotion successful |
| 400 | Invalid request body or missing space field |
| 500 | Engine communication error or validation failure |
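Since the promote body is built from the preview response, a client typically just wraps the preview's `changes` object. A hedged helper follows; whether the endpoint accepts a pruned subset of the previewed changes is an assumption, so by default everything is kept.

```python
# Hedged helper for chaining preview to execution: wrap the GET /promote
# response into the POST body the promote endpoint expects.
def build_promote_body(space, preview_response, only_types=None):
    changes = preview_response["changes"]
    if only_types is not None:
        # Keep only the selected content-type arrays; send the rest empty.
        changes = {k: (v if k in only_types else []) for k, v in changes.items()}
    return {"space": space, "changes": changes}
```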
Spaces
Reset Space
Resets a user space (draft) to its initial state.
When resetting the draft space, this operation will:
- Remove all documents (integrations, rules, decoders, kvdbs) that belong to the given space.
- Re-generate the default policy for the given space.
The resources are removed in the Content Manager (wazuh-threatintel-* indices) and in the Security Analytics Plugin (.opensearch-sap-* indices) to ensure a complete reset of the space.
Note: Only the `draft` space can be reset.
Request
- Method: `DELETE`
- Path: `/_plugins/_content_manager/space/{space}`
Parameters
| Name | In | Type | Required | Description |
|---|---|---|---|---|
space | Path | String | Yes | The name of the user space to reset (draft) |
Example Request
curl -sk -u admin:admin -X DELETE \
"https://192.168.56.6:9200/_plugins/_content_manager/space/draft"
Example Response
{
"message": "Space reset successfully",
"status": 200
}
Status Codes
| Code | Description |
|---|---|
| 200 | Space reset successfully |
| 400 | Invalid space identifier, or attempted to reset a space different from draft |
| 500 | Internal error (e.g., Engine unavailable or deletion failure) |
Version Check
Check Available Updates
Returns whether there are newer versions of Wazuh available for download. The endpoint reads the current installed version from VERSION.json and queries the CTI API for available updates. The response includes the latest available major, minor, and patch updates when available.
Request
- Method: `GET`
- Path: `/_plugins/_content_manager/version/check`
Example Request
curl -sk -u admin:admin \
"https://192.168.56.6:9200/_plugins/_content_manager/version/check"
Example Response (updates available)
{
"message": {
"uuid": "bd7f0db0-d094-48ca-b883-7019484ce71f",
"last_check_date": "2026-04-14T15:28:41.347387+00:00",
"current_version": "v5.0.0",
"last_available_major": {
"tag": "v6.0.0",
"title": "Wazuh v6.0.0",
"description": "Major release with new features...",
"published_date": "2026-03-01T10:00:00Z",
"semver": { "major": 6, "minor": 0, "patch": 0 }
},
"last_available_minor": {
"tag": "v5.1.0",
"title": "Wazuh v5.1.0",
"description": "Minor improvements and enhancements...",
"published_date": "2026-02-15T10:00:00Z",
"semver": { "major": 5, "minor": 1, "patch": 0 }
},
"last_available_patch": {
"tag": "v5.0.1",
"title": "Wazuh v5.0.1",
"description": "Bug fixes and stability improvements...",
"published_date": "2026-01-20T10:00:00Z",
"semver": { "major": 5, "minor": 0, "patch": 1 }
}
},
"status": 200
}
Example Response (no updates)
{
"message": {
"uuid": "bd7f0db0-d094-48ca-b883-7019484ce71f",
"last_check_date": "2026-04-14T15:28:41.347387+00:00",
"current_version": "v5.0.0",
"last_available_major": {},
"last_available_minor": {},
"last_available_patch": {}
},
"status": 200
}
Example Response (version not found)
{
"message": "Unable to determine current Wazuh version.",
"status": 500
}
Status Codes
| Code | Description |
|---|---|
| 200 | Version check completed (may include updates or empty) |
| 500 | Unable to determine version or internal error |
| 502 | CTI API returned an error |
Note: Categories with no available updates are represented as empty objects `{}`.
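A client consuming this response only needs to distinguish empty objects from populated ones. The sketch below is illustrative (field names follow the example responses above):

```python
# Illustrative consumer of the /version/check response: empty objects {}
# mean no update is available in that category.
def available_updates(message):
    """Return {"major"/"minor"/"patch": tag} for non-empty entries."""
    updates = {}
    for category in ("major", "minor", "patch"):
        entry = message.get(f"last_available_{category}") or {}
        if entry:                              # {} -> nothing to report
            updates[category] = entry["tag"]
    return updates
```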
Documentation Maintenance
To maintain technical consistency, any modification, addition or removal
of endpoints in the REST API source code must be reflected in the openapi.yml
specification and this api.md reference guide.
Sigma Rules
Wazuh uses the Sigma rule format as the standard for Security Analytics detection rules. The Content Manager plugin accepts rules that follow the Sigma specification, extended with Wazuh-specific blocks for metadata, threat intelligence mapping, and compliance coverage.
This page describes the supported rule format, including Wazuh extensions, validation behavior, and examples.
For the full Sigma standard, see the Sigma Rules Specification.
Standard Sigma Fields
The following standard Sigma fields are supported in rule payloads:
| Field | Type | Description |
|---|---|---|
sigma_id | String | Original Sigma rule identifier (UUID) |
status | String | Rule maturity status (experimental, test, stable) |
level | String | Alert severity (informational, low, medium, high, critical) |
logsource | Object | Log source definition (product, category, service, definition) |
detection | Object | Detection logic with condition and selection fields |
tags | Array | Categorization tags (e.g., attack.initial-access) |
falsepositives | Array | Known sources of false positives |
fields | Array | Fields of interest that should be included in the output |
related | Array | Related rules, each with id and type |
enabled | Boolean | Whether the rule is active |
Wazuh Extensions
Wazuh extends the standard Sigma format with three additional blocks aligned with the Wazuh Common Schema (WCS):
- `metadata` — Authorship and lifecycle information.
- `mitre` — MITRE ATT&CK threat intelligence mapping.
- `compliance` — Compliance framework mapping.
These blocks are optional. Existing rules without them continue to work without modification.
Metadata Block
The metadata block contains authorship and lifecycle fields. All fields are optional.
| Field | Type | Description |
|---|---|---|
title | String | Human-readable rule title |
author | String | Rule author |
date | String | Creation date (ISO 8601) |
modified | String | Last modification date (ISO 8601) |
description | String | Brief description of what the rule detects |
references | Array | Reference URLs (documentation, advisories, etc.) |
documentation | String | Documentation text or URL |
supports | Array | Supported platforms or contexts |
Note: When creating or updating rules via the API, `title` is required within `metadata`. The `date` and `modified` fields are automatically managed by the server.
Example
{
"metadata": {
"title": "Suspicious SSH Login from IPv6",
"author": "Security Team",
"description": "Detects SSH login attempts from known malicious IPv6 ranges.",
"references": [
"https://example.com/advisory/2025-001"
],
"documentation": ""
}
}
MITRE ATT&CK Block
The mitre block maps a rule to MITRE ATT&CK tactics, techniques, and subtechniques. Each field is an array of ID strings.
| Field | Type | Description |
|---|---|---|
tactic | Array | MITRE tactic IDs (e.g., TA0002, TA0005) |
technique | Array | MITRE technique IDs (e.g., T1059, T1562) |
subtechnique | Array | MITRE subtechnique IDs (e.g., T1059.001) |
During indexing, this block is mapped to the flat WCS mitre format by extracting the ID arrays:
{
"mitre": {
"tactic": ["TA0002", "TA0005"],
"technique": ["T1059", "T1562"],
"subtechnique": ["T1059.001"]
}
}
Example
{
"mitre": {
"tactic": ["TA0002", "TA0005"],
"technique": ["T1059", "T1562"],
"subtechnique": ["T1059.001"]
}
}
Compliance Block
The compliance block maps a rule to one or more compliance frameworks. Each key is a normalized framework identifier and its value is an array of requirement ID strings.
Supported frameworks
| Key | Framework |
|---|---|
gdpr | GDPR |
pci_dss | PCI DSS |
cmmc | CMMC |
nist_800_53 | NIST 800-53 |
nist_800_171 | NIST 800-171 |
hipaa | HIPAA |
iso_27001 | ISO 27001 |
nis2 | NIS2 |
tsc | TSC |
fedramp | FedRAMP |
During indexing, this block is mapped to the flat WCS compliance format:
{
"compliance": {
"pci_dss": ["2.2.1", "6.3.3"],
"gdpr": ["Art. 32", "Art. 25"]
}
}
Example
{
"compliance": {
"gdpr": ["Art. 32", "Art. 25"],
"pci_dss": ["2.2.1", "6.3.3"],
"cmmc": ["AC.1.001"],
"nist_800_53": ["AC-3", "AU-2"],
"hipaa": ["164.312(a)(1)"]
}
}
IPv6 Support
Detection conditions support IPv6 addresses in the following formats:
| Format | Example |
|---|---|
| Standard | 2001:0db8:85a3:0000:0000:8a2e:0370:7334 |
| Compressed | 2001:db8:85a3::8a2e:370:7334 |
| CIDR | 2001:db8::/32 |
Example detection with IPv6
{
"detection": {
"selection": {
"source.ip": [
"2001:db8:bad::/48",
"fe80::1234:5678:90ab:cdef"
]
},
"condition": "selection"
}
}
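Conceptually, matching an event IP against these values involves exact-address equality for the standard and compressed forms plus containment for CIDR ranges. The following sketch uses Python's standard `ipaddress` module to mirror the table above; the engine's real matcher may differ.

```python
# Conceptual check of the three IPv6 formats: equality for standard and
# compressed notations, containment for CIDR ranges.
import ipaddress

def ip_matches(event_ip, rule_values):
    addr = ipaddress.ip_address(event_ip)
    for value in rule_values:
        if "/" in value:                                  # CIDR form
            if addr in ipaddress.ip_network(value, strict=False):
                return True
        elif addr == ipaddress.ip_address(value):         # standard/compressed
            return True
    return False
```

Note that the standard and compressed notations denote the same address, so either form in a rule matches either form in an event.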
WCS Field Validation
All fields referenced in the detection stanza are validated against the Wazuh Common Schema (WCS). Rules that reference unknown fields are rejected with a structured error response identifying the offending field names.
This ensures that detection logic only targets fields that exist in the indexed data, preventing silent mismatches where a rule appears active but never triggers because it queries a non-existent field.
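The validation described above can be pictured as follows. This is a conceptual sketch, not the plugin's implementation, and the structured error shape is an assumption: collect the field names referenced by selection blocks and reject any not present in the schema.

```python
# Conceptual WCS field validation: selection blocks map field names to
# values; unknown field names produce a structured rejection.
def validate_detection_fields(detection, known_fields):
    unknown = set()
    for key, value in detection.items():
        if key == "condition":
            continue                           # condition references selections
        if isinstance(value, dict):            # selection: {field: value, ...}
            unknown.update(f for f in value if f not in known_fields)
    if unknown:
        return {"status": 400,
                "message": {"unknown_fields": sorted(unknown)}}
    return {"status": 200}
```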
Complete Example
The following JSON payload demonstrates a rule using all supported blocks, suitable for the Create Rule API endpoint:
{
"integration": "6b7b7645-00da-44d0-a74b-cffa7911e89c",
"resource": {
"metadata": {
"title": "Python SQL Exceptions",
"author": "Thomas Patzke",
"description": "Detects SQL exceptions in Python applications according to PEP 249."
},
"sigma_id": "19aefed0-ffd4-47dc-a7fc-f8b1425e84f9",
"status": "stable",
"level": "medium",
"enabled": true,
"tags": [
"attack.initial-access",
"attack.t1190"
],
"logsource": {
"category": "application",
"product": "python"
},
"detection": {
"keywords": [
"DataError",
"IntegrityError",
"ProgrammingError",
"OperationalError"
],
"condition": "keywords"
},
"falsepositives": [
"Application bugs"
],
"mitre": {
"tactic": ["TA0001"],
"technique": ["T1190"],
"subtechnique": []
},
"compliance": {
"pci_dss": ["6.5.1"],
"gdpr": ["Art. 32"]
}
}
}
Backward Compatibility
All Wazuh extension blocks (metadata, mitre, compliance) are optional. Rules that do not include these blocks continue to parse and function correctly. This ensures full backward compatibility with existing rules and standard Sigma rules that do not use Wazuh extensions.
Rule Testing Workflow
This guide explains how to create, test, and promote custom detection rules using the Content Manager’s logtest feature. The logtest endpoint lets you validate that your rules and decoders correctly detect events before deploying them to production.
Overview
The rule testing workflow follows the Content Manager’s space promotion chain:
Draft → Test → Custom
- Draft: Create your integration, decoders, and rules.
- Test: Promote to the test space and validate with logtest.
- Custom: Once validated, promote to custom for production use.
Logtest sends a raw log event through the full detection pipeline — the Wazuh Engine normalizes the event, and the Security Analytics Plugin (SAP) evaluates your Sigma rules against the normalized output. The combined result shows exactly what was decoded and which rules matched.
Logtest supports both the test and standard spaces. Use test for validating draft content, and standard for testing against production rules.
Step 1: Create an Integration
An integration groups related decoders, rules, and KVDBs together. Start by creating one:
curl -sk -u admin:admin -X POST \
"https://localhost:9200/_plugins/_content_manager/integrations" \
-H 'Content-Type: application/json' \
-d '{
"resource": {
"category": "endpoint-security",
"enabled": true,
"metadata": {
"title": "SSH Brute Force Detection",
"author": "Security Team",
"description": "Detects SSH brute force attempts from auth logs.",
"references": ["https://attack.mitre.org/techniques/T1110/"]
}
}
}'
The response returns the integration ID:
{
"message": "a0b448c8-3d3c-47d4-b7b9-cbc3c175f509",
"status": 201
}
Save this ID — you’ll need it for creating rules and running logtest.
Step 2: Create a Decoder
Decoders tell the Engine how to parse and normalize raw log events. Link a decoder to your integration:
curl -sk -u admin:admin -X POST \
"https://localhost:9200/_plugins/_content_manager/decoders" \
-H 'Content-Type: application/json' \
-d '{
"integration": "a0b448c8-3d3c-47d4-b7b9-cbc3c175f509",
"resource": {
"enabled": true,
"metadata": {
"title": "SSH Auth Log Decoder",
"author": "Security Team",
"description": "Parses sshd authentication events from auth.log.",
"module": "sshd",
"references": ["https://wazuh.com"],
"versions": ["Wazuh 5.*"]
},
"name": "decoder/sshd-auth/0",
"check": [
{"tmp_json.event.original": "regex_match(sshd\\\\[)"}
],
"normalize": [
{
"map": [
{"event.category": "[\"authentication\"]"},
{"event.kind": "event"},
{"@timestamp": "get_date()"}
]
}
]
}
}'
Step 3: Create a Rule
Rules use the Sigma format to define detection logic. Link a rule to the same integration:
curl -sk -u admin:admin -X POST \
"https://localhost:9200/_plugins/_content_manager/rules" \
-H 'Content-Type: application/json' \
-d '{
"integration": "a0b448c8-3d3c-47d4-b7b9-cbc3c175f509",
"resource": {
"metadata": {
"title": "SSH Failed Password Attempt",
"description": "Detects failed SSH password authentication attempts.",
"author": "Security Team",
"references": ["https://attack.mitre.org/techniques/T1110/001/"]
},
"sigma_id": "ssh-failed-password",
"enabled": true,
"status": "experimental",
"logsource": {
"product": "linux",
"category": "authentication"
},
"detection": {
"condition": "selection",
"selection": {
"event.category": "authentication",
"event.outcome": "failure"
}
},
"level": "medium",
"tags": ["attack.credential-access", "attack.t1110.001"],
"mitre": {
"tactic": ["TA0006"],
"technique": ["T1110"],
"subtechnique": ["T1110.001"]
}
}
}'
Step 4: Promote to Test Space
Before running logtest, your content must be in the test space.
# 1. Preview what will be promoted
curl -sk -u admin:admin \
"https://localhost:9200/_plugins/_content_manager/promote?space=draft"
# 2. Execute the promotion (use the changes from the preview response)
curl -sk -u admin:admin -X POST \
"https://localhost:9200/_plugins/_content_manager/promote" \
-H 'Content-Type: application/json' \
-d '{
"space": "draft",
"changes": { ... }
}'
Step 5: Run Logtest
Send a sample event to validate your detection pipeline:
curl -sk -u admin:admin -X POST \
"https://localhost:9200/_plugins/_content_manager/logtest" \
-H 'Content-Type: application/json' \
-d '{
"integration": "a0b448c8-3d3c-47d4-b7b9-cbc3c175f509",
"space": "test",
"queue": 1,
"location": "/var/log/auth.log",
"event": "Dec 19 12:00:00 host sshd[12345]: Failed password for root from 10.0.0.1 port 54321 ssh2",
"trace_level": "ALL"
}'
Understanding the Response
The response has two sections:
normalization — Shows how the Engine decoded and normalized the event:
{
"normalization": {
"output": {
"event": {
"category": ["authentication"],
"kind": "event",
"outcome": "failure",
"original": "Dec 19 12:00:00 host sshd[12345]: Failed password for root from 10.0.0.1 port 54321 ssh2"
},
"source": { "ip": "10.0.0.1" },
"user": { "name": "root" }
},
"asset_traces": ["decoder/sshd-auth/0"],
"validation": { "valid": true, "errors": [] }
}
}
detection — Shows which Sigma rules matched the normalized event:
{
"detection": {
"status": "success",
"rules_evaluated": 1,
"rules_matched": 1,
"matches": [
{
"rule": {
"id": "85bba177-a2e9-4468-9d59-26f4798906c9",
"title": "SSH Failed Password Attempt",
"level": "medium",
"tags": ["attack.credential-access", "attack.t1110.001"]
},
"matched_conditions": [
"event.category matched 'authentication'",
"event.outcome matched 'failure'"
]
}
]
}
}
Trace Levels
The trace_level field controls how much detail the Engine returns:
| Level | Description |
|---|---|
NONE | Only the final normalized output. Use for quick checks. |
ASSET_ONLY | Output plus the list of decoders that matched (asset traces). |
ALL | Full trace including every decoder attempted. Use for debugging decoder issues. |
Step 6: Iterate
If the results aren’t what you expect:
- Decoder not matching? Check `asset_traces` — if your decoder isn’t listed, review the `check` conditions. Use `trace_level: ALL` to see which decoders were attempted.
- Rule not matching? Compare the normalized event fields with your rule’s `detection` block. Field names and values must match exactly (case-insensitive for strings).
- Unexpected matches? Review `matched_conditions` to understand why a rule triggered.
After making changes:
- Update the rule or decoder via `PUT` on the respective endpoint.
- Re-promote draft → test.
- Run logtest again.
Step 7: Promote to Custom
Once your rules are validated, promote from test to custom for production use:
# Preview
curl -sk -u admin:admin \
"https://localhost:9200/_plugins/_content_manager/promote?space=test"
# Execute
curl -sk -u admin:admin -X POST \
"https://localhost:9200/_plugins/_content_manager/promote" \
-H 'Content-Type: application/json' \
-d '{
"space": "test",
"changes": { ... }
}'
Content in the custom space is picked up by the Wazuh Engine and actively used for log processing.
Best Practices
Rule Design
- Start specific, broaden later. Begin with tight detection conditions and loosen them as you understand the log patterns. Overly broad rules generate noise.
- Use meaningful field names. Align your decoder’s `normalize` output with the Wazuh Common Schema (WCS) — e.g., `event.category`, `source.ip`, `user.name`.
- Set appropriate severity levels. Use `informational` for visibility rules, `low`/`medium` for suspicious activity, and `high`/`critical` only for confirmed threats or high-confidence detections.
- Add context to rules. Include `description`, `references`, `falsepositives`, and MITRE mappings. This helps analysts triage alerts and understand why a rule exists.
Testing Strategy
- Test with real log samples. Use actual log events from your environment, not fabricated examples. Real logs expose edge cases (encoding, missing fields, unexpected formats).
- Test positive AND negative cases. Verify that your rule matches what it should, and verify it does NOT match what it shouldn’t. Send benign events that look similar to confirm no false positives.
- Use `trace_level: ALL` when debugging. The full trace shows every decoder attempt, making it easy to spot why a particular decoder was or wasn’t selected.
- Test one change at a time. When iterating on rules or decoders, change one thing per cycle. This makes it clear what fixed (or broke) the detection.
Promotion Workflow
- Always preview before promoting. The promote preview shows exactly what will change. Review it to avoid promoting unintended modifications.
- Keep draft as your working space. Make all edits in draft. Never try to modify content directly in test or custom.
- Promote frequently in small batches. Smaller promotions are easier to validate and roll back. Avoid accumulating dozens of changes before testing.
- Validate in test before promoting to custom. The test space exists specifically for this purpose. Don’t skip it.
Split Endpoints: Normalization and Detection
In addition to the combined logtest endpoint, you can run normalization and detection as separate steps. This is useful for:
- Debugging decoders without noise from detection results.
- Testing multiple integrations against the same normalized event without re-running the Engine each time.
- Iterating on rules without waiting for normalization on each call.
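The second use case above — fanning one normalized event out to several integrations — can be sketched as a small loop. `post_json` here is a hypothetical HTTP helper (POST with a JSON body), not part of the plugin; the request shape follows the detection example below.

```python
# Hedged sketch of the split workflow: normalize once, then run detection
# for several integrations against the same normalized output.
DETECT_PATH = "/_plugins/_content_manager/logtest/detection"

def detect_across_integrations(post_json, normalized_output, integration_ids,
                               space="test"):
    results = {}
    for integration_id in integration_ids:
        body = {"space": space,
                "integration": integration_id,
                "input": normalized_output}    # reused -- no re-normalization
        results[integration_id] = post_json(DETECT_PATH, body)
    return results
```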
Normalization Only
curl -sk -u admin:admin -X POST \
"https://localhost:9200/_plugins/_content_manager/logtest/normalization" \
-H 'Content-Type: application/json' \
-d '{
"space": "test",
"queue": 1,
"location": "/var/log/auth.log",
"input": "Dec 19 12:00:00 host sshd[12345]: Failed password for root from 10.0.0.1 port 54321 ssh2",
"trace_level": "ALL"
}'
The response contains only the Engine’s normalized output (no detection section):
{
"status": 200,
"message": {
"output": {
"event": {
"category": ["authentication"],
"kind": "event",
"outcome": "failure"
},
"source": { "ip": "10.0.0.1" },
"user": { "name": "root" }
},
"asset_traces": ["decoder/sshd-auth/0"],
"validation": { "valid": true, "errors": [] }
}
}
Detection Only
Take the normalized event (the output object from normalization) and pass it as input along with the integration ID:
curl -sk -u admin:admin -X POST \
"https://localhost:9200/_plugins/_content_manager/logtest/detection" \
-H 'Content-Type: application/json' \
-d '{
"space": "test",
"integration": "a0b448c8-3d3c-47d4-b7b9-cbc3c175f509",
"input": {
"event": {
"category": ["authentication"],
"kind": "event",
"outcome": "failure"
},
"source": { "ip": "10.0.0.1" },
"user": { "name": "root" }
}
}'
The response contains only the detection result:
{
"status": 200,
"message": {
"status": "success",
"rules_evaluated": 1,
"rules_matched": 1,
"matches": [
{
"rule": {
"id": "85bba177-a2e9-4468-9d59-26f4798906c9",
"title": "SSH Failed Password Attempt",
"level": "medium",
"tags": ["attack.credential-access", "attack.t1110.001"]
},
"matched_conditions": [
"event.category matched 'authentication'",
"event.outcome matched 'failure'"
]
}
]
}
}
Quick Reference
| Action | Endpoint | Method |
|---|---|---|
| Create integration | /_plugins/_content_manager/integrations | POST |
| Create decoder | /_plugins/_content_manager/decoders | POST |
| Create rule | /_plugins/_content_manager/rules | POST |
| Update rule | /_plugins/_content_manager/rules/{id} | PUT |
| Preview promotion | /_plugins/_content_manager/promote?space={space} | GET |
| Execute promotion | /_plugins/_content_manager/promote | POST |
| Run logtest (combined) | /_plugins/_content_manager/logtest | POST |
| Normalization only | /_plugins/_content_manager/logtest/normalization | POST |
| Detection only | /_plugins/_content_manager/logtest/detection | POST |
For full endpoint details, see the API Reference. For Sigma rule format details, see Sigma Rules.
Troubleshooting
Common issues and diagnostic procedures for the Content Manager plugin.
Common Errors
“Error communicating with Engine socket: Connection refused”
The Wazuh Engine is not running or the Unix socket is not accessible.
Resolution:
- Check that the socket file exists:

  ```bash
  ls -la /usr/share/wazuh-indexer/engine/sockets/engine-api.sock
  ```

- Ensure the Wazuh Indexer process has permission to access the socket file.
“Token not found”
No CTI subscription has been registered. The Content Manager cannot sync content without a valid subscription token.
Resolution:
1. Check the current subscription status:

   ```bash
   curl -sk -u admin:admin \
     "https://192.168.56.6:9200/_plugins/_content_manager/subscription"
   ```

2. If the response is `{"message":"Token not found","status":404}`, register a subscription using a device code from the Wazuh CTI Console:

   ```bash
   curl -sk -u admin:admin -X POST \
     "https://192.168.56.6:9200/_plugins/_content_manager/subscription" \
     -H 'Content-Type: application/json' \
     -d '{
       "device_code": "<your-device-code>",
       "client_id": "<your-client-id>",
       "expires_in": 900,
       "interval": 5
     }'
   ```
Sync Not Running
Content is not being updated despite having a valid subscription.
Diagnosis:
1. Check consumer state and offsets:

   ```bash
   curl -sk -u admin:admin \
     "https://192.168.56.6:9200/.wazuh-cti-consumers/_search?pretty"
   ```

   If `local_offset` equals `remote_offset`, the content is already up-to-date.

2. Check the sync job is registered and enabled:

   ```bash
   curl -sk -u admin:admin \
     "https://192.168.56.6:9200/.wazuh-content-manager-jobs/_search?pretty"
   ```

   Verify the job has `"enabled": true` and the schedule interval matches your configuration.

3. Check if scheduled sync is enabled in `opensearch.yml`:

   ```yaml
   plugins.content_manager.catalog.update_on_schedule: true
   ```

4. Trigger a manual sync to test:

   ```bash
   curl -sk -u admin:admin -X POST \
     "https://192.168.56.6:9200/_plugins/_content_manager/update"
   ```
Socket File Not Found
The Unix socket used for Engine communication does not exist.
Expected path: /usr/share/wazuh-indexer/engine/sockets/engine-api.sock
Resolution:
- Verify the Wazuh Engine is installed and running.
- Check the Engine configuration for the socket path.
- Ensure the `engine/sockets/` directory exists under the Wazuh Indexer installation path.
Diagnostic Commands
Check Consumer State
View synchronization state for all content contexts:
```bash
curl -sk -u admin:admin \
  "https://192.168.56.6:9200/.wazuh-cti-consumers/_search?pretty"
```

Example output:

```json
{
  "hits": {
    "hits": [
      {
        "_id": "t1-ruleset-5_public-ruleset-5",
        "_source": {
          "name": "public-ruleset-5",
          "context": "t1-ruleset-5",
          "status": "idle",
          "local_offset": 3932,
          "remote_offset": 3932,
          "snapshot_link": "https://api.pre.cloud.wazuh.com/store/contexts/t1-ruleset-5/consumers/public-ruleset-5/168_1776070234.zip"
        }
      }
    ]
  }
}
```
- `status == idle`: Sync is complete; content is safe to read.
- `status == updating`: Sync is in progress. If this persists after a sync should have finished, the previous sync may have failed mid-cycle.
- `local_offset == remote_offset`: Content is up-to-date.
- `local_offset < remote_offset`: Content needs updating.
- `local_offset == 0`: Content has never been synced (snapshot required).
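The offset rules above can be expressed as a small Python sketch (illustrative only; field names follow the example consumer document):

```python
# Sketch: classify a consumer-state document from .wazuh-cti-consumers
# using the offset rules described above. Not part of the plugin.

def classify_consumer(doc: dict) -> str:
    local = doc.get("local_offset", 0)
    remote = doc.get("remote_offset", 0)
    if local == 0:
        return "never synced (snapshot required)"
    if local < remote:
        return "needs update"
    return "up-to-date"

# The _source of the example document above:
state = {"status": "idle", "local_offset": 3932, "remote_offset": 3932}
print(classify_consumer(state))  # up-to-date
```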
Check Sync Job
View the periodic sync job configuration:
```bash
curl -sk -u admin:admin \
  "https://192.168.56.6:9200/.wazuh-content-manager-jobs/_search?pretty"
```
Count Content Documents
Check how many rules, decoders, etc. have been indexed:
```bash
# Rules
curl -sk -u admin:admin "https://192.168.56.6:9200/wazuh-threatintel-rules/_count?pretty"

# Decoders
curl -sk -u admin:admin "https://192.168.56.6:9200/wazuh-threatintel-decoders/_count?pretty"

# Integrations
curl -sk -u admin:admin "https://192.168.56.6:9200/wazuh-threatintel-integrations/_count?pretty"

# KVDBs
curl -sk -u admin:admin "https://192.168.56.6:9200/wazuh-threatintel-kvdbs/_count?pretty"

# IoCs
curl -sk -u admin:admin "https://192.168.56.6:9200/wazuh-threatintel-enrichments/_count?pretty"
```
Log Monitoring
Content Manager logs are part of the Wazuh Indexer logs. Use the following patterns to filter relevant entries:
```bash
# General Content Manager activity
grep -i "content.manager\|ContentManager\|CatalogSync" \
  /var/log/wazuh-indexer/wazuh-indexer.log

# Sync job execution
grep -i "CatalogSyncJob\|consumer-sync" \
  /var/log/wazuh-indexer/wazuh-indexer.log

# CTI API communication
grep -i "cti\|CTIClient" \
  /var/log/wazuh-indexer/wazuh-indexer.log

# Engine socket communication
grep -i "engine.*socket\|EngineClient" \
  /var/log/wazuh-indexer/wazuh-indexer.log

# Errors only
grep -i "ERROR.*content.manager" \
  /var/log/wazuh-indexer/wazuh-indexer.log
```
Resetting Content
To force a full re-sync from snapshot, delete the consumer state document and restart the indexer:
```bash
# Delete all consumer state documents (forces snapshot on next sync).
# Note: the document DELETE API does not accept wildcards, so use delete_by_query.
curl -sk -u admin:admin -X POST \
  "https://192.168.56.6:9200/.wazuh-cti-consumers/_delete_by_query" \
  -H 'Content-Type: application/json' \
  -d '{ "query": { "match_all": {} } }'

# Restart indexer to trigger sync
systemctl restart wazuh-indexer
```
Warning: This will re-download and re-index all content from scratch. Use only when troubleshooting persistent sync issues.
Wazuh Indexer Reporting plugin
The wazuh-indexer-reporting plugin provides functionality for generating customizable reports based on data stored in the Wazuh Indexer. Most of this data originates from the Wazuh Manager, which collects and analyzes security events from registered agents. The plugin supports both scheduled and on‑demand report generation. Reports can be delivered via email or downloaded on demand through the Wazuh Dashboard or the API. Users can create, read, update, and delete custom reports. Access to these actions is governed by the Wazuh Indexer’s role‑based access control (RBAC) permissions. This plugin is built on top of OpenSearch’s native Reporting and Notifications plugins.
Usage
Configuring the email notifications channel
In Wazuh Dashboard, go to Notifications > Channels and click on Create channel:

- Fill in a name (e.g. `Email notifications`).
- Select Email as Channel Type.
- Check SMTP sender as Sender Type.
- Click on Create SMTP sender.
  - Fill in a name (e.g. `mailpit`).
  - Fill in an email address.
  - In Host, type `mailpit` (adapt this to your SMTP server domain name).
  - For port, type `1025` (adapt this to your SMTP server settings).
  - Select None as Encryption method.
  - Click on Create.

- Click on Create recipient group.
  - Fill in a name (e.g. `email-notifications-recipient-group`).
  - On Emails, type any email.
  - Click on Create.

The fields should now be filled in as follows:

- Click on Send test message to validate the configuration; a green confirmation message should pop up.
- Finally, click on Create.
More information on how to configure the email notifications channel can be found in the OpenSearch documentation.
Creating a new report
For more information on how to create reports, please refer to the OpenSearch documentation. The reporting plugin also allows you to create notifications, following the behaviour of OpenSearch's Notifications plugin.
Generate and download a report
To generate a report, you must first have defined its report settings. Once the report is configured, you can generate it by clicking the "Generate Report" button. This is only available for "On demand" report definitions, as scheduled reports are generated automatically. The report will be processed and made available for download in the Reports section under Explore -> Report.
You can also create a CSV or XLSX report without a report definition by saving a search in Explore -> Discover. Make sure an index pattern is available.
Generate a report definition
Before creating a report definition, you must have generated and saved a Dashboard, a Visualization, a Search, or a Notebook. You can then create the definition in the Explore -> Reporting section, choosing the intended configuration. This generates PDF/PNG reports, or CSV/XLSX reports when a saved search is selected.
Managing permissions on reporting via RBAC
The Reporting plugin uses the Wazuh Indexer RBAC (role-based access control) system to manage permissions. This means that users must have the appropriate roles assigned to them in order to create, read, update, or delete reports. The roles can be managed through the Wazuh Dashboard Index Management -> Security -> Roles section. The following permissions are available for the Reporting plugin:
1. `cluster:admin/opendistro/reports/definition/create`
2. `cluster:admin/opendistro/reports/definition/update`
3. `cluster:admin/opendistro/reports/definition/on_demand`
4. `cluster:admin/opendistro/reports/definition/delete`
5. `cluster:admin/opendistro/reports/definition/get`
6. `cluster:admin/opendistro/reports/definition/list`
7. `cluster:admin/opendistro/reports/instance/list`
8. `cluster:admin/opendistro/reports/instance/get`
9. `cluster:admin/opendistro/reports/menu/download`
There are already some predefined roles that can be used to manage permissions on reporting:
- `reports_read_access`: permissions 5 to 9.
- `reports_instances_read_access`: permissions 7 to 9.
- `reports_full_access`: permissions 1 to 9.
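The mapping between the predefined roles and the numbered permission list can be sketched in Python (illustrative only; the real role definitions live in the Security plugin configuration):

```python
# Sketch: predefined reporting roles expressed as subsets of the
# numbered permission list above.

PERMISSIONS = [
    "cluster:admin/opendistro/reports/definition/create",     # 1
    "cluster:admin/opendistro/reports/definition/update",     # 2
    "cluster:admin/opendistro/reports/definition/on_demand",  # 3
    "cluster:admin/opendistro/reports/definition/delete",     # 4
    "cluster:admin/opendistro/reports/definition/get",        # 5
    "cluster:admin/opendistro/reports/definition/list",       # 6
    "cluster:admin/opendistro/reports/instance/list",         # 7
    "cluster:admin/opendistro/reports/instance/get",          # 8
    "cluster:admin/opendistro/reports/menu/download",         # 9
]

ROLES = {
    "reports_read_access": PERMISSIONS[4:9],            # permissions 5-9
    "reports_instances_read_access": PERMISSIONS[6:9],  # permissions 7-9
    "reports_full_access": PERMISSIONS[0:9],            # permissions 1-9
}

def allowed(role: str, permission: str) -> bool:
    return permission in ROLES.get(role, [])

print(allowed("reports_read_access", PERMISSIONS[0]))  # False: no create permission
```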
More information on how to modify and map roles on the Wazuh Indexer can be found in the Wazuh Indexer documentation.
Security Analytics
The Security Analytics Plugin (SAP) is a fork of the OpenSearch Security Analytics plugin adapted for Wazuh. It evaluates incoming events against Sigma detection rules, creates findings when rules match, and correlates related findings across detectors.
SAP runs inside the Wazuh Indexer and operates as an OpenSearch plugin, using the standard OpenSearch transport layer for all internal communication.
Detector constraints
| Constraint | Value | Description |
|---|---|---|
| Max rules per detector | 100 | Each detector input can reference at most 100 rules (custom or pre-packaged). Requests that exceed this limit are rejected with HTTP 400. |
This limit is enforced at the transport layer (TransportIndexDetectorAction) and applies to all detector creation and update paths, including inter-plugin calls from the Content Manager.
Wazuh enriched findings
What is a finding?
A finding is a record that a monitored event matched a Sigma detection rule. SAP creates one finding per matching event and stores it in the .opensearch-sap-{category}-findings-* data stream. Each finding contains:
| Field | Description |
|---|---|
| `id` | Unique finding identifier |
| `detector_id` | The detector that produced the finding |
| `related_doc_ids` | IDs of the source documents that triggered the match |
| `queries` | The Sigma rule(s) that matched |
| `index` | The source index where the triggering event lives |
| `timestamp` | When the finding was created |
Raw findings contain only identifiers — they do not embed the triggering event payload or rule metadata.
What is an enriched finding?
An enriched finding is an augmented version of a raw SAP finding. Because the Wazuh Dashboard needs the full event payload and rule context to render alert details, WazuhEnrichedFindingService enriches each finding with:
- The full triggering event source (fetched from the source index by document ID)
- Rule metadata: name, severity level, compliance mappings, MITRE ATT&CK tags
Enriched findings are written to wazuh-findings-v5-{category}*, where {category} is derived from the wazuh.integration.category field in the triggering event.
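How the pieces fit together can be sketched in Python. The exact document schema here is an assumption for illustration; field names follow the tables above:

```python
# Sketch: assemble an enriched finding from a raw finding, the triggering
# event, and the rule metadata. Illustrative only; the real assembly is
# done in Java by WazuhEnrichedFindingService.

def build_enriched(finding: dict, event: dict, rule_meta: dict):
    """Return (target_index, enriched_document)."""
    category = event["wazuh"]["integration"]["category"]
    target = f"wazuh-findings-v5-{category}"
    enriched = {
        **finding,          # id, detector_id, related_doc_ids, ...
        "event": event,     # full triggering event source
        "rule": rule_meta,  # name, severity level, compliance, MITRE tags
    }
    return target, enriched

finding = {"id": "f-1", "detector_id": "d-1", "related_doc_ids": ["doc-1"]}
event = {"wazuh": {"integration": {"category": "linux"}}, "message": "sshd: failed"}
rule = {"name": "SSH Failed Password Attempt", "level": "medium"}

index, doc = build_enriched(finding, event, rule)
print(index)  # wazuh-findings-v5-linux
```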
How findings are generated (high level)
The following steps happen for every event that matches a detection rule:
- A Wazuh Manager sends an event to the Wazuh Indexer. The event is indexed in the monitored data stream.
- SAP’s Alerting monitor evaluates the event against all active Sigma rules for the configured log category.
- On a match, SAP creates a raw finding and fires the `SUBSCRIBE_FINDINGS_ACTION` transport action.
- `TransportCorrelateFindingAction` receives the action, runs the correlation engine, and calls `WazuhEnrichedFindingService.enrich(finding)`.
- The service asynchronously fetches the triggering event source and the matching rule's metadata, assembles the enriched document, and bulk-indexes it into `wazuh-findings-v5-{category}*`.
The enrichment step is fire-and-forget: it never blocks the SAP write path and failures are logged at WARN level without propagating to the caller.
See Architecture for the low-level implementation details.
Architecture
Enrichment pipeline
When SAP produces a finding, WazuhEnrichedFindingService runs an asynchronous enrichment chain that fetches the triggering event and the matching rule’s metadata, assembles an enriched document, and bulk-indexes it into wazuh-findings-v5-{category}*.
The complete flow is shown in the sequence diagram below:
```mermaid
sequenceDiagram
    participant A as Wazuh Manager
    participant I as Wazuh Indexer
    participant SAP as Security Analytics Plugin
    participant TC as TransportCorrelateFindingAction
    participant WS as WazuhEnrichedFindingService
    participant SI as Source Index
    participant RI as Rules Index
    participant WF as wazuh-findings-v5-{category}*
    A->>I: Ingest event
    I->>SAP: Monitor evaluates event against Sigma rules
    SAP->>SAP: Rule matches → create raw finding
    SAP->>TC: SUBSCRIBE_FINDINGS_ACTION
    TC->>WS: enrich(finding)
    WS->>WS: Add to findingsQueue
    WS->>WS: Acquire semaphore permit (max 50 in-flight)
    WS->>SI: GetRequest (triggering event by doc ID)
    SI-->>WS: Event source map
    WS->>WS: resolveCategory(wazuh.integration.category)
    alt Rule metadata cache hit
        WS->>WS: Read from ruleMetadataCache
    else Cache miss
        WS->>RI: MultiGetRequest (pre-packaged + custom rules indices)
        RI-->>WS: Rule metadata
        WS->>WS: Store in ruleMetadataCache
    end
    WS->>WS: buildAndIndex (assemble enriched document)
    WS->>WS: Add to pendingRequests queue
    alt Batch full (100 items)
        WS->>WF: client.bulk (stashed thread context)
    else Periodic flush (every 5 s)
        WS->>WF: client.bulk (stashed thread context)
    end
    WS->>WS: Release semaphore permit
```
Implementation details
Fire-and-forget execution
WazuhEnrichedFindingService.enrich() returns immediately after adding the finding to the internal queue. All network I/O and document assembly happen on async transport threads. Failures are logged at WARN level and never surface to the SAP write path.
Bounded concurrency
A Semaphore with MAX_IN_FLIGHT permits limits how many enrichment chains run simultaneously. Findings that arrive while all permits are held are queued in a ConcurrentLinkedQueue and processed as permits become available. This prevents transport-layer overload on resource-constrained nodes.
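The permit-plus-queue pattern can be sketched in Python. This is a simplified, synchronous illustration (the real service releases each permit only when the async I/O chain completes, and `MAX_IN_FLIGHT` is 50):

```python
# Sketch: bounded concurrency with a semaphore and an overflow queue.
import threading
from collections import deque

MAX_IN_FLIGHT = 3  # reduced for illustration; the real value is 50

permits = threading.Semaphore(MAX_IN_FLIGHT)
pending = deque()   # stand-in for the ConcurrentLinkedQueue of findings
processed = []      # findings whose enrichment chain has run

def enrich(finding: dict) -> None:
    """Queue the finding and start enrichment chains while permits are free."""
    pending.append(finding)
    drain()

def drain() -> None:
    # Findings wait in the queue whenever all permits are held.
    while pending and permits.acquire(blocking=False):
        finding = pending.popleft()
        try:
            processed.append({**finding, "enriched": True})
        finally:
            # In the real service the permit is released when the async
            # fetch/build/index chain finishes, not synchronously.
            permits.release()

for i in range(5):
    enrich({"id": f"f-{i}"})

print(len(processed))  # 5
```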
Rule metadata cache
Rule metadata (severity level, compliance mappings, MITRE ATT&CK tags) is stored in an in-memory ConcurrentHashMap keyed by rule ID. On the first finding for a given rule, the service issues a MultiGetRequest against both the pre-packaged rules index (opensearch-pre-packaged-rules) and the custom rules index (opensearch-custom-rules). Subsequent findings from the same detector reuse the cached entry, eliminating repeated round-trips.
The cache is unbounded and lives for the lifetime of the node. It is cleared only on plugin reload or node restart.
Bulk indexing
Index requests are accumulated in a ConcurrentLinkedQueue<IndexRequest>. Two flush paths drain this queue:
- Batch trigger: every time `pendingCount` reaches a multiple of `BULK_BATCH_SIZE`, the thread that incremented the counter calls `drainAndFlush()` immediately.
- Periodic flush: a fixed-delay scheduler fires `drainAndFlush()` every `FLUSH_INTERVAL` to drain any remainder that has not yet reached the batch threshold.
drainAndFlush() polls all pending requests into a single BulkRequest and calls client.bulk(). The call is wrapped in threadPool.getThreadContext().stashContext() so the security plugin accepts the request regardless of which thread pool the flush runs on.
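The two flush paths can be sketched with a plain list as the pending queue (`BULK_BATCH_SIZE` is reduced for illustration, and the periodic flush is invoked manually instead of by a scheduler):

```python
# Sketch: batch-trigger and periodic flush draining a pending queue.

BULK_BATCH_SIZE = 3  # the real value is 100

pending = []          # stand-in for ConcurrentLinkedQueue<IndexRequest>
flushed_batches = []  # each entry represents one client.bulk() call

def drain_and_flush() -> None:
    """Poll all pending requests into a single bulk call."""
    if not pending:
        return
    batch, pending[:] = list(pending), []
    flushed_batches.append(batch)  # stands in for client.bulk(...)

def add_request(req: dict) -> None:
    pending.append(req)
    if len(pending) % BULK_BATCH_SIZE == 0:  # batch trigger
        drain_and_flush()

for i in range(7):
    add_request({"doc": i})
drain_and_flush()  # periodic flush drains the remainder

print([len(b) for b in flushed_batches])  # [3, 3, 1]
```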
Category resolution
Before assembling an enriched document, the service reads wazuh.integration.category from the triggering event. If the field is absent or its value is not one of the recognized LOG_CATEGORY values, enrichment is skipped for that finding and a WARN log entry is emitted.
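A sketch of this resolution step (the `LOG_CATEGORY` values here are assumptions for illustration; the real list is defined inside the plugin):

```python
# Sketch: resolve the category field or skip enrichment.

LOG_CATEGORY = {"linux", "windows", "network", "cloud", "other"}  # assumed values

def resolve_category(event: dict):
    """Return the category, or None when enrichment must be skipped."""
    category = event.get("wazuh", {}).get("integration", {}).get("category")
    if category not in LOG_CATEGORY:
        # The real service emits a WARN log entry and skips the finding.
        return None
    return category

print(resolve_category({"wazuh": {"integration": {"category": "linux"}}}))  # linux
print(resolve_category({"message": "no category field"}))                    # None
```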
Technical parameters
| Parameter | Value | Description |
|---|---|---|
| `BULK_BATCH_SIZE` | 100 | Pending index requests accumulated before a batch-trigger flush |
| `MAX_IN_FLIGHT` | 50 | Maximum concurrent async enrichment chains |
| `FLUSH_INTERVAL` | 5 s | Interval between periodic flush runs |
| Target data stream | `wazuh-findings-v5-{category}*` | Data stream destination, resolved per finding |
| Rule metadata cache | Unbounded, in-memory | `ConcurrentHashMap`, keyed by rule ID, cleared on restart |
| Index operation type | `CREATE` | Prevents overwriting existing enriched findings |
System indices
| Index | Description |
|---|---|
| `.opensearch-sap-{category}-findings-*` | Raw SAP findings written by the Security Analytics Plugin |
| `.opensearch-pre-packaged-rules` | Wazuh-provided Sigma rules; source for rule metadata |
| `.opensearch-custom-rules` | User-created custom rules; fallback source for rule metadata |
| `wazuh-findings-v5-{category}*` | Enriched findings written by `WazuhEnrichedFindingService` |
Notifications
The Wazuh Indexer Notifications plugin is a specialized component designed to extend the Wazuh Indexer (based on OpenSearch) with multi-channel notification capabilities. It allows the system to send alerts, reports, and messages via Email (SMTP/SES), Slack, Microsoft Teams, Amazon Chime, Amazon SNS, and Custom Webhooks.
Key Capabilities
- Multi-channel delivery: Send notifications to Slack, Microsoft Teams, Chime, Email (SMTP and AWS SES), AWS SNS, and custom HTTP webhooks.
- Unified REST API: Create, update, delete, and query notification channel configurations through a single API surface at
/_plugins/_notifications/. - Test notifications: Validate channel configuration by sending a test message before relying on it for production alerts.
- Feature discovery: Other plugins can query supported notification features dynamically.
- RBAC integration: Access to notification configurations is governed by the Wazuh Indexer Security plugin, with backend-role–based filtering.
- Extensible architecture: The plugin uses a Service Provider Interface (SPI) pattern, making it straightforward to add new destination types.
Supported Channel Types
| Channel Type | Protocol | Description |
|---|---|---|
| `slack` | HTTPS (Webhook) | Posts messages to a Slack channel via an Incoming Webhook URL. |
| `chime` | HTTPS (Webhook) | Posts messages to an Amazon Chime room via a webhook URL. |
| `microsoft_teams` | HTTPS (Webhook) | Posts messages to a Microsoft Teams channel via a connector webhook. |
| `webhook` | HTTP/HTTPS | Sends a payload to an arbitrary HTTP endpoint with configurable method, headers, and URL. |
| `email` | SMTP / AWS SES | Sends email messages. Requires an `smtp_account` or `ses_account` configuration. |
| `sns` | AWS SNS SDK | Publishes a message to an Amazon SNS topic. |
| `smtp_account` | — | Defines SMTP server connection details (host, port, method, credentials). |
| `ses_account` | — | Defines AWS SES sending details (region, role ARN, from address). |
| `email_group` | — | Defines a group of email recipients for reuse across email-type channels. |
Dependencies
This plugin has a dependency on the wazuh-indexer-common-utils repository. It uses the Common Utils jar to provide shared utility functions and common components required for plugin functionality.
Version
The current plugin version is 5.0.0-alpha0 (see VERSION.json in the repository root).
Architecture
The Notifications plugin follows a layered architecture that separates destination definitions, transport logic, and plugin orchestration.
High-Level Architecture
The Notifications plugin runs inside the Wazuh Indexer and acts as a bridge between internal producers of alerts (such as Alerting, Reporting, and ISM) and external delivery services like SMTP servers, webhooks, and AWS services.
At a high level, the architecture is composed of three main parts:
1. Notification producers (inside the Indexer)

   Internal plugins such as Alerting, Reporting, ISM, and other Wazuh Indexer components generate alerts and events. When they need to send a notification (for example, a Slack message or an email), they call the Notifications plugin either through:

   - The REST API exposed by the Indexer, or
   - Internal transport actions.

2. Notifications plugin (inside the Indexer)

   The plugin itself is structured in several layers:

   - REST / Transport layer
     - Exposes the `/_plugins/_notifications/...` REST endpoints.
     - Receives requests to create, update, list, and delete notification channel configurations, send test notifications, and query features.
     - Validates requests and delegates the work to internal transport actions.
   - Security integration
     - Uses the Security plugin to validate permissions for each request.
     - When `filter_by_backend_roles` is enabled, it filters which notification configurations each user can see or use based on backend roles.
   - Core SPI layer
     - Defines common contracts and models such as `NotificationCore`, `BaseDestination`, and concrete destination types like `SlackDestination`, `SmtpDestination`, `SesDestination`, and `SnsDestination`.
     - Encapsulates message content (`MessageContent`) and delivery responses (`DestinationMessageResponse`).
   - Core implementation (transport logic)
     - Implements concrete transports:
       - `WebhookDestinationTransport` for Slack, Microsoft Teams, Chime, and generic webhooks (HTTP/HTTPS).
       - `SmtpDestinationTransport` for email via SMTP.
       - `SesDestinationTransport` for email via AWS SES.
       - `SnsDestinationTransport` for messages via AWS SNS.
     - Manages HTTP client pools, connection and socket timeouts, host deny lists, and HTTP response size limits.
     - Retrieves SMTP/SES/SNS credentials from the OpenSearch Keystore or other secure settings via a credential provider.
   - Persistence and configuration
     - Stores notification channel configurations in an internal index (for example, `.notifications`).
     - Uses `NotificationConfigIndex` and `ConfigIndexingActions` to create, read, update, and delete configurations.
     - Exposes internal metrics through the stats endpoint so operators can inspect request counts and error patterns.

3. External destination services (outside the Indexer)

   After the plugin resolves the destination type, the corresponding transport sends the message to:

   - SMTP servers (corporate mail, Gmail, etc.),
   - Webhook endpoints (Slack, Microsoft Teams, Amazon Chime, custom HTTP integrations),
   - AWS services such as SES and SNS.

   Once delivery is attempted, the plugin updates the notification status (for example, `sent` or `failed`) and returns the outcome to the caller (Alerting, Reporting, or the user calling the REST API).
Plugin Layers
1. Core SPI (core-spi)
The Service Provider Interface layer defines the contracts and models:
- `NotificationCore`: Interface that the core implementation must satisfy. Defines `sendMessage()` and related operations.
- `BaseDestination`: Abstract base class for all destination types. Subclasses include `SlackDestination`, `ChimeDestination`, `MicrosoftTeamsDestination`, `CustomWebhookDestination`, `SmtpDestination`, `SesDestination`, and `SnsDestination`.
- `MessageContent`: Encapsulates the notification message (title, text body, HTML body, attachment).
- `DestinationMessageResponse`: Standard response from any delivery attempt (status code, response body).
2. Core Implementation (core)
The Core layer provides the actual delivery logic:
- Transport Providers:
  - `WebhookDestinationTransport`: handles Slack, Chime, Microsoft Teams, and custom webhook delivery via HTTP POST.
  - `SmtpDestinationTransport`: sends emails using the SMTP protocol (supports STARTTLS/SSL).
  - `SesDestinationTransport`: sends emails via the AWS SES SDK.
  - `SnsDestinationTransport`: publishes messages to AWS SNS topics.
- HTTP Client Pool: `DestinationClientPool` manages a pool of `DestinationHttpClient` instances with configurable connection limits, timeouts, and host deny lists.
- Credential Management: The `CredentialsProvider` abstraction loads SMTP/SES/SNS credentials from the OpenSearch Keystore or from secure settings.
- Plugin Settings (`PluginSettings`): All tunable parameters (email size limits, connection pools, timeouts, allowed config types, host deny lists) are centralized here and dynamically updatable via cluster settings.
3. Notification Plugin (notifications)
The Plugin module ties everything together:
- REST Handlers: Map HTTP requests to internal transport actions (see API Reference).
- Transport Actions: Asynchronous action classes (`CreateNotificationConfigAction`, `DeleteNotificationConfigAction`, `GetNotificationConfigAction`, `UpdateNotificationConfigAction`, `SendNotificationAction`, `SendTestNotificationAction`, `GetPluginFeaturesAction`, `GetChannelListAction`, `PublishNotificationAction`).
- Index Operations: `NotificationConfigIndex` manages the `.notifications` index for storing channel configurations. `ConfigIndexingActions` handles create/read/update/delete operations on the index.
- Metrics: The `Metrics` class tracks counters for all API operations (create, update, delete, info, features, channels, send test).
- Security: `UserAccessManager` enforces RBAC based on backend roles when `filter_by_backend_roles` is enabled.
Send Notification Sequence
The following sequence describes the flow when an internal plugin (e.g., Alerting) sends a notification:
- The Alerting Monitor triggers an alert and calls the Notification plugin via the Transport Interface.
- The Security Plugin verifies the caller’s permissions.
- The notification is persisted in the notifications index with status `pending`/`in-progress`.
- The plugin resolves the destination type and delegates to the appropriate transport:
  - Email: `SmtpDestinationTransport` or `SesDestinationTransport` sends the email. On failure, it retries up to the configured limit.
  - Webhook: `WebhookDestinationTransport` sends the HTTP request to Slack, Chime, Teams, or a custom endpoint.
  - SNS: `SnsDestinationTransport` publishes to the SNS topic.
- The delivery status is returned and the notification record is updated.
- The Alerting plugin acknowledges the result and updates the alert status.
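The destination-resolution step of the sequence can be sketched as a dispatch table (a Python illustration; the transport names are the Java classes mentioned above, and the mapping for SES-backed email accounts is an assumption):

```python
# Sketch: resolve a channel's config_type to the transport that delivers it.

TRANSPORTS = {
    "slack": "WebhookDestinationTransport",
    "chime": "WebhookDestinationTransport",
    "microsoft_teams": "WebhookDestinationTransport",
    "webhook": "WebhookDestinationTransport",
    "email": "SmtpDestinationTransport",  # SesDestinationTransport for SES accounts
    "sns": "SnsDestinationTransport",
}

def resolve_transport(config_type: str) -> str:
    try:
        return TRANSPORTS[config_type]
    except KeyError:
        raise ValueError(f"unsupported channel type: {config_type}")

print(resolve_transport("slack"))  # WebhookDestinationTransport
```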
Configuration Management Sequence
- A user (via Dashboard or REST API) creates or updates a notification channel configuration.
- The request is routed to `NotificationConfigRestHandler`.
- The configuration is validated and persisted in the `.notifications` index.
- On retrieval, configurations can be filtered by type, name, status, and other fields.
Configuration
The Notifications plugin is configured through settings in opensearch.yml and cluster-level dynamic settings. The plugin also supports default values from a YAML configuration file bundled with the plugin.
Configuration Files
On startup, the plugin loads default settings from:
- Core defaults: `<opensearch-config>/opensearch-notifications-core/notifications-core.yml`
- Plugin defaults: `<opensearch-config>/opensearch-notifications/notifications.yml`
These files provide initial values that can be overridden by settings in opensearch.yml or through the cluster settings API.
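The override order can be expressed as a simple merge, where later sources win (a Python illustration of the precedence described above, with made-up values):

```python
# Sketch: bundled defaults < opensearch.yml < dynamic cluster settings.

def effective_settings(defaults: dict, yml: dict, cluster: dict) -> dict:
    merged = dict(defaults)
    merged.update(yml)      # opensearch.yml overrides bundled defaults
    merged.update(cluster)  # dynamic cluster settings override everything
    return merged

KEY = "opensearch.notifications.core.http.max_connections"
defaults = {KEY: 60}   # bundled notifications-core.yml
yml = {KEY: 80}        # opensearch.yml
cluster = {KEY: 100}   # cluster settings API

print(effective_settings(defaults, yml, cluster)[KEY])  # 100
```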
Core Settings (opensearch.notifications.core.*)
These settings control the core notification delivery engine.
Email Settings
| Setting | Type | Default | Description |
|---|---|---|---|
| `opensearch.notifications.core.email.size_limit` | Integer | 10000000 (10 MB) | Maximum total size of an email message including attachments. Minimum: 10000 (10 KB). |
| `opensearch.notifications.core.email.minimum_header_length` | Integer | 160 | Minimum header length for email messages. Used to calculate available body size. |
HTTP Connection Settings
| Setting | Type | Default | Description |
|---|---|---|---|
| `opensearch.notifications.core.http.max_connections` | Integer | 60 | Maximum number of simultaneous HTTP connections for webhooks. |
| `opensearch.notifications.core.http.max_connection_per_route` | Integer | 20 | Maximum HTTP connections per destination route. |
| `opensearch.notifications.core.http.connection_timeout` | Integer | 5000 | HTTP connection timeout in milliseconds. |
| `opensearch.notifications.core.http.socket_timeout` | Integer | 50000 | HTTP socket timeout in milliseconds. |
| `opensearch.notifications.core.http.host_deny_list` | List<String> | [] | List of denied hosts. Webhook destinations targeting these hosts will be blocked. Inherits from legacy `plugins.destination.host.deny_list` if not set. |
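The deny-list check can be sketched with the standard `ipaddress` module. This sketch assumes CIDR entries like those in the example configuration further below; it is an illustration, not the plugin's actual matching code:

```python
# Sketch: check a webhook host's IP against a CIDR deny list.
import ipaddress

def is_denied(host_ip: str, deny_list: list) -> bool:
    addr = ipaddress.ip_address(host_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in deny_list)

deny_list = ["10.0.0.0/8", "172.16.0.0/12"]
print(is_denied("10.1.2.3", deny_list))     # True: blocked
print(is_denied("203.0.113.7", deny_list))  # False: allowed
```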
General Core Settings
| Setting | Type | Default | Description |
|---|---|---|---|
| `opensearch.notifications.core.max_http_response_size` | Integer | Same as `http.max_content_length` | Maximum allowed HTTP response size in bytes. Protects against oversized responses from webhook endpoints. |
| `opensearch.notifications.core.allowed_config_types` | List<String> | `["slack", "chime", "microsoft_teams", "webhook", "email", "sns", "ses_account", "smtp_account", "email_group"]` | List of channel types that users are allowed to create. Remove a type from this list to disable it cluster-wide. |
| `opensearch.notifications.core.tooltip_support` | Boolean | true | Enable or disable tooltip support in the Dashboard UI. |
Plugin Settings (opensearch.notifications.*)
These settings control the plugin’s general behavior.
| Setting | Type | Default | Description |
|---|---|---|---|
| `opensearch.notifications.general.operation_timeout_ms` | Long | 60000 | Timeout in milliseconds for internal operations (index reads/writes). Minimum: 100. |
| `opensearch.notifications.general.default_items_query_count` | Integer | 100 | Default number of items returned per query when not specified. Minimum: 10. |
| `opensearch.notifications.general.filter_by_backend_roles` | Boolean | false | When true, users can only see notification configurations created by users who share the same backend role. Inherits from `plugins.alerting.filter_by_backend_roles` if not set. |
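The `filter_by_backend_roles` behaviour can be sketched as follows. The config document fields here are assumptions for illustration; the real filtering is done by `UserAccessManager`:

```python
# Sketch: a user sees a config only if they share at least one backend
# role with its creator (when filtering is enabled).

def visible_configs(configs: list, user_roles: set, filter_enabled: bool) -> list:
    if not filter_enabled:
        return configs
    return [c for c in configs if user_roles & set(c["creator_backend_roles"])]

configs = [
    {"name": "ops-slack", "creator_backend_roles": ["ops"]},
    {"name": "sec-email", "creator_backend_roles": ["security"]},
]

print([c["name"] for c in visible_configs(configs, {"ops"}, True)])  # ['ops-slack']
print(len(visible_configs(configs, {"ops"}, False)))                 # 2
```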
Email Destination Secure Settings
SMTP and SES credentials are stored securely in the OpenSearch Keystore rather than in plain text configuration files.
SMTP Account Credentials
To configure SMTP credentials for an email account named my_smtp_account:
```bash
# Add SMTP username
bin/opensearch-keystore add opensearch.notifications.core.email.my_smtp_account.username

# Add SMTP password
bin/opensearch-keystore add opensearch.notifications.core.email.my_smtp_account.password
```
The secure setting key prefix is opensearch.notifications.core.email.<account_name>.username and opensearch.notifications.core.email.<account_name>.password.
Note: Legacy settings from Alerting (
plugins.alerting.destination.email.<account_name>.*) are also supported as fallback.
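Building the two key names for an arbitrary account follows directly from the prefix pattern (a trivial Python illustration):

```python
# Sketch: construct the secure-setting key names for an SMTP account.

def smtp_secure_keys(account_name: str):
    prefix = f"opensearch.notifications.core.email.{account_name}"
    return f"{prefix}.username", f"{prefix}.password"

user_key, pass_key = smtp_secure_keys("my_smtp_account")
print(user_key)  # opensearch.notifications.core.email.my_smtp_account.username
```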
Example Configuration
A minimal opensearch.yml configuration for the Notifications plugin:
```yaml
# Notification core settings
opensearch.notifications.core.email.size_limit: 10000000
opensearch.notifications.core.http.max_connections: 60
opensearch.notifications.core.http.connection_timeout: 5000
opensearch.notifications.core.http.socket_timeout: 50000
opensearch.notifications.core.http.host_deny_list:
  - "10.0.0.0/8"
  - "172.16.0.0/12"

# Allowed channel types (remove a type to disable it)
opensearch.notifications.core.allowed_config_types:
  - slack
  - chime
  - microsoft_teams
  - webhook
  - email
  - sns
  - ses_account
  - smtp_account
  - email_group

# Plugin settings
opensearch.notifications.general.operation_timeout_ms: 60000
opensearch.notifications.general.default_items_query_count: 100
opensearch.notifications.general.filter_by_backend_roles: false
```
Dynamic Settings Update
All settings marked as Dynamic can be updated at runtime through the cluster settings API:
```bash
curl -X PUT "https://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "persistent": {
      "opensearch.notifications.core.http.max_connections": 100,
      "opensearch.notifications.general.filter_by_backend_roles": true
    }
  }'
```
API Reference
All Notification plugin endpoints use the base path /_plugins/_notifications.
Notification Configs
Create a Notification Config
Creates a new notification channel configuration.
| Method | POST |
| URI | /_plugins/_notifications/configs |
Request body:
{
"config": {
"name": "<config-name>",
"description": "<config-description>",
"config_type": "<channel-type>",
"is_enabled": true,
"<channel-type>": {
// channel-specific fields
}
}
}
Slack example:
{
"config": {
"name": "my-slack-channel",
"description": "Slack notifications for alerts",
"config_type": "slack",
"is_enabled": true,
"slack": {
"url": "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXX"
}
}
}
Email example (with SMTP account):
{
"config": {
"name": "my-email-channel",
"description": "Email alerts via SMTP",
"config_type": "email",
"is_enabled": true,
"email": {
"email_account_id": "<smtp-account-config-id>",
"recipient_list": [
{ "recipient": "alerts@example.com" }
],
"email_group_id_list": []
}
}
}
SMTP account example:
{
"config": {
"name": "my-smtp-account",
"description": "Corporate SMTP server",
"config_type": "smtp_account",
"is_enabled": true,
"smtp_account": {
"host": "smtp.example.com",
"port": 587,
"method": "start_tls",
"from_address": "noreply@example.com"
}
}
}
Webhook example:
{
"config": {
"name": "my-custom-webhook",
"description": "Custom webhook for incident system",
"config_type": "webhook",
"is_enabled": true,
"webhook": {
"url": "https://incident.example.com/api/alert",
"header_params": {
"Content-Type": "application/json"
},
"method": "POST"
}
}
}
Microsoft Teams example:
{
"config": {
"name": "my-teams-channel",
"description": "Teams notifications",
"config_type": "microsoft_teams",
"is_enabled": true,
"microsoft_teams": {
"url": "https://outlook.office.com/webhook/..."
}
}
}
SNS example:
{
"config": {
"name": "my-sns-topic",
"description": "SNS notifications",
"config_type": "sns",
"is_enabled": true,
"sns": {
"topic_arn": "arn:aws:sns:us-east-1:123456789012:my-topic",
"role_arn": "arn:aws:iam::123456789012:role/sns-publish-role"
}
}
}
Response:
{
"config_id": "<generated-config-id>"
}
Update a Notification Config
Updates an existing notification channel configuration.
| Method | PUT |
| URI | /_plugins/_notifications/configs/{config_id} |
Request body: Same structure as create. All fields in the config object are replaced.
{
"config": {
"name": "updated-slack-channel",
"description": "Updated description",
"config_type": "slack",
"is_enabled": true,
"slack": {
"url": "https://hooks.slack.com/services/T00000000/B00000000/YYYYYYYY"
}
}
}
Response:
{
"config_id": "<config-id>"
}
Get a Notification Config
Retrieves a specific notification configuration by ID.
| Method | GET |
| URI | /_plugins/_notifications/configs/{config_id} |
Response:
{
"config_list": [
{
"config_id": "<config-id>",
"last_updated_time_ms": 1234567890,
"created_time_ms": 1234567890,
"config": {
"name": "my-slack-channel",
"description": "Slack notifications for alerts",
"config_type": "slack",
"is_enabled": true,
"slack": {
"url": "https://hooks.slack.com/services/..."
}
}
}
],
"total_hits": 1
}
List Notification Configs
Retrieves notification configurations with filtering, sorting, and pagination.
| Method | GET |
| URI | /_plugins/_notifications/configs |
Query parameters:
| Parameter | Type | Description |
|---|---|---|
| `config_id` | String | Filter by a single config ID. |
| `config_id_list` | String | Comma-separated list of config IDs. |
| `from_index` | Integer | Pagination offset (default: 0). |
| `max_items` | Integer | Maximum items to return (default: 100). |
| `sort_field` | String | Field to sort by (e.g., `config_type`, `name`, `last_updated_time_ms`). |
| `sort_order` | String | Sort order: `asc` or `desc`. |
| `config_type` | String | Filter by channel type (e.g., `slack`, `email`). |
| `is_enabled` | Boolean | Filter by enabled status. |
| `name` | String | Filter by name (text search). |
| `description` | String | Filter by description (text search). |
| `last_updated_time_ms` | String | Range filter (e.g., `1609459200000..1640995200000`). |
| `created_time_ms` | String | Range filter. |
| `slack.url` | String | Filter by Slack webhook URL (text search). |
| `chime.url` | String | Filter by Chime webhook URL. |
| `microsoft_teams.url` | String | Filter by Teams webhook URL. |
| `webhook.url` | String | Filter by custom webhook URL. |
| `smtp_account.host` | String | Filter by SMTP host. |
| `smtp_account.from_address` | String | Filter by SMTP from address. |
| `smtp_account.method` | String | Filter by SMTP method (`ssl`, `start_tls`, `none`). |
| `sns.topic_arn` | String | Filter by SNS topic ARN. |
| `sns.role_arn` | String | Filter by SNS role ARN. |
| `ses_account.region` | String | Filter by SES region. |
| `ses_account.role_arn` | String | Filter by SES role ARN. |
| `ses_account.from_address` | String | Filter by SES from address. |
| `query` | String | Search across all keyword and text filter fields. |
| `text_query` | String | Search across text filter fields only. |
Example:
curl -sk -u admin:admin \
"https://localhost:9200/_plugins/_notifications/configs?config_type=slack&max_items=10&sort_order=desc"
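Since `from_index` is a plain offset, paging through results is a matter of multiplying the page number by `max_items`. A small sketch, assuming zero-based page numbering:

```shell
# Compute the pagination offset for a given page of results
max_items=10
page=3                                  # zero-based page number (assumption)
from_index=$((page * max_items))
echo "config_type=slack&from_index=${from_index}&max_items=${max_items}"
# → config_type=slack&from_index=30&max_items=10
```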
Delete a Notification Config
Deletes one or more notification configurations.
| Method | DELETE |
| URI | /_plugins/_notifications/configs/{config_id} |
Or for bulk delete:
| Method | DELETE |
| URI | /_plugins/_notifications/configs?config_id_list=id1,id2,id3 |
Response:
{
"delete_response_list": {
"<config-id>": "OK"
}
}
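A sketch of a bulk delete call with hypothetical config IDs; only the query-string construction runs here, and the curl line is left commented out:

```shell
# Join hypothetical config IDs into the comma-separated list the API expects
ids="id1 id2 id3"                      # hypothetical config IDs
id_list=$(echo "$ids" | tr ' ' ',')
echo "$id_list"
# curl -sk -u admin:admin -X DELETE \
#   "https://localhost:9200/_plugins/_notifications/configs?config_id_list=$id_list"
```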
Channels
List Notification Channels
Returns a simplified list of all configured notification channels (ID, name, type, and enabled status).
| Method | GET |
| URI | /_plugins/_notifications/channels |
Response:
{
"channel_list": [
{
"config_id": "<id>",
"name": "my-slack-channel",
"config_type": "slack",
"is_enabled": true
}
],
"total_hits": 1
}
Features
Get Plugin Features
Returns the notification features and allowed config types supported by the plugin.
| Method | GET |
| URI | /_plugins/_notifications/features |
Response:
{
"allowed_config_type_list": [
"slack",
"chime",
"microsoft_teams",
"webhook",
"email",
"sns",
"ses_account",
"smtp_account",
"email_group"
],
"plugin_features": {
"tooltip_support": "true"
}
}
Test Notifications
Send Test Notification
Sends a test notification to a configured channel to validate the configuration.
| Method | POST |
| URI | /_plugins/_notifications/feature/test/{config_id} |
Note:
`GET` is also supported for backwards compatibility but is deprecated and will be removed in a future major version.
Example:
curl -sk -u admin:admin -X POST \
"https://localhost:9200/_plugins/_notifications/feature/test/<config-id>"
Response:
{
"status_list": [
{
"config_id": "<config-id>",
"config_type": "slack",
"config_name": "my-slack-channel",
"delivery_status": {
"status_code": "200",
"status_text": "ok"
}
}
]
}
Stats
Get Plugin Stats
Returns internal plugin metrics and counters.
| Method | GET |
| URI | /_plugins/_notifications/_local/stats |
Response: A JSON object with flattened metric counters including:
- Request totals and interval counts for each API operation (create, update, delete, info, features, channels, send test).
Summary Table
| Endpoint | Method | Description |
|---|---|---|
| `/_plugins/_notifications/configs` | POST | Create a new notification channel. |
| `/_plugins/_notifications/configs/{id}` | PUT | Update an existing notification channel. |
| `/_plugins/_notifications/configs/{id}` | GET | Get a specific notification channel. |
| `/_plugins/_notifications/configs` | GET | List/search notification channels with filters. |
| `/_plugins/_notifications/configs/{id}` | DELETE | Delete a notification channel. |
| `/_plugins/_notifications/configs` | DELETE | Bulk delete (with `config_id_list` param). |
| `/_plugins/_notifications/channels` | GET | List all channels (simplified view). |
| `/_plugins/_notifications/features` | GET | Get supported features and config types. |
| `/_plugins/_notifications/feature/test/{id}` | POST | Send a test notification. |
| `/_plugins/_notifications/_local/stats` | GET | Get plugin metrics. |
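Because every endpoint shares the `/_plugins/_notifications` base path, a small helper can build full URLs for curl. This is an illustrative convenience, not part of the plugin; the host and port are assumptions:

```shell
# Convenience wrapper (sketch): all endpoints share one base path
NOTIF_BASE="https://localhost:9200/_plugins/_notifications"   # assumed host/port

notif_url() {  # builds the full URL for an endpoint path and optional query string
  local path="$1" query="$2"
  if [ -n "$query" ]; then
    echo "${NOTIF_BASE}${path}?${query}"
  else
    echo "${NOTIF_BASE}${path}"
  fi
}

notif_url "/channels"
notif_url "/configs" "config_type=slack&max_items=10"
# Usage with curl, e.g.: curl -sk -u admin:admin "$(notif_url /channels)"
```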
Troubleshooting
Common issues and solutions when working with the Notifications plugin.
Channel Configuration Issues
Slack notifications are not delivered
Symptoms: Creating a Slack config succeeds, but test notifications fail with a non-200 status.
Possible causes:
- Invalid webhook URL. Verify the Incoming Webhook URL is active in your Slack workspace settings.
- Host deny list. Check if the Slack domain is included in `opensearch.notifications.core.http.host_deny_list`.
- Network connectivity. The Wazuh Indexer node must have outbound HTTPS access to `hooks.slack.com`.
Resolution:
# Verify the config
curl -sk -u admin:admin \
"https://localhost:9200/_plugins/_notifications/configs/<config-id>"
# Send a test notification
curl -sk -u admin:admin -X POST \
"https://localhost:9200/_plugins/_notifications/feature/test/<config-id>"
Check the delivery_status in the response for the HTTP status code and error message.
Email delivery fails with timeout
Symptoms: Email notifications fail with connection timeout errors.
Possible causes:
- SMTP server unreachable. Verify the Wazuh Indexer node can reach the SMTP server on the configured port.
- Timeout too short. The default connection timeout is 5000 ms and socket timeout is 50000 ms. Increase if needed.
- TLS configuration mismatch. Ensure the SMTP `method` (`none`, `ssl`, `start_tls`) matches the server’s requirements.
Resolution:
# Increase timeouts via cluster settings
curl -X PUT "https://localhost:9200/_cluster/settings" \
-H 'Content-Type: application/json' \
-d '{
"persistent": {
"opensearch.notifications.core.http.connection_timeout": 10000,
"opensearch.notifications.core.http.socket_timeout": 120000
}
}'
SMTP credentials not found
Symptoms: Email delivery fails with “Credential not found for account” error.
Resolution: SMTP credentials must be stored in the OpenSearch Keystore, not in opensearch.yml.
bin/opensearch-keystore add opensearch.notifications.core.email.<account_name>.username
bin/opensearch-keystore add opensearch.notifications.core.email.<account_name>.password
Restart the node after adding keystore entries.
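The keystore entry names embed the SMTP account name referenced by the channel configuration. A sketch that builds the two entry names for a hypothetical account called `corp_smtp` (the keystore commands are commented out, since they must run on the indexer node from the installation directory):

```shell
# Keystore entry names follow opensearch.notifications.core.email.<account_name>.*
account="corp_smtp"   # hypothetical SMTP account name
user_key="opensearch.notifications.core.email.${account}.username"
pass_key="opensearch.notifications.core.email.${account}.password"
echo "$user_key"
echo "$pass_key"
# bin/opensearch-keystore add "$user_key"
# bin/opensearch-keystore add "$pass_key"
```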
Permission Issues
“User doesn’t have backend roles configured”
Symptoms: API calls return 403 Forbidden with the message “User doesn’t have backend roles configured.”
Cause: The setting opensearch.notifications.general.filter_by_backend_roles is true, but the current user has no backend roles assigned.
Resolution:
- Assign backend roles to the user in the Security plugin, or
- Disable RBAC filtering:
curl -X PUT "https://localhost:9200/_cluster/settings" \
-H 'Content-Type: application/json' \
-d '{
"persistent": {
"opensearch.notifications.general.filter_by_backend_roles": false
}
}'
User cannot see other users’ configurations
Cause: When filter_by_backend_roles is enabled, users can only see configurations created by users who share at least one backend role. Users with the all_access role can see all configurations.
HTTP Response Size Limit
“HTTP response too large” error
Symptoms: Webhook notifications to endpoints that return large responses fail.
Cause: The response from the webhook destination exceeds opensearch.notifications.core.max_http_response_size.
Resolution:
curl -X PUT "https://localhost:9200/_cluster/settings" \
-H 'Content-Type: application/json' \
-d '{
"persistent": {
"opensearch.notifications.core.max_http_response_size": 20971520
}
}'
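The value `20971520` used above is simply 20 MiB expressed in bytes; to derive the setting value for a different limit:

```shell
# Convert a size in MiB to the byte value the setting expects
mib=20
bytes=$((mib * 1024 * 1024))
echo "$bytes"   # → 20971520
```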
Plugin Stats
To inspect the plugin’s internal metrics and check for anomalies:
curl -sk -u admin:admin \
"https://localhost:9200/_plugins/_notifications/_local/stats"
This returns counters for all API operations, which can help identify whether requests are reaching the plugin.
Logs
Enable debug logging for the Notifications plugin:
curl -X PUT "https://localhost:9200/_cluster/settings" \
-H 'Content-Type: application/json' \
-d '{
"persistent": {
"logger.org.opensearch.notifications": "DEBUG",
"logger.org.opensearch.notifications.core": "DEBUG"
}
}'
Check the Wazuh Indexer logs for entries prefixed with notifications:.
Alerting
The Wazuh Indexer Alerting plugin enables you to monitor your data and automatically send alert notifications to your stakeholders. With an intuitive OpenSearch Dashboards interface and a powerful API, it is easy to set up, manage, and monitor alerts. You can craft highly specific alert conditions using OpenSearch’s full query language and scripting capabilities.
Key Capabilities
Dependencies
Version
The current plugin version is 5.0.0-alpha0 (see VERSION.json in the repository root).
Upgrade
This section guides you through the upgrade process of the Wazuh indexer.
The Wazuh indexer cluster remains operational throughout the upgrade. The rolling upgrade process allows nodes to be updated one at a time, ensuring continuous service availability and minimizing disruptions. The steps detailed in the following sections apply to both single-node and multi-node Wazuh indexer clusters. For multi-node Wazuh indexer clusters, repeat the following steps on every node.
Note: This documentation assumes you are already provisioned with a wazuh-indexer package through any of the following methods:
- Local package generation (recommended).
- GH Workflows artifacts.
- Staging S3 buckets.
Preparing the upgrade
Perform the following steps on any of the Wazuh indexer nodes replacing $WAZUH_INDEXER_IP_ADDRESS, $USERNAME, and $PASSWORD.
-
Disable shard replication to prevent shard replicas from being created while Wazuh indexer nodes are being taken offline for the upgrade.
```shell
curl -X PUT "https://$WAZUH_INDEXER_IP_ADDRESS:9200/_cluster/settings" \
  -u $USERNAME:$PASSWORD -k -H "Content-Type: application/json" -d '
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}'
```

Output:

```json
{
  "acknowledged": true,
  "persistent": {
    "cluster": {
      "routing": {
        "allocation": {
          "enable": "primaries"
        }
      }
    }
  },
  "transient": {}
}
```
Perform a flush operation on the cluster to commit transaction log entries to the index.
```shell
curl -X POST "https://$WAZUH_INDEXER_IP_ADDRESS:9200/_flush" -u $USERNAME:$PASSWORD -k
```

Output:

```json
{
  "_shards" : {
    "total" : 19,
    "successful" : 19,
    "failed" : 0
  }
}
```
Upgrading the Wazuh indexer nodes
-
Stop the Wazuh indexer service.
Systemd:

```shell
systemctl stop wazuh-indexer
```

SysV:

```shell
service wazuh-indexer stop
```
Upgrade the Wazuh indexer to the latest version.
rpm:

```shell
rpm -ivh --replacepkgs wazuh-indexer-<VERSION>.rpm
```

dpkg:

```shell
dpkg -i wazuh-indexer-<VERSION>.deb
```
Restart the Wazuh indexer service.
Systemd:

```shell
systemctl daemon-reload
systemctl enable wazuh-indexer
systemctl start wazuh-indexer
```

SysV: choose one option according to the operating system used.

a. RPM-based operating system:

```shell
chkconfig --add wazuh-indexer
service wazuh-indexer start
```

b. Debian-based operating system:

```shell
update-rc.d wazuh-indexer defaults 95 10
service wazuh-indexer start
```
Repeat steps 1 to 3 above on all Wazuh indexer nodes before proceeding to the post-upgrade actions.
Post-upgrade actions
Perform the following steps on any of the Wazuh indexer nodes replacing $WAZUH_INDEXER_IP_ADDRESS, $USERNAME, and $PASSWORD.
-
Check that the newly upgraded Wazuh indexer nodes are in the cluster.
curl -k -u $USERNAME:$PASSWORD https://$WAZUH_INDEXER_IP_ADDRESS:9200/_cat/nodes?v -
Re-enable shard allocation.
```shell
curl -X PUT "https://$WAZUH_INDEXER_IP_ADDRESS:9200/_cluster/settings" \
  -u $USERNAME:$PASSWORD -k -H "Content-Type: application/json" -d '
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}'
```

Output:

```json
{
  "acknowledged" : true,
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "all"
        }
      }
    }
  },
  "transient" : {}
}
```
Check the status of the Wazuh indexer cluster again to see if the shard allocation has finished.
```shell
curl -k -u $USERNAME:$PASSWORD https://$WAZUH_INDEXER_IP_ADDRESS:9200/_cat/nodes?v
```

Output:

```
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role node.roles                                        cluster_manager name
172.18.0.3 34           86          32  6.67    5.30    2.53     dimr      cluster_manager,data,ingest,remote_cluster_client -               wazuh2.indexer
172.18.0.4 21           86          32  6.67    5.30    2.53     dimr      cluster_manager,data,ingest,remote_cluster_client *               wazuh1.indexer
172.18.0.2 16           86          32  6.67    5.30    2.53     dimr      cluster_manager,data,ingest,remote_cluster_client -               wazuh3.indexer
```
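To confirm allocation has settled, you can also poll the standard `_cluster/health` endpoint until the status reaches `green`. A minimal sketch: the curl call is commented out and replaced with a sample body so the extraction logic is self-contained (naive JSON parsing via parameter expansion, assuming a single top-level `status` field):

```shell
# Extract the "status" value from a _cluster/health JSON body
check_status() {
  local body="$1" s
  s=${body#*\"status\":\"}
  echo "${s%%\"*}"
}

# body=$(curl -sk -u "$USERNAME:$PASSWORD" \
#   "https://$WAZUH_INDEXER_IP_ADDRESS:9200/_cluster/health")
body='{"cluster_name":"wazuh","status":"green","relocating_shards":0}'  # sample
check_status "$body"   # → green
```

In a wait loop, you would re-run the curl call and sleep until `check_status` returns `green`.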
Uninstall
Note You need root user privileges to run all the commands described below.
Yum
yum remove wazuh-indexer -y
rm -rf /var/lib/wazuh-indexer/
rm -rf /usr/share/wazuh-indexer/
rm -rf /etc/wazuh-indexer/
APT
apt-get remove wazuh-indexer -y
rm -rf /var/lib/wazuh-indexer/
rm -rf /usr/share/wazuh-indexer/
rm -rf /etc/wazuh-indexer/
Backup and restore
In this section you can find instructions on how to create and restore a backup of your key Wazuh Indexer files, preserving file permissions, ownership, and paths. You can later move the contents of this folder back to their corresponding locations to restore your certificates and configuration. Backing up these files is useful, for example, when moving your Wazuh installation to another system.
Note: This backup only restores the configuration files, not the data. To back up data stored in the indexer, use snapshots.
Creating a backup
To create a backup of the Wazuh indexer, follow these steps. Repeat them on every cluster node you want to back up.
Note: You need root user privileges to run all the commands described below.
Preparing the backup
-
Back up the existing Wazuh indexer security configuration files.
/usr/share/wazuh-indexer/bin/indexer-security-init.sh --options "-backup /etc/wazuh-indexer/opensearch-security -icl -nhnv" -
Create the destination folder to store the files. For version control, add the date and time of the backup to the name of the folder.
```shell
backup_folder=~/wazuh_files_backup/$(date +%F_%H:%M)
mkdir -p $backup_folder && echo $backup_folder
```
Save the host information.
```shell
cat /etc/*release* > $backup_folder/host-info.txt
echo -e "\n$(hostname): $(hostname -I)" >> $backup_folder/host-info.txt
```
Backing up the Wazuh indexer
Back up the Wazuh indexer certificates and configuration:
rsync -aREz \
/etc/wazuh-indexer/certs/ \
/etc/wazuh-indexer/jvm.options \
/etc/wazuh-indexer/jvm.options.d \
/etc/wazuh-indexer/log4j2.properties \
/etc/wazuh-indexer/opensearch.yml \
/etc/wazuh-indexer/opensearch.keystore \
/etc/wazuh-indexer/opensearch-observability/ \
/etc/wazuh-indexer/opensearch-security/ \
/etc/wazuh-indexer/wazuh-indexer-reports-scheduler/ \
/etc/wazuh-indexer/wazuh-indexer-notifications/ \
/etc/wazuh-indexer/wazuh-indexer-notifications-core/ \
/usr/lib/sysctl.d/wazuh-indexer.conf $backup_folder
Compress the files and transfer them to the new server:
tar -cvzf wazuh-indexer-backup.tar.gz $backup_folder
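Before transferring the archive, it can be worth listing its contents to confirm the expected files made it in. A self-contained sketch using a temporary directory (the scp target is a placeholder):

```shell
# Recreate a minimal backup folder in a temp dir and verify the archive
workdir=$(mktemp -d)
backup_folder="$workdir/wazuh_files_backup"
mkdir -p "$backup_folder"
echo "sample" > "$backup_folder/host-info.txt"

tar -czf "$workdir/wazuh-indexer-backup.tar.gz" -C "$workdir" wazuh_files_backup
tar -tzf "$workdir/wazuh-indexer-backup.tar.gz"   # list archive contents

# Then copy it to the new server, e.g.:
# scp wazuh-indexer-backup.tar.gz root@<new-server>:/
```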
Restoring Wazuh indexer from backup
This guide explains how to restore a backup of your configuration files.
Note: This guide is designed specifically for restoration from a backup of the same version.
Note: For a multi-node setup, there should be a backup file for each node within the cluster. You need root user privileges to execute the commands below.
Preparing the data restoration
-
In the new node, move the compressed backup file to the root `/` directory:

```shell
mv wazuh-indexer-backup.tar.gz /
cd /
```
Decompress the backup files and change the current working directory to the directory based on the date and time of the backup files:
```shell
tar -xzvf wazuh-indexer-backup.tar.gz
cd $backup_folder
```
Restoring Wazuh indexer files
Perform the following steps to restore the Wazuh indexer files on the new server.
-
Stop the Wazuh indexer to prevent any modifications to the Wazuh indexer files during the restoration process:
systemctl stop wazuh-indexer -
Restore the Wazuh indexer configuration files and change the file permissions and ownership accordingly:
```shell
cp etc/wazuh-indexer/jvm.options /etc/wazuh-indexer/jvm.options
cp -r etc/wazuh-indexer/jvm.options.d/ /etc/wazuh-indexer/jvm.options.d/
cp etc/wazuh-indexer/log4j2.properties /etc/wazuh-indexer/log4j2.properties
cp etc/wazuh-indexer/opensearch.keystore /etc/wazuh-indexer/opensearch.keystore
cp -r etc/wazuh-indexer/opensearch-observability/ /etc/wazuh-indexer/opensearch-observability/
cp -r etc/wazuh-indexer/wazuh-indexer-reports-scheduler/ /etc/wazuh-indexer/wazuh-indexer-reports-scheduler/
cp -r etc/wazuh-indexer/wazuh-indexer-notifications/ /etc/wazuh-indexer/wazuh-indexer-notifications/
cp -r etc/wazuh-indexer/wazuh-indexer-notifications-core/ /etc/wazuh-indexer/wazuh-indexer-notifications-core/
cp usr/lib/sysctl.d/wazuh-indexer.conf /usr/lib/sysctl.d/wazuh-indexer.conf
chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/jvm.options
chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/jvm.options.d
chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/log4j2.properties
chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/opensearch.keystore
chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/opensearch-observability/
chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/wazuh-indexer-reports-scheduler/
chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/wazuh-indexer-notifications/
chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/wazuh-indexer-notifications-core/
chown wazuh-indexer:wazuh-indexer /usr/lib/sysctl.d/wazuh-indexer.conf
```
Start the Wazuh indexer service:
systemctl start wazuh-indexer -
Clear the backup files to free up space:
```shell
rm -rf $backup_folder
rm -rf /wazuh-indexer-backup.tar.gz
```
Access Control
Wazuh Indexer uses the OpenSearch Security plugin to manage access control and security features. This allows you to define users, roles, and permissions for accessing indices and performing actions within the Wazuh Indexer.
You can find a more detailed overview of the OpenSearch Security plugin in the OpenSearch documentation.
Wazuh default Internal Users
Wazuh defines internal users and roles for the different Wazuh components to handle index management.
These default user and role definitions are stored in the `internal_users.yml`, `roles.yml`, and `roles_mapping.yml` files in the `/etc/wazuh-indexer/opensearch-security/` directory.
Find more information about the configuration files in the Configuration Files section.
Users
| User | Description | Roles |
|---|---|---|
| `wazuh-server` | User for the Wazuh Server with read/write access to stateful indices and write-only access to stateless indices. | stateless-write, stateful-delete, stateful-write, stateful-read, cm_subscription_read |
| `wazuh-dashboard` | User for the Wazuh Dashboard with read access to stateful and stateless indices, and management-level permissions for the monitoring indices. | sample-data-management, metrics-write, metrics-read, stateless-read, stateful-read, cm_update, cm_subscription_write |
Roles
| Role Name | Access Description | Index Patterns | Permissions |
|---|---|---|---|
| `stateful-read` | Grants read-only permissions to stateful indices. | wazuh-states-* | read |
| `stateful-write` | Grants write-only permissions to stateful indices. | wazuh-states-* | index |
| `stateful-delete` | Grants delete permissions to stateful indices. | wazuh-states-* | delete |
| `stateless-read` | Grants read-only permissions to stateless indices. | wazuh-alerts*, wazuh-archives* | read |
| `stateless-write` | Grants write-only permissions to stateless indices. | wazuh-alerts*, wazuh-archives* | index |
| `metrics-read` | Grants read permissions to metrics indices. | wazuh-monitoring*, wazuh-statistics* | read |
| `metrics-write` | Grants write permissions to metrics indices. | wazuh-monitoring*, wazuh-statistics* | index |
| `sample-data-management` | Grants full permissions to sample data indices. | *-sample-* | data_access, manage |
| `cm_subscription_read` | Grants permissions to retrieve subscriptions for the server. | N/A | plugin:content_manager/subscription_get |
| `cm_subscription_write` | Grants permissions to create and delete subscriptions for the content manager. | N/A | plugin:content_manager/subscription_post, plugin:content_manager/subscription_delete |
| `cm_update` | Grants permissions to perform update operations in the content manager. | N/A | plugin:content_manager/update |
Defining Users and Roles
You can create and manage users and roles through the Wazuh Dashboard UI.
Default users and roles cannot be modified. Instead, duplicate them and modify the duplicates.
Creating a New User, Role, and Role Mapping via the Wazuh Dashboard
Prerequisites
- You must be logged in as a user with administrative privileges (e.g., `admin`).
Follow these steps:
1. Create a Role
- In the Wazuh Dashboard, go to Index Management -> Security -> Roles.
- Click Create role.
- Enter a Role name (e.g., `custom-read-write`).
- Under Cluster permissions, select permissions if needed.
- Under Index permissions:
  - Index: e.g., `wazuh-*`
  - Index permissions: choose appropriate actions such as:
    - `read` (to allow read access)
    - `index` (to allow write access)
  - Optionally, configure Document-level security (DLS) or Field-level security (FLS).
- Click Create to save the role.
2. Create a User
- In the Wazuh Dashboard, go to Index Management -> Security -> Internal users.
- Click Create internal user.
- Fill in the following:
  - Username (e.g., `new-user`)
  - Password (enter and confirm)
  - Description (optional)
- Click Create to create the user.
3. Verify Role Mapping
When you assign a role to a user during creation, the mapping is created automatically. To review or edit:
- In Security, go to Roles.
- Find and click your role (`custom-read-write`).
- Go to Mapped users.
- Click Map users.
- Fill in the following:
  - Users (e.g., `new-user`).
  - Backend roles (optional).
- Click Map to save the mapping.
4. Test Access
After creating the user and role:
- Log out from the Dashboard.
- Log in with the new user’s credentials.
- Navigate to Index Management -> Dev Tools.
- Run a query to test access, such as:
GET /wazuh-*/_search