Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git (synced 2024-11-21 14:44:00 +00:00)
docs/vmanomaly-v1.14.1-2-updates (#6717)
### Describe Your Changes

Doc updates on v1.14.1 - v1.14.2 of `vmanomaly`:

- [Changelog](https://docs.victoriametrics.com/anomaly-detection/changelog/) page
- [Reader](https://docs.victoriametrics.com/anomaly-detection/components/reader/#vm-reader) page (`queries` arg refactor)
- Also, a slight modification of the `presets` [page](https://docs.victoriametrics.com/anomaly-detection/presets/)

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/contributing/).
This commit is contained in: parent `1e3c7f0ef5`, commit `0cd19edce1`.
3 changed files with 109 additions and 40 deletions
@@ -11,21 +11,21 @@ aliases:
- /anomaly-detection/FAQ.html
---

## What is VictoriaMetrics Anomaly Detection (vmanomaly)?

VictoriaMetrics Anomaly Detection, also known as `vmanomaly`, is a service for detecting unexpected changes in time series data. Utilizing machine learning models, it computes and pushes back an ["anomaly score"](./components/models.md#vmanomaly-output) for user-specified metrics. This hands-off approach to anomaly detection reduces the need for manual alert setup and can adapt to various metrics, improving your observability experience.

VictoriaMetrics Anomaly Detection, also known as `vmanomaly`, is a service for detecting unexpected changes in time series data. Utilizing machine learning models, it computes and pushes back an ["anomaly score"](/anomaly-detection/components/models#vmanomaly-output) for user-specified metrics. This hands-off approach to anomaly detection reduces the need for manual alert setup and can adapt to various metrics, improving your observability experience.

Please refer to [our guide section](./README.md#practical-guides-and-installation) to find out more.

Please refer to [our QuickStart section](/anomaly-detection/#practical-guides-and-installation) to find out more.

> **Note: `vmanomaly` is a part of [enterprise package](../enterprise.md). You need to get a [free trial license](https://victoriametrics.com/products/enterprise/trial/) for evaluation.**
## What is anomaly score?
Among the metrics produced by `vmanomaly` (as detailed in [vmanomaly output metrics](./components/models.md#vmanomaly-output)), `anomaly_score` is a pivotal one. It is **a continuous score > 0**, calculated in such a way that **scores ranging from 0.0 to 1.0 usually represent normal data**, while **scores exceeding 1.0 are typically classified as anomalous**. However, it's important to note that the threshold for anomaly detection can be customized in the alert configuration settings.

Among the metrics produced by `vmanomaly` (as detailed in [vmanomaly output metrics](/anomaly-detection/components/models#vmanomaly-output)), `anomaly_score` is a pivotal one. It is **a continuous score > 0**, calculated in such a way that **scores ranging from 0.0 to 1.0 usually represent normal data**, while **scores exceeding 1.0 are typically classified as anomalous**. However, it's important to note that the threshold for anomaly detection can be customized in the alert configuration settings.

The changepoint is set at `1.0` to ensure consistency across various models and alerting configurations: a score above `1.0` consistently signifies an anomaly, so alerting rules are easier to maintain.
> Note: `anomaly_score` is a metric itself, which preserves all labels found in input data and (optionally) appends [custom labels, specified in writer](./components/writer.md#metrics-formatting) - follow the link for a detailed output example.

> Note: `anomaly_score` is a metric itself, which preserves all labels found in input data and (optionally) appends [custom labels, specified in writer](/anomaly-detection/components/writer#metrics-formatting) - follow the link for a detailed output example.
## How is anomaly score calculated?

For most of the [univariate models](./components/models.md#univariate-models) that can generate `yhat`, `yhat_lower`, and `yhat_upper` time series in [their output](./components/models.md#vmanomaly-output) (such as [Prophet](./components/models.md#prophet) or [Z-score](./components/models.md#z-score)), the anomaly score is calculated as follows:

For most of the [univariate models](/anomaly-detection/components/models#univariate-models) that can generate `yhat`, `yhat_lower`, and `yhat_upper` time series in [their output](/anomaly-detection/components/models#vmanomaly-output) (such as [Prophet](/anomaly-detection/components/models#prophet) or [Z-score](/anomaly-detection/components/models#z-score)), the anomaly score is calculated as follows:

- If `yhat` (expected series behavior) equals `y` (actual value observed), then the anomaly score is 0.
- If `y` (actual value observed) falls within the `[yhat_lower, yhat_upper]` confidence interval, the anomaly score will gradually approach 1, the closer `y` is to the boundary.
- If `y` (actual value observed) strictly exceeds the `[yhat_lower, yhat_upper]` interval, the anomaly score will be greater than 1, increasing as the margin between the actual value and the expected range grows.
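The exact calculation depends on the model, but the three properties above are consistent with a normalized distance from `yhat` to the nearest confidence-interval boundary. Below is a minimal sketch of such a score; it is illustrative only and not necessarily the exact formula each model implements:

```latex
% Illustrative sketch only: a normalized distance consistent with the properties above,
% not necessarily the exact formula implemented by every model.
\mathrm{anomaly\_score}(y) =
\begin{cases}
  \dfrac{y - \hat{y}}{\hat{y}_{\mathrm{upper}} - \hat{y}},  & y \ge \hat{y} \\[1ex]
  \dfrac{\hat{y} - y}{\hat{y} - \hat{y}_{\mathrm{lower}}},  & y < \hat{y}
\end{cases}
```

With this form the score is 0 at `y = yhat`, approaches 1 as `y` nears the interval boundary, and exceeds 1 once `y` leaves `[yhat_lower, yhat_upper]`.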
@@ -36,39 +36,43 @@ Please see example graph illustrating this logic below:
## How does vmanomaly work?
`vmanomaly` applies built-in (or custom) [anomaly detection algorithms](./components/models.md), specified in a config file. Although a single config file supports one model, running multiple instances of `vmanomaly` with different configs is possible and encouraged for parallel processing or better support for your use case (i.e. simpler model for simple metrics, more sophisticated one for metrics with trends and seasonalities).

`vmanomaly` applies built-in (or custom) [anomaly detection algorithms](/anomaly-detection/components/models), specified in a config file.

- All the models generate a metric called [anomaly_score](#what-is-anomaly-score)
- All produced anomaly scores are unified in a way that values lower than 1.0 mean “likely normal”, while values over 1.0 mean “likely anomalous”
- Simple rules for alerting: start with `anomaly_score{"key"="value"} > 1` (a minimal `vmalert` rule is sketched after this list)
- Models are retrained continuously, based on the `schedulers` section in a config, so that threshold=1.0 remains valid
- Produced scores are stored back to VictoriaMetrics TSDB and can be used for various observability tasks (alerting, visualization, debugging).
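To make the alerting rule concrete, here is a minimal `vmalert` rule sketch built on the `anomaly_score` metric produced by `vmanomaly`; the group name, alert name, label filter and durations are illustrative assumptions to adjust for your setup:

```yaml
groups:
  - name: vmanomaly-alerts              # illustrative group name
    rules:
      - alert: AnomalyScoreAboveOne     # illustrative alert name
        # the label filter is an assumption - match it to labels present in your data
        expr: 'anomaly_score{preset="node-exporter"} > 1'
        for: 5m                         # require the score to stay anomalous before firing
        labels:
          severity: warning
        annotations:
          summary: "anomaly_score stayed above 1 for 5 minutes"
```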
1. For more detailed information, please visit the [overview section](./Overview.md#about).
2. To view a diagram illustrating the interaction of components, please explore the [components section](./components/README.md).
## What data does vmanomaly operate on?

`vmanomaly` operates on data fetched from VictoriaMetrics, where you can leverage the full power of [MetricsQL](../MetricsQL.md) for data selection, sampling, and processing. Users can also [apply global filters](../README.md#prometheus-querying-api-enhancements) for more targeted data analysis, enhancing scope limitation and tenant visibility.

`vmanomaly` operates on data fetched from VictoriaMetrics, where you can leverage the full power of [MetricsQL](/metricsql) for data selection, sampling, and processing. Users can also [apply global filters](/#prometheus-querying-api-enhancements) for more targeted data analysis, enhancing scope limitation and tenant visibility.

Respective config is defined in a [`reader`](./components/reader.md#vm-reader) section.
Respective config is defined in a [`reader`](/anomaly-detection/components/reader#vm-reader) section.
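A minimal `reader` section sketch is shown below; the URL, sampling period and query are placeholders, and the full list of supported args is documented on the reader component page:

```yaml
reader:
  class: 'vm'                                     # read from VictoriaMetrics/Prometheus
  datasource_url: 'http://victoriametrics:8428/'  # placeholder - point to your instance
  sampling_period: '1m'                           # frequency of the points returned
  queries:
    # query alias and MetricsQL expression are illustrative
    ingestion_rate: 'sum(rate(vm_rows_inserted_total[5m])) by (type) > 0'
```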
## Handling noisy input data

`vmanomaly` operates on data fetched from VictoriaMetrics using [MetricsQL](../MetricsQL.md) queries, so the initial data quality can be fine-tuned with aggregation, grouping, and filtering to reduce noise and improve anomaly detection accuracy.
`vmanomaly` operates on data fetched from VictoriaMetrics using [MetricsQL](/metricsql) queries, so the initial data quality can be fine-tuned with aggregation, grouping, and filtering to reduce noise and improve anomaly detection accuracy.
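For example, instead of feeding every raw per-CPU series to a model, the query can pre-aggregate and filter on the VictoriaMetrics side; the metric name and labels below are illustrative:

```yaml
reader:
  # other reader params ...
  queries:
    # rollup + aggregation smooths short spikes and drops noisy per-CPU detail (illustrative metric/labels)
    cpu_busy_by_instance: 'avg(1 - rate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance)'
```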
## Output produced by vmanomaly

`vmanomaly` models generate [metrics](./components/models.md#vmanomaly-output) like `anomaly_score`, `yhat`, `yhat_lower`, `yhat_upper`, and `y`. These metrics provide a comprehensive view of the detected anomalies. The service also produces [health check metrics](./components/monitoring.md#metrics-generated-by-vmanomaly) for monitoring its performance.

`vmanomaly` models generate [metrics](/anomaly-detection/components/models#vmanomaly-output) like `anomaly_score`, `yhat`, `yhat_lower`, `yhat_upper`, and `y`. These metrics provide a comprehensive view of the detected anomalies. The service also produces [health check metrics](/anomaly-detection/components/monitoring#metrics-generated-by-vmanomaly) for monitoring its performance.

## Choosing the right model for vmanomaly

Selecting the best model for `vmanomaly` depends on the data's nature and the [types of anomalies](https://victoriametrics.com/blog/victoriametrics-anomaly-detection-handbook-chapter-2/#categories-of-anomalies) to detect. For instance, [Z-score](./components/models.md#z-score) is suitable for data without trends or seasonality, while more complex patterns might require models like [Prophet](./components/models.md#prophet).

Selecting the best model for `vmanomaly` depends on the data's nature and the [types of anomalies](https://victoriametrics.com/blog/victoriametrics-anomaly-detection-handbook-chapter-2/#categories-of-anomalies) to detect. For instance, [Z-score](/anomaly-detection/components/models#z-score) is suitable for data without trends or seasonality, while more complex patterns might require models like [Prophet](/anomaly-detection/components/models#prophet).

Also, starting from [v1.12.0](./CHANGELOG.md#v1120) it's possible to auto-tune the most important params of the selected model class; find [the details here](./components/models.md#autotuned).

Also, starting from [v1.12.0](/anomaly-detection/changelog/#v1120) it's possible to auto-tune the most important params of the selected model class; find [the details here](/anomaly-detection/components/models#autotuned).
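As a sketch of that choice, the config below assigns a simpler model to a stationary query and a Prophet-based model to a seasonal one. The short class aliases, query aliases and the `z_threshold` value are assumptions; check the models page for the exact class names and parameters:

```yaml
models:
  simple_stationary:          # model alias (illustrative)
    class: 'zscore'           # assumption: alias for the Z-score model
    z_threshold: 3.0          # assumption: boundary width in standard deviations
    queries: ['error_rate']   # illustrative query alias defined in the reader section
  seasonal_traffic:
    class: 'prophet'          # assumption: alias for the Prophet-based model
    queries: ['requests_per_second']
```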
Please refer to the [respective blog post on anomaly types and alerting heuristics](https://victoriametrics.com/blog/victoriametrics-anomaly-detection-handbook-chapter-2/) for more details.

Still not 100% sure what to use? We are [here to help](./README.md#get-in-touch).

Still not 100% sure what to use? We are [here to help](/anomaly-detection/#get-in-touch).

## Alert generation in vmanomaly

While `vmanomaly` detects anomalies and produces scores, it *does not directly generate alerts*. The anomaly scores are written back to VictoriaMetrics, where an external alerting tool, like [`vmalert`](../vmalert.md), can be used to create alerts based on these scores and integrate them with your alerting management system.

While `vmanomaly` detects anomalies and produces scores, it *does not directly generate alerts*. The anomaly scores are written back to VictoriaMetrics, where an external alerting tool, like [`vmalert`](/vmalert), can be used to create alerts based on these scores and integrate them with your alerting management system.

## Preventing alert fatigue

Produced anomaly scores are designed in such a way that values from 0.0 to 1.0 indicate non-anomalous data, while a value greater than 1.0 is generally classified as an anomaly. However, there are no perfect models for anomaly detection, which is why reasonable default expressions like `anomaly_score > 1` may not work 100% of the time. Since the anomaly scores produced by `vmanomaly` are written back as metrics to VictoriaMetrics, tools like [`vmalert`](../vmalert.md) can use [MetricsQL](../MetricsQL.md) expressions to fine-tune alerting thresholds and conditions, balancing between avoiding [false negatives](https://victoriametrics.com/blog/victoriametrics-anomaly-detection-handbook-chapter-1/#false-negative) and reducing [false positives](https://victoriametrics.com/blog/victoriametrics-anomaly-detection-handbook-chapter-1/#false-positive).

Produced anomaly scores are designed in such a way that values from 0.0 to 1.0 indicate non-anomalous data, while a value greater than 1.0 is generally classified as an anomaly. However, there are no perfect models for anomaly detection, which is why reasonable default expressions like `anomaly_score > 1` may not work 100% of the time. Since the anomaly scores produced by `vmanomaly` are written back as metrics to VictoriaMetrics, tools like [`vmalert`](/vmalert) can use [MetricsQL](/MetricsQL) expressions to fine-tune alerting thresholds and conditions, balancing between avoiding [false negatives](https://victoriametrics.com/blog/victoriametrics-anomaly-detection-handbook-chapter-1/#false-negative) and reducing [false positives](https://victoriametrics.com/blog/victoriametrics-anomaly-detection-handbook-chapter-1/#false-positive).
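For instance, smoothing the score and requiring it to persist trades a bit of detection latency for fewer noisy alerts; the threshold, smoothing window and durations below are illustrative assumptions to tune for your data:

```yaml
groups:
  - name: vmanomaly-tuned-alerts          # illustrative group name
    rules:
      - alert: PersistentAnomaly
        # average the score over 10 minutes and raise the threshold slightly;
        # add label filters matching your series
        expr: 'avg_over_time(anomaly_score[10m]) > 1.2'
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Smoothed anomaly_score stayed above 1.2 for 10 minutes"
```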
## How to backtest particular configuration on historical data?

Starting from [v1.7.2](./CHANGELOG.md#v172) you can produce (and write back to VictoriaMetrics TSDB) anomaly scores for a historical (backtesting) period, using the `BacktestingScheduler` [component](./components/scheduler.md#backtesting-scheduler) to imitate consecutive "production runs" of the `PeriodicScheduler` [component](./components/scheduler.md#periodic-scheduler). Please find an example config below:

Starting from [v1.7.2](/anomaly-detection/changelog#v172) you can produce (and write back to VictoriaMetrics TSDB) anomaly scores for a historical (backtesting) period, using the `BacktestingScheduler` [component](/anomaly-detection/components/scheduler#backtesting-scheduler) to imitate consecutive "production runs" of the `PeriodicScheduler` [component](/anomaly-detection/components/scheduler#periodic-scheduler). Please find an example config below:

```yaml
schedulers:

@@ -81,6 +85,8 @@ schedulers:

# copy these from your PeriodicScheduler args
fit_window: 'P14D'
fit_every: 'PT1H'
# number of parallel jobs to run. Default is 1, each job is a separate OneOffScheduler fit/inference run.
n_jobs: 1

models:
model_alias1:

@@ -108,12 +114,12 @@ writer:

# https://docs.victoriametrics.com/anomaly-detection/components/monitoring/
```

The configuration above will produce N intervals of full length (`fit_window`=14d + `fit_every`=1h) until the `to_iso` timestamp is reached, running N consecutive `fit` calls to train models. These models will then be used to produce `M = [fit_every / sampling_frequency]` infer datapoints over the `fit_every` range at the end of each such interval, imitating M consecutive calls of `infer_every` in the `PeriodicScheduler` [config](./components/scheduler.md#periodic-scheduler). These datapoints will then be written back to the VictoriaMetrics TSDB defined in the `writer` [section](./components/writer.md#vm-writer) for further visualization (e.g. in VMUI or Grafana).

The configuration above will produce N intervals of full length (`fit_window`=14d + `fit_every`=1h) until the `to_iso` timestamp is reached, running N consecutive `fit` calls to train models. These models will then be used to produce `M = [fit_every / sampling_frequency]` infer datapoints over the `fit_every` range at the end of each such interval, imitating M consecutive calls of `infer_every` in the `PeriodicScheduler` [config](/anomaly-detection/components/scheduler#periodic-scheduler). These datapoints will then be written back to the VictoriaMetrics TSDB defined in the `writer` [section](/anomaly-detection/components/writer#vm-writer) for further visualization (e.g. in VMUI or Grafana).
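As a worked example of the `M` formula above, assuming the reader-level `sampling_period` of `1m` acts as the `sampling_frequency` (an assumption, since that value is not part of the snippet):

```latex
% fit_every = 1 hour, sampling_frequency (reader-level sampling_period) = 1 minute (assumed)
M = \frac{\texttt{fit\_every}}{\texttt{sampling\_frequency}}
  = \frac{60\ \text{min}}{1\ \text{min}}
  = 60 \ \text{infer datapoints per interval}
```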
## Resource consumption of vmanomaly

`vmanomaly` itself is a lightweight service; resource usage primarily depends on [scheduling](./components/scheduler.md) (how often and on what data to fit/infer your models), [the number and size of timeseries returned by your queries](./components/reader.md#vm-reader), and the complexity of the employed [models](./components/models.md). Its resource usage is directly related to these factors, making it adaptable to various operational scales.

`vmanomaly` itself is a lightweight service; resource usage primarily depends on [scheduling](/anomaly-detection/components/scheduler) (how often and on what data to fit/infer your models), [the number and size of timeseries returned by your queries](./components/reader.md#vm-reader), and the complexity of the employed [models](/anomaly-detection/components/models). Its resource usage is directly related to these factors, making it adaptable to various operational scales.

> **Note**: Starting from [v1.13.0](./CHANGELOG.md#v1130), there is a mode to save anomaly detection models on host filesystem after `fit` stage (instead of keeping them in-memory by default). **Resource-intensive setups** (many models, many metrics, bigger [`fit_window` arg](./components/scheduler.md#periodic-scheduler-config-example)) and/or 3rd-party models that store fit data (like [ProphetModel](./components/models.md#prophet) or [HoltWinters](./components/models.md#holt-winters)) will have RAM consumption greatly reduced at a cost of slightly slower `infer` stage. To enable it, you need to set environment variable `VMANOMALY_MODEL_DUMPS_DIR` to desired location. [Helm charts](https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-metrics-anomaly/README.md) are being updated accordingly ([`StatefulSet`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) for persistent storage starting from chart version `1.3.0`).

> **Note**: Starting from [v1.13.0](/anomaly-detection/changelog#v1130), there is a mode to save anomaly detection models on host filesystem after `fit` stage (instead of keeping them in-memory by default). **Resource-intensive setups** (many models, many metrics, bigger [`fit_window` arg](/anomaly-detection/components/scheduler#periodic-scheduler-config-example)) and/or 3rd-party models that store fit data (like [ProphetModel](/anomaly-detection/components/models#prophet) or [HoltWinters](/anomaly-detection/components/models#holt-winters)) will have RAM consumption greatly reduced at a cost of slightly slower `infer` stage. To enable it, you need to set environment variable `VMANOMALY_MODEL_DUMPS_DIR` to desired location. [Helm charts](https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-metrics-anomaly/README.md) are being updated accordingly ([`StatefulSet`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) for persistent storage starting from chart version `1.3.0`).

Here's an example of how to set it up in docker-compose using volumes:

```yaml
@@ -147,16 +153,16 @@ volumes:
```
## Scaling vmanomaly
> **Note:** As of the latest release we don't support a cluster or auto-scaled version yet (though it's on our roadmap: better backends, more parallelization, etc.), so the proposed workarounds should be applied manually.

`vmanomaly` can be scaled horizontally by launching multiple independent instances, each with its own [MetricsQL](../MetricsQL.md) queries and [configurations](./components/README.md):

`vmanomaly` can be scaled horizontally by launching multiple independent instances, each with its own [MetricsQL](/MetricsQL) queries and [configurations](/anomaly-detection/components/):

- By splitting **queries**, [defined in the reader section](./components/reader.md#vm-reader), and spawning a separate service around each subset. Also, in case you have *only 1 query returning a huge amount of timeseries*, you can further split it by applying MetricsQL filters, i.e. using the "extra_filters" [param in reader](./components/reader.md#vm-reader)

- By splitting **queries**, [defined in the reader section](/anomaly-detection/components/reader#vm-reader), and spawning a separate service around each subset. Also, in case you have *only 1 query returning a huge amount of timeseries*, you can further split it by applying MetricsQL filters, i.e. using the "extra_filters" [param in reader](/anomaly-detection/components/reader?highlight=extra_filters#vm-reader)

- or **models** (in case you decide to run several models for each timeseries received, i.e. for averaging anomaly scores in your `vmalert` alerting rules or using a voting approach to reduce false positives) - see the `queries` arg in [model config](./components/models.md#queries)

- or **models** (in case you decide to run several models for each timeseries received, i.e. for averaging anomaly scores in your `vmalert` alerting rules or using a voting approach to reduce false positives) - see the `queries` arg in [model config](/anomaly-detection/components/models#queries)

- or **schedulers** (in case you want the same models to be trained under several schedules) - see the `schedulers` arg in the [model section](./components/models.md#schedulers) and the `scheduler` [component itself](./components/scheduler.md)

- or **schedulers** (in case you want the same models to be trained under several schedules) - see the `schedulers` arg in the [model section](/anomaly-detection/components/models#schedulers) and the `scheduler` [component itself](/anomaly-detection/components/scheduler)
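A sketch of the last option: the same model trained under two schedules. Scheduler and model aliases are illustrative, and the short class aliases are assumptions; see the scheduler and models component pages for the exact names:

```yaml
schedulers:
  hourly_refit:                 # scheduler alias (illustrative)
    class: 'periodic'           # assumption: alias for PeriodicScheduler
    fit_window: 'P14D'
    fit_every: 'PT1H'
    infer_every: 'PT1M'
  daily_refit:
    class: 'periodic'
    fit_window: 'P30D'
    fit_every: 'P1D'
    infer_every: 'PT1M'

models:
  prophet_shared:               # model alias (illustrative)
    class: 'prophet'            # assumption: alias for the Prophet-based model
    schedulers: ['hourly_refit', 'daily_refit']   # train the same model under both schedules
    queries: ['ingestion_rate']
```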
Here's an example of how to split on the `extra_filters` param

Here's an example of how to split, based on the `extra_filters` reader's arg

```yaml
# config file #1, for 1st vmanomaly instance
# ...
```

@@ -23,17 +23,14 @@ Additionally, **preset mode** minimizes user input needed to run the service. Yo
Available presets:

- [Node-Exporter](#node-exporter)

Here is an example config file to enable [Node-Exporter](#node-exporter) preset:

To enable preset mode, the `preset` arg should be set to a particular preset name:

```yaml
preset: "node-exporter"
reader:
  datasource_url: "http://victoriametrics:8428/" # your datasource url
  # tenant_id: '0:0' # specify for cluster version
writer:
  datasource_url: "http://victoriametrics:8428/" # your datasource url
  # tenant_id: '0:0' # specify for cluster version
preset: "chosen_preset_name" # i.e. "node-exporter"
```

Also, an additional minimal set of arguments may be required from the user to run the preset. See the corresponding preset sections below for the details.

Run a service using the config file with one of the [available options](./QuickStart.md#how-to-install-and-run-vmanomaly).

After you run `vmanomaly` with the `preset` arg specified, available assets can be viewed, copied and downloaded at the `http://localhost:8490/presets/` endpoint.
@@ -43,13 +40,24 @@ After you run `vmanomaly` with `preset` arg specified, available assets can be v
## Node-Exporter
> **Note: Preset assets can also be found [here](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/vmanomaly/vmanomaly-node-exporter-preset/)**

The Node-Exporter preset simplifies the monitoring and anomaly detection of key system metrics collected by [`node_exporter`](https://github.com/prometheus/node_exporter). This preset reduces the need for manual configuration and detects anomalies in metrics such as CPU usage, network errors, and disk latency, ensuring timely identification of potential issues. Below are detailed instructions on enabling and using the Node-Exporter preset, along with a list of included assets like alerting rules and a Grafana dashboard.

> **Note: Node-Exporter preset assets can also be found [here](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/vmanomaly/vmanomaly-node-exporter-preset/)**

To enable Node-Exporter in the config file, set the `preset` arg accordingly. Also, include at least the `datasource_url`-s (and `tenant_id` if using the cluster version of VictoriaMetrics) in the reader and writer sections, like this:

To enable Node-Exporter in the config file, use the `preset` parameter:

```yaml
preset: "node-exporter"
reader:
  datasource_url: "http://victoriametrics:8428/" # source victoriametrics/prometheus
  # tenant_id: '0:0' # specify for cluster version
writer:
  datasource_url: "http://victoriametrics:8428/" # destination victoriametrics/prometheus
  # tenant_id: '0:0' # specify for cluster version
```

Run the service using such a config file with one of the [available options](./QuickStart.md#how-to-install-and-run-vmanomaly).

### Generated anomaly scores

Machine learning models will be fit for each timeseries returned by the underlying [MetricsQL](../MetricsQL.md) queries.
Anomaly score metric labels will also contain [model classes](./components/models.md) and [schedulers](./components/scheduler.md) for labelset uniqueness.
@@ -20,6 +20,61 @@ Future updates will introduce additional readers, expanding the range of data so
## VM reader
> **Note**: Starting from [v1.13.0](/anomaly-detection/changelog/#v1130) there is a backward-compatible change of the [`queries`](/anomaly-detection/components/reader?highlight=queries#vm-reader) arg of [VmReader](#vm-reader). The new format allows specifying per-query parameters, like `step`, to reduce the amount of data read from VictoriaMetrics TSDB and to allow config flexibility. Please see the [per-query parameters](#per-query-parameters) section for the details.
The old format, like

```yaml
# other config sections ...
reader:
  class: 'vm'
  datasource_url: 'http://localhost:8428' # source victoriametrics/prometheus
  sampling_period: "10s" # set it <= min(infer_every) in schedulers section
  queries:
    # old format {query_alias: query_expr}, prior to 1.13, will be converted to a new format automatically
    vmb: 'avg(vm_blocks)'
```
will be converted to the new one, with a warning raised in logs:

```yaml
# other config sections ...
reader:
  class: 'vm'
  datasource_url: 'http://localhost:8428' # source victoriametrics/prometheus
  sampling_period: '10s'
  queries:
    # old format {query_alias: query_expr}, prior to 1.13, will be converted to a new format automatically
    vmb:
      expr: 'avg(vm_blocks)' # initial MetricsQL expression
      step: '10s' # individual step for this query; if not set, it is filled with `sampling_period` from the root level
      # new query-level arguments will be added in a backward-compatible way in future releases
```
### Per-query parameters
Starting from [v1.13.0](/anomaly-detection/changelog/#v1130) there is a change of the [`queries`](/anomaly-detection/components/reader?highlight=queries#vm-reader) arg format. Now each query alias supports the following (sub)fields:

- `expr` (string): MetricsQL/PromQL expression that defines an input for VmReader. As accepted by `/query_range?query=%s`, e.g. `avg(vm_blocks)`.

- `step` (string): query-level frequency of the points returned, e.g. `30s`. Will be converted to the `/query_range?step=%s` param (in seconds). Useful to optimize the total amount of data read from VictoriaMetrics, where different queries may have **different frequencies for different [machine learning models](/anomaly-detection/components/models)** to run on.

> **Note**: if not set explicitly (or if the older config style prior to [v1.13.0](/anomaly-detection/changelog/#v1130) is used), then it is set to the reader-level `sampling_period` arg.

> **Note**: having **different** individual `step` args for queries (i.e. `30s` for `q1` and `2m` for `q2`) is not yet supported for a [multivariate model](/anomaly-detection/components/models/index.html#multivariate-models) if you want to run it on several queries simultaneously (i.e. setting the [`queries`](/anomaly-detection/components/models/#queries) arg of a model to [`q1`, `q2`]).
### Per-query config example
```yaml
reader:
  class: 'vm'
  sampling_period: '1m'
  # other reader params ...
  queries:
    ingestion_rate:
      expr: 'sum(rate(vm_rows_inserted_total[5m])) by (type) > 0'
      step: '2m' # overrides global `sampling_period` of 1m
```
### Config parameters
<table class="params">

@@ -51,12 +106,12 @@ Name of the class needed to enable reading from VictoriaMetrics or Prometheus. V

`queries`
</td>
<td>

`ingestion_rate: 'sum(rate(vm_rows_inserted_total[5m])) by (type) > 0'`

See [per-query config example](#per-query-config-example) above
</td>
<td>

PromQL/MetricsQL query to select data in format: `QUERY_ALIAS: "QUERY"`. As accepted by `/query_range?query=%s`.
<td>

See [per-query config section](#per-query-parameters) above
</td>
</tr>
<tr>