Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

This commit is contained in:
Aliaksandr Valialkin 2022-06-19 23:05:31 +03:00
commit cacd3d6f6d
No known key found for this signature in database
GPG key ID: A72BEC6CD3D0DED1
76 changed files with 2571 additions and 770 deletions


@@ -164,7 +164,7 @@ Then apply new config via the following command:
<div class="with-copy" markdown="1">
```console
kill -HUP `pidof prometheus`
```
@@ -328,7 +328,7 @@ VictoriaMetrics doesn't check `DD_API_KEY` param, so it can be set to arbitrary
Example on how to send data to VictoriaMetrics via DataDog "submit metrics" API from command line:
```console
echo '
{
  "series": [
@@ -354,7 +354,7 @@ The imported data can be read via [export API](https://docs.victoriametrics.com/
<div class="with-copy" markdown="1">
```console
curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
```
@@ -369,6 +369,16 @@ This command should return the following output if everything is OK:
Extra labels may be added to all the written time series by passing `extra_label=name=value` query args.
For example, `/datadog/api/v1/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics.
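As a sketch of how this fits together with the "submit metrics" example above, the following hypothetical submission attaches the `{foo="bar"}` label at ingestion time (the metric name and point value are illustrative):
```console
echo '
{
  "series": [
    {
      "metric": "system.load.1",
      "points": [[0, 0.5]]
    }
  ]
}
' | curl -X POST -H 'Content-Type: application/json' --data-binary @- \
  'http://localhost:8428/datadog/api/v1/series?extra_label=foo=bar'
```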
DataDog agent sends the [configured tags](https://docs.datadoghq.com/getting_started/tagging/) to
the undocumented `/datadog/intake` endpoint. This endpoint isn't supported by VictoriaMetrics yet,
which prevents the configured tags from being added to DataDog agent data sent into VictoriaMetrics.
The workaround is to run a sidecar [vmagent](https://docs.victoriametrics.com/vmagent.html) alongside every DataDog agent,
which must run with `DD_DD_URL=http://localhost:8429/datadog` environment variable.
The sidecar `vmagent` must be configured with the needed tags via `-remoteWrite.label` command-line flag and must forward
incoming data with the added tags to a centralized VictoriaMetrics specified via `-remoteWrite.url` command-line flag.
See [these docs](https://docs.victoriametrics.com/vmagent.html#adding-labels-to-metrics) for details on how to add labels to metrics at `vmagent`.
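A minimal sketch of such a sidecar setup (the label name/value and the central VictoriaMetrics address are illustrative):
```console
# The DataDog agent on the same host must be started with
# DD_DD_URL=http://localhost:8429/datadog, so its data flows through vmagent.
/path/to/vmagent -httpListenAddr=:8429 \
  -remoteWrite.label=datacenter=dc1 \
  -remoteWrite.url=http://central-victoriametrics:8428/api/v1/write
```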
## How to send data from InfluxDB-compatible agents such as [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/)
Use `http://<victoriametric-addr>:8428` url instead of InfluxDB url in agents' configs.
@@ -408,7 +418,7 @@ to local VictoriaMetrics using `curl`:
<div class="with-copy" markdown="1">
```console
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
```
@@ -419,7 +429,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
<div class="with-copy" markdown="1">
```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
```
@@ -447,7 +457,7 @@ Comma-separated list of expected databases can be passed to VictoriaMetrics via
Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` command line flag. For instance,
the following command will enable Graphite receiver in VictoriaMetrics on TCP and UDP port `2003`:
```console
/path/to/victoria-metrics-prod -graphiteListenAddr=:2003
```
@@ -456,7 +466,7 @@ to the VictoriaMetrics host in `StatsD` configs.
Example for writing data with Graphite plaintext protocol to local VictoriaMetrics using `nc`:
```console
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003
```
@@ -466,7 +476,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
<div class="with-copy" markdown="1">
```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
@@ -478,6 +488,8 @@ The `/api/v1/export` endpoint should return the following response:
{"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277406000]}
```
[Graphite relabeling](https://docs.victoriametrics.com/vmagent.html#graphite-relabeling) can be used if the imported Graphite data is going to be queried via [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html).
## Querying Graphite data
Data sent to VictoriaMetrics via `Graphite plaintext protocol` may be read via the following APIs:
@@ -492,6 +504,9 @@ VictoriaMetrics supports `__graphite__` pseudo-label for selecting time series w
The `__graphite__` pseudo-label supports e.g. alternate regexp filters such as `(value1|...|valueN)`. They are transparently converted to `{value1,...,valueN}` syntax [used in Graphite](https://graphite.readthedocs.io/en/latest/render_api.html#paths-and-wildcards). This allows using [multi-value template variables in Grafana](https://grafana.com/docs/grafana/latest/variables/formatting-multi-value-variables/) inside `__graphite__` pseudo-label. For example, Grafana expands `{__graphite__=~"foo.($bar).baz"}` into `{__graphite__=~"foo.(x|y).baz"}` if `$bar` template variable contains `x` and `y` values. In this case the query is automatically converted into `{__graphite__=~"foo.{x,y}.baz"}` before execution.
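For instance, a sketch of selecting the Graphite-style series ingested in the earlier example via the `__graphite__` pseudo-label (the path pattern is illustrative):
```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__graphite__="foo.*.baz"}'
```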
VictoriaMetrics also supports Graphite query language - see [these docs](#graphite-render-api-usage).
## How to send data from OpenTSDB-compatible agents
VictoriaMetrics supports [telnet put protocol](http://opentsdb.net/docs/build/html/api_telnet/put.html)
@@ -503,7 +518,7 @@ The same protocol is used for [ingesting data in KairosDB](https://kairosdb.gith
Enable OpenTSDB receiver in VictoriaMetrics by setting `-opentsdbListenAddr` command line flag. For instance,
the following command enables OpenTSDB receiver in VictoriaMetrics on TCP and UDP port `4242`:
```console
/path/to/victoria-metrics-prod -opentsdbListenAddr=:4242
```
@@ -513,7 +528,7 @@ Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `
<div class="with-copy" markdown="1">
```console
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
```
@@ -524,7 +539,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
<div class="with-copy" markdown="1">
```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
@@ -541,7 +556,7 @@ The `/api/v1/export` endpoint should return the following response:
Enable HTTP server for OpenTSDB `/api/put` requests by setting `-opentsdbHTTPListenAddr` command line flag. For instance,
the following command enables OpenTSDB HTTP server on port `4242`:
```console
/path/to/victoria-metrics-prod -opentsdbHTTPListenAddr=:4242
```
@@ -551,7 +566,7 @@ Example for writing a single data point:
<div class="with-copy" markdown="1">
```console
curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put
```
@@ -561,7 +576,7 @@ Example for writing multiple data points in a single request:
<div class="with-copy" markdown="1">
```console
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
```
@@ -571,7 +586,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
<div class="with-copy" markdown="1">
```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
```
@@ -741,7 +756,7 @@ The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is pos
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```console
ROOT_IMAGE=scratch make package-victoria-metrics
```
@@ -855,7 +870,7 @@ Each JSON line contains samples for a single time series. An example output:
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```console
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
@@ -869,7 +884,7 @@ of time series data. This enables gzip compression for the exported data. Exampl
<div class="with-copy" markdown="1">
```console
curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
```
@@ -903,7 +918,7 @@ for metrics to export.
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```console
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
@@ -920,7 +935,7 @@ for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time
On large databases you may hit the limit on the number of time series that can be exported. In this case you need to adjust `-search.maxExportSeries` command-line flag:
```console
# count unique timeseries in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
@@ -930,7 +945,7 @@ wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```console
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
@@ -962,7 +977,7 @@ Time series data can be imported into VictoriaMetrics via any supported data ing
Example for importing data obtained via [/api/v1/export](#how-to-export-data-in-json-line-format):
```console
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl
@@ -972,7 +987,7 @@ curl -X POST http://destination-victoriametrics:8428/api/v1/import -T exported_d
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import` for importing gzipped data:
```console
# Export gzipped data from <source-victoriametrics>:
curl -H 'Accept-Encoding: gzip' http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl.gz
@@ -993,7 +1008,7 @@ The specification of VictoriaMetrics' native format may yet change and is not fo
However, if you have a native format file obtained via [/api/v1/export/native](#how-to-export-data-in-native-format), this is the most efficient protocol for importing data.
```console
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin
@@ -1034,14 +1049,14 @@ Each request to `/api/v1/import/csv` may contain arbitrary number of CSV lines.
Example for importing CSV data via `/api/v1/import/csv`:
```console
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
```
@@ -1067,7 +1082,7 @@ via `/api/v1/import/prometheus` path. For example, the following line imports a
<div class="with-copy" markdown="1">
```console
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
```
@@ -1077,7 +1092,7 @@ The following command may be used for verifying the imported data:
<div class="with-copy" markdown="1">
```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
```
@@ -1093,7 +1108,7 @@ Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus`
<div class="with-copy" markdown="1">
```console
# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
```
@@ -1132,7 +1147,9 @@ Example contents for `-relabelConfig` file:
  regex: true
```
VictoriaMetrics components provide additional relabeling features such as Graphite-style relabeling.
See [these docs](https://docs.victoriametrics.com/vmagent.html#relabeling) for more details.
## Federation
@@ -1142,7 +1159,7 @@ at `http://<victoriametrics-addr>:8428/federate?match[]=<timeseries_selector_for
Optional `start` and `end` args may be added to the request in order to scrape the last point for each selected time series on the `[start ... end]` interval.
`start` and `end` may contain either unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```console
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
@@ -1196,7 +1213,7 @@ See also [cardinality limiter](#cardinality-limiter) and [capacity planning docs
* Install multiple VictoriaMetrics instances in distinct datacenters (availability zones).
* Pass addresses of these instances to [vmagent](https://docs.victoriametrics.com/vmagent.html) via `-remoteWrite.url` command-line flag:
```console
/path/to/vmagent -remoteWrite.url=http://<victoriametrics-addr-1>:8428/api/v1/write -remoteWrite.url=http://<victoriametrics-addr-2>:8428/api/v1/write
```
@@ -1215,7 +1232,7 @@ remote_write:
* Apply the updated config:
```console
kill -HUP `pidof prometheus`
```
@@ -1396,7 +1413,7 @@ For example, substitute `-graphiteListenAddr=:2003` with `-graphiteListenAddr=<i
If you plan to store more than 1TB of data on `ext4` partition or plan extending it to more than 16TB,
then the following options are recommended to pass to `mkfs.ext4`:
```console
mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
```
@@ -1457,7 +1474,7 @@ In this case VictoriaMetrics puts query trace into `trace` field in the output J
For example, the following command:
```console
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq '.trace'
```
@@ -1718,7 +1735,7 @@ VictoriaMetrics provides handlers for collecting the following [Go profiles](htt
<div class="with-copy" markdown="1">
```console
curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
```
@@ -1728,7 +1745,7 @@ curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
<div class="with-copy" markdown="1">
```console
curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof
```


@@ -73,7 +73,7 @@ Pass `-help` to `vmagent` in order to see [the full list of supported command-li
* Sending `SIGHUP` signal to `vmagent` process:
```console
kill -SIGHUP `pidof vmagent`
```
@@ -252,12 +252,13 @@ Labels can be added to metrics by the following mechanisms:
VictoriaMetrics components (including `vmagent`) support Prometheus-compatible relabeling.
They provide the following additional actions on top of actions from the [Prometheus relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config):
* `replace_all`: replaces all of the occurrences of `regex` in the values of `source_labels` with the `replacement` and stores the results in the `target_label`
* `labelmap_all`: replaces all of the occurrences of `regex` in all the label names with the `replacement`
* `keep_if_equal`: keeps the entry if all the label values from `source_labels` are equal
* `drop_if_equal`: drops the entry if all the label values from `source_labels` are equal
* `keep_metrics`: keeps all the metrics with names matching the given `regex`
* `drop_metrics`: drops all the metrics with names matching the given `regex`
* `graphite`: applies Graphite-style relabeling to the metric name. See [these docs](#graphite-relabeling)
The `regex` value can be split into multiple lines for improved readability and maintainability. These lines are automatically joined with `|` char when parsed. For example, the following configs are equivalent:
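As a sketch of what such equivalent configs can look like (the metric names are illustrative):
```yaml
# Multi-line form: the regex lines are joined with `|` when parsed...
- action: keep_metrics
  regex:
  - "metric_a"
  - "metric_b"
  - "foo_.+"

# ...making it equivalent to this single-line form:
- action: keep_metrics
  regex: "metric_a|metric_b|foo_.+"
```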
@@ -305,6 +306,38 @@ You can read more about relabeling in the following articles:
* [Extracting labels from legacy metric names](https://www.robustperception.io/extracting-labels-from-legacy-metric-names)
* [relabel_configs vs metric_relabel_configs](https://www.robustperception.io/relabel_configs-vs-metric_relabel_configs)
## Graphite relabeling
VictoriaMetrics components support `action: graphite` relabeling rules, which allow extracting various parts from Graphite-style metrics
into the configured labels with the syntax similar to [Glob matching in statsd_exporter](https://github.com/prometheus/statsd_exporter#glob-matching).
Note that the `name` field must be substituted with explicit `__name__` option under `labels` section.
If `__name__` option is missing under `labels` section, then the original Graphite-style metric name is left unchanged.
For example, the following relabeling rule generates `requests_total{job="app42",instance="host123:8080"}` metric
from "app42.host123.requests.total" Graphite-style metric:
```yaml
- action: graphite
match: "*.*.*.total"
labels:
__name__: "${3}_total"
job: "$1"
instance: "${2}:8080"
```
Important notes about `action: graphite` relabeling rules:
- The relabeling rule is applied only to metrics, which match the given `match` expression. Other metrics remain unchanged.
- The `*` matches the maximum possible number of chars until the next dot or until the next part of the `match` expression whichever comes first.
It may match zero chars if the next char is `.`.
For example, `match: "app*foo.bar"` matches `app42foo.bar` and `42` becomes available to use at `labels` section via `$1` capture group.
- The `$0` capture group matches the original metric name.
- The relabeling rules are executed in order defined in the original config.
The `action: graphite` relabeling rules are easier to write and maintain than `action: replace` for labels extraction from Graphite-style metric names.
Additionally, the `action: graphite` relabeling rules usually work much faster than the equivalent `action: replace` rules.
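For comparison, a hedged sketch of roughly equivalent `action: replace` rules for the `app42.host123.requests.total` example above (the regexps are illustrative; the label-extracting rules must run before the final rule rewrites `__name__`):
```yaml
- action: replace
  source_labels: [__name__]
  regex: '([^.]*)\.[^.]*\.[^.]*\.total'
  target_label: job
  replacement: '$1'
- action: replace
  source_labels: [__name__]
  regex: '[^.]*\.([^.]*)\.[^.]*\.total'
  target_label: instance
  replacement: '${1}:8080'
- action: replace
  source_labels: [__name__]
  regex: '[^.]*\.[^.]*\.([^.]*)\.total'
  target_label: __name__
  replacement: '${1}_total'
```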
## Prometheus staleness markers
`vmagent` sends [Prometheus staleness markers](https://www.robustperception.io/staleness-and-promql) to `-remoteWrite.url` in the following cases:
@@ -560,7 +593,7 @@ Every Kafka message may contain multiple lines in `influx`, `prometheus`, `graph
The following command starts `vmagent`, which reads metrics in InfluxDB line protocol format from Kafka broker at `localhost:9092` from the topic `metrics-by-telegraf` and sends them to remote storage at `http://localhost:8428/api/v1/write`:
```console
./bin/vmagent -remoteWrite.url=http://localhost:8428/api/v1/write \
  -kafka.consumer.topic.brokers=localhost:9092 \
  -kafka.consumer.topic.format=influx \
@ -622,13 +655,13 @@ Two types of auth are supported:
* sasl with username and password:

```console
./bin/vmagent -remoteWrite.url=kafka://localhost:9092/?topic=prom-rw&security.protocol=SASL_SSL&sasl.mechanisms=PLAIN -remoteWrite.basicAuth.username=user -remoteWrite.basicAuth.password=password
```

* tls certificates:

```console
./bin/vmagent -remoteWrite.url=kafka://localhost:9092/?topic=prom-rw&security.protocol=SSL -remoteWrite.tlsCAFile=/opt/ca.pem -remoteWrite.tlsCertFile=/opt/cert.pem -remoteWrite.tlsKeyFile=/opt/key.pem
```
@ -657,7 +690,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmagent`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

```console
ROOT_IMAGE=scratch make package-vmagent
```
@ -685,7 +718,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
<div class="with-copy" markdown="1">

```console
curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof
```

@ -695,7 +728,7 @@ curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof

<div class="with-copy" markdown="1">

```console
curl http://0.0.0.0:8429/debug/pprof/profile > cpu.pprof
```

@ -36,7 +36,7 @@ implementation and aims to be compatible with its syntax.
To build `vmalert` from sources:

```console
git clone https://github.com/VictoriaMetrics/VictoriaMetrics
cd VictoriaMetrics
make vmalert
@ -52,12 +52,13 @@ To start using `vmalert` you will need the following things:
aggregating alerts, and sending notifications. Please note, notifier address also supports Consul and DNS Service Discovery via
[config file](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmalert/notifier/config.go).
* remote write address [optional] - [remote write](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations)
  compatible storage to persist rules and alerts state info. To persist results to multiple destinations, use vmagent
  configured with multiple remote writes as a proxy;
* remote read address [optional] - MetricsQL compatible datasource to restore alerts state from.

Then configure `vmalert` accordingly:

```console
./bin/vmalert -rule=alert.rules \            # Path to the file with rules configuration. Supports wildcard
    -datasource.url=http://localhost:8428 \  # PromQL compatible datasource
    -notifier.url=http://localhost:9093 \    # AlertManager URL (required if alerting rules are used)
@ -424,6 +425,21 @@ Flags `-remoteRead.url` and `-notifier.url` are omitted since we assume only rec
See also [downsampling docs](https://docs.victoriametrics.com/#downsampling).
#### Multiple remote writes
To persist recording or alerting rule results, `vmalert` requires `-remoteWrite.url` to be set.
This flag supports only one destination, so to persist rule results to multiple destinations
we recommend using [vmagent](https://docs.victoriametrics.com/vmagent.html) as a fan-out proxy:

<img alt="vmalert multiple remote write destinations" src="vmalert_multiple_rw.png">

In this topology, `vmalert` is configured to persist rule results to `vmagent`, and `vmagent`
is configured to fan out the received data to two or more destinations.
Using `vmagent` as a proxy provides additional benefits such as
[data persisting when storage is unreachable](https://docs.victoriametrics.com/vmagent.html#replication-and-high-availability)
and time series modification via [relabeling](https://docs.victoriametrics.com/vmagent.html#relabeling).
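The flag wiring for such a topology could be sketched as follows (host names and ports are illustrative):

```console
# vmalert persists rule results to vmagent:
./bin/vmalert -rule=alert.rules \
    -datasource.url=http://localhost:8428 \
    -remoteWrite.url=http://vmagent:8429

# vmagent fans out the received data to two storage nodes:
./bin/vmagent -remoteWrite.url=http://vm-1:8428/api/v1/write \
    -remoteWrite.url=http://vm-2:8428/api/v1/write
```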
### Web

`vmalert` runs a web-server (`-httpListenAddr`) for serving metrics and alerts endpoints:
@ -1022,7 +1038,7 @@ It is recommended using
You can build `vmalert` docker image from source and push it to your own docker repository.
Run the following commands from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics):

```console
make package-vmalert
docker tag victoria-metrics/vmalert:version my-repo:my-version-name
docker push my-repo:my-version-name

@ -438,11 +438,17 @@ func (e *executor) exec(ctx context.Context, rule Rule, ts time.Time, resolveDur
		return nil
	}

	wg := sync.WaitGroup{}
	for _, nt := range e.notifiers() {
		wg.Add(1)
		go func(nt notifier.Notifier) {
			if err := nt.Send(ctx, alerts); err != nil {
				errGr.Add(fmt.Errorf("rule %q: failed to send alerts to addr %q: %w", rule, nt.Addr(), err))
			}
			wg.Done()
		}(nt)
	}
	wg.Wait()
	return errGr.Err()
}
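The change above fans out `Send` calls across goroutines and waits on a `sync.WaitGroup`, so one slow or faulty notifier no longer blocks the others. A self-contained sketch of the same pattern (the `notifier` interface and the two notifier types below are simplified stand-ins, not the repo's actual types):

```go
package main

import (
	"fmt"
	"sync"
)

// errGroup collects errors from concurrent goroutines;
// Add is guarded by a mutex, mirroring the thread-safe ErrGroup change in this commit.
type errGroup struct {
	mu   sync.Mutex
	errs []error
}

func (eg *errGroup) Add(err error) {
	eg.mu.Lock()
	eg.errs = append(eg.errs, err)
	eg.mu.Unlock()
}

// notifier is a simplified stand-in for the real notifier.Notifier interface.
type notifier interface {
	Send(msg string) error
}

type okNotifier struct{}

func (okNotifier) Send(string) error { return nil }

type faultyNotifier struct{}

func (faultyNotifier) Send(string) error { return fmt.Errorf("send failed") }

// fanOut sends msg to every notifier concurrently and collects all errors,
// so a faulty notifier cannot prevent delivery to the healthy ones.
func fanOut(msg string, nts []notifier) []error {
	var eg errGroup
	var wg sync.WaitGroup
	for _, nt := range nts {
		wg.Add(1)
		go func(nt notifier) {
			defer wg.Done()
			if err := nt.Send(msg); err != nil {
				eg.Add(err)
			}
		}(nt)
	}
	wg.Wait()
	return eg.errs
}

func main() {
	errs := fanOut("alert", []notifier{okNotifier{}, faultyNotifier{}})
	fmt.Println(len(errs)) // 1: only the faulty notifier contributed an error
}
```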

@ -413,3 +413,42 @@ func TestPurgeStaleSeries(t *testing.T) {
		[]Rule{&AlertingRule{RuleID: 1}, &AlertingRule{RuleID: 2}},
	)
}
func TestFaultyNotifier(t *testing.T) {
	fq := &fakeQuerier{}
	fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "bar"))

	r := newTestAlertingRule("instant", 0)
	r.q = fq

	fn := &fakeNotifier{}
	e := &executor{
		notifiers: func() []notifier.Notifier {
			return []notifier.Notifier{
				&faultyNotifier{},
				fn,
			}
		},
	}

	delay := 5 * time.Second
	ctx, cancel := context.WithTimeout(context.Background(), delay)
	defer cancel()

	go func() {
		_ = e.exec(ctx, r, time.Now(), 0, 10)
	}()

	tn := time.Now()
	deadline := tn.Add(delay / 2)
	for {
		if fn.getCounter() > 0 {
			return
		}
		if tn.After(deadline) {
			break
		}
		tn = time.Now()
		time.Sleep(time.Millisecond * 100)
	}
	t.Fatalf("alive notifier didn't receive notification by %v", deadline)
}

@ -87,6 +87,18 @@ func (fn *fakeNotifier) getAlerts() []notifier.Alert {
	return fn.alerts
}
type faultyNotifier struct {
	fakeNotifier
}

// Send blocks until the context deadline is reached and then returns an error.
func (fn *faultyNotifier) Send(ctx context.Context, _ []notifier.Alert) error {
	d, ok := ctx.Deadline()
	if ok {
		time.Sleep(time.Until(d))
	}
	return fmt.Errorf("send failed")
}
func metricWithValueAndLabels(t *testing.T, value float64, labels ...string) datasource.Metric {
	return metricWithValuesAndLabels(t, []float64{value}, labels...)
}

@ -145,7 +145,7 @@ func notifiersFromFlags(gen AlertURLGenerator) ([]Notifier, error) {
	}
	addr = strings.TrimSuffix(addr, "/")
	am, err := NewAlertManager(addr+alertManagerPath, gen, authCfg, nil, time.Second*10)
	if err != nil {
		return nil, err
	}

@ -3,24 +3,34 @@ package utils
import (
	"fmt"
	"strings"
	"sync"
)

// ErrGroup accumulates multiple errors
// and produces single error message.
type ErrGroup struct {
	mu   sync.Mutex
	errs []error
}

// Add adds a new error to group.
// Is thread-safe.
func (eg *ErrGroup) Add(err error) {
	eg.mu.Lock()
	eg.errs = append(eg.errs, err)
	eg.mu.Unlock()
}

// Err checks if group contains at least
// one error.
func (eg *ErrGroup) Err() error {
	if eg == nil {
		return nil
	}
	eg.mu.Lock()
	defer eg.mu.Unlock()
	if len(eg.errs) == 0 {
		return nil
	}
	return eg
@ -28,6 +38,9 @@ func (eg *ErrGroup) Err() error {
// Error satisfies Error interface
func (eg *ErrGroup) Error() string {
	eg.mu.Lock()
	defer eg.mu.Unlock()
	if len(eg.errs) == 0 {
		return ""
	}

@ -2,6 +2,7 @@ package utils
import (
	"errors"
	"fmt"
	"testing"
)

@ -36,3 +37,29 @@ func TestErrGroup(t *testing.T) {
			}
		}
	}
// TestErrGroupConcurrent tests concurrent
// use of the error group.
// Should be executed with the -race flag.
func TestErrGroupConcurrent(t *testing.T) {
	eg := new(ErrGroup)
	const writersN = 4
	payload := make(chan error, writersN)
	for i := 0; i < writersN; i++ {
		go func() {
			for err := range payload {
				eg.Add(err)
			}
		}()
	}

	const iterations = 500
	for i := 0; i < iterations; i++ {
		payload <- fmt.Errorf("error %d", i)
		if i%10 == 0 {
			_ = eg.Err()
		}
	}
	close(payload)
}

@ -0,0 +1,711 @@
{
"type": "excalidraw",
"version": 2,
"source": "https://excalidraw.com",
"elements": [
{
"type": "rectangle",
"version": 797,
"versionNonce": 1977657992,
"isDeleted": false,
"id": "VgBUzo0blGR-Ijd2mQEEf",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": 289.6802978515625,
"y": 399.3895568847656,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 123.7601318359375,
"height": 72.13211059570312,
"seed": 1194011660,
"groupIds": [
"iBaXgbpyifSwPplm_GO5b"
],
"strokeSharpness": "sharp",
"boundElements": [
{
"type": "arrow",
"id": "miEbzHxOPXe4PEYvXiJp5"
},
{
"type": "arrow",
"id": "rcmiQfIWtfbTTlwxqr1sl"
},
{
"type": "arrow",
"id": "P-dpWlSTtnsux-zr5oqgF"
},
{
"type": "arrow",
"id": "oAToSPttH7aWoD_AqXGFX"
},
{
"type": "arrow",
"id": "wRO0q9xKPHc8e8XPPsQWh"
},
{
"type": "arrow",
"id": "sxEhnxlbT7ldlSsmHDUHp"
},
{
"type": "arrow",
"id": "pD9DcILMxa6GaR1U5YyMO"
},
{
"type": "arrow",
"id": "HPEwr85wL4IedW0AgdArp"
},
{
"type": "arrow",
"id": "EyecK0YM9Cc8T6ju-nTOc"
},
{
"id": "xpdAlCCGgIMAgSaqQ4K65",
"type": "arrow"
}
],
"updated": 1655372487772,
"link": null,
"locked": false
},
{
"type": "text",
"version": 671,
"versionNonce": 1438327288,
"isDeleted": false,
"id": "e9TDm09y-GhPm84XWt0Jv",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": 311.22686767578125,
"y": 420.4738006591797,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 83,
"height": 24,
"seed": 327273100,
"groupIds": [
"iBaXgbpyifSwPplm_GO5b"
],
"strokeSharpness": "sharp",
"boundElements": [],
"updated": 1655372487772,
"link": null,
"locked": false,
"fontSize": 20,
"fontFamily": 3,
"text": "vmagent",
"baseline": 20,
"textAlign": "center",
"verticalAlign": "middle",
"containerId": null,
"originalText": "vmagent"
},
{
"type": "rectangle",
"version": 1247,
"versionNonce": 1809504904,
"isDeleted": false,
"id": "Sa4OBd1ZjD6itohm7Ll8z",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": 542.2673645019531,
"y": 308.46409606933594,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 219.1235961914062,
"height": 44.74725341796875,
"seed": 126267060,
"groupIds": [
"ek-pq3umtz1yN-J_-preq"
],
"strokeSharpness": "sharp",
"boundElements": [
{
"type": "arrow",
"id": "wRO0q9xKPHc8e8XPPsQWh"
},
{
"type": "arrow",
"id": "he-SpFjCxEQEWpWny2kKP"
},
{
"type": "arrow",
"id": "-pjrKo16rOsasM8viZPJ-"
},
{
"id": "HPEwr85wL4IedW0AgdArp",
"type": "arrow"
}
],
"updated": 1655372610014,
"link": null,
"locked": false
},
{
"type": "text",
"version": 1149,
"versionNonce": 1939391880,
"isDeleted": false,
"id": "we766A079lfGYu2_aC4Pl",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": 629.1559448242188,
"y": 318.8975372314453,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 48,
"height": 24,
"seed": 478660236,
"groupIds": [
"ek-pq3umtz1yN-J_-preq"
],
"strokeSharpness": "sharp",
"boundElements": [],
"updated": 1655372621140,
"link": null,
"locked": false,
"fontSize": 20,
"fontFamily": 3,
"text": "vm-1",
"baseline": 20,
"textAlign": "center",
"verticalAlign": "top",
"containerId": null,
"originalText": "vm-1"
},
{
"type": "arrow",
"version": 337,
"versionNonce": 1739475336,
"isDeleted": false,
"id": "HPEwr85wL4IedW0AgdArp",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": 423.70701599121094,
"y": 431.0309448437124,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 107.82342529296875,
"height": 100.61778190120276,
"seed": 389863732,
"groupIds": [],
"strokeSharpness": "round",
"boundElements": [],
"updated": 1655372610015,
"link": null,
"locked": false,
"startBinding": {
"elementId": "VgBUzo0blGR-Ijd2mQEEf",
"focus": 0.6700023593531782,
"gap": 10.266586303710938
},
"endBinding": {
"elementId": "Sa4OBd1ZjD6itohm7Ll8z",
"focus": 0.9042666945544442,
"gap": 10.736923217773438
},
"lastCommittedPoint": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"points": [
[
0,
0
],
[
107.82342529296875,
-100.61778190120276
]
]
},
{
"type": "arrow",
"version": 429,
"versionNonce": 252631288,
"isDeleted": false,
"id": "EyecK0YM9Cc8T6ju-nTOc",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": 424.7585906982422,
"y": 432.4328003132737,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 119.94342041015625,
"height": 83.58206327156176,
"seed": 981082124,
"groupIds": [],
"strokeSharpness": "round",
"boundElements": [],
"updated": 1655372623571,
"link": null,
"locked": false,
"startBinding": {
"elementId": "VgBUzo0blGR-Ijd2mQEEf",
"focus": -0.6826568395144794,
"gap": 11.318161010742188
},
"endBinding": {
"elementId": "lXpACjXQqK7SZF_vrACjJ",
"focus": -0.8650156795513397,
"gap": 3.6341629028320312
},
"lastCommittedPoint": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"points": [
[
0,
0
],
[
119.94342041015625,
83.58206327156176
]
]
},
{
"type": "rectangle",
"version": 979,
"versionNonce": 896077192,
"isDeleted": false,
"id": "X08ptHmEm7tCgoFbQntAR",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": -4.634010314941406,
"y": 402.69072341918945,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 123.7601318359375,
"height": 72.13211059570312,
"seed": 1000953848,
"groupIds": [
"IAd7y_6yDxq13U11FuJvH"
],
"strokeSharpness": "sharp",
"boundElements": [
{
"type": "arrow",
"id": "miEbzHxOPXe4PEYvXiJp5"
},
{
"type": "arrow",
"id": "rcmiQfIWtfbTTlwxqr1sl"
},
{
"type": "arrow",
"id": "P-dpWlSTtnsux-zr5oqgF"
},
{
"type": "arrow",
"id": "oAToSPttH7aWoD_AqXGFX"
},
{
"type": "arrow",
"id": "wRO0q9xKPHc8e8XPPsQWh"
},
{
"type": "arrow",
"id": "sxEhnxlbT7ldlSsmHDUHp"
},
{
"type": "arrow",
"id": "pD9DcILMxa6GaR1U5YyMO"
},
{
"type": "arrow",
"id": "HPEwr85wL4IedW0AgdArp"
},
{
"type": "arrow",
"id": "EyecK0YM9Cc8T6ju-nTOc"
},
{
"id": "xpdAlCCGgIMAgSaqQ4K65",
"type": "arrow"
}
],
"updated": 1655372487773,
"link": null,
"locked": false
},
{
"type": "text",
"version": 844,
"versionNonce": 2073980664,
"isDeleted": false,
"id": "4lz3UmUePrjYOJGyMEsNo",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": 16.912559509277344,
"y": 423.7749671936035,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 82,
"height": 24,
"seed": 808600456,
"groupIds": [
"IAd7y_6yDxq13U11FuJvH"
],
"strokeSharpness": "sharp",
"boundElements": [],
"updated": 1655372487773,
"link": null,
"locked": false,
"fontSize": 20,
"fontFamily": 3,
"text": "vmalert",
"baseline": 19,
"textAlign": "center",
"verticalAlign": "middle",
"containerId": null,
"originalText": "vmalert"
},
{
"id": "xpdAlCCGgIMAgSaqQ4K65",
"type": "arrow",
"x": 127.58199310302739,
"y": 437.3415815729096,
"width": 154.43469238281244,
"height": 0.2578931190849971,
"angle": 0,
"strokeColor": "black",
"backgroundColor": "transparent",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "round",
"seed": 1769759112,
"version": 140,
"versionNonce": 1727929480,
"isDeleted": false,
"boundElements": null,
"updated": 1655372487773,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
154.43469238281244,
0.2578931190849971
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "X08ptHmEm7tCgoFbQntAR",
"focus": -0.042373209435744755,
"gap": 8.45587158203125
},
"endBinding": {
"elementId": "VgBUzo0blGR-Ijd2mQEEf",
"focus": -0.062483627408895646,
"gap": 7.663612365722656
},
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"type": "text",
"version": 896,
"versionNonce": 619040760,
"isDeleted": false,
"id": "d_hJkkcPArQGdFiPDbjtp",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": 129.2102279663086,
"y": 404.1378517150879,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 141,
"height": 19,
"seed": 2108447992,
"groupIds": [],
"strokeSharpness": "sharp",
"boundElements": [
{
"type": "arrow",
"id": "wRO0q9xKPHc8e8XPPsQWh"
}
],
"updated": 1655372487773,
"link": null,
"locked": false,
"fontSize": 16,
"fontFamily": 3,
"text": "persist results",
"baseline": 15,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "persist results"
},
{
"id": "P35cFQroIm2nrmm3Jlqgp",
"type": "text",
"x": -7.461128234863281,
"y": 483.3255958557129,
"width": 301,
"height": 20,
"angle": 0,
"strokeColor": "black",
"backgroundColor": "transparent",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1314060792,
"version": 179,
"versionNonce": 139280376,
"isDeleted": false,
"boundElements": null,
"updated": 1655372636346,
"link": null,
"locked": false,
"text": " -remoteWrite.url=http://vmagent",
"fontSize": 16,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 16,
"containerId": null,
"originalText": " -remoteWrite.url=http://vmagent"
},
{
"type": "rectangle",
"version": 1339,
"versionNonce": 812947448,
"isDeleted": false,
"id": "lXpACjXQqK7SZF_vrACjJ",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": 548.3361740112305,
"y": 487.1258888244629,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 219.1235961914062,
"height": 44.74725341796875,
"seed": 333549960,
"groupIds": [
"vuLTnxw8A0DXtmDYT1F4r"
],
"strokeSharpness": "sharp",
"boundElements": [
{
"type": "arrow",
"id": "wRO0q9xKPHc8e8XPPsQWh"
},
{
"type": "arrow",
"id": "he-SpFjCxEQEWpWny2kKP"
},
{
"type": "arrow",
"id": "-pjrKo16rOsasM8viZPJ-"
},
{
"id": "EyecK0YM9Cc8T6ju-nTOc",
"type": "arrow"
}
],
"updated": 1655372623571,
"link": null,
"locked": false
},
{
"type": "text",
"version": 1244,
"versionNonce": 666803448,
"isDeleted": false,
"id": "v9qzZSsHdJ_ETRlP4Msn5",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": 635.2247543334961,
"y": 497.55932998657227,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 48,
"height": 24,
"seed": 1105210104,
"groupIds": [
"vuLTnxw8A0DXtmDYT1F4r"
],
"strokeSharpness": "sharp",
"boundElements": [],
"updated": 1655372625794,
"link": null,
"locked": false,
"fontSize": 20,
"fontFamily": 3,
"text": "vm-2",
"baseline": 20,
"textAlign": "center",
"verticalAlign": "top",
"containerId": null,
"originalText": "vm-2"
},
{
"id": "yb3B2pFN0OZOd4yLmSU2m",
"type": "text",
"x": 449.79036712646484,
"y": 406.616886138916,
"width": 442,
"height": 20,
"angle": 0,
"strokeColor": "black",
"backgroundColor": "transparent",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1374332808,
"version": 196,
"versionNonce": 526480264,
"isDeleted": false,
"boundElements": null,
"updated": 1655372596999,
"link": null,
"locked": false,
"text": "-remoteWrite.url=https://vm-1:8428/api/v1/write",
"fontSize": 16,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 16,
"containerId": null,
"originalText": "-remoteWrite.url=https://vm-1:8428/api/v1/write"
},
{
"type": "text",
"version": 242,
"versionNonce": 1304477832,
"isDeleted": false,
"id": "8CXNdrDePIAAwgJB2b8YT",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": 450.0511703491211,
"y": 432.6653480529785,
"strokeColor": "black",
"backgroundColor": "transparent",
"width": 442,
"height": 20,
"seed": 1349606392,
"groupIds": [],
"strokeSharpness": "sharp",
"boundElements": [],
"updated": 1655372600292,
"link": null,
"locked": false,
"fontSize": 16,
"fontFamily": 3,
"text": "-remoteWrite.url=https://vm-2:8428/api/v1/write",
"baseline": 16,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "-remoteWrite.url=https://vm-2:8428/api/v1/write"
},
{
"type": "text",
"version": 1195,
"versionNonce": 1912405496,
"isDeleted": false,
"id": "Ev-VujoFglVNIh5GIhsba",
"fillStyle": "hachure",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"x": 357.2894821166992,
"y": 370.6587562561035,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 114,
"height": 20,
"seed": 1289300104,
"groupIds": [],
"strokeSharpness": "sharp",
"boundElements": [
{
"type": "arrow",
"id": "wRO0q9xKPHc8e8XPPsQWh"
}
],
"updated": 1655372703770,
"link": null,
"locked": false,
"fontSize": 16,
"fontFamily": 3,
"text": "fan-out data",
"baseline": 16,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "fan-out data"
}
],
"appState": {
"gridSize": null,
"viewBackgroundColor": "#ffffff"
},
"files": {}
}

Binary file not shown (added image, 80 KiB).

@ -10,7 +10,7 @@ The `-auth.config` can point to either local file or to http url.
Just download `vmutils-*` archive from [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases), unpack it
and pass the following flag to `vmauth` binary in order to start authorizing and routing requests:

```console
/path/to/vmauth -auth.config=/path/to/auth/config.yml
```
@ -129,7 +129,7 @@ It is expected that all the backend services protected by `vmauth` are located i
Do not transfer Basic Auth headers in plaintext over untrusted networks. Enable https. This can be done by passing the following `-tls*` command-line flags to `vmauth`:

```console
-tls
	Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
-tlsCertFile string
@ -181,7 +181,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmauth`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

```console
ROOT_IMAGE=scratch make package-vmauth
```
@ -193,7 +193,7 @@ ROOT_IMAGE=scratch make package-vmauth
<div class="with-copy" markdown="1">

```console
curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof
```
@ -203,7 +203,7 @@ curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof
<div class="with-copy" markdown="1">

```console
curl http://0.0.0.0:8427/debug/pprof/profile > cpu.pprof
```
@ -217,7 +217,7 @@ The collected profiles may be analyzed with [go tool pprof](https://github.com/g
Pass `-help` command-line arg to `vmauth` in order to see all the configuration options:

```console
./vmauth -help
vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics.

@ -28,7 +28,7 @@ creation of hourly, daily, weekly and monthly backups.
Regular backup can be performed with the following command:

```console
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/new/backup>
```
@ -43,7 +43,7 @@ vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=h
If the destination GCS bucket already contains the previous backup at `-origin` path, then new backup can be sped up
with the following command:

```console
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/new/backup> -origin=gs://<bucket>/<path/to/existing/backup>
```
@ -54,7 +54,7 @@ It saves time and network bandwidth costs by performing server-side copy for the
Incremental backups are performed if `-dst` points to an already existing backup. In this case only new data is uploaded to remote storage.
It saves time and network bandwidth costs when working with big backups:

```console
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/existing/backup>
```
@ -64,7 +64,7 @@ Smart backups mean storing full daily backups into `YYYYMMDD` folders and creati
* Run the following command every hour:

```console
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/latest
```
@ -73,7 +73,7 @@ The command will upload only changed data to `gs://<bucket>/latest`.
* Run the following command once a day:

```console
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<YYYYMMDD> -origin=gs://<bucket>/latest
```
@ -129,7 +129,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
for s3 (aws, minio or other s3 compatible storages):

```console
[default]
aws_access_key_id=theaccesskey
aws_secret_access_key=thesecretaccesskeyvalue
@ -155,7 +155,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
* Usage with s3 custom url endpoint. It is possible to use `vmbackup` with s3 compatible storages like minio, cloudian, etc.
  You have to add a custom url endpoint via flag:

```console
# for minio
-customS3Endpoint=http://localhost:9000
@ -165,7 +165,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
* Run `vmbackup -help` in order to see all the available options:

```console
-concurrency int
	The number of concurrent workers. Higher concurrency may reduce backup duration (default 10)
-configFilePath string
@@ -280,6 +280,6 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmbackup`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image: by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```bash ```console
ROOT_IMAGE=scratch make package-vmbackup ROOT_IMAGE=scratch make package-vmbackup
``` ```
@@ -15,7 +15,7 @@ Features:
To see the full list of supported modes To see the full list of supported modes
run the following command: run the following command:
```bash ```console
$ ./vmctl --help $ ./vmctl --help
NAME: NAME:
vmctl - VictoriaMetrics command-line tool vmctl - VictoriaMetrics command-line tool
@@ -527,7 +527,7 @@ and specify `accountID` param.
In this mode, `vmctl` allows verifying correctness and integrity of data exported via [native format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-export-data-in-native-format) from VictoriaMetrics. In this mode, `vmctl` allows verifying correctness and integrity of data exported via [native format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-export-data-in-native-format) from VictoriaMetrics.
You can verify exported data at disk before uploading it by `vmctl verify-block` command: You can verify exported data at disk before uploading it by `vmctl verify-block` command:
```bash ```console
# export blocks from VictoriaMetrics # export blocks from VictoriaMetrics
curl localhost:8428/api/v1/export/native -g -d 'match[]={__name__!=""}' -o exported_data_block curl localhost:8428/api/v1/export/native -g -d 'match[]={__name__!=""}' -o exported_data_block
# verify block content # verify block content
@@ -650,7 +650,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmctl`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image: by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```bash ```console
ROOT_IMAGE=scratch make package-vmctl ROOT_IMAGE=scratch make package-vmctl
``` ```
@@ -54,7 +54,7 @@ Where:
Start the single version of VictoriaMetrics Start the single version of VictoriaMetrics
```bash ```console
# single # single
# start node # start node
./bin/victoria-metrics --selfScrapeInterval=10s ./bin/victoria-metrics --selfScrapeInterval=10s
@@ -62,19 +62,19 @@ Start the single version of VictoriaMetrics
Start vmgateway Start vmgateway
```bash ```console
./bin/vmgateway -eula -enable.auth -read.url http://localhost:8428 --write.url http://localhost:8428 ./bin/vmgateway -eula -enable.auth -read.url http://localhost:8428 --write.url http://localhost:8428
``` ```
Retrieve data from the database Retrieve data from the database
```bash ```console
curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2bV9hY2Nlc3MiOnsidGVuYW50X2lkIjp7fSwicm9sZSI6MX0sImV4cCI6MTkzOTM0NjIxMH0.5WUxEfdcV9hKo4CtQdtuZYOGpGXWwaqM9VuVivMMrVg' curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2bV9hY2Nlc3MiOnsidGVuYW50X2lkIjp7fSwicm9sZSI6MX0sImV4cCI6MTkzOTM0NjIxMH0.5WUxEfdcV9hKo4CtQdtuZYOGpGXWwaqM9VuVivMMrVg'
``` ```
A request with an incorrect token or without any token will be rejected: A request with an incorrect token or without any token will be rejected:
```bash ```console
curl 'http://localhost:8431/api/v1/series/count' curl 'http://localhost:8431/api/v1/series/count'
curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer incorrect-token' curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer incorrect-token'
@@ -124,7 +124,7 @@ limits:
cluster version of VictoriaMetrics is required for rate limiting. cluster version of VictoriaMetrics is required for rate limiting.
```bash ```console
# start datasource for cluster metrics # start datasource for cluster metrics
cat << EOF > cluster.yaml cat << EOF > cluster.yaml
@@ -10,7 +10,7 @@ when restarting `vmrestore` with the same args.
VictoriaMetrics must be stopped during the restore process. VictoriaMetrics must be stopped during the restore process.
```bash ```console
vmrestore -src=gs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore> vmrestore -src=gs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore>
``` ```
@@ -36,7 +36,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
for s3 (aws, minio or other s3 compatible storages): for s3 (aws, minio or other s3 compatible storages):
```bash ```console
[default] [default]
aws_access_key_id=theaccesskey aws_access_key_id=theaccesskey
aws_secret_access_key=thesecretaccesskeyvalue aws_secret_access_key=thesecretaccesskeyvalue
@@ -62,7 +62,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
* Usage with s3 custom url endpoint. It is possible to use `vmrestore` with s3 api compatible storages, like minio, cloudian and other. * Usage with s3 custom url endpoint. It is possible to use `vmrestore` with s3 api compatible storages, like minio, cloudian and other.
You have to add custom url endpoint with a flag: You have to add custom url endpoint with a flag:
```bash ```console
# for minio: # for minio:
-customS3Endpoint=http://localhost:9000 -customS3Endpoint=http://localhost:9000
@@ -72,7 +72,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
* Run `vmrestore -help` in order to see all the available options: * Run `vmrestore -help` in order to see all the available options:
```bash ```console
-concurrency int -concurrency int
The number of concurrent workers. Higher concurrency may reduce restore duration (default 10) The number of concurrent workers. Higher concurrency may reduce restore duration (default 10)
-configFilePath string -configFilePath string
@@ -180,6 +180,6 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmrestore`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image: by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```bash ```console
ROOT_IMAGE=scratch make package-vmrestore ROOT_IMAGE=scratch make package-vmrestore
``` ```
@@ -1099,6 +1099,14 @@ func getCommonParams(r *http.Request, startTime time.Time, requireNonEmptyMatch
if err != nil { if err != nil {
return nil, err return nil, err
} }
// Limit the `end` arg to the current time +2 days in the same way
// as it is limited during data ingestion.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/blob/ea06d2fd3ccbbb6aa4480ab3b04f7b671408be2a/lib/storage/table.go#L378
// This should fix possible timestamp overflow - see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2669
maxTS := startTime.UnixNano()/1e6 + 2*24*3600*1000
if end > maxTS {
end = maxTS
}
if end < start { if end < start {
end = start end = start
} }
@@ -1,12 +1,12 @@
{ {
"files": { "files": {
"main.css": "./static/css/main.7e6d0c89.css", "main.css": "./static/css/main.7e6d0c89.css",
"main.js": "./static/js/main.f7185a13.js", "main.js": "./static/js/main.fdf5a65f.js",
"static/js/27.939f971b.chunk.js": "./static/js/27.939f971b.chunk.js", "static/js/27.939f971b.chunk.js": "./static/js/27.939f971b.chunk.js",
"index.html": "./index.html" "index.html": "./index.html"
}, },
"entrypoints": [ "entrypoints": [
"static/css/main.7e6d0c89.css", "static/css/main.7e6d0c89.css",
"static/js/main.f7185a13.js" "static/js/main.fdf5a65f.js"
] ]
} }
@@ -1 +1 @@
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script src="./dashboards/index.js" type="module"></script><script defer="defer" src="./static/js/main.f7185a13.js"></script><link href="./static/css/main.7e6d0c89.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html> <!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script src="./dashboards/index.js" type="module"></script><script defer="defer" src="./static/js/main.fdf5a65f.js"></script><link href="./static/css/main.7e6d0c89.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -42,7 +42,7 @@ module.exports = {
"max-lines": [ "max-lines": [
"error", "error",
{ {
"max": 150, "max": 1000,
"skipBlankLines": true, "skipBlankLines": true,
"skipComments": true, "skipComments": true,
} }
@@ -3,10 +3,12 @@ export interface CardinalityRequestsParams {
extraLabel: string | null, extraLabel: string | null,
match: string | null, match: string | null,
date: string | null, date: string | null,
focusLabel: string | null,
} }
export const getCardinalityInfo = (server: string, requestsParam: CardinalityRequestsParams) => { export const getCardinalityInfo = (server: string, requestsParam: CardinalityRequestsParams) => {
const match = requestsParam.match ? `&match[]=${requestsParam.match}` : ""; const match = requestsParam.match ? "&match[]=" + encodeURIComponent(requestsParam.match) : "";
return `${server}/api/v1/status/tsdb?topN=${requestsParam.topN}&date=${requestsParam.date}${match}`; const focusLabel = requestsParam.focusLabel ? "&focusLabel=" + encodeURIComponent(requestsParam.focusLabel) : "";
return `${server}/api/v1/status/tsdb?topN=${requestsParam.topN}&date=${requestsParam.date}${match}${focusLabel}`;
}; };
@@ -17,6 +17,7 @@ export interface CardinalityConfiguratorProps {
onSetQuery: (query: string, index: number) => void; onSetQuery: (query: string, index: number) => void;
onRunQuery: () => void; onRunQuery: () => void;
onTopNChange: (e: ChangeEvent<HTMLTextAreaElement|HTMLInputElement>) => void; onTopNChange: (e: ChangeEvent<HTMLTextAreaElement|HTMLInputElement>) => void;
onFocusLabelChange: (e: ChangeEvent<HTMLTextAreaElement|HTMLInputElement>) => void;
query: string; query: string;
topN: number; topN: number;
error?: ErrorTypes | string; error?: ErrorTypes | string;
@@ -24,6 +25,7 @@ export interface CardinalityConfiguratorProps {
totalLabelValuePairs: number; totalLabelValuePairs: number;
date: string | null; date: string | null;
match: string | null; match: string | null;
focusLabel: string | null;
} }
const CardinalityConfigurator: FC<CardinalityConfiguratorProps> = ({ const CardinalityConfigurator: FC<CardinalityConfiguratorProps> = ({
@@ -34,10 +36,12 @@ const CardinalityConfigurator: FC<CardinalityConfiguratorProps> = ({
onRunQuery, onRunQuery,
onSetQuery, onSetQuery,
onTopNChange, onTopNChange,
onFocusLabelChange,
totalSeries, totalSeries,
totalLabelValuePairs, totalLabelValuePairs,
date, date,
match match,
focusLabel
}) => { }) => {
const dispatch = useAppDispatch(); const dispatch = useAppDispatch();
const {queryControls: {autocomplete}} = useAppState(); const {queryControls: {autocomplete}} = useAppState();
@@ -50,40 +54,48 @@ const CardinalityConfigurator: FC<CardinalityConfiguratorProps> = ({
return <Box boxShadow="rgba(99, 99, 99, 0.2) 0px 2px 8px 0px;" p={4} pb={2} mb={2}> return <Box boxShadow="rgba(99, 99, 99, 0.2) 0px 2px 8px 0px;" p={4} pb={2} mb={2}>
<Box> <Box>
<Box display="grid" gridTemplateColumns="1fr auto auto" gap="4px" width="50%" mb={4}> <Box display="grid" gridTemplateColumns="1fr auto auto auto auto" gap="4px" width="100%" mb={4}>
<QueryEditor <QueryEditor
query={query} index={0} autocomplete={autocomplete} queryOptions={queryOptions} query={query} index={0} autocomplete={autocomplete} queryOptions={queryOptions}
error={error} setHistoryIndex={onSetHistory} runQuery={onRunQuery} setQuery={onSetQuery} error={error} setHistoryIndex={onSetHistory} runQuery={onRunQuery} setQuery={onSetQuery}
label={"Time series selector"} label={"Time series selector"}
/> />
<Box display="flex" alignItems="center"> <Box mr={2}>
<Box ml={2}>
<TextField <TextField
label="Number of entries per table" label="Number of entries per table"
type="number" type="number"
size="small" size="medium"
variant="outlined" variant="outlined"
value={topN} value={topN}
error={topN < 1} error={topN < 1}
helperText={topN < 1 ? "Number must be bigger than zero" : " "} helperText={topN < 1 ? "Number must be bigger than zero" : " "}
onChange={onTopNChange}/> onChange={onTopNChange}/>
</Box> </Box>
<Tooltip title="Execute Query"> <Box mr={2}>
<IconButton onClick={onRunQuery} sx={{height: "49px", width: "49px"}}> <TextField
<PlayCircleOutlineIcon/> label="Focus label"
</IconButton> type="text"
</Tooltip> size="medium"
variant="outlined"
value={focusLabel}
onChange={onFocusLabelChange} />
</Box>
<Box> <Box>
<FormControlLabel label="Enable autocomplete" <FormControlLabel label="Enable autocomplete"
control={<BasicSwitch checked={autocomplete} onChange={onChangeAutocomplete}/>} control={<BasicSwitch checked={autocomplete} onChange={onChangeAutocomplete}/>}
/> />
</Box> </Box>
</Box> <Tooltip title="Execute Query">
<IconButton onClick={onRunQuery} sx={{height: "49px", width: "49px"}}>
<PlayCircleOutlineIcon/>
</IconButton>
</Tooltip>
</Box> </Box>
</Box> </Box>
<Box> <Box>
Analyzed <b>{totalSeries}</b> series with <b>{totalLabelValuePairs}</b> label=value pairs Analyzed <b>{totalSeries}</b> series with <b>{totalLabelValuePairs}</b> &quot;label=value&quot; pairs
at <b>{date}</b> {match && <span>for series selector <b>{match}</b></span>}. Show top {topN} entries per table. at <b>{date}</b> {match && <span>for series selector <b>{match}</b></span>}.
Show top {topN} entries per table.
</Box> </Box>
</Box>; </Box>;
}; };
@@ -2,24 +2,30 @@ import React, {ChangeEvent, FC, useState} from "react";
import {SyntheticEvent} from "react"; import {SyntheticEvent} from "react";
import {Alert} from "@mui/material"; import {Alert} from "@mui/material";
import {useFetchQuery} from "../../hooks/useCardinalityFetch"; import {useFetchQuery} from "../../hooks/useCardinalityFetch";
import { import {queryUpdater} from "./helpers";
METRIC_NAMES_HEADERS,
LABEL_NAMES_HEADERS,
LABEL_VALUE_PAIRS_HEADERS,
LABELS_WITH_UNIQUE_VALUES_HEADERS,
spinnerContainerStyles
} from "./consts";
import {defaultProperties, queryUpdater} from "./helpers";
import {Data} from "../Table/types"; import {Data} from "../Table/types";
import CardinalityConfigurator from "./CardinalityConfigurator/CardinalityConfigurator"; import CardinalityConfigurator from "./CardinalityConfigurator/CardinalityConfigurator";
import Spinner from "../common/Spinner"; import Spinner from "../common/Spinner";
import {useCardinalityDispatch, useCardinalityState} from "../../state/cardinality/CardinalityStateContext"; import {useCardinalityDispatch, useCardinalityState} from "../../state/cardinality/CardinalityStateContext";
import MetricsContent from "./MetricsContent/MetricsContent"; import MetricsContent from "./MetricsContent/MetricsContent";
import {DefaultActiveTab, Tabs, TSDBStatus, Containers} from "./types";
const spinnerContainerStyles = (height: string) => {
return {
width: "100%",
maxWidth: "100%",
position: "absolute",
height: height ?? "50%",
background: "rgba(255, 255, 255, 0.7)",
pointerEvents: "none",
zIndex: 1000,
};
};
const CardinalityPanel: FC = () => { const CardinalityPanel: FC = () => {
const cardinalityDispatch = useCardinalityDispatch(); const cardinalityDispatch = useCardinalityDispatch();
const {topN, match, date} = useCardinalityState(); const {topN, match, date, focusLabel} = useCardinalityState();
const configError = ""; const configError = "";
const [query, setQuery] = useState(match || ""); const [query, setQuery] = useState(match || "");
const [queryHistoryIndex, setQueryHistoryIndex] = useState(0); const [queryHistoryIndex, setQueryHistoryIndex] = useState(0);
@@ -47,10 +53,13 @@ const CardinalityPanel: FC = () => {
cardinalityDispatch({type: "SET_TOP_N", payload: +e.target.value}); cardinalityDispatch({type: "SET_TOP_N", payload: +e.target.value});
}; };
const {isLoading, tsdbStatus, error} = useFetchQuery(); const onFocusLabelChange = (e: ChangeEvent<HTMLTextAreaElement|HTMLInputElement>) => {
const defaultProps = defaultProperties(tsdbStatus); cardinalityDispatch({type: "SET_FOCUS_LABEL", payload: e.target.value});
const [stateTabs, setTab] = useState(defaultProps.defaultState); };
const {isLoading, appConfigurator, error} = useFetchQuery();
const [stateTabs, setTab] = useState(appConfigurator.defaultState.defaultActiveTab);
const {tsdbStatusData, defaultState, tablesHeaders} = appConfigurator;
const handleTabChange = (e: SyntheticEvent, newValue: number) => { const handleTabChange = (e: SyntheticEvent, newValue: number) => {
// eslint-disable-next-line @typescript-eslint/ban-ts-comment // eslint-disable-next-line @typescript-eslint/ban-ts-comment
// @ts-ignore // @ts-ignore
@@ -59,11 +68,16 @@ const CardinalityPanel: FC = () => {
const handleFilterClick = (key: string) => (e: SyntheticEvent) => { const handleFilterClick = (key: string) => (e: SyntheticEvent) => {
const name = e.currentTarget.id; const name = e.currentTarget.id;
const query = queryUpdater[key](name); const query = queryUpdater[key](focusLabel, name);
setQuery(query); setQuery(query);
setQueryHistory(prev => [...prev, query]); setQueryHistory(prev => [...prev, query]);
setQueryHistoryIndex(prev => prev + 1); setQueryHistoryIndex(prev => prev + 1);
cardinalityDispatch({type: "SET_MATCH", payload: query}); cardinalityDispatch({type: "SET_MATCH", payload: query});
let newFocusLabel = "";
if (key === "labelValueCountByLabelName" || key == "seriesCountByLabelName") {
newFocusLabel = name;
}
cardinalityDispatch({type: "SET_FOCUS_LABEL", payload: newFocusLabel});
cardinalityDispatch({type: "RUN_QUERY"}); cardinalityDispatch({type: "RUN_QUERY"});
}; };
@@ -79,56 +93,25 @@ const CardinalityPanel: FC = () => {
/>} />}
<CardinalityConfigurator error={configError} query={query} onRunQuery={onRunQuery} onSetQuery={onSetQuery} <CardinalityConfigurator error={configError} query={query} onRunQuery={onRunQuery} onSetQuery={onSetQuery}
onSetHistory={onSetHistory} onTopNChange={onTopNChange} topN={topN} date={date} match={match} onSetHistory={onSetHistory} onTopNChange={onTopNChange} topN={topN} date={date} match={match}
totalSeries={tsdbStatus.totalSeries} totalLabelValuePairs={tsdbStatus.totalLabelValuePairs}/> totalSeries={tsdbStatusData.totalSeries} totalLabelValuePairs={tsdbStatusData.totalLabelValuePairs}
focusLabel={focusLabel} onFocusLabelChange={onFocusLabelChange}
/>
{error && <Alert color="error" severity="error" sx={{whiteSpace: "pre-wrap", mt: 2}}>{error}</Alert>} {error && <Alert color="error" severity="error" sx={{whiteSpace: "pre-wrap", mt: 2}}>{error}</Alert>}
{appConfigurator.keys(focusLabel).map((keyName) => (
<MetricsContent <MetricsContent
sectionTitle={"Metric names with the highest number of series"} key={keyName}
activeTab={stateTabs.seriesCountByMetricName} sectionTitle={appConfigurator.sectionsTitles(focusLabel)[keyName]}
rows={tsdbStatus.seriesCountByMetricName as unknown as Data[]} activeTab={stateTabs[keyName as keyof DefaultActiveTab]}
rows={tsdbStatusData[keyName as keyof TSDBStatus] as unknown as Data[]}
onChange={handleTabChange} onChange={handleTabChange}
onActionClick={handleFilterClick("seriesCountByMetricName")} onActionClick={handleFilterClick(keyName)}
tabs={defaultProps.tabs.seriesCountByMetricName} tabs={defaultState.tabs[keyName as keyof Tabs]}
chartContainer={defaultProps.containerRefs.seriesCountByMetricName} chartContainer={defaultState.containerRefs[keyName as keyof Containers<HTMLDivElement>]}
totalSeries={tsdbStatus.totalSeries} totalSeries={appConfigurator.totalSeries(keyName)}
tabId={"seriesCountByMetricName"} tabId={keyName}
tableHeaderCells={METRIC_NAMES_HEADERS} tableHeaderCells={tablesHeaders[keyName]}
/>
<MetricsContent
sectionTitle={"Labels with the highest number of series"}
activeTab={stateTabs.seriesCountByLabelName}
rows={tsdbStatus.seriesCountByLabelName as unknown as Data[]}
onChange={handleTabChange}
onActionClick={handleFilterClick("seriesCountByLabelName")}
tabs={defaultProps.tabs.seriesCountByLabelName}
chartContainer={defaultProps.containerRefs.seriesCountByLabelName}
totalSeries={tsdbStatus.totalSeries}
tabId={"seriesCountByLabelName"}
tableHeaderCells={LABEL_NAMES_HEADERS}
/>
<MetricsContent
sectionTitle={"Label=value pairs with the highest number of series"}
activeTab={stateTabs.seriesCountByLabelValuePair}
rows={tsdbStatus.seriesCountByLabelValuePair as unknown as Data[]}
onChange={handleTabChange}
onActionClick={handleFilterClick("seriesCountByLabelValuePair")}
tabs={defaultProps.tabs.seriesCountByLabelValuePair}
chartContainer={defaultProps.containerRefs.seriesCountByLabelValuePair}
totalSeries={tsdbStatus.totalSeries}
tabId={"seriesCountByLabelValuePair"}
tableHeaderCells={LABEL_VALUE_PAIRS_HEADERS}
/>
<MetricsContent
sectionTitle={"Labels with the highest number of unique values"}
activeTab={stateTabs.labelValueCountByLabelName}
rows={tsdbStatus.labelValueCountByLabelName as unknown as Data[]}
onChange={handleTabChange}
onActionClick={handleFilterClick("labelValueCountByLabelName")}
tabs={defaultProps.tabs.labelValueCountByLabelName}
chartContainer={defaultProps.containerRefs.labelValueCountByLabelName}
totalSeries={-1}
tabId={"labelValueCountByLabelName"}
tableHeaderCells={LABELS_WITH_UNIQUE_VALUES_HEADERS}
/> />
))}
</> </>
); );
}; };
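`handleFilterClick` above keeps the focus label only when the clicked row names a label. That decision, isolated as a small Go helper (illustrative, not project code):

```go
package main

import "fmt"

// nextFocusLabel mirrors the logic in handleFilterClick: clicking a
// row in one of the label-name tables focuses that label; any other
// row clears the focus.
func nextFocusLabel(key, name string) string {
	if key == "labelValueCountByLabelName" || key == "seriesCountByLabelName" {
		return name
	}
	return ""
}

func main() {
	fmt.Println(nextFocusLabel("seriesCountByLabelName", "job")) // prints job
}
```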
@@ -34,7 +34,7 @@ const MetricsContent: FC<MetricsProperties> = ({
tabId, tabId,
onActionClick, onActionClick,
sectionTitle, sectionTitle,
tableHeaderCells tableHeaderCells,
}) => { }) => {
const tableCells = (row: Data) => ( const tableCells = (row: Data) => (
<TableCells <TableCells
@@ -0,0 +1,233 @@
import {Containers, DefaultActiveTab, Tabs, TSDBStatus} from "./types";
import {useRef} from "preact/compat";
import {HeadCell} from "../Table/types";
interface AppState {
tabs: Tabs;
containerRefs: Containers<HTMLDivElement>;
defaultActiveTab: DefaultActiveTab,
}
export default class AppConfigurator {
private tsdbStatus: TSDBStatus;
private tabsNames: string[];
constructor() {
this.tsdbStatus = this.defaultTSDBStatus;
this.tabsNames = ["table", "graph"];
}
set tsdbStatusData(tsdbStatus: TSDBStatus) {
this.tsdbStatus = tsdbStatus;
}
get tsdbStatusData(): TSDBStatus {
return this.tsdbStatus;
}
get defaultTSDBStatus(): TSDBStatus {
return {
totalSeries: 0,
totalLabelValuePairs: 0,
seriesCountByMetricName: [],
seriesCountByLabelName: [],
seriesCountByFocusLabelValue: [],
seriesCountByLabelValuePair: [],
labelValueCountByLabelName: [],
};
}
keys(focusLabel: string | null): string[] {
let keys: string[] = [];
if (focusLabel) {
keys = keys.concat("seriesCountByFocusLabelValue");
}
keys = keys.concat(
"seriesCountByMetricName",
"seriesCountByLabelName",
"seriesCountByLabelValuePair",
"labelValueCountByLabelName",
);
return keys;
}
get defaultState(): AppState {
return this.keys("job").reduce((acc, cur) => {
return {
...acc,
tabs: {
...acc.tabs,
[cur]: this.tabsNames,
},
containerRefs: {
...acc.containerRefs,
[cur]: useRef<HTMLDivElement>(null),
},
defaultActiveTab: {
...acc.defaultActiveTab,
[cur]: 0,
},
};
}, {
tabs: {} as Tabs,
containerRefs: {} as Containers<HTMLDivElement>,
defaultActiveTab: {} as DefaultActiveTab,
} as AppState);
}
sectionsTitles(str: string | null): Record<string, string> {
return {
seriesCountByMetricName: "Metric names with the highest number of series",
seriesCountByLabelName: "Labels with the highest number of series",
seriesCountByFocusLabelValue: `Values for "${str}" label with the highest number of series`,
seriesCountByLabelValuePair: "Label=value pairs with the highest number of series",
labelValueCountByLabelName: "Labels with the highest number of unique values",
};
}
get tablesHeaders(): Record<string, HeadCell[]> {
return {
seriesCountByMetricName: METRIC_NAMES_HEADERS,
seriesCountByLabelName: LABEL_NAMES_HEADERS,
seriesCountByFocusLabelValue: FOCUS_LABEL_VALUES_HEADERS,
seriesCountByLabelValuePair: LABEL_VALUE_PAIRS_HEADERS,
labelValueCountByLabelName: LABEL_NAMES_WITH_UNIQUE_VALUES_HEADERS,
};
}
totalSeries(keyName: string): number {
if (keyName === "labelValueCountByLabelName") {
return -1;
}
return this.tsdbStatus.totalSeries;
}
}
const METRIC_NAMES_HEADERS = [
{
disablePadding: false,
id: "name",
label: "Metric name",
numeric: false,
},
{
disablePadding: false,
id: "value",
label: "Number of series",
numeric: false,
},
{
disablePadding: false,
id: "percentage",
label: "Percent of series",
numeric: false,
},
{
disablePadding: false,
id: "action",
label: "Action",
numeric: false,
}
] as HeadCell[];
const LABEL_NAMES_HEADERS = [
{
disablePadding: false,
id: "name",
label: "Label name",
numeric: false,
},
{
disablePadding: false,
id: "value",
label: "Number of series",
numeric: false,
},
{
disablePadding: false,
id: "percentage",
label: "Percent of series",
numeric: false,
},
{
disablePadding: false,
id: "action",
label: "Action",
numeric: false,
}
] as HeadCell[];
const FOCUS_LABEL_VALUES_HEADERS = [
{
disablePadding: false,
id: "name",
label: "Label value",
numeric: false,
},
{
disablePadding: false,
id: "value",
label: "Number of series",
numeric: false,
},
{
disablePadding: false,
id: "percentage",
label: "Percent of series",
numeric: false,
},
{
disablePadding: false,
id: "action",
label: "Action",
numeric: false,
}
] as HeadCell[];
export const LABEL_VALUE_PAIRS_HEADERS = [
{
disablePadding: false,
id: "name",
label: "Label=value pair",
numeric: false,
},
{
disablePadding: false,
id: "value",
label: "Number of series",
numeric: false,
},
{
disablePadding: false,
id: "percentage",
label: "Percent of series",
numeric: false,
},
{
disablePadding: false,
id: "action",
label: "Action",
numeric: false,
}
] as HeadCell[];
export const LABEL_NAMES_WITH_UNIQUE_VALUES_HEADERS = [
{
disablePadding: false,
id: "name",
label: "Label name",
numeric: false,
},
{
disablePadding: false,
id: "value",
label: "Number of unique values",
numeric: false,
},
{
disablePadding: false,
id: "action",
label: "Action",
numeric: false,
}
] as HeadCell[];
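`AppConfigurator.keys()` above prepends the focus-label table only when a focus label is set. The same ordering logic as a Go sketch (the helper name is hypothetical):

```go
package main

import "fmt"

// tableKeys mirrors AppConfigurator.keys(): the focus-label table is
// listed first, but only when a focus label is selected.
func tableKeys(focusLabel string) []string {
	var keys []string
	if focusLabel != "" {
		keys = append(keys, "seriesCountByFocusLabelValue")
	}
	return append(keys,
		"seriesCountByMetricName",
		"seriesCountByLabelName",
		"seriesCountByLabelValuePair",
		"labelValueCountByLabelName",
	)
}

func main() {
	fmt.Println(len(tableKeys("job")), len(tableKeys(""))) // prints 5 4
}
```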
@@ -1,115 +0,0 @@
import {HeadCell} from "../Table/types";
export const METRIC_NAMES_HEADERS = [
{
disablePadding: false,
id: "name",
label: "Metric name",
numeric: false,
},
{
disablePadding: false,
id: "value",
label: "Number of series",
numeric: false,
},
{
disablePadding: false,
id: "percentage",
label: "Percent of series",
numeric: false,
},
{
disablePadding: false,
id: "action",
label: "Action",
numeric: false,
}
] as HeadCell[];
export const LABEL_NAMES_HEADERS = [
{
disablePadding: false,
id: "name",
label: "Label name",
numeric: false,
},
{
disablePadding: false,
id: "value",
label: "Number of series",
numeric: false,
},
{
disablePadding: false,
id: "percentage",
label: "Percent of series",
numeric: false,
},
{
disablePadding: false,
id: "action",
label: "Action",
numeric: false,
}
] as HeadCell[];
export const LABEL_VALUE_PAIRS_HEADERS = [
{
disablePadding: false,
id: "name",
label: "Label=value pair",
numeric: false,
},
{
disablePadding: false,
id: "value",
label: "Number of series",
numeric: false,
},
{
disablePadding: false,
id: "percentage",
label: "Percent of series",
numeric: false,
},
{
disablePadding: false,
id: "action",
label: "Action",
numeric: false,
}
]as HeadCell[];
export const LABELS_WITH_UNIQUE_VALUES_HEADERS = [
{
disablePadding: false,
id: "name",
label: "Label name",
numeric: false,
},
{
disablePadding: false,
id: "value",
label: "Number of unique values",
numeric: false,
},
{
disablePadding: false,
id: "action",
label: "Action",
numeric: false,
}
] as HeadCell[];
export const spinnerContainerStyles = (height: string) => {
return {
width: "100%",
maxWidth: "100%",
position: "absolute",
height: height ?? "50%",
background: "rgba(255, 255, 255, 0.7)",
pointerEvents: "none",
zIndex: 1000,
};
};
@@ -1,45 +1,25 @@
import {Containers, DefaultState, QueryUpdater, Tabs, TSDBStatus} from "./types"; import {QueryUpdater} from "./types";
import {Containers, DefaultState, QueryUpdater, Tabs, TSDBStatus} from "./types"; import {QueryUpdater} from "./types";
import {useRef} from "preact/compat";
export const queryUpdater: QueryUpdater = { export const queryUpdater: QueryUpdater = {
seriesCountByMetricName: (query: string): string => { seriesCountByMetricName: (focusLabel: string | null, query: string): string => {
return getSeriesSelector("__name__", query); return getSeriesSelector("__name__", query);
}, },
seriesCountByLabelName: (query: string): string => `{${query}!=""}`, seriesCountByLabelName: (focusLabel: string | null, query: string): string => `{${query}!=""}`,
seriesCountByLabelValuePair: (query: string): string => { seriesCountByFocusLabelValue: (focusLabel: string | null, query: string): string => {
return getSeriesSelector(focusLabel, query);
},
seriesCountByLabelValuePair: (focusLabel: string | null, query: string): string => {
const a = query.split("="); const a = query.split("=");
const label = a[0]; const label = a[0];
const value = a.slice(1).join("="); const value = a.slice(1).join("=");
return getSeriesSelector(label, value); return getSeriesSelector(label, value);
}, },
labelValueCountByLabelName: (query: string): string => `{${query}!=""}`, labelValueCountByLabelName: (focusLabel: string | null, query: string): string => `{${query}!=""}`,
}; };
const getSeriesSelector = (label: string, value: string): string => { const getSeriesSelector = (label: string | null, value: string): string => {
if (!label) {
return "";
}
return "{" + label + "=" + JSON.stringify(value) + "}"; return "{" + label + "=" + JSON.stringify(value) + "}";
}; };
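For context, the selector-building logic above can be exercised as a standalone sketch (the unused `focusLabel` parameter is dropped here for brevity):

```typescript
// Sketch of the selector builder from the diff above. JSON.stringify
// quotes the label value, so quotes and other special characters stay escaped.
const getSeriesSelector = (label: string | null, value: string): string => {
  if (!label) {
    return "";
  }
  return "{" + label + "=" + JSON.stringify(value) + "}";
};

// seriesCountByLabelValuePair splits "label=value" on the first "=",
// keeping any further "=" characters as part of the value.
const seriesCountByLabelValuePair = (query: string): string => {
  const parts = query.split("=");
  return getSeriesSelector(parts[0], parts.slice(1).join("="));
};

console.log(seriesCountByLabelValuePair("job=node=exporter")); // {job="node=exporter"}
```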
export const defaultProperties = (tsdbStatus: TSDBStatus) => {
return Object.keys(tsdbStatus).reduce((acc, key) => {
if (key === "totalSeries" || key === "totalLabelValuePairs") return acc;
return {
...acc,
tabs:{
...acc.tabs,
[key]: ["table", "graph"],
},
containerRefs: {
...acc.containerRefs,
[key]: useRef<HTMLDivElement>(null),
},
defaultState: {
...acc.defaultState,
[key]: 0,
},
};
}, {
tabs:{} as Tabs,
containerRefs: {} as Containers<HTMLDivElement>,
defaultState: {} as DefaultState,
});
};
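The `defaultProperties` reducer above builds per-key tab config for every `TSDBStatus` key except the two totals. A minimal sketch without the Preact refs (the key list here is illustrative):

```typescript
// Illustrative key list; the real keys come from the TSDBStatus object.
const statusKeys = [
  "totalSeries",
  "totalLabelValuePairs",
  "seriesCountByMetricName",
  "seriesCountByLabelName",
];

const defaults = statusKeys.reduce(
  (acc, key) => {
    // The two top-level totals don't get their own tab.
    if (key === "totalSeries" || key === "totalLabelValuePairs") return acc;
    return {
      tabs: { ...acc.tabs, [key]: ["table", "graph"] },
      defaultState: { ...acc.defaultState, [key]: 0 },
    };
  },
  { tabs: {} as Record<string, string[]>, defaultState: {} as Record<string, number> },
);

console.log(Object.keys(defaults.tabs)); // only the seriesCountBy* keys remain
```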


@ -5,6 +5,7 @@ export interface TSDBStatus {
totalLabelValuePairs: number; totalLabelValuePairs: number;
seriesCountByMetricName: TopHeapEntry[]; seriesCountByMetricName: TopHeapEntry[];
seriesCountByLabelName: TopHeapEntry[]; seriesCountByLabelName: TopHeapEntry[];
seriesCountByFocusLabelValue: TopHeapEntry[];
seriesCountByLabelValuePair: TopHeapEntry[]; seriesCountByLabelValuePair: TopHeapEntry[];
labelValueCountByLabelName: TopHeapEntry[]; labelValueCountByLabelName: TopHeapEntry[];
} }
@ -15,12 +16,13 @@ export interface TopHeapEntry {
} }
export type QueryUpdater = { export type QueryUpdater = {
[key: string]: (query: string) => string, [key: string]: (focusLabel: string | null, query: string) => string,
} }
export interface Tabs { export interface Tabs {
seriesCountByMetricName: string[]; seriesCountByMetricName: string[];
seriesCountByLabelName: string[]; seriesCountByLabelName: string[];
seriesCountByFocusLabelValue: string[];
seriesCountByLabelValuePair: string[]; seriesCountByLabelValuePair: string[];
labelValueCountByLabelName: string[]; labelValueCountByLabelName: string[];
} }
@ -28,13 +30,15 @@ export interface Tabs {
export interface Containers<T> { export interface Containers<T> {
seriesCountByMetricName: MutableRef<T>; seriesCountByMetricName: MutableRef<T>;
seriesCountByLabelName: MutableRef<T>; seriesCountByLabelName: MutableRef<T>;
seriesCountByFocusLabelValue: MutableRef<T>;
seriesCountByLabelValuePair: MutableRef<T>; seriesCountByLabelValuePair: MutableRef<T>;
labelValueCountByLabelName: MutableRef<T>; labelValueCountByLabelName: MutableRef<T>;
} }
export interface DefaultState { export interface DefaultActiveTab {
seriesCountByMetricName: number; seriesCountByMetricName: number;
seriesCountByLabelName: number; seriesCountByLabelName: number;
seriesCountByFocusLabelValue: number;
seriesCountByLabelValuePair: number; seriesCountByLabelValuePair: number;
labelValueCountByLabelName: number; labelValueCountByLabelName: number;
} }


@ -5,34 +5,28 @@ import {CardinalityRequestsParams, getCardinalityInfo} from "../api/tsdb";
import {getAppModeEnable, getAppModeParams} from "../utils/app-mode"; import {getAppModeEnable, getAppModeParams} from "../utils/app-mode";
import {TSDBStatus} from "../components/CardinalityPanel/types"; import {TSDBStatus} from "../components/CardinalityPanel/types";
import {useCardinalityState} from "../state/cardinality/CardinalityStateContext"; import {useCardinalityState} from "../state/cardinality/CardinalityStateContext";
import AppConfigurator from "../components/CardinalityPanel/appConfigurator";
const appModeEnable = getAppModeEnable(); const appModeEnable = getAppModeEnable();
const {serverURL: appServerUrl} = getAppModeParams(); const {serverURL: appServerUrl} = getAppModeParams();
const defaultTSDBStatus = {
totalSeries: 0,
totalLabelValuePairs: 0,
seriesCountByMetricName: [],
seriesCountByLabelName: [],
seriesCountByLabelValuePair: [],
labelValueCountByLabelName: [],
};
export const useFetchQuery = (): { export const useFetchQuery = (): {
fetchUrl?: string[], fetchUrl?: string[],
isLoading: boolean, isLoading: boolean,
error?: ErrorTypes | string error?: ErrorTypes | string
tsdbStatus: TSDBStatus, appConfigurator: AppConfigurator,
} => { } => {
const {topN, extraLabel, match, date, runQuery} = useCardinalityState(); const appConfigurator = new AppConfigurator();
const {topN, extraLabel, match, date, runQuery, focusLabel} = useCardinalityState();
const {serverUrl} = useAppState(); const {serverUrl} = useAppState();
const [isLoading, setIsLoading] = useState(false); const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState<ErrorTypes | string>(); const [error, setError] = useState<ErrorTypes | string>();
const [tsdbStatus, setTSDBStatus] = useState<TSDBStatus>(defaultTSDBStatus); const [tsdbStatus, setTSDBStatus] = useState<TSDBStatus>(appConfigurator.defaultTSDBStatus);
useEffect(() => { useEffect(() => {
if (error) { if (error) {
setTSDBStatus(defaultTSDBStatus); setTSDBStatus(appConfigurator.defaultTSDBStatus);
setIsLoading(false); setIsLoading(false);
} }
}, [error]); }, [error]);
@ -42,7 +36,7 @@ export const useFetchQuery = (): {
if (!server) return; if (!server) return;
setError(""); setError("");
setIsLoading(true); setIsLoading(true);
setTSDBStatus(defaultTSDBStatus); setTSDBStatus(appConfigurator.defaultTSDBStatus);
const url = getCardinalityInfo(server, requestParams); const url = getCardinalityInfo(server, requestParams);
try { try {
@ -54,7 +48,7 @@ export const useFetchQuery = (): {
setIsLoading(false); setIsLoading(false);
} else { } else {
setError(resp.error); setError(resp.error);
setTSDBStatus(defaultTSDBStatus); setTSDBStatus(appConfigurator.defaultTSDBStatus);
setIsLoading(false); setIsLoading(false);
} }
} catch (e) { } catch (e) {
@ -65,8 +59,9 @@ export const useFetchQuery = (): {
useEffect(() => { useEffect(() => {
fetchCardinalityInfo({topN, extraLabel, match, date}); fetchCardinalityInfo({topN, extraLabel, match, date, focusLabel});
}, [serverUrl, runQuery, date]); }, [serverUrl, runQuery, date]);
return {isLoading, tsdbStatus, error}; appConfigurator.tsdbStatusData = tsdbStatus;
return {isLoading, appConfigurator: appConfigurator, error};
}; };


@ -7,6 +7,7 @@ export interface CardinalityState {
date: string | null date: string | null
match: string | null match: string | null
extraLabel: string | null extraLabel: string | null
focusLabel: string | null
} }
export type Action = export type Action =
@ -14,12 +15,15 @@ export type Action =
| { type: "SET_DATE", payload: string | null } | { type: "SET_DATE", payload: string | null }
| { type: "SET_MATCH", payload: string | null } | { type: "SET_MATCH", payload: string | null }
| { type: "SET_EXTRA_LABEL", payload: string | null } | { type: "SET_EXTRA_LABEL", payload: string | null }
| { type: "SET_FOCUS_LABEL", payload: string | null }
| { type: "RUN_QUERY" } | { type: "RUN_QUERY" }
export const initialState: CardinalityState = { export const initialState: CardinalityState = {
runQuery: 0, runQuery: 0,
topN: getQueryStringValue("topN", 10) as number, topN: getQueryStringValue("topN", 10) as number,
date: getQueryStringValue("date", dayjs(new Date()).format("YYYY-MM-DD")) as string, date: getQueryStringValue("date", dayjs(new Date()).format("YYYY-MM-DD")) as string,
focusLabel: getQueryStringValue("focusLabel", "") as string,
match: (getQueryStringValue("match", []) as string[]).join("&"), match: (getQueryStringValue("match", []) as string[]).join("&"),
extraLabel: getQueryStringValue("extra_label", "") as string, extraLabel: getQueryStringValue("extra_label", "") as string,
}; };
@ -46,6 +50,11 @@ export function reducer(state: CardinalityState, action: Action): CardinalitySta
...state, ...state,
extraLabel: action.payload extraLabel: action.payload
}; };
case "SET_FOCUS_LABEL":
return {
...state,
focusLabel: action.payload,
};
case "RUN_QUERY": case "RUN_QUERY":
return { return {
...state, ...state,
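The new `SET_FOCUS_LABEL` branch can be sketched in isolation (the `State` shape is trimmed, and the `RUN_QUERY` increment shown here is an assumption about the elided branch):

```typescript
interface State {
  focusLabel: string | null;
  runQuery: number;
}

type Action =
  | { type: "SET_FOCUS_LABEL"; payload: string | null }
  | { type: "RUN_QUERY" };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "SET_FOCUS_LABEL":
      return { ...state, focusLabel: action.payload };
    case "RUN_QUERY":
      // Assumed behavior: bump the counter so the fetch effect retriggers.
      return { ...state, runQuery: state.runQuery + 1 };
    default:
      return state;
  }
}

const next = reducer({ focusLabel: null, runQuery: 0 }, { type: "SET_FOCUS_LABEL", payload: "instance" });
console.log(next.focusLabel); // instance
```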


@ -17,7 +17,8 @@ const stateToUrlParams = {
"topN": "topN", "topN": "topN",
"date": "date", "date": "date",
"match": "match[]", "match": "match[]",
"extraLabel": "extra_label" "extraLabel": "extra_label",
"focusLabel": "focusLabel"
} }
}; };
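The mapping above drives URL serialization of the cardinality state. A sketch of how such a mapping can be applied (the mapping is flattened here, and the `toQueryString` helper is hypothetical, not part of the commit):

```typescript
// Key mapping copied from the diff above.
const stateToUrlParams: Record<string, string> = {
  topN: "topN",
  date: "date",
  match: "match[]",
  extraLabel: "extra_label",
  focusLabel: "focusLabel",
};

// Hypothetical helper: drop unmapped/empty values, rename keys, encode values.
const toQueryString = (state: Record<string, string | number | null>): string =>
  Object.entries(state)
    .filter(([key, value]) => stateToUrlParams[key] !== undefined && value !== null && value !== "")
    .map(([key, value]) => `${stateToUrlParams[key]}=${encodeURIComponent(String(value))}`)
    .join("&");

console.log(toQueryString({ topN: 10, focusLabel: "instance", extraLabel: "" }));
// topN=10&focusLabel=instance
```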


@ -42,7 +42,7 @@ To check it, open the following in your browser `http://your_droplet_public_ipv4
Run the following command to query and retrieve a result from VictoriaMetrics Single with `curl`: Run the following command to query and retrieve a result from VictoriaMetrics Single with `curl`:
```bash ```console
curl -sg http://your_droplet_public_ipv4:8428/api/v1/query_range?query=vm_app_uptime_seconds | jq curl -sg http://your_droplet_public_ipv4:8428/api/v1/query_range?query=vm_app_uptime_seconds | jq
``` ```
@ -50,6 +50,6 @@ curl -sg http://your_droplet_public_ipv4:8428/api/v1/query_range?query=vm_app_up
Once the Droplet is created, you can use DigitalOcean's web console to start a session or SSH directly to the server as root: Once the Droplet is created, you can use DigitalOcean's web console to start a session or SSH directly to the server as root:
```bash ```console
ssh root@your_droplet_public_ipv4 ssh root@your_droplet_public_ipv4
``` ```


@ -6,13 +6,13 @@
2. An API Token can be generated at [https://cloud.digitalocean.com/account/api/tokens](https://cloud.digitalocean.com/account/api/tokens), or an already generated token from OnePassword can be used. 2. An API Token can be generated at [https://cloud.digitalocean.com/account/api/tokens](https://cloud.digitalocean.com/account/api/tokens), or an already generated token from OnePassword can be used.
3. Set the `DIGITALOCEAN_API_TOKEN` environment variable: 3. Set the `DIGITALOCEAN_API_TOKEN` environment variable:
```bash ```console
export DIGITALOCEAN_API_TOKEN="your_token_here" export DIGITALOCEAN_API_TOKEN="your_token_here"
``` ```
or set it with make: or set it with make:
```bash ```console
make release-victoria-metrics-digitalocean-oneclick-droplet DIGITALOCEAN_API_TOKEN="your_token_here" make release-victoria-metrics-digitalocean-oneclick-droplet DIGITALOCEAN_API_TOKEN="your_token_here"
``` ```


@ -18,7 +18,7 @@ The following tip changes can be tested by building VictoriaMetrics components f
**Update notes:** this release introduces backwards-incompatible changes to communication protocol between `vmselect` and `vmstorage` nodes in cluster version of VictoriaMetrics because of added [query tracing](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#query-tracing), so `vmselect` and `vmstorage` nodes may log communication errors during the upgrade. These errors should stop after all the `vmselect` and `vmstorage` nodes are updated to new release. It is safe to downgrade to previous releases. **Update notes:** this release introduces backwards-incompatible changes to communication protocol between `vmselect` and `vmstorage` nodes in cluster version of VictoriaMetrics because of added [query tracing](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#query-tracing), so `vmselect` and `vmstorage` nodes may log communication errors during the upgrade. These errors should stop after all the `vmselect` and `vmstorage` nodes are updated to new release. It is safe to downgrade to previous releases.
* FEATURE: support query tracing, which allows determining bottlenecks during query processing. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#query-tracing) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1403). * FEATURE: support query tracing, which allows determining bottlenecks during query processing. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#query-tracing) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1403).
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): add `cardinality` tab, which can help identify the source of [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) and [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) issues. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2233) and [these docs](https://docs.victoriametrics.com/#cardinality-explorer). * FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): add `cardinality` tab, which can help identify the source of [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) and [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) issues. See [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2233) and [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2730) feature requests and [these docs](https://docs.victoriametrics.com/#cardinality-explorer).
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): small UX enhancements according to [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2638). * FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): small UX enhancements according to [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2638).
* FEATURE: allow overriding default limits for in-memory cache `indexdb/tagFilters` via flag `-storage.cacheSizeIndexDBTagFilters`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2663). * FEATURE: allow overriding default limits for in-memory cache `indexdb/tagFilters` via flag `-storage.cacheSizeIndexDBTagFilters`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2663).
* FEATURE: add support of `lowercase` and `uppercase` relabeling actions in the same way as [Prometheus 2.36.0 does](https://github.com/prometheus/prometheus/releases/tag/v2.36.0). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2664). * FEATURE: add support of `lowercase` and `uppercase` relabeling actions in the same way as [Prometheus 2.36.0 does](https://github.com/prometheus/prometheus/releases/tag/v2.36.0). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2664).
@ -28,12 +28,14 @@ The following tip changes can be tested by building VictoriaMetrics components f
* FEATURE: optimize performance for [/api/v1/labels](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names) and [/api/v1/label/.../values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values) endpoints when `match[]`, `extra_label` or `extra_filters[]` query args are passed to these endpoints. This should help with [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1533). * FEATURE: optimize performance for [/api/v1/labels](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names) and [/api/v1/label/.../values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values) endpoints when `match[]`, `extra_label` or `extra_filters[]` query args are passed to these endpoints. This should help with [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1533).
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): support `limit` param per-group for limiting number of produced samples per each rule. Thanks to @Howie59 for [implementation](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2676). * FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): support `limit` param per-group for limiting number of produced samples per each rule. Thanks to @Howie59 for [implementation](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2676).
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): remove dependency on Internet access at [web API pages](https://docs.victoriametrics.com/vmalert.html#web). Previously the functionality and the layout of these pages were broken without Internet access. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2594). * FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): remove dependency on Internet access at [web API pages](https://docs.victoriametrics.com/vmalert.html#web). Previously the functionality and the layout of these pages were broken without Internet access. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2594).
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): send alerts to the configured notifiers in parallel. Previously alerts were sent to notifiers sequentially. This could delay sending pending alerts when a notifier blocks on the alert currently being sent.
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): implement the `http://vmagent:8429/service-discovery` page in the same way as Prometheus does. This page shows the original labels for all the discovered targets alongside the resulting labels after the relabeling. This simplifies service discovery debugging. * FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): implement the `http://vmagent:8429/service-discovery` page in the same way as Prometheus does. This page shows the original labels for all the discovered targets alongside the resulting labels after the relabeling. This simplifies service discovery debugging.
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): remove dependency on Internet access at the `http://vmagent:8429/targets` page. Previously the page layout was broken without Internet access. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2594). * FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): remove dependency on Internet access at the `http://vmagent:8429/targets` page. Previously the page layout was broken without Internet access. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2594).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add support for `kubeconfig_file` option at [kubernetes_sd_configs](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config). It may be useful for Kubernetes monitoring by `vmagent` outside Kubernetes cluster. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1464). * FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add support for `kubeconfig_file` option at [kubernetes_sd_configs](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config). It may be useful for Kubernetes monitoring by `vmagent` outside Kubernetes cluster. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1464).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): expose `/api/v1/status/config` endpoint in the same way as Prometheus does. See [these docs](https://prometheus.io/docs/prometheus/latest/querying/api/#config). * FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): expose `/api/v1/status/config` endpoint in the same way as Prometheus does. See [these docs](https://prometheus.io/docs/prometheus/latest/querying/api/#config).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add `-promscrape.suppressScrapeErrorsDelay` command-line flag, which can be used for delaying and aggregating the logging of per-target scrape errors. This may reduce the amounts of logs when `vmagent` scrapes many unreliable targets. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2575). Thanks to @jelmd for [the initial implementation](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2576). * FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add `-promscrape.suppressScrapeErrorsDelay` command-line flag, which can be used for delaying and aggregating the logging of per-target scrape errors. This may reduce the amounts of logs when `vmagent` scrapes many unreliable targets. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2575). Thanks to @jelmd for [the initial implementation](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2576).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add `-promscrape.cluster.name` command-line flag, which allows proper data de-duplication when the same target is scraped from multiple [vmagent clusters](https://docs.victoriametrics.com/vmagent.html#scraping-big-number-of-targets). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2679). * FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add `-promscrape.cluster.name` command-line flag, which allows proper data de-duplication when the same target is scraped from multiple [vmagent clusters](https://docs.victoriametrics.com/vmagent.html#scraping-big-number-of-targets). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2679).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add `action: graphite` relabeling rules optimized for extracting labels from Graphite-style metric names. See [these docs](https://docs.victoriametrics.com/vmagent.html#graphite-relabeling) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2737).
* FEATURE: [VictoriaMetrics enterprise](https://victoriametrics.com/products/enterprise/): expose `vm_downsampling_partitions_scheduled` and `vm_downsampling_partitions_scheduled_size_bytes` metrics, which can be used for tracking the progress of initial [downsampling](https://docs.victoriametrics.com/#downsampling) for historical data. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2612). * FEATURE: [VictoriaMetrics enterprise](https://victoriametrics.com/products/enterprise/): expose `vm_downsampling_partitions_scheduled` and `vm_downsampling_partitions_scheduled_size_bytes` metrics, which can be used for tracking the progress of initial [downsampling](https://docs.victoriametrics.com/#downsampling) for historical data. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2612).
* BUGFIX: support for data ingestion in [DataDog format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-datadog-agent) from legacy clients / agents. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2670). Thanks to @elProxy for the fix. * BUGFIX: support for data ingestion in [DataDog format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-datadog-agent) from legacy clients / agents. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2670). Thanks to @elProxy for the fix.
@ -44,6 +46,9 @@ The following tip changes can be tested by building VictoriaMetrics components f
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): properly apply the selected time range when auto-refresh is enabled. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2693). * BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): properly apply the selected time range when auto-refresh is enabled. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2693).
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): properly update the url with vmui state when new query is entered. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2692). * BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): properly update the url with vmui state when new query is entered. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2692).
* BUGFIX: [Graphite render API](https://docs.victoriametrics.com/#graphite-render-api-usage): properly calculate sample timestamps when `moving*()` functions such as [movingAverage()](https://graphite.readthedocs.io/en/stable/functions.html#graphite.render.functions.movingAverage) are applied over [summarize()](https://graphite.readthedocs.io/en/stable/functions.html#graphite.render.functions.summarize). * BUGFIX: [Graphite render API](https://docs.victoriametrics.com/#graphite-render-api-usage): properly calculate sample timestamps when `moving*()` functions such as [movingAverage()](https://graphite.readthedocs.io/en/stable/functions.html#graphite.render.functions.movingAverage) are applied over [summarize()](https://graphite.readthedocs.io/en/stable/functions.html#graphite.render.functions.summarize).
* BUGFIX: limit the `end` query arg value to `+2 days` in the future at `/api/v1/*` endpoints, because VictoriaMetrics doesn't allow storing samples with timestamps bigger than +2 days in the future. This should help resolve [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2669).
* BUGFIX: properly register time series in per-day inverted index during the first hour after `indexdb` rotation. Previously this could lead to missing time series during querying if these time series stopped receiving new samples during the first hour after `indexdb` rotation. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2698).
* BUGFIX: do not register new series when `-storage.maxHourlySeries` or `-storage.maxDailySeries` limits were reached. Previously samples for new series weren't added to the database when the [cardinality limit](https://docs.victoriametrics.com/#cardinality-limiter) was reached, but series were still registered in the inverted index (aka `indexdb`). This could lead to unbounded `indexdb` growth during [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate).
## [v1.77.2](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.77.2) ## [v1.77.2](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.77.2)
@ -1001,7 +1006,7 @@ Released at 26-11-2020
* FEATURE: added [Snap package for single-node VictoriaMetrics](https://snapcraft.io/victoriametrics). This simplifies installation under Ubuntu to a single command: * FEATURE: added [Snap package for single-node VictoriaMetrics](https://snapcraft.io/victoriametrics). This simplifies installation under Ubuntu to a single command:
```bash ```console
snap install victoriametrics snap install victoriametrics
``` ```


@ -115,7 +115,7 @@ By default images are built on top of [alpine](https://hub.docker.com/_/scratch)
It is possible to build an image on top of any other base image by setting it via `<ROOT_IMAGE>` environment variable. It is possible to build an image on top of any other base image by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds images on top of [scratch](https://hub.docker.com/_/scratch) image: For example, the following command builds images on top of [scratch](https://hub.docker.com/_/scratch) image:
```bash ```console
ROOT_IMAGE=scratch make package ROOT_IMAGE=scratch make package
``` ```
@ -448,7 +448,7 @@ Example command for collecting cpu profile from `vmstorage` (replace `0.0.0.0` w
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl http://0.0.0.0:8482/debug/pprof/profile > cpu.pprof curl http://0.0.0.0:8482/debug/pprof/profile > cpu.pprof
``` ```
@ -458,7 +458,7 @@ Example command for collecting memory profile from `vminsert` (replace `0.0.0.0`
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl http://0.0.0.0:8480/debug/pprof/heap > mem.pprof curl http://0.0.0.0:8480/debug/pprof/heap > mem.pprof
``` ```


@ -40,7 +40,7 @@ under the current directory:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
docker pull victoriametrics/victoria-metrics:latest docker pull victoriametrics/victoria-metrics:latest
docker run -it --rm -v `pwd`/victoria-metrics-data:/victoria-metrics-data -p 8428:8428 victoriametrics/victoria-metrics:latest docker run -it --rm -v `pwd`/victoria-metrics-data:/victoria-metrics-data -p 8428:8428 victoriametrics/victoria-metrics:latest
``` ```
@ -63,7 +63,7 @@ file.
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
git clone https://github.com/VictoriaMetrics/VictoriaMetrics --branch cluster && git clone https://github.com/VictoriaMetrics/VictoriaMetrics --branch cluster &&
cd VictoriaMetrics/deployment/docker && cd VictoriaMetrics/deployment/docker &&
docker-compose up docker-compose up


@ -164,7 +164,7 @@ Then apply new config via the following command:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
kill -HUP `pidof prometheus` kill -HUP `pidof prometheus`
``` ```
@ -268,7 +268,8 @@ See the [example VMUI at VictoriaMetrics playground](https://play.victoriametric
VictoriaMetrics provides the ability to explore time series cardinality at the `cardinality` tab in [vmui](#vmui) in the following ways: VictoriaMetrics provides the ability to explore time series cardinality at the `cardinality` tab in [vmui](#vmui) in the following ways:
- To identify metric names with the highest number of series. - To identify metric names with the highest number of series.
- To idnetify labels with the highest number of series. - To identify labels with the highest number of series.
- To identify values with the highest number of series for the selected label (aka `focusLabel`).
- To identify label=name pairs with the highest number of series. - To identify label=name pairs with the highest number of series.
- To identify labels with the highest number of unique values. - To identify labels with the highest number of unique values.
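Under the hood these views are backed by the `/api/v1/status/tsdb` endpoint. A sketch of building such a request URL (the parameter names follow this commit's state keys and the helper is hypothetical, not a documented contract):

```typescript
// Hypothetical URL builder for the cardinality stats request.
const buildCardinalityUrl = (
  server: string,
  topN: number,
  date: string,
  focusLabel: string,
): string =>
  `${server}/api/v1/status/tsdb?topN=${topN}&date=${date}` +
  (focusLabel ? `&focusLabel=${encodeURIComponent(focusLabel)}` : "");

console.log(buildCardinalityUrl("http://localhost:8428", 10, "2022-06-19", "instance"));
// http://localhost:8428/api/v1/status/tsdb?topN=10&date=2022-06-19&focusLabel=instance
```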
@ -327,7 +328,7 @@ VictoriaMetrics doesn't check `DD_API_KEY` param, so it can be set to arbitrary
Example on how to send data to VictoriaMetrics via DataDog "submit metrics" API from command line: Example on how to send data to VictoriaMetrics via DataDog "submit metrics" API from command line:
```bash ```console
echo ' echo '
{ {
"series": [ "series": [
@ -353,7 +354,7 @@ The imported data can be read via [export API](https://docs.victoriametrics.com/
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1' curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
``` ```
@ -368,6 +369,16 @@ This command should return the following output if everything is OK:
Extra labels may be added to all the written time series by passing `extra_label=name=value` query args. Extra labels may be added to all the written time series by passing `extra_label=name=value` query args.
For example, `/datadog/api/v1/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics. For example, `/datadog/api/v1/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics.
The DataDog agent sends the [configured tags](https://docs.datadoghq.com/getting_started/tagging/) to
the undocumented `/datadog/intake` endpoint. This endpoint isn't supported by VictoriaMetrics yet,
so the configured tags cannot be added to the DataDog agent data sent into VictoriaMetrics.
The workaround is to run a sidecar [vmagent](https://docs.victoriametrics.com/vmagent.html) alongside every DataDog agent,
which must run with `DD_DD_URL=http://localhost:8429/datadog` environment variable.
The sidecar `vmagent` must be configured with the needed tags via `-remoteWrite.label` command-line flag and must forward
incoming data with the added tags to a centralized VictoriaMetrics specified via `-remoteWrite.url` command-line flag.
See [these docs](https://docs.victoriametrics.com/vmagent.html#adding-labels-to-metrics) for details on how to add labels to metrics at `vmagent`.
## How to send data from InfluxDB-compatible agents such as [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/)

Use `http://<victoriametrics-addr>:8428` URL instead of the InfluxDB URL in agents' configs.

Example for writing data with InfluxDB line protocol to local VictoriaMetrics using `curl`:

<div class="with-copy" markdown="1">

```console
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
```

</div>

After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

<div class="with-copy" markdown="1">

```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
```

</div>
Comma-separated list of expected databases can be passed to VictoriaMetrics via the corresponding command-line flag.
Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` command line flag. For instance,
the following command will enable Graphite receiver in VictoriaMetrics on TCP and UDP port `2003`:

```console
/path/to/victoria-metrics-prod -graphiteListenAddr=:2003
```

Use the configured address in Graphite-compatible agents. For instance, set `graphiteHost`
to the VictoriaMetrics host in `StatsD` configs.

Example for writing data with Graphite plaintext protocol to local VictoriaMetrics using `nc`:

```console
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003
```

After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

<div class="with-copy" markdown="1">

```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```

</div>
The `/api/v1/export` endpoint should return the following response:

```console
{"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277406000]}
```
[Graphite relabeling](https://docs.victoriametrics.com/vmagent.html#graphite-relabeling) can be used if the imported Graphite data is going to be queried via [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html).
## Querying Graphite data

Data sent to VictoriaMetrics via `Graphite plaintext protocol` may be read via the following APIs:
VictoriaMetrics supports `__graphite__` pseudo-label for selecting time series with Graphite-compatible filters in [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html).

The `__graphite__` pseudo-label supports e.g. alternate regexp filters such as `(value1|...|valueN)`. They are transparently converted to `{value1,...,valueN}` syntax [used in Graphite](https://graphite.readthedocs.io/en/latest/render_api.html#paths-and-wildcards). This allows using [multi-value template variables in Grafana](https://grafana.com/docs/grafana/latest/variables/formatting-multi-value-variables/) inside `__graphite__` pseudo-label. For example, Grafana expands `{__graphite__=~"foo.($bar).baz"}` into `{__graphite__=~"foo.(x|y).baz"}` if `$bar` template variable contains `x` and `y` values. In this case the query is automatically converted into `{__graphite__=~"foo.{x,y}.baz"}` before execution.
VictoriaMetrics also supports Graphite query language - see [these docs](#graphite-render-api-usage).
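For instance, a selector with the `__graphite__` pseudo-label can be used in the regular Prometheus querying API; this is a sketch assuming the `foo.bar.baz` series from the ingestion example above exists:

```console
curl -G 'http://localhost:8428/api/v1/query' --data-urlencode 'query={__graphite__="foo.*.baz"}'
```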
## How to send data from OpenTSDB-compatible agents

VictoriaMetrics supports [telnet put protocol](http://opentsdb.net/docs/build/html/api_telnet/put.html)
and HTTP `/api/put` requests for ingesting OpenTSDB data.
The same protocol is used for ingesting data in KairosDB.

Enable OpenTSDB receiver in VictoriaMetrics by setting `-opentsdbListenAddr` command line flag. For instance,
the following command enables OpenTSDB receiver in VictoriaMetrics on TCP and UDP port `4242`:

```console
/path/to/victoria-metrics-prod -opentsdbListenAddr=:4242
```
Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `nc`:

<div class="with-copy" markdown="1">

```console
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
```

</div>

After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

<div class="with-copy" markdown="1">

```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```

</div>
Enable HTTP server for OpenTSDB `/api/put` requests by setting `-opentsdbHTTPListenAddr` command line flag. For instance,
the following command enables OpenTSDB HTTP server on port `4242`:

```console
/path/to/victoria-metrics-prod -opentsdbHTTPListenAddr=:4242
```
Example for writing a single data point:

<div class="with-copy" markdown="1">

```console
curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put
```

</div>

Example for writing multiple data points in a single request:

<div class="with-copy" markdown="1">

```console
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
```

</div>

After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

<div class="with-copy" markdown="1">

```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
```

</div>
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use a different base image
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

```console
ROOT_IMAGE=scratch make package-victoria-metrics
```
Each JSON line contains samples for a single time series.

Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:

```console
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
Pass `Accept-Encoding: gzip` HTTP request header to `/api/v1/export` in order to reduce network bandwidth when exporting big amounts
of time series data. This enables gzip compression for the exported data. Example:

<div class="with-copy" markdown="1">

```console
curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
```

</div>
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:

```console
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
Use `{__name__=~".*"}` selector for fetching all the time series.

On large databases you may experience problems with the limit on the number of time series that can be exported. In this case you need to adjust the `-search.maxExportSeries` command-line flag:

```console
# count unique time series in the database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
```
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:

```console
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
Time series data can be imported into VictoriaMetrics via any supported data ingestion protocol.

Example for importing data obtained via [/api/v1/export](#how-to-export-data-in-json-line-format):

```console
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl

# Import the data to <destination-victoriametrics>:
curl -X POST http://destination-victoriametrics:8428/api/v1/import -T exported_data.jsonl
```
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import` for importing gzipped data:

```console
# Export gzipped data from <source-victoriametrics>:
curl -H 'Accept-Encoding: gzip' http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl.gz

# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import -T exported_data.jsonl.gz
```

The specification of VictoriaMetrics' native format may yet change and is not formally documented yet.
However, if you have a native format file obtained via [/api/v1/export/native](#how-to-export-data-in-native-format), then this is the most efficient protocol for importing data.
```console
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin

# Import the data to <destination-victoriametrics>:
curl -X POST http://destination-victoriametrics:8428/api/v1/import/native -T exported_data.bin
```

Each request to `/api/v1/import/csv` may contain an arbitrary number of CSV lines.
Example for importing CSV data via `/api/v1/import/csv`:

```console
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```

After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
```
Data in Prometheus exposition format may be imported via `/api/v1/import/prometheus` path. For example, the following command imports a single metric:

<div class="with-copy" markdown="1">

```console
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
```

</div>

The following command may be used for verifying the imported data:

<div class="with-copy" markdown="1">

```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
```

</div>
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data:

<div class="with-copy" markdown="1">

```console
# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
```

</div>
VictoriaMetrics components provide additional relabeling features such as Graphite-style relabeling.
See [these docs](https://docs.victoriametrics.com/vmagent.html#relabeling) for more details.
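As a minimal sketch of a `-relabelConfig` file (the `env` label name and the regexp below are illustrative placeholders, not prescribed by the docs above):

```console
# Write a minimal relabeling config which drops all series
# having env="test" or env="staging" labels.
cat > relabel.yml <<'EOF'
- action: drop
  source_labels: [env]
  regex: "test|staging"
EOF

# Then start VictoriaMetrics with it:
# /path/to/victoria-metrics-prod -relabelConfig=relabel.yml
```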
## Federation
Federation data may be fetched at `http://<victoriametrics-addr>:8428/federate?match[]=<timeseries_selector_for_export>`.

Optional `start` and `end` args may be added to the request in order to scrape the last point for each selected time series on the `[start ... end]` interval.
`start` and `end` may contain either unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:

```console
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
See also [cardinality limiter](#cardinality-limiter) and [capacity planning docs](#capacity-planning).

* Install multiple VictoriaMetrics instances in distinct datacenters (availability zones).
* Pass addresses of these instances to [vmagent](https://docs.victoriametrics.com/vmagent.html) via `-remoteWrite.url` command-line flag:

```console
/path/to/vmagent -remoteWrite.url=http://<victoriametrics-addr-1>:8428/api/v1/write -remoteWrite.url=http://<victoriametrics-addr-2>:8428/api/v1/write
```
* Apply the updated config:

```console
kill -HUP `pidof prometheus`
```
For example, substitute `-graphiteListenAddr=:2003` with `-graphiteListenAddr=<internal_iface_ip>:2003`.
If you plan to store more than 1TB of data on `ext4` partition or plan extending it to more than 16TB,
then the following options are recommended to pass to `mkfs.ext4`:

```console
mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
```
VictoriaMetrics returns TSDB stats at `/api/v1/status/tsdb` page in the way similar to Prometheus. It accepts the following optional query args:

* `topN=N` where `N` is the number of top entries to return in the response. By default top 10 entries are returned.
* `date=YYYY-MM-DD` where `YYYY-MM-DD` is the date for collecting the stats. By default the stats are collected for the current day. Pass `date=1970-01-01` in order to collect global stats across all the days.
* `focusLabel=LABEL_NAME` returns label values with the highest number of time series for the given `LABEL_NAME` in the `seriesCountByFocusLabelValue` list.
* `match[]=SELECTOR` where `SELECTOR` is an arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) for series to take into account during stats calculation. By default all the series are taken into account.
* `extra_label=LABEL=VALUE`. See [these docs](#prometheus-querying-api-enhancements) for more details.
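These args can be combined in a single request; a sketch against a local single-node instance, where the `instance` label name is just an illustration:

```console
curl http://localhost:8428/api/v1/status/tsdb -d 'topN=5' -d 'date=1970-01-01' -d 'focusLabel=instance'
```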
In this case VictoriaMetrics puts query trace into `trace` field in the output JSON.

For example:

```console
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq '.trace'
```
VictoriaMetrics provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):

* Memory profile:

<div class="with-copy" markdown="1">

```console
curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
```

</div>

* CPU profile:

<div class="with-copy" markdown="1">

```console
curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof
```

</div>
4. Push release tags to <https://github.com/VictoriaMetrics/VictoriaMetrics>: `git push origin v1.xx.y` and `git push origin v1.xx.y-cluster`. Do not push `-enterprise` tags to the public repository.
5. Go to <https://github.com/VictoriaMetrics/VictoriaMetrics/releases>, create a new release from the tag pushed on step 4 and upload the `*.tar.gz` archive with the corresponding `_checksums.txt` from step 3.
6. Copy the [CHANGELOG](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/CHANGELOG.md) for this release to the [releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) page.
7. Bump the version of the VictoriaMetrics cluster setup for the [sandbox environment](https://github.com/VictoriaMetrics/ops/blob/main/sandbox/manifests/benchmark-vm/vmcluster.yaml)
   by [opening and merging a PR](https://github.com/VictoriaMetrics/ops/pull/58).

## Building snap package

View file

@ -168,7 +168,7 @@ Then apply new config via the following command:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
kill -HUP `pidof prometheus` kill -HUP `pidof prometheus`
``` ```
@ -332,7 +332,7 @@ VictoriaMetrics doesn't check `DD_API_KEY` param, so it can be set to arbitrary
Example on how to send data to VictoriaMetrics via DataDog "submit metrics" API from command line: Example on how to send data to VictoriaMetrics via DataDog "submit metrics" API from command line:
```bash ```console
echo ' echo '
{ {
"series": [ "series": [
@ -358,7 +358,7 @@ The imported data can be read via [export API](https://docs.victoriametrics.com/
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1' curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
``` ```
@ -373,6 +373,16 @@ This command should return the following output if everything is OK:
Extra labels may be added to all the written time series by passing `extra_label=name=value` query args. Extra labels may be added to all the written time series by passing `extra_label=name=value` query args.
For example, `/datadog/api/v1/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics. For example, `/datadog/api/v1/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics.
DataDog agent sends the [configured tags](https://docs.datadoghq.com/getting_started/tagging/) to
undocumented endpoint - `/datadog/intake`. This endpoint isn't supported by VictoriaMetrics yet.
This prevents from adding the configured tags to DataDog agent data sent into VictoriaMetrics.
The workaround is to run a sidecar [vmagent](https://docs.victoriametrics.com/vmagent.html) alongside every DataDog agent,
which must run with `DD_DD_URL=http://localhost:8429/datadog` environment variable.
The sidecar `vmagent` must be configured with the needed tags via `-remoteWrite.label` command-line flag and must forward
incoming data with the added tags to a centralized VictoriaMetrics specified via `-remoteWrite.url` command-line flag.
See [these docs](https://docs.victoriametrics.com/vmagent.html#adding-labels-to-metrics) for details on how to add labels to metrics at `vmagent`.
## How to send data from InfluxDB-compatible agents such as [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) ## How to send data from InfluxDB-compatible agents such as [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/)
Use `http://<victoriametric-addr>:8428` url instead of InfluxDB url in agents' configs. Use `http://<victoriametric-addr>:8428` url instead of InfluxDB url in agents' configs.
@ -412,7 +422,7 @@ to local VictoriaMetrics using `curl`:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write' curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
``` ```
@ -423,7 +433,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}' curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
``` ```
@ -451,7 +461,7 @@ Comma-separated list of expected databases can be passed to VictoriaMetrics via
Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` command line flag. For instance, Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` command line flag. For instance,
the following command will enable Graphite receiver in VictoriaMetrics on TCP and UDP port `2003`: the following command will enable Graphite receiver in VictoriaMetrics on TCP and UDP port `2003`:
```bash ```console
/path/to/victoria-metrics-prod -graphiteListenAddr=:2003 /path/to/victoria-metrics-prod -graphiteListenAddr=:2003
``` ```
@ -460,7 +470,7 @@ to the VictoriaMetrics host in `StatsD` configs.
Example for writing data with Graphite plaintext protocol to local VictoriaMetrics using `nc`: Example for writing data with Graphite plaintext protocol to local VictoriaMetrics using `nc`:
```bash ```console
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003 echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003
``` ```
@ -470,7 +480,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz' curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
``` ```
@ -482,6 +492,8 @@ The `/api/v1/export` endpoint should return the following response:
{"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277406000]} {"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277406000]}
``` ```
[Graphite relabeling](https://docs.victoriametrics.com/vmagent.html#graphite-relabeling) can be used if the imported Graphite data is going to be queried via [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html).
## Querying Graphite data ## Querying Graphite data
Data sent to VictoriaMetrics via `Graphite plaintext protocol` may be read via the following APIs: Data sent to VictoriaMetrics via `Graphite plaintext protocol` may be read via the following APIs:
@ -496,6 +508,9 @@ VictoriaMetrics supports `__graphite__` pseudo-label for selecting time series w
The `__graphite__` pseudo-label supports e.g. alternate regexp filters such as `(value1|...|valueN)`. They are transparently converted to `{value1,...,valueN}` syntax [used in Graphite](https://graphite.readthedocs.io/en/latest/render_api.html#paths-and-wildcards). This allows using [multi-value template variables in Grafana](https://grafana.com/docs/grafana/latest/variables/formatting-multi-value-variables/) inside `__graphite__` pseudo-label. For example, Grafana expands `{__graphite__=~"foo.($bar).baz"}` into `{__graphite__=~"foo.(x|y).baz"}` if `$bar` template variable contains `x` and `y` values. In this case the query is automatically converted into `{__graphite__=~"foo.{x,y}.baz"}` before execution. The `__graphite__` pseudo-label supports e.g. alternate regexp filters such as `(value1|...|valueN)`. They are transparently converted to `{value1,...,valueN}` syntax [used in Graphite](https://graphite.readthedocs.io/en/latest/render_api.html#paths-and-wildcards). This allows using [multi-value template variables in Grafana](https://grafana.com/docs/grafana/latest/variables/formatting-multi-value-variables/) inside `__graphite__` pseudo-label. For example, Grafana expands `{__graphite__=~"foo.($bar).baz"}` into `{__graphite__=~"foo.(x|y).baz"}` if `$bar` template variable contains `x` and `y` values. In this case the query is automatically converted into `{__graphite__=~"foo.{x,y}.baz"}` before execution.
VictoriaMetrics also supports Graphite query language - see [these docs](#graphite-render-api-usage).
## How to send data from OpenTSDB-compatible agents
VictoriaMetrics supports [telnet put protocol](http://opentsdb.net/docs/build/html/api_telnet/put.html)
@@ -507,7 +522,7 @@ The same protocol is used for [ingesting data in KairosDB](https://kairosdb.gith
Enable OpenTSDB receiver in VictoriaMetrics by setting `-opentsdbListenAddr` command line flag. For instance,
the following command enables OpenTSDB receiver in VictoriaMetrics on TCP and UDP port `4242`:
```console
/path/to/victoria-metrics-prod -opentsdbListenAddr=:4242
```
@@ -517,7 +532,7 @@ Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `
<div class="with-copy" markdown="1">
```console
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
```
@@ -528,7 +543,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
<div class="with-copy" markdown="1">
```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
@@ -545,7 +560,7 @@ The `/api/v1/export` endpoint should return the following response:
Enable HTTP server for OpenTSDB `/api/put` requests by setting `-opentsdbHTTPListenAddr` command line flag. For instance,
the following command enables OpenTSDB HTTP server on port `4242`:
```console
/path/to/victoria-metrics-prod -opentsdbHTTPListenAddr=:4242
```
@@ -555,7 +570,7 @@ Example for writing a single data point:
<div class="with-copy" markdown="1">
```console
curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put
```
@@ -565,7 +580,7 @@ Example for writing multiple data points in a single request:
<div class="with-copy" markdown="1">
```console
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
```
@@ -575,7 +590,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
<div class="with-copy" markdown="1">
```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
```
@@ -745,7 +760,7 @@ The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is pos
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```console
ROOT_IMAGE=scratch make package-victoria-metrics
```
@@ -859,7 +874,7 @@ Each JSON line contains samples for a single time series. An example output:
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```console
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486' curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00' curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
@@ -873,7 +888,7 @@ of time series data. This enables gzip compression for the exported data. Exampl
<div class="with-copy" markdown="1">
```console
curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
```
@@ -907,7 +922,7 @@ for metrics to export.
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```console
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486' curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00' curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
@@ -924,7 +939,7 @@ for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time
On large databases you may experience problems with the limit on the number of time series that can be exported. In this case you need to adjust the `-search.maxExportSeries` command-line flag:
```console
# count unique timeseries in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
@@ -934,7 +949,7 @@ wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```console
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486' curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00' curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
@@ -966,7 +981,7 @@ Time series data can be imported into VictoriaMetrics via any supported data ing
Example for importing data obtained via [/api/v1/export](#how-to-export-data-in-json-line-format):
```console
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl
@@ -976,7 +991,7 @@ curl -X POST http://destination-victoriametrics:8428/api/v1/import -T exported_d
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import` for importing gzipped data:
```console
# Export gzipped data from <source-victoriametrics>:
curl -H 'Accept-Encoding: gzip' http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl.gz
@@ -997,7 +1012,7 @@ The specification of VictoriaMetrics' native format may yet change and is not fo
If you have a native format file obtained via [/api/v1/export/native](#how-to-export-data-in-native-format) however this is the most efficient protocol for importing data in.
```console
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin
@@ -1038,14 +1053,14 @@ Each request to `/api/v1/import/csv` may contain arbitrary number of CSV lines.
Example for importing CSV data via `/api/v1/import/csv`:
```console
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
```
@@ -1071,7 +1086,7 @@ via `/api/v1/import/prometheus` path. For example, the following line imports a
<div class="with-copy" markdown="1">
```console
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
```
@@ -1081,7 +1096,7 @@ The following command may be used for verifying the imported data:
<div class="with-copy" markdown="1">
```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
```
@@ -1097,7 +1112,7 @@ Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus`
<div class="with-copy" markdown="1">
```console
# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
```
@@ -1136,7 +1151,9 @@ Example contents for `-relabelConfig` file:
  regex: true
```
VictoriaMetrics components provide additional relabeling features such as Graphite-style relabeling.
See [these docs](https://docs.victoriametrics.com/vmagent.html#relabeling) for more details.
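As a sketch of what such a file can contain, the following hypothetical `-relabelConfig` entry uses standard Prometheus-compatible relabeling syntax to drop every time series whose name starts with `test_` (the metric name and regexp are illustrative, not from the original docs):

```yml
# Hypothetical example: drop all time series whose metric name
# (the __name__ label) matches the test_.* regexp.
- action: drop
  source_labels: [__name__]
  regex: "test_.*"
```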
## Federation
@@ -1146,7 +1163,7 @@ at `http://<victoriametrics-addr>:8428/federate?match[]=<timeseries_selector_for
Optional `start` and `end` args may be added to the request in order to scrape the last point for each selected time series on the `[start ... end]` interval.
`start` and `end` may contain either unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```console
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486' curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00' curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
@@ -1200,7 +1217,7 @@ See also [cardinality limiter](#cardinality-limiter) and [capacity planning docs
* Install multiple VictoriaMetrics instances in distinct datacenters (availability zones).
* Pass addresses of these instances to [vmagent](https://docs.victoriametrics.com/vmagent.html) via `-remoteWrite.url` command-line flag:
```console
/path/to/vmagent -remoteWrite.url=http://<victoriametrics-addr-1>:8428/api/v1/write -remoteWrite.url=http://<victoriametrics-addr-2>:8428/api/v1/write
```
@@ -1219,7 +1236,7 @@ remote_write:
* Apply the updated config:
```console
kill -HUP `pidof prometheus`
```
@@ -1400,7 +1417,7 @@ For example, substitute `-graphiteListenAddr=:2003` with `-graphiteListenAddr=<i
If you plan to store more than 1TB of data on `ext4` partition or plan extending it to more than 16TB,
then the following options are recommended to pass to `mkfs.ext4`:
```console
mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
```
@@ -1461,7 +1478,7 @@ In this case VictoriaMetrics puts query trace into `trace` field in the output J
For example, the following command:
```console
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq '.trace'
```
@@ -1722,7 +1739,7 @@ VictoriaMetrics provides handlers for collecting the following [Go profiles](htt
<div class="with-copy" markdown="1">
```console
curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
```
@@ -1732,7 +1749,7 @@ curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
<div class="with-copy" markdown="1">
```console
curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof
```
@@ -22,7 +22,7 @@ See how to work with a [VictoriaMetrics Helm repository in previous guide](https
<div class="with-copy" markdown="1">
```console
helm install operator vm/victoria-metrics-operator
```
@@ -30,7 +30,7 @@ helm install operator vm/victoria-metrics-operator
The expected output is:
```console
NAME: vmoperator
LAST DEPLOYED: Thu Sep 30 17:30:30 2021
NAMESPACE: default
@@ -49,13 +49,13 @@ Run the following command to check that VM Operator is up and running:
<div class="with-copy" markdown="1">
```console
kubectl --namespace default get pods -l "app.kubernetes.io/instance=vmoperator"
```
</div>
The expected output:
```console
NAME                                                    READY   STATUS    RESTARTS   AGE
vmoperator-victoria-metrics-operator-67cff44cd6-s47n6   1/1     Running   0          77s
```
@@ -68,7 +68,7 @@ Run the following command to install [VictoriaMetrics Cluster](https://docs.vict
<div class="with-copy" markdown="1" id="example-cluster-config">
```console
cat << EOF | kubectl apply -f -
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMCluster
@@ -89,7 +89,7 @@ EOF
The expected output:
```console
vmcluster.operator.victoriametrics.com/example-vmcluster-persistent created
```
@@ -100,13 +100,13 @@ vmcluster.operator.victoriametrics.com/example-vmcluster-persistent created
Please note that it may take some time for the pods to start. To check that the pods are started, run the following command:
<div class="with-copy" markdown="1" id="example-cluster-config">
```console
kubectl get pods | grep vmcluster
```
</div>
The expected output:
```console
NAME                                                     READY   STATUS    RESTARTS   AGE
vminsert-example-vmcluster-persistent-845849cb84-9vb6f   1/1     Running   0          5m15s
vminsert-example-vmcluster-persistent-845849cb84-r7mmk   1/1     Running   0          5m15s
@@ -119,13 +119,13 @@ vmstorage-example-vmcluster-persistent-1 1/1 Running 0
There is an extra command to get information about the cluster state:
<div class="with-copy" markdown="1" id="services">
```console
kubectl get vmclusters
```
</div>
The expected output:
```console
NAME                           INSERT COUNT   STORAGE COUNT   SELECT COUNT   AGE     STATUS
example-vmcluster-persistent   2              2               2              5m53s   operational
```
@@ -136,14 +136,14 @@ To get the name of `vminsert` services, please run the following command:
<div class="with-copy" markdown="1" id="services">
```console
kubectl get svc | grep vminsert
```
</div>
The expected output:
```console
vminsert-example-vmcluster-persistent   ClusterIP   10.107.47.136   <none>   8480/TCP   5m58s
```
@@ -153,7 +153,7 @@ Here is an example of the full configuration that we need to apply:
<div class="with-copy" markdown="1">
```console
cat <<EOF | kubectl apply -f -
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMAgent
@@ -177,7 +177,7 @@ EOF
The expected output:
```console
vmagent.operator.victoriametrics.com/example-vmagent created
```
@@ -188,14 +188,14 @@ Verify that `VMAgent` is up and running by executing the following command:
<div class="with-copy" markdown="1">
```console
kubectl get pods | grep vmagent
```
</div>
The expected output is:
```console
vmagent-example-vmagent-7996844b5f-b5rzs   2/2   Running   0   9s
```
@@ -207,13 +207,13 @@ Run the following command to make `VMAgent`'s port accessible from the local mac
</div>
```console
kubectl port-forward svc/vmagent-example-vmagent 8429:8429
```
The expected output is:
```console
Forwarding from 127.0.0.1:8429 -> 8429
Forwarding from [::1]:8429 -> 8429
```
@@ -235,14 +235,14 @@ To get the new service name, please run the following command:
<div class="with-copy" markdown="1" id="services">
```console
kubectl get svc | grep vmselect
```
</div>
The expected output:
```console
vmselect-example-vmcluster-persistent   ClusterIP   None   <none>   8481/TCP   7m
```
@@ -65,7 +65,7 @@ EOF
The expected result of the command execution is the following:
```console
NAME: vmcluster
LAST DEPLOYED: Thu Jul 29 13:33:51 2021
NAMESPACE: default
@@ -121,14 +121,14 @@ Verify that the VictoriaMetrics cluster pods are up and running by executing the
<div class="with-copy" markdown="1">
```console
kubectl get pods | grep vmcluster
```
</div>
The expected output is:
```console
vmcluster-victoria-metrics-cluster-vminsert-78b84d8cd9-4mh9d   1/1   Running   0   2m28s
vmcluster-victoria-metrics-cluster-vminsert-78b84d8cd9-4ppl7   1/1   Running   0   2m28s
vmcluster-victoria-metrics-cluster-vminsert-78b84d8cd9-782qk   1/1   Running   0   2m28s
@@ -241,7 +241,7 @@ Verify that `vmagent`'s pod is up and running by executing the following command
<div class="with-copy" markdown="1">
```console
kubectl get pods | grep vmagent
```
</div>
@@ -249,7 +249,7 @@ kubectl get pods | grep vmagent
The expected output is:
```console
vmagent-victoria-metrics-agent-57ddbdc55d-h4ljb   1/1   Running   0   13s
```
@@ -258,14 +258,14 @@ vmagent-victoria-metrics-agent-57ddbdc55d-h4ljb 1/1 Running
Run the following command to check that VictoriaMetrics services are up and running:
<div class="with-copy" markdown="1">
```console
kubectl get pods | grep victoria-metrics
```
</div>
The expected output is:
```console
vmagent-victoria-metrics-agent-57ddbdc55d-h4ljb                1/1   Running   0   75s
vmcluster-victoria-metrics-cluster-vminsert-78b84d8cd9-s8v7x   1/1   Running   0   89s
vmcluster-victoria-metrics-cluster-vminsert-78b84d8cd9-xlm9d   1/1   Running   0   89s
@@ -283,14 +283,14 @@ To verify that metrics are present in the VictoriaMetrics send a curl request to
Run the following command to see the list of services:
<div class="with-copy" markdown="1">
```console
k get svc | grep vmselect
```
</div>
The expected output:
```console
vmcluster-victoria-metrics-cluster-vmselect   ClusterIP   10.88.2.69   <none>   8481/TCP   1m
```
@@ -298,20 +298,20 @@ Run the following command to make `vmselect`'s port accessable from the local ma
<div class="with-copy" markdown="1">
```console
kubectl port-forward svc/vmcluster-victoria-metrics-cluster-vmselect 8481:8481
```
</div>
Execute the following command to get metrics via `curl`:
```console
curl -sg 'http://127.0.0.1:8481/select/0/prometheus/api/v1/query_range?query=count(up{kubernetes_pod_name=~".*vmselect.*"})&start=-10m&step=1m' | jq
```
The expected output is:
```console
{
  "status": "success",
  "isPartial": false,
@@ -389,7 +389,7 @@ To test if High Availability works, we need to shutdown one of the `vmstorages`.
<div class="with-copy" markdown="1">
```console
kubectl scale sts vmcluster-victoria-metrics-cluster-vmstorage --replicas=2
```
</div>
@@ -398,13 +398,13 @@ Verify that now we have two running `vmstorages` in the cluster by executing the
<div class="with-copy" markdown="1">
```console
kubectl get pods | grep vmstorage
```
</div>
The expected output is:
```console
vmcluster-victoria-metrics-cluster-vmstorage-0   1/1   Running   0   44m
vmcluster-victoria-metrics-cluster-vmstorage-1   1/1   Running   0   43m
```
@@ -28,7 +28,7 @@ You need to add the VictoriaMetrics Helm repository to install VictoriaMetrics c
<div class="with-copy" markdown="1">
```console
helm repo add vm https://victoriametrics.github.io/helm-charts/
```
@@ -38,7 +38,7 @@ Update Helm repositories:
<div class="with-copy" markdown="1">
```console
helm repo update
```
@@ -48,7 +48,7 @@ To verify that everything is set up correctly you may run this command:
<div class="with-copy" markdown="1">
```console
helm search repo vm/
```
@@ -56,7 +56,7 @@ helm search repo vm/
The expected output is:
```console
NAME                        CHART VERSION   APP VERSION   DESCRIPTION
vm/victoria-metrics-agent   0.7.20          v1.62.0       Victoria Metrics Agent - collects metrics from ...
vm/victoria-metrics-alert   0.3.34          v1.62.0       Victoria Metrics Alert - executes a list of giv...
@@ -100,7 +100,7 @@ EOF
As a result of this command you will see the following output:
```console
NAME: vmcluster
LAST DEPLOYED: Thu Jul 1 09:41:57 2021
NAMESPACE: default
@@ -159,14 +159,14 @@ Verify that [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-V
<div class="with-copy" markdown="1">
```console
kubectl get pods
```
</div>
The expected output is:
```console
NAME                                                           READY   STATUS    RESTARTS   AGE
vmcluster-victoria-metrics-cluster-vminsert-689cbc8f55-95szg   1/1     Running   0          16m
vmcluster-victoria-metrics-cluster-vminsert-689cbc8f55-f852l   1/1     Running   0          16m
@@ -422,14 +422,14 @@ Verify that `vmagent`'s pod is up and running by executing the following command
<div class="with-copy" markdown="1">
```console
kubectl get pods | grep vmagent
```
</div>
The expected output is:
```console
vmagent-victoria-metrics-agent-69974b95b4-mhjph   1/1   Running   0   11m
```
@@ -440,7 +440,7 @@ Add the Grafana Helm repository.
<div class="with-copy" markdown="1">
```console
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
```
@@ -512,7 +512,7 @@ The second and the third will forward Grafana to `127.0.0.1:3000`:
<div class="with-copy" markdown="1">
```console
kubectl get secret --namespace default my-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=my-grafana" -o jsonpath="{.items[0].metadata.name}")
@ -28,7 +28,7 @@ You need to add the VictoriaMetrics Helm repository to install VictoriaMetrics c
<div class="with-copy" markdown="1">
```console
helm repo add vm https://victoriametrics.github.io/helm-charts/
```
@ -38,7 +38,7 @@ Update Helm repositories:
<div class="with-copy" markdown="1">
```console
helm repo update
```
@ -48,7 +48,7 @@ To verify that everything is set up correctly you may run this command:
<div class="with-copy" markdown="1">
```console
helm search repo vm/
```
@ -56,7 +56,7 @@ helm search repo vm/
The expected output is:
```console
NAME CHART VERSION APP VERSION DESCRIPTION
vm/victoria-metrics-agent 0.7.20 v1.62.0 Victoria Metrics Agent - collects metrics from ...
vm/victoria-metrics-alert 0.3.34 v1.62.0 Victoria Metrics Alert - executes a list of giv...
@ -74,7 +74,7 @@ Run this command in your terminal:
<div class="with-copy" markdown="1">
```console
helm install vmsingle vm/victoria-metrics-single -f https://docs.victoriametrics.com/guides/guide-vmsingle-values.yaml
```
@ -175,7 +175,7 @@ server:
As a result of the command you will see the following output:
```console
NAME: victoria-metrics
LAST DEPLOYED: Fri Jun 25 12:06:13 2021
NAMESPACE: default
@ -219,7 +219,7 @@ Verify that VictoriaMetrics pod is up and running by executing the following com
<div class="with-copy" markdown="1">
```console
kubectl get pods
```
@ -227,7 +227,7 @@ kubectl get pods
The expected output is:
```console
NAME READY STATUS RESTARTS AGE
vmsingle-victoria-metrics-single-server-0 1/1 Running 0 68s
```
@ -239,7 +239,7 @@ Add the Grafana Helm repository.
<div class="with-copy" markdown="1">
```console
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
```
@ -305,7 +305,7 @@ To see the password for Grafana `admin` user use the following command:
<div class="with-copy" markdown="1">
```console
kubectl get secret --namespace default my-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
@ -315,7 +315,7 @@ Expose Grafana service on `127.0.0.1:3000`:
<div class="with-copy" markdown="1">
```console
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=my-grafana" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 3000
@ -72,7 +72,7 @@ supports [InfluxDB line protocol](https://docs.victoriametrics.com/#how-to-send-
for data ingestion. For example, to write a measurement to VictoriaMetrics we need to send an HTTP POST request with
payload in a line protocol format:
```console
curl -d 'census,location=klamath,scientist=anderson bees=23 1566079200000' -X POST 'http://<victoriametric-addr>:8428/write'
```
@ -83,7 +83,7 @@ Please note, an arbitrary number of lines delimited by `\n` (aka newline char) c
To get the written data back let's export all series matching the `location="klamath"` filter:
```console
curl -G 'http://<victoriametric-addr>:8428/api/v1/export' -d 'match={location="klamath"}'
```
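The line-protocol payload shown above can also be assembled programmatically. A minimal sketch, using the measurement, tag, and field names from the example; the `to_line_protocol` helper is illustrative and the escaping rules of the real protocol are simplified:

```python
def to_line_protocol(measurement, tags, fields, timestamp_ms):
    """Build one InfluxDB line-protocol line: measurement,tag=... field=... ts."""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {timestamp_ms}"

line = to_line_protocol(
    "census",
    {"location": "klamath", "scientist": "anderson"},
    {"bees": 23},
    1566079200000,
)
print(line)  # census,location=klamath,scientist=anderson bees=23 1566079200000
```

The resulting string can be POSTed to `/write` exactly as in the `curl` example above.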
@ -28,7 +28,7 @@ Using this schema, you can achieve:
* You need to pass two `-remoteWrite.url` command-line options to `vmagent`:
```console
/path/to/vmagent-prod \
  -remoteWrite.url=<ground-control-1-remote-write> \
  -remoteWrite.url=<ground-control-2-remote-write>
@ -295,7 +295,7 @@ for [InfluxDB line protocol](https://docs.victoriametrics.com/Single-server-Vict
Creating custom clients or instrumenting the application for metrics writing is as easy as sending a POST request:
```console
curl -d '{"metric":{"__name__":"foo","job":"node_exporter"},"values":[0,1,2],"timestamps":[1549891472010,1549891487724,1549891503438]}' -X POST 'http://localhost:8428/api/v1/import'
```
@ -441,7 +441,7 @@ plot this data sample on the system of coordinates, it will have the following f
To get the value of the `foo_bar` metric at some specific moment of time, for example `2022-05-10 10:03:00`, in
VictoriaMetrics we need to issue an **instant query**:
```console
curl "http://<victoria-metrics-addr>/api/v1/query?query=foo_bar&time=2022-05-10T10:03:00.000Z"
```
@ -504,7 +504,7 @@ step - step in seconds for evaluating query expression on the time range. If omi
To get the values of `foo_bar` on the time range from `2022-05-10 09:59:00` to `2022-05-10 10:17:00`, in VictoriaMetrics we
need to issue a range query:
```console
curl "http://<victoria-metrics-addr>/api/v1/query_range?query=foo_bar&step=1m&start=2022-05-10T09:59:00.000Z&end=2022-05-10T10:17:00.000Z"
```
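When building such query URLs from code rather than typing them into a shell, the parameter values must be URL-encoded. A minimal sketch; the server address is a placeholder and the `range_query_url` helper is illustrative:

```python
from urllib.parse import urlencode

def range_query_url(base, query, start, end, step):
    """Build a /api/v1/query_range URL with properly encoded parameters."""
    params = {"query": query, "step": step, "start": start, "end": end}
    return f"{base}/api/v1/query_range?{urlencode(params)}"

url = range_query_url(
    "http://localhost:8428",  # placeholder address
    "foo_bar",
    "2022-05-10T09:59:00.000Z",
    "2022-05-10T10:17:00.000Z",
    "1m",
)
print(url)
```

Note that `urlencode` escapes the `:` characters in the RFC 3339 timestamps, which bare string concatenation would not.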
@ -48,7 +48,7 @@ and disable CRD controller with flag: `--controller.disableCRDOwnership=true`
## Troubleshooting
- cannot apply CRD at Kubernetes 1.18+ and kubectl reports the error:
```console
Error from server (Invalid): error when creating "release/crds/crd.yaml": CustomResourceDefinition.apiextensions.k8s.io "vmalertmanagers.operator.victoriametrics.com" is invalid: [spec.validation.openAPIV3Schema.properties[spec].properties[initContainers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property, spec.validation.openAPIV3Schema.properties[spec].properties[containers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property]
Error from server (Invalid): error when creating "release/crds/crd.yaml": CustomResourceDefinition.apiextensions.k8s.io "vmalerts.operator.victoriametrics.com" is invalid: [
```
@ -62,12 +62,12 @@ Error from server (Invalid): error when creating "release/crds/crd.yaml": Custom
- minikube or kind
start:
```console
make run
```
for test execution run:
```console
# unit tests
make test
@ -280,7 +280,7 @@ EOF
Then wait for the cluster to become ready
```console
kubectl get vmclusters -w
NAME INSERT COUNT STORAGE COUNT SELECT COUNT AGE STATUS
example-vmcluster-persistent 2 2 2 2s expanding
@ -289,7 +289,7 @@ example-vmcluster-persistent 2 2 2 30s
Get links for connection by executing the command:
```console
kubectl get svc -l app.kubernetes.io/instance=example-vmcluster-persistent
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vminsert-example-vmcluster-persistent ClusterIP 10.96.34.94 <none> 8480/TCP 69s
@ -13,7 +13,7 @@ Obtain release from releases page:
We suggest using the latest release.
```console
# Get the latest release version from https://github.com/VictoriaMetrics/operator/releases/latest
export VM_VERSION=`basename $(curl -fs -o/dev/null -w %{redirect_url} https://github.com/VictoriaMetrics/operator/releases/latest)`
wget https://github.com/VictoriaMetrics/operator/releases/download/$VM_VERSION/bundle_crd.zip
@ -24,7 +24,7 @@ unzip bundle_crd.zip
> sed -i "s/namespace: monitoring-system/namespace: YOUR_NAMESPACE/g" release/operator/*
First of all, you have to create [custom resource definitions](https://github.com/VictoriaMetrics/operator)
```console
kubectl apply -f release/crds
```
@ -32,13 +32,13 @@ Then you need RBAC for operator, relevant configuration for the release can be f
Change the configuration for the operator at `release/operator/manager.yaml`, possible settings: [operator-settings](/vars.MD)
and apply it:
```console
kubectl apply -f release/operator/
```
Check the status of the operator
```console
kubectl get pods -n monitoring-system
#NAME READY STATUS RESTARTS AGE
@ -74,19 +74,19 @@ You can change [operator-settings](/vars.MD), or use your custom namespace see [
Build the template
```console
kustomize build . -o monitoring.yaml
```
Apply the manifests
```console
kubectl apply -f monitoring.yaml
```
Check the status of the operator
```console
kubectl get pods -n monitoring-system
#NAME READY STATUS RESTARTS AGE
@ -269,7 +269,7 @@ EOF
It requires access to the Kubernetes API; you can create RBAC for it first, it can be found at `release/examples/VMAgent_rbac.yaml`
Or you can use the default RBAC account that will be created for `VMAgent` by the operator automatically.
```console
kubectl apply -f release/examples/vmagent_rbac.yaml
```
@ -538,7 +538,7 @@ EOF
```
Check the status of the pods:
```console
kubectl get pods
NAME READY STATUS RESTARTS AGE
example-app-594f97677c-g72v8 1/1 Running 0 23s
@ -552,7 +552,7 @@ vmsingle-example-vmsingle-persisted-794b59ccc6-fnkpt 1/1 Running 0
```
Checking logs for `VMAgent`:
```console
kubectl logs vmagent-example-vmagent-5777fdf7bf-tctcv vmagent
2020-08-02T18:18:17.226Z info VictoriaMetrics/app/vmagent/remotewrite/remotewrite.go:98 Successfully reloaded relabel configs
2020-08-02T18:18:17.229Z info VictoriaMetrics/lib/promscrape/scraper.go:137 found changes in "/etc/vmagent/config_out/vmagent.env.yaml"; applying these changes
@ -606,7 +606,7 @@ EOF
Let's check `VMAgent` logs (you have to wait some time for config sync, usually around 1 min):
```console
kubectl logs vmagent-example-vmagent-5777fdf7bf-tctcv vmagent --tail 100
2020-08-03T08:24:13.312Z info VictoriaMetrics/lib/promscrape/scraper.go:106 SIGHUP received; reloading Prometheus configs from "/etc/vmagent/config_out/vmagent.env.yaml"
2020-08-03T08:24:13.312Z info VictoriaMetrics/app/vmagent/remotewrite/remotewrite.go:98 Successfully reloaded relabel configs
@ -668,7 +668,7 @@ EOF
```
Ensure that the pods started:
```console
kubectl get pods
NAME READY STATUS RESTARTS AGE
example-app-594f97677c-g72v8 1/1 Running 0 3m40s
@ -700,7 +700,7 @@ EOF
```
Let's check `VMAgent` logs:
```console
kubectl logs vmagent-example-vmagent-5777fdf7bf-tctcv vmagent --tail 100
2020-08-03T08:51:13.582Z info VictoriaMetrics/app/vmagent/remotewrite/remotewrite.go:98 Successfully reloaded relabel configs
2020-08-03T08:51:13.585Z info VictoriaMetrics/lib/promscrape/scraper.go:137 found changes in "/etc/vmagent/config_out/vmagent.env.yaml"; applying these changes
@ -736,7 +736,7 @@ EOF
```
Ensure that it started and is ready:
```console
kubectl get pods -l app.kubernetes.io/name=vmalert
NAME READY STATUS RESTARTS AGE
vmalert-example-vmalert-6f8748c6f9-hcfrr 2/2 Running 0 2m26s
@ -779,7 +779,7 @@ EOF
{% endraw %}
Ensure that the new alert was started:
```console
kubectl logs vmalert-example-vmalert-6f8748c6f9-hcfrr vmalert
2020-08-03T09:07:49.772Z info VictoriaMetrics/app/vmalert/web.go:45 api config reload was called, sending sighup
2020-08-03T09:07:49.772Z info VictoriaMetrics/app/vmalert/main.go:115 SIGHUP received. Going to reload rules ["/etc/vmalert/config/vm-example-vmalert-rulefiles-0/*.yaml"] ...
@ -817,14 +817,14 @@ EOF
{% endraw %}
`VMAlert` will report the incorrect rule config and fire an alert:
```console
2020-08-03T09:11:40.672Z info VictoriaMetrics/app/vmalert/main.go:115 SIGHUP received. Going to reload rules ["/etc/vmalert/config/vm-example-vmalert-rulefiles-0/*.yaml"] ...
2020-08-03T09:11:40.672Z info VictoriaMetrics/app/vmalert/manager.go:83 reading rules configuration file from "/etc/vmalert/config/vm-example-vmalert-rulefiles-0/*.yaml"
2020-08-03T09:11:40.673Z error VictoriaMetrics/app/vmalert/main.go:119 error while reloading rules: cannot parse configuration file: invalid group "incorrect rule" in file "/etc/vmalert/config/vm-example-vmalert-rulefiles-0/default-example-vmrule-incorrect-rule.yaml": invalid rule "incorrect rule"."vmalert bad config": invalid expression: unparsed data left: "expression"
```
Clean up the incorrect rule:
```console
kubectl delete vmrule example-vmrule-incorrect-rule
```
@ -976,7 +976,7 @@ EOF
Ensure that the pods are ready:
```console
kubectl get pods
NAME READY STATUS RESTARTS AGE
prometheus-blackbox-exporter-5b5f44bd9c-2szdj 1/1 Running 0 3m3s
@ -986,7 +986,7 @@ vmsingle-example-vmsingle-persisted-8584486b68-mqg6b 1/1 Running 0
Now define some `VMProbe`; let's start with a basic static target and probe `VMAgent` with its service address. For accessing
the blackbox exporter, you have to specify its URL in the `VMProbe` config. Let's get both service names:
```console
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h21m
@ -1095,7 +1095,7 @@ spec:
EOF
```
2 targets must be added to the `VMAgent` scrape config:
```console
static_configs: added targets: 2, removed targets: 0; total targets: 2
```
@ -1200,7 +1200,7 @@ EOF
```
Check its status
```console
kubectl get pods
NAME READY STATUS RESTARTS AGE
@ -1233,7 +1233,7 @@ EOF
Configuration changes for `VMAuth` take some time because of the mounted secret; it's eventually updated by kubelet. Check the vmauth log for changes:
```console
kubectl logs vmauth-example-ffcc78fcc-xddk7 vmauth -f --tail 10
2021-05-31T10:46:40.171Z info VictoriaMetrics/app/vmauth/auth_config.go:168 Loaded information about 1 users from "/opt/vmauth/config.yaml"
2021-05-31T10:46:40.171Z info VictoriaMetrics/app/vmauth/main.go:37 started vmauth in 0.000 seconds
@ -1249,7 +1249,7 @@ kubectl logs vmauth-example-ffcc78fcc-xddk7 vmauth -f --tail 10
Now let's try to access the protected endpoints; I will use port-forward for that:
```console
kubectl port-forward vmauth-example-ffcc78fcc-xddk7 8427
# in a separate terminal execute:
@ -1263,7 +1263,7 @@ curl localhost:8427/api/v1/groups -u 'simple-user:simple-password'
Check the created secret for application access:
```console
kubectl get secrets vmuser-example
NAME TYPE DATA AGE
vmuser-example Opaque 2 6m33s
@ -1275,7 +1275,7 @@ By default, the operator converts all existing prometheus-operator API objects i
You can control this behaviour by setting an env variable for the operator:
```console
# disable conversion for each object
VM_ENABLEDPROMETHEUSCONVERTER_PODMONITOR=false
VM_ENABLEDPROMETHEUSCONVERTER_SERVICESCRAPE=false
@ -1326,7 +1326,7 @@ spec:
By default the operator doesn't make converted objects disappear after the original ones are deleted. To change this behaviour
configure adding `OwnerReferences` to converted objects:
```console
VM_ENABLEDPROMETHEUSCONVERTEROWNERREFERENCES=true
```
Converted objects will be linked to the original ones and will be deleted by Kubernetes after the original ones are deleted.
@ -1404,7 +1404,7 @@ to the rule config:
Example for Kubernetes Nginx ingress [doc](https://kubernetes.github.io/ingress-nginx/examples/auth/basic/)
```console
# generate creds
htpasswd -c auth foo
@ -11,7 +11,7 @@ sort: 21
Single:
<div class="with-copy" markdown="1">
```console
curl 'http://<victoriametrics-addr>:8428/api/v1/admin/tsdb/delete_series?match[]=vm_http_request_errors_total'
```
@ -20,7 +20,7 @@ curl 'http://<victoriametrics-addr>:8428/api/v1/admin/tsdb/delete_series?match[]
Cluster:
<div class="with-copy" markdown="1">
```console
curl 'http://<vmselect>:8481/delete/0/prometheus/api/v1/admin/tsdb/delete_series?match[]=vm_http_request_errors_total'
```
@ -37,7 +37,7 @@ Additional information:
Single:
<div class="with-copy" markdown="1">
```console
curl 'http://<victoriametrics-addr>:8428/api/v1/export/csv?format=__name__,__value__,__timestamp__:unix_s&match=vm_http_request_errors_total' > filename.txt
```
@ -46,7 +46,7 @@ curl 'http://<victoriametrics-addr>:8428/api/v1/export/csv?format=__name__,__val
Cluster:
<div class="with-copy" markdown="1">
```console
curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/export/csv?format=__name__,__value__,__timestamp__:unix_s&match=vm_http_request_errors_total' > filename.txt
```
@ -64,7 +64,7 @@ Additional information:
Single:
<div class="with-copy" markdown="1">
```console
curl -G 'http://<victoriametrics-addr>:8428/api/v1/export/native?match[]=vm_http_request_errors_total' > filename.txt
```
@ -73,7 +73,7 @@ curl -G 'http://<victoriametrics-addr>:8428/api/v1/export/native?match[]=vm_http
Cluster:
<div class="with-copy" markdown="1">
```console
curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/export/native?match=vm_http_request_errors_total' > filename.txt
```
@ -90,7 +90,7 @@ More information:
Single:
<div class="with-copy" markdown="1">
```console
curl --data-binary "@import.txt" -X POST 'http://destination-victoriametrics:8428/api/v1/import'
```
@ -99,7 +99,7 @@ curl --data-binary "@import.txt" -X POST 'http://destination-victoriametrics:842
Cluster:
<div class="with-copy" markdown="1">
```console
curl --data-binary "@import.txt" -X POST 'http://<vminsert>:8480/insert/0/prometheus/api/v1/import'
```
@ -107,7 +107,7 @@ curl --data-binary "@import.txt" -X POST 'http://<vminsert>:8480/insert/0/promet
<div class="with-copy" markdown="1">
```console
curl -d 'metric_name{foo="bar"} 123' -X POST 'http://<vminsert>:8480/insert/0/prometheus/api/v1/import/prometheus'
```
@ -124,7 +124,7 @@ Additional information:
Single:
<div class="with-copy" markdown="1">
```console
curl --data-binary "@import.txt" -X POST 'http://localhost:8428/api/v1/import/prometheus'
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```
@ -134,7 +134,7 @@ curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:
Cluster:
<div class="with-copy" markdown="1">
```console
curl --data-binary "@import.txt" -X POST 'http://<vminsert>:8480/insert/0/prometheus/api/v1/import/csv'
curl -d "GOOG,1.23,4.56,NYSE" 'http://<vminsert>:8480/insert/0/prometheus/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```
@ -153,7 +153,7 @@ Additional information:
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://localhost:8428/prometheus/api/v1/labels' curl -G 'http://localhost:8428/prometheus/api/v1/labels'
``` ```
@ -162,7 +162,7 @@ curl -G 'http://localhost:8428/prometheus/api/v1/labels'
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/labels' curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/labels'
``` ```
@ -179,7 +179,7 @@ Additional information:
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://localhost:8428/prometheus/api/v1/label/job/values' curl -G 'http://localhost:8428/prometheus/api/v1/label/job/values'
``` ```
@ -188,7 +188,7 @@ curl -G 'http://localhost:8428/prometheus/api/v1/label/job/values'
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/label/job/values' curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/label/job/values'
``` ```
@ -204,7 +204,7 @@ Additional information:
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://localhost:8428/prometheus/api/v1/query?query=vm_http_request_errors_total&time=2021-02-22T19:10:30.781Z' curl -G 'http://localhost:8428/prometheus/api/v1/query?query=vm_http_request_errors_total&time=2021-02-22T19:10:30.781Z'
``` ```
@ -213,7 +213,7 @@ curl -G 'http://localhost:8428/prometheus/api/v1/query?query=vm_http_request_err
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/query?query=vm_http_request_errors_total&time=2021-02-22T19:10:30.781Z' curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/query?query=vm_http_request_errors_total&time=2021-02-22T19:10:30.781Z'
``` ```
@ -231,7 +231,7 @@ Additional information:
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://localhost:8428/prometheus/api/v1/query_range?query=vm_http_request_errors_total&start=2021-02-22T19:10:30.781Z&step=20m' curl -G 'http://localhost:8428/prometheus/api/v1/query_range?query=vm_http_request_errors_total&start=2021-02-22T19:10:30.781Z&step=20m'
``` ```
@ -240,7 +240,7 @@ curl -G 'http://localhost:8428/prometheus/api/v1/query_range?query=vm_http_reque
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/query_range?query=vm_http_request_errors_total&start=2021-02-22T19:10:30.781Z&step=20m' curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/query_range?query=vm_http_request_errors_total&start=2021-02-22T19:10:30.781Z&step=20m'
``` ```
@ -248,11 +248,11 @@ curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/query_range?query=vm_
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/query_range?query=vm_http_request_errors_total&start=-1h&step=10m' curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/query_range?query=vm_http_request_errors_total&start=-1h&step=10m'
``` ```
```bash ```console
curl -G http://<vmselect>:8481/select/0/prometheus/api/v1/query_range --data-urlencode 'query=sum(increase(vm_http_request_errors_total{status=""}[5m])) by (status)' curl -G http://<vmselect>:8481/select/0/prometheus/api/v1/query_range --data-urlencode 'query=sum(increase(vm_http_request_errors_total{status=""}[5m])) by (status)'
``` ```
@ -270,7 +270,7 @@ Additional information:
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://localhost:8428/prometheus/api/v1/series?match[]=vm_http_request_errors_total&start=-1h' curl -G 'http://localhost:8428/prometheus/api/v1/series?match[]=vm_http_request_errors_total&start=-1h'
``` ```
@ -279,7 +279,7 @@ curl -G 'http://localhost:8428/prometheus/api/v1/series?match[]=vm_http_request_
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/series?match[]=vm_http_request_errors_total&start=-1h' curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/series?match[]=vm_http_request_errors_total&start=-1h'
``` ```
@ -296,7 +296,7 @@ Additional information:
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://localhost:8428/prometheus/api/v1/status/tsdb' curl -G 'http://localhost:8428/prometheus/api/v1/status/tsdb'
``` ```
@ -305,7 +305,7 @@ curl -G 'http://localhost:8428/prometheus/api/v1/status/tsdb'
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/status/tsdb' curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/status/tsdb'
``` ```
@ -324,7 +324,7 @@ Should be sent to vmagent/VMsingle
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://<vmsingle>:8428/api/v1/targets' curl -G 'http://<vmsingle>:8428/api/v1/targets'
``` ```
@ -332,7 +332,7 @@ curl -G 'http://<vmsingle>:8428/api/v1/targets'
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://<vmagent>:8429/api/v1/targets' curl -G 'http://<vmagent>:8429/api/v1/targets'
``` ```
@ -349,7 +349,7 @@ Additional information:
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
echo ' echo '
{ {
"series": [ "series": [
@ -376,7 +376,7 @@ echo '
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
echo ' echo '
{ {
"series": [ "series": [
@ -411,7 +411,7 @@ Additional information:
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://localhost:8428/federate?match[]=vm_http_request_errors_total&start=2021-02-22T19:10:30.781Z' curl -G 'http://localhost:8428/federate?match[]=vm_http_request_errors_total&start=2021-02-22T19:10:30.781Z'
``` ```
@ -420,7 +420,7 @@ curl -G 'http://localhost:8428/federate?match[]=vm_http_request_errors_total&sta
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://<vmselect>:8481/select/0/prometheus/federate?match[]=vm_http_request_errors_total&start=2021-02-22T19:10:30.781Z' curl -G 'http://<vmselect>:8481/select/0/prometheus/federate?match[]=vm_http_request_errors_total&start=2021-02-22T19:10:30.781Z'
``` ```
@ -438,7 +438,7 @@ Additional information:
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://localhost:8428/graphite/metrics/find?query=vm_http_request_errors_total' curl -G 'http://localhost:8428/graphite/metrics/find?query=vm_http_request_errors_total'
``` ```
@ -447,7 +447,7 @@ curl -G 'http://localhost:8428/graphite/metrics/find?query=vm_http_request_error
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -G 'http://<vmselect>:8481/select/0/graphite/metrics/find?query=vm_http_request_errors_total' curl -G 'http://<vmselect>:8481/select/0/graphite/metrics/find?query=vm_http_request_errors_total'
``` ```
@ -466,7 +466,7 @@ Additional information:
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write' curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
``` ```
@ -475,7 +475,7 @@ curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'ht
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://<vminsert>:8480/insert/0/influx/write' curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://<vminsert>:8480/insert/0/influx/write'
``` ```
@ -495,7 +495,7 @@ Turned off by default. Enable OpenTSDB receiver in VictoriaMetrics by setting `-
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242 echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
``` ```
@ -504,7 +504,7 @@ echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2 VictoriaMetrics_AccountID=0" | nc -N http://<vminsert> 4242 echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2 VictoriaMetrics_AccountID=0" | nc -N http://<vminsert> 4242
``` ```
@ -515,7 +515,7 @@ Enable HTTP server for OpenTSDB /api/put requests by setting `-opentsdbHTTPListe
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
``` ```
@ -524,7 +524,7 @@ curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"m
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]'
'http://<vminsert>:8480/insert/42/opentsdb/api/put' 'http://<vminsert>:8480/insert/42/opentsdb/api/put'
``` ```
@ -543,7 +543,7 @@ Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` com
Single: Single:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" |
nc -N localhost 2003 nc -N localhost 2003
``` ```
@ -553,7 +553,7 @@ echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" |
Cluster: Cluster:
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
echo "foo.bar.baz;tag1=value1;tag2=value2;VictoriaMetrics_AccountID=42 123 `date +%s`" | nc -N http://<vminsert> 2003 echo "foo.bar.baz;tag1=value1;tag2=value2;VictoriaMetrics_AccountID=42 123 `date +%s`" | nc -N http://<vminsert> 2003
``` ```
@ -77,7 +77,7 @@ Pass `-help` to `vmagent` in order to see [the full list of supported command-li
* Sending `SIGHUP` signal to `vmagent` process: * Sending `SIGHUP` signal to `vmagent` process:
```bash ```console
kill -SIGHUP `pidof vmagent` kill -SIGHUP `pidof vmagent`
``` ```
@ -256,12 +256,13 @@ Labels can be added to metrics by the following mechanisms:
VictoriaMetrics components (including `vmagent`) support Prometheus-compatible relabeling. VictoriaMetrics components (including `vmagent`) support Prometheus-compatible relabeling.
They provide the following additional actions on top of actions from the [Prometheus relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config): They provide the following additional actions on top of actions from the [Prometheus relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config):
* `replace_all`: replaces all of the occurrences of `regex` in the values of `source_labels` with the `replacement` and stores the results in the `target_label`. * `replace_all`: replaces all of the occurrences of `regex` in the values of `source_labels` with the `replacement` and stores the results in the `target_label`
* `labelmap_all`: replaces all of the occurrences of `regex` in all the label names with the `replacement`. * `labelmap_all`: replaces all of the occurrences of `regex` in all the label names with the `replacement`
* `keep_if_equal`: keeps the entry if all the label values from `source_labels` are equal. * `keep_if_equal`: keeps the entry if all the label values from `source_labels` are equal
* `drop_if_equal`: drops the entry if all the label values from `source_labels` are equal. * `drop_if_equal`: drops the entry if all the label values from `source_labels` are equal
* `keep_metrics`: keeps all the metrics with names matching the given `regex`. * `keep_metrics`: keeps all the metrics with names matching the given `regex`
* `drop_metrics`: drops all the metrics with names matching the given `regex`. * `drop_metrics`: drops all the metrics with names matching the given `regex`
* `graphite`: applies Graphite-style relabeling to metric name. See [these docs](#graphite-relabeling)
The `regex` value can be split into multiple lines for improved readability and maintainability. These lines are automatically joined with `|` char when parsed. For example, the following configs are equivalent: The `regex` value can be split into multiple lines for improved readability and maintainability. These lines are automatically joined with `|` char when parsed. For example, the following configs are equivalent:
@ -309,6 +310,38 @@ You can read more about relabeling in the following articles:
* [Extracting labels from legacy metric names](https://www.robustperception.io/extracting-labels-from-legacy-metric-names) * [Extracting labels from legacy metric names](https://www.robustperception.io/extracting-labels-from-legacy-metric-names)
* [relabel_configs vs metric_relabel_configs](https://www.robustperception.io/relabel_configs-vs-metric_relabel_configs) * [relabel_configs vs metric_relabel_configs](https://www.robustperception.io/relabel_configs-vs-metric_relabel_configs)
## Graphite relabeling
VictoriaMetrics components support `action: graphite` relabeling rules, which allow extracting various parts of Graphite-style metric names
into the configured labels with a syntax similar to [Glob matching in statsd_exporter](https://github.com/prometheus/statsd_exporter#glob-matching).
Note that the `name` field must be substituted with an explicit `__name__` option under the `labels` section.
If the `__name__` option is missing under the `labels` section, then the original Graphite-style metric name is left unchanged.
For example, the following relabeling rule generates `requests_total{job="app42",instance="host123:8080"}` metric
from "app42.host123.requests.total" Graphite-style metric:
```yaml
- action: graphite
match: "*.*.*.total"
labels:
__name__: "${3}_total"
job: "$1"
instance: "${2}:8080"
```
Important notes about `action: graphite` relabeling rules:
- The relabeling rule is applied only to metrics that match the given `match` expression. Other metrics remain unchanged.
- The `*` matches the maximum possible number of chars until the next dot or until the next part of the `match` expression, whichever comes first.
It may match zero chars if the next char is `.`.
For example, `match: "app*foo.bar"` matches `app42foo.bar`, and `42` becomes available in the `labels` section via the `$1` capture group.
- The `$0` capture group matches the original metric name.
- The relabeling rules are executed in the order defined in the original config.
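As a small sketch of the note above about zero-or-more matching (the `app_id` label name here is illustrative, not part of the original example):

```yaml
# Sketch only: `app_id` is an assumed label name.
# For the metric "app42foo.bar" the `*` captures "42" into $1.
- action: graphite
  match: "app*foo.bar"
  labels:
    app_id: "$1"
```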
The `action: graphite` relabeling rules are easier to write and maintain than `action: replace` rules for extracting labels from Graphite-style metric names.
Additionally, `action: graphite` relabeling rules usually work much faster than the equivalent `action: replace` rules.
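For comparison, a rough `action: replace` equivalent of the earlier example might look like the following (a sketch, untested; the anchored regex is an assumption about the metric name shape):

```yaml
# Each extracted label needs its own rule with a repeated regex,
# which is why `action: graphite` is easier to maintain.
- source_labels: [__name__]
  regex: '([^.]*)\.([^.]*)\.([^.]*)\.total'
  target_label: job
  replacement: '$1'
- source_labels: [__name__]
  regex: '([^.]*)\.([^.]*)\.([^.]*)\.total'
  target_label: instance
  replacement: '${2}:8080'
- source_labels: [__name__]
  regex: '([^.]*)\.([^.]*)\.([^.]*)\.total'
  target_label: __name__
  replacement: '${3}_total'
```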
## Prometheus staleness markers ## Prometheus staleness markers
`vmagent` sends [Prometheus staleness markers](https://www.robustperception.io/staleness-and-promql) to `-remoteWrite.url` in the following cases: `vmagent` sends [Prometheus staleness markers](https://www.robustperception.io/staleness-and-promql) to `-remoteWrite.url` in the following cases:
@ -564,7 +597,7 @@ Every Kafka message may contain multiple lines in `influx`, `prometheus`, `graph
The following command starts `vmagent`, which reads metrics in InfluxDB line protocol format from Kafka broker at `localhost:9092` from the topic `metrics-by-telegraf` and sends them to remote storage at `http://localhost:8428/api/v1/write`: The following command starts `vmagent`, which reads metrics in InfluxDB line protocol format from Kafka broker at `localhost:9092` from the topic `metrics-by-telegraf` and sends them to remote storage at `http://localhost:8428/api/v1/write`:
```bash ```console
./bin/vmagent -remoteWrite.url=http://localhost:8428/api/v1/write \ ./bin/vmagent -remoteWrite.url=http://localhost:8428/api/v1/write \
-kafka.consumer.topic.brokers=localhost:9092 \ -kafka.consumer.topic.brokers=localhost:9092 \
-kafka.consumer.topic.format=influx \ -kafka.consumer.topic.format=influx \
@ -626,13 +659,13 @@ Two types of auth are supported:
* sasl with username and password: * sasl with username and password:
```bash ```console
./bin/vmagent -remoteWrite.url=kafka://localhost:9092/?topic=prom-rw&security.protocol=SASL_SSL&sasl.mechanisms=PLAIN -remoteWrite.basicAuth.username=user -remoteWrite.basicAuth.password=password ./bin/vmagent -remoteWrite.url=kafka://localhost:9092/?topic=prom-rw&security.protocol=SASL_SSL&sasl.mechanisms=PLAIN -remoteWrite.basicAuth.username=user -remoteWrite.basicAuth.password=password
``` ```
* tls certificates: * tls certificates:
```bash ```console
./bin/vmagent -remoteWrite.url=kafka://localhost:9092/?topic=prom-rw&security.protocol=SSL -remoteWrite.tlsCAFile=/opt/ca.pem -remoteWrite.tlsCertFile=/opt/cert.pem -remoteWrite.tlsKeyFile=/opt/key.pem ./bin/vmagent -remoteWrite.url=kafka://localhost:9092/?topic=prom-rw&security.protocol=SSL -remoteWrite.tlsCAFile=/opt/ca.pem -remoteWrite.tlsCertFile=/opt/cert.pem -remoteWrite.tlsKeyFile=/opt/key.pem
``` ```
@ -661,7 +694,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmagent`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image: by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```bash ```console
ROOT_IMAGE=scratch make package-vmagent ROOT_IMAGE=scratch make package-vmagent
``` ```
@ -689,7 +722,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof
``` ```
@ -699,7 +732,7 @@ curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl http://0.0.0.0:8429/debug/pprof/profile > cpu.pprof curl http://0.0.0.0:8429/debug/pprof/profile > cpu.pprof
``` ```
@ -40,7 +40,7 @@ implementation and aims to be compatible with its syntax.
To build `vmalert` from sources: To build `vmalert` from sources:
```bash ```console
git clone https://github.com/VictoriaMetrics/VictoriaMetrics git clone https://github.com/VictoriaMetrics/VictoriaMetrics
cd VictoriaMetrics cd VictoriaMetrics
make vmalert make vmalert
@ -56,12 +56,13 @@ To start using `vmalert` you will need the following things:
aggregating alerts, and sending notifications. Please note, notifier address also supports Consul and DNS Service Discovery via aggregating alerts, and sending notifications. Please note, notifier address also supports Consul and DNS Service Discovery via
[config file](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmalert/notifier/config.go). [config file](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmalert/notifier/config.go).
* remote write address [optional] - [remote write](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations) * remote write address [optional] - [remote write](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations)
compatible storage to persist rules and alerts state info; compatible storage to persist rules and alerts state info. To persist results to multiple destinations, use vmagent
configured with multiple remote writes as a proxy;
* remote read address [optional] - MetricsQL compatible datasource to restore alerts state from. * remote read address [optional] - MetricsQL compatible datasource to restore alerts state from.
Then configure `vmalert` accordingly: Then configure `vmalert` accordingly:
```bash ```console
./bin/vmalert -rule=alert.rules \ # Path to the file with rules configuration. Supports wildcard ./bin/vmalert -rule=alert.rules \ # Path to the file with rules configuration. Supports wildcard
-datasource.url=http://localhost:8428 \ # PromQL compatible datasource -datasource.url=http://localhost:8428 \ # PromQL compatible datasource
-notifier.url=http://localhost:9093 \ # AlertManager URL (required if alerting rules are used) -notifier.url=http://localhost:9093 \ # AlertManager URL (required if alerting rules are used)
@ -428,6 +429,21 @@ Flags `-remoteRead.url` and `-notifier.url` are omitted since we assume only rec
See also [downsampling docs](https://docs.victoriametrics.com/#downsampling). See also [downsampling docs](https://docs.victoriametrics.com/#downsampling).
#### Multiple remote writes
To persist recording or alerting rule results, `vmalert` requires the `-remoteWrite.url` flag to be set.
But this flag supports only one destination. To persist rule results to multiple destinations,
we recommend using [vmagent](https://docs.victoriametrics.com/vmagent.html) as a fan-out proxy:
<img alt="vmalert multiple remote write destinations" src="vmalert_multiple_rw.png">
In this topology, `vmalert` is configured to persist rule results to `vmagent`, and `vmagent`
is configured to fan out the received data to two or more destinations.
Using `vmagent` as a proxy provides additional benefits such as
[data persisting when storage is unreachable](https://docs.victoriametrics.com/vmagent.html#replication-and-high-availability),
or time series modification via [relabeling](https://docs.victoriametrics.com/vmagent.html#relabeling).
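A minimal sketch of this topology (the addresses are illustrative; `vmagent` accepts the `-remoteWrite.url` flag multiple times, once per destination):

```console
# vmalert persists rule results to a local vmagent
./bin/vmalert -rule=alert.rules \
  -datasource.url=http://localhost:8428 \
  -remoteWrite.url=http://localhost:8429/api/v1/write

# vmagent fans out everything it receives to two storages
./bin/vmagent -remoteWrite.url=http://storage-a:8428/api/v1/write \
  -remoteWrite.url=http://storage-b:8428/api/v1/write
```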
### Web ### Web
`vmalert` runs a web-server (`-httpListenAddr`) for serving metrics and alerts endpoints: `vmalert` runs a web-server (`-httpListenAddr`) for serving metrics and alerts endpoints:
@ -1026,7 +1042,7 @@ It is recommended using
You can build `vmalert` docker image from source and push it to your own docker repository. You can build `vmalert` docker image from source and push it to your own docker repository.
Run the following commands from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics): Run the following commands from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics):
```bash ```console
make package-vmalert make package-vmalert
docker tag victoria-metrics/vmalert:version my-repo:my-version-name docker tag victoria-metrics/vmalert:version my-repo:my-version-name
docker push my-repo:my-version-name docker push my-repo:my-version-name
Binary file not shown (image, 80 KiB).
@ -14,7 +14,7 @@ The `-auth.config` can point to either local file or to http url.
Just download `vmutils-*` archive from [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases), unpack it Just download `vmutils-*` archive from [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases), unpack it
and pass the following flag to `vmauth` binary in order to start authorizing and routing requests: and pass the following flag to `vmauth` binary in order to start authorizing and routing requests:
```bash ```console
/path/to/vmauth -auth.config=/path/to/auth/config.yml /path/to/vmauth -auth.config=/path/to/auth/config.yml
``` ```
@ -133,7 +133,7 @@ It is expected that all the backend services protected by `vmauth` are located i
Do not transfer Basic Auth headers in plaintext over untrusted networks. Enable https. This can be done by passing the following `-tls*` command-line flags to `vmauth`: Do not transfer Basic Auth headers in plaintext over untrusted networks. Enable https. This can be done by passing the following `-tls*` command-line flags to `vmauth`:
```bash ```console
-tls -tls
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
-tlsCertFile string -tlsCertFile string
@ -185,7 +185,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmauth`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image: by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```bash ```console
ROOT_IMAGE=scratch make package-vmauth ROOT_IMAGE=scratch make package-vmauth
``` ```
@ -197,7 +197,7 @@ ROOT_IMAGE=scratch make package-vmauth
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof
``` ```
@ -207,7 +207,7 @@ curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof
<div class="with-copy" markdown="1"> <div class="with-copy" markdown="1">
```bash ```console
curl http://0.0.0.0:8427/debug/pprof/profile > cpu.pprof curl http://0.0.0.0:8427/debug/pprof/profile > cpu.pprof
``` ```
@ -221,7 +221,7 @@ The collected profiles may be analyzed with [go tool pprof](https://github.com/g
Pass `-help` command-line arg to `vmauth` in order to see all the configuration options: Pass `-help` command-line arg to `vmauth` in order to see all the configuration options:
```bash ```console
./vmauth -help ./vmauth -help
vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics. vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics.
@ -32,7 +32,7 @@ creation of hourly, daily, weekly and monthly backups.
Regular backup can be performed with the following command: Regular backup can be performed with the following command:
```bash ```console
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/new/backup> vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/new/backup>
``` ```
@ -47,7 +47,7 @@ vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=h
If the destination GCS bucket already contains the previous backup at `-origin` path, then new backup can be sped up If the destination GCS bucket already contains the previous backup at `-origin` path, then new backup can be sped up
with the following command: with the following command:
```bash ```console
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/new/backup> -origin=gs://<bucket>/<path/to/existing/backup> ./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/new/backup> -origin=gs://<bucket>/<path/to/existing/backup>
``` ```
@ -58,7 +58,7 @@ It saves time and network bandwidth costs by performing server-side copy for the
Incremental backups are performed if `-dst` points to an already existing backup. In this case only new data is uploaded to remote storage. Incremental backups are performed if `-dst` points to an already existing backup. In this case only new data is uploaded to remote storage.
It saves time and network bandwidth costs when working with big backups: It saves time and network bandwidth costs when working with big backups:
```bash ```console
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/existing/backup> ./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/existing/backup>
``` ```
@ -68,7 +68,7 @@ Smart backups mean storing full daily backups into `YYYYMMDD` folders and creati
* Run the following command every hour: * Run the following command every hour:
```bash ```console
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/latest ./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/latest
``` ```
@ -77,7 +77,7 @@ The command will upload only changed data to `gs://<bucket>/latest`.
* Run the following command once a day: * Run the following command once a day:
```bash ```console
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<YYYYMMDD> -origin=gs://<bucket>/latest vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<YYYYMMDD> -origin=gs://<bucket>/latest
``` ```
@ -133,7 +133,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
for s3 (aws, minio or other s3 compatible storages): for s3 (aws, minio or other s3 compatible storages):
```bash ```console
[default] [default]
aws_access_key_id=theaccesskey aws_access_key_id=theaccesskey
aws_secret_access_key=thesecretaccesskeyvalue aws_secret_access_key=thesecretaccesskeyvalue
@ -159,7 +159,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
* Usage with s3 custom url endpoint. It is possible to use `vmbackup` with s3 compatible storages like minio, cloudian, etc. * Usage with s3 custom url endpoint. It is possible to use `vmbackup` with s3 compatible storages like minio, cloudian, etc.
You have to add a custom url endpoint via flag: You have to add a custom url endpoint via flag:
```bash ```console
# for minio # for minio
-customS3Endpoint=http://localhost:9000 -customS3Endpoint=http://localhost:9000
@ -169,7 +169,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
* Run `vmbackup -help` in order to see all the available options: * Run `vmbackup -help` in order to see all the available options:
```bash ```console
-concurrency int -concurrency int
The number of concurrent workers. Higher concurrency may reduce backup duration (default 10) The number of concurrent workers. Higher concurrency may reduce backup duration (default 10)
-configFilePath string -configFilePath string
@ -284,6 +284,6 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmbackup`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image: by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```bash ```console
ROOT_IMAGE=scratch make package-vmbackup ROOT_IMAGE=scratch make package-vmbackup
``` ```


@ -19,7 +19,7 @@ Features:
To see the full list of supported modes To see the full list of supported modes
run the following command: run the following command:
```bash ```console
$ ./vmctl --help $ ./vmctl --help
NAME: NAME:
vmctl - VictoriaMetrics command-line tool vmctl - VictoriaMetrics command-line tool
@ -531,7 +531,7 @@ and specify `accountID` param.
In this mode, `vmctl` allows verifying correctness and integrity of data exported via [native format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-export-data-in-native-format) from VictoriaMetrics. In this mode, `vmctl` allows verifying correctness and integrity of data exported via [native format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-export-data-in-native-format) from VictoriaMetrics.
You can verify exported data on disk before uploading it with the `vmctl verify-block` command: You can verify exported data on disk before uploading it with the `vmctl verify-block` command:
```bash ```console
# export blocks from VictoriaMetrics # export blocks from VictoriaMetrics
curl localhost:8428/api/v1/export/native -g -d 'match[]={__name__!=""}' -o exported_data_block curl localhost:8428/api/v1/export/native -g -d 'match[]={__name__!=""}' -o exported_data_block
# verify block content # verify block content
@ -654,7 +654,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmctl`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image: by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```bash ```console
ROOT_IMAGE=scratch make package-vmctl ROOT_IMAGE=scratch make package-vmctl
``` ```


@ -58,7 +58,7 @@ Where:
Start the single version of VictoriaMetrics Start the single version of VictoriaMetrics
```bash ```console
# single # single
# start node # start node
./bin/victoria-metrics --selfScrapeInterval=10s ./bin/victoria-metrics --selfScrapeInterval=10s
@ -66,19 +66,19 @@ Start the single version of VictoriaMetrics
Start vmgateway Start vmgateway
```bash ```console
./bin/vmgateway -eula -enable.auth -read.url http://localhost:8428 --write.url http://localhost:8428 ./bin/vmgateway -eula -enable.auth -read.url http://localhost:8428 --write.url http://localhost:8428
``` ```
Retrieve data from the database Retrieve data from the database
```bash ```console
curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2bV9hY2Nlc3MiOnsidGVuYW50X2lkIjp7fSwicm9sZSI6MX0sImV4cCI6MTkzOTM0NjIxMH0.5WUxEfdcV9hKo4CtQdtuZYOGpGXWwaqM9VuVivMMrVg' curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2bV9hY2Nlc3MiOnsidGVuYW50X2lkIjp7fSwicm9sZSI6MX0sImV4cCI6MTkzOTM0NjIxMH0.5WUxEfdcV9hKo4CtQdtuZYOGpGXWwaqM9VuVivMMrVg'
``` ```
A request with an incorrect token or without any token will be rejected: A request with an incorrect token or without any token will be rejected:
```bash ```console
curl 'http://localhost:8431/api/v1/series/count' curl 'http://localhost:8431/api/v1/series/count'
curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer incorrect-token' curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer incorrect-token'
@ -128,7 +128,7 @@ limits:
cluster version of VictoriaMetrics is required for rate limiting. cluster version of VictoriaMetrics is required for rate limiting.
```bash ```console
# start datasource for cluster metrics # start datasource for cluster metrics
cat << EOF > cluster.yaml cat << EOF > cluster.yaml


@ -14,7 +14,7 @@ when restarting `vmrestore` with the same args.
VictoriaMetrics must be stopped during the restore process. VictoriaMetrics must be stopped during the restore process.
```bash ```console
vmrestore -src=gs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore> vmrestore -src=gs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore>
``` ```
@ -40,7 +40,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
for s3 (aws, minio or other s3 compatible storages): for s3 (aws, minio or other s3 compatible storages):
```bash ```console
[default] [default]
aws_access_key_id=theaccesskey aws_access_key_id=theaccesskey
aws_secret_access_key=thesecretaccesskeyvalue aws_secret_access_key=thesecretaccesskeyvalue
@ -66,7 +66,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
* Usage with s3 custom url endpoint. It is possible to use `vmrestore` with s3 compatible storages like minio, cloudian, etc. * Usage with s3 custom url endpoint. It is possible to use `vmrestore` with s3 compatible storages like minio, cloudian, etc.
You have to add a custom url endpoint via flag: You have to add a custom url endpoint via flag:
```bash ```console
# for minio: # for minio:
-customS3Endpoint=http://localhost:9000 -customS3Endpoint=http://localhost:9000
@ -76,7 +76,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
* Run `vmrestore -help` in order to see all the available options: * Run `vmrestore -help` in order to see all the available options:
```bash ```console
-concurrency int -concurrency int
The number of concurrent workers. Higher concurrency may reduce restore duration (default 10) The number of concurrent workers. Higher concurrency may reduce restore duration (default 10)
-configFilePath string -configFilePath string
@ -184,6 +184,6 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmrestore`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image: by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```bash ```console
ROOT_IMAGE=scratch make package-vmrestore ROOT_IMAGE=scratch make package-vmrestore
``` ```


@ -23,6 +23,22 @@ type RelabelConfig struct {
Replacement *string `yaml:"replacement,omitempty"` Replacement *string `yaml:"replacement,omitempty"`
Action string `yaml:"action,omitempty"` Action string `yaml:"action,omitempty"`
If *IfExpression `yaml:"if,omitempty"` If *IfExpression `yaml:"if,omitempty"`
// Match is used together with Labels for `action: graphite`. For example:
// - action: graphite
// match: 'foo.*.*.bar'
// labels:
// job: '$1'
// instance: '${2}:8080'
Match string `yaml:"match,omitempty"`
// Labels is used together with Match for `action: graphite`. For example:
// - action: graphite
// match: 'foo.*.*.bar'
// labels:
// job: '$1'
// instance: '${2}:8080'
Labels map[string]string `yaml:"labels,omitempty"`
} }
// MultiLineRegex contains a regex, which can be split into multiple lines. // MultiLineRegex contains a regex, which can be split into multiple lines.
@ -114,12 +130,12 @@ func (pcs *ParsedConfigs) String() string {
if pcs == nil { if pcs == nil {
return "" return ""
} }
var sb strings.Builder var a []string
for _, prc := range pcs.prcs { for _, prc := range pcs.prcs {
fmt.Fprintf(&sb, "%s,", prc.String()) s := "[" + prc.String() + "]"
a = append(a, s)
} }
fmt.Fprintf(&sb, "relabelDebug=%v", pcs.relabelDebug) return fmt.Sprintf("%s, relabelDebug=%v", strings.Join(a, ","), pcs.relabelDebug)
return sb.String()
} }
// LoadRelabelConfigs loads relabel configs from the given path. // LoadRelabelConfigs loads relabel configs from the given path.
@ -200,11 +216,38 @@ func parseRelabelConfig(rc *RelabelConfig) (*parsedRelabelConfig, error) {
if rc.Replacement != nil { if rc.Replacement != nil {
replacement = *rc.Replacement replacement = *rc.Replacement
} }
var graphiteMatchTemplate *graphiteMatchTemplate
if rc.Match != "" {
graphiteMatchTemplate = newGraphiteMatchTemplate(rc.Match)
}
var graphiteLabelRules []graphiteLabelRule
if rc.Labels != nil {
graphiteLabelRules = newGraphiteLabelRules(rc.Labels)
}
action := rc.Action action := rc.Action
if action == "" { if action == "" {
action = "replace" action = "replace"
} }
switch action { switch action {
case "graphite":
if graphiteMatchTemplate == nil {
return nil, fmt.Errorf("missing `match` for `action=graphite`; see https://docs.victoriametrics.com/vmagent.html#graphite-relabeling")
}
if len(graphiteLabelRules) == 0 {
return nil, fmt.Errorf("missing `labels` for `action=graphite`; see https://docs.victoriametrics.com/vmagent.html#graphite-relabeling")
}
if len(rc.SourceLabels) > 0 {
return nil, fmt.Errorf("`source_labels` cannot be used with `action=graphite`; see https://docs.victoriametrics.com/vmagent.html#graphite-relabeling")
}
if rc.TargetLabel != "" {
return nil, fmt.Errorf("`target_label` cannot be used with `action=graphite`; see https://docs.victoriametrics.com/vmagent.html#graphite-relabeling")
}
if rc.Replacement != nil {
return nil, fmt.Errorf("`replacement` cannot be used with `action=graphite`; see https://docs.victoriametrics.com/vmagent.html#graphite-relabeling")
}
if rc.Regex != nil {
return nil, fmt.Errorf("`regex` cannot be used with `action=graphite`; see https://docs.victoriametrics.com/vmagent.html#graphite-relabeling")
}
case "replace": case "replace":
if targetLabel == "" { if targetLabel == "" {
return nil, fmt.Errorf("missing `target_label` for `action=replace`") return nil, fmt.Errorf("missing `target_label` for `action=replace`")
@ -274,6 +317,14 @@ func parseRelabelConfig(rc *RelabelConfig) (*parsedRelabelConfig, error) {
default: default:
return nil, fmt.Errorf("unknown `action` %q", action) return nil, fmt.Errorf("unknown `action` %q", action)
} }
if action != "graphite" {
if graphiteMatchTemplate != nil {
return nil, fmt.Errorf("`match` config cannot be applied to `action=%s`; it is applied only to `action=graphite`", action)
}
if len(graphiteLabelRules) > 0 {
return nil, fmt.Errorf("`labels` config cannot be applied to `action=%s`; it is applied only to `action=graphite`", action)
}
}
return &parsedRelabelConfig{ return &parsedRelabelConfig{
SourceLabels: sourceLabels, SourceLabels: sourceLabels,
Separator: separator, Separator: separator,
@ -284,6 +335,9 @@ func parseRelabelConfig(rc *RelabelConfig) (*parsedRelabelConfig, error) {
Action: action, Action: action,
If: rc.If, If: rc.If,
graphiteMatchTemplate: graphiteMatchTemplate,
graphiteLabelRules: graphiteLabelRules,
regexOriginal: regexOriginalCompiled, regexOriginal: regexOriginalCompiled,
hasCaptureGroupInTargetLabel: strings.Contains(targetLabel, "$"), hasCaptureGroupInTargetLabel: strings.Contains(targetLabel, "$"),
hasCaptureGroupInReplacement: strings.Contains(replacement, "$"), hasCaptureGroupInReplacement: strings.Contains(replacement, "$"),


@ -45,6 +45,13 @@ func TestRelabelConfigMarshalUnmarshal(t *testing.T) {
- null - null
- nan - nan
`, "- regex:\n - \"-1.23\"\n - \"false\"\n - \"null\"\n - nan\n") `, "- regex:\n - \"-1.23\"\n - \"false\"\n - \"null\"\n - nan\n")
f(`
- action: graphite
match: 'foo.*.*.aaa'
labels:
instance: '$1-abc'
job: '${2}'
`, "- action: graphite\n match: foo.*.*.aaa\n labels:\n instance: $1-abc\n job: ${2}\n")
} }
func TestLoadRelabelConfigsSuccess(t *testing.T) { func TestLoadRelabelConfigsSuccess(t *testing.T) {
@ -53,8 +60,9 @@ func TestLoadRelabelConfigsSuccess(t *testing.T) {
if err != nil { if err != nil {
t.Fatalf("cannot load relabel configs from %q: %s", path, err) t.Fatalf("cannot load relabel configs from %q: %s", path, err)
} }
if n := pcs.Len(); n != 14 { nExpected := 16
t.Fatalf("unexpected number of relabel configs loaded from %q; got %d; want %d", path, n, 14) if n := pcs.Len(); n != nExpected {
t.Fatalf("unexpected number of relabel configs loaded from %q; got %d; want %d", path, n, nExpected)
} }
} }
@ -77,6 +85,51 @@ func TestLoadRelabelConfigsFailure(t *testing.T) {
}) })
} }
func TestParsedConfigsString(t *testing.T) {
f := func(rcs []RelabelConfig, sExpected string) {
t.Helper()
pcs, err := ParseRelabelConfigs(rcs, false)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
s := pcs.String()
if s != sExpected {
t.Fatalf("unexpected string representation for ParsedConfigs;\ngot\n%s\nwant\n%s", s, sExpected)
}
}
f([]RelabelConfig{
{
TargetLabel: "foo",
SourceLabels: []string{"aaa"},
},
}, "[SourceLabels=[aaa], Separator=;, TargetLabel=foo, Regex=^(.*)$, Modulus=0, Replacement=$1, Action=replace, If=, "+
"graphiteMatchTemplate=<nil>, graphiteLabelRules=[]], relabelDebug=false")
var ie IfExpression
if err := ie.Parse("{foo=~'bar'}"); err != nil {
t.Fatalf("unexpected error when parsing if expression: %s", err)
}
f([]RelabelConfig{
{
Action: "graphite",
Match: "foo.*.bar",
Labels: map[string]string{
"job": "$1-zz",
},
If: &ie,
},
}, "[SourceLabels=[], Separator=;, TargetLabel=, Regex=^(.*)$, Modulus=0, Replacement=$1, Action=graphite, If={foo=~'bar'}, "+
"graphiteMatchTemplate=foo.*.bar, graphiteLabelRules=[replaceTemplate=$1-zz, targetLabel=job]], relabelDebug=false")
f([]RelabelConfig{
{
Action: "replace",
SourceLabels: []string{"foo", "bar"},
TargetLabel: "x",
If: &ie,
},
}, "[SourceLabels=[foo bar], Separator=;, TargetLabel=x, Regex=^(.*)$, Modulus=0, Replacement=$1, Action=replace, If={foo=~'bar'}, "+
"graphiteMatchTemplate=<nil>, graphiteLabelRules=[]], relabelDebug=false")
}
func TestParseRelabelConfigsSuccess(t *testing.T) { func TestParseRelabelConfigsSuccess(t *testing.T) {
f := func(rcs []RelabelConfig, pcsExpected *ParsedConfigs) { f := func(rcs []RelabelConfig, pcsExpected *ParsedConfigs) {
t.Helper() t.Helper()
@ -271,4 +324,110 @@ func TestParseRelabelConfigsFailure(t *testing.T) {
}, },
}) })
}) })
t.Run("uppercase-missing-sourceLabels", func(t *testing.T) {
f([]RelabelConfig{
{
Action: "uppercase",
TargetLabel: "foobar",
},
})
})
t.Run("lowercase-missing-targetLabel", func(t *testing.T) {
f([]RelabelConfig{
{
Action: "lowercase",
SourceLabels: []string{"foobar"},
},
})
})
t.Run("graphite-missing-match", func(t *testing.T) {
f([]RelabelConfig{
{
Action: "graphite",
Labels: map[string]string{
"foo": "bar",
},
},
})
})
t.Run("graphite-missing-labels", func(t *testing.T) {
f([]RelabelConfig{
{
Action: "graphite",
Match: "foo.*.bar",
},
})
})
t.Run("graphite-superfluous-sourceLabels", func(t *testing.T) {
f([]RelabelConfig{
{
Action: "graphite",
Match: "foo.*.bar",
Labels: map[string]string{
"foo": "bar",
},
SourceLabels: []string{"foo"},
},
})
})
t.Run("graphite-superfluous-targetLabel", func(t *testing.T) {
f([]RelabelConfig{
{
Action: "graphite",
Match: "foo.*.bar",
Labels: map[string]string{
"foo": "bar",
},
TargetLabel: "foo",
},
})
})
replacement := "foo"
t.Run("graphite-superfluous-replacement", func(t *testing.T) {
f([]RelabelConfig{
{
Action: "graphite",
Match: "foo.*.bar",
Labels: map[string]string{
"foo": "bar",
},
Replacement: &replacement,
},
})
})
var re MultiLineRegex
t.Run("graphite-superfluous-regex", func(t *testing.T) {
f([]RelabelConfig{
{
Action: "graphite",
Match: "foo.*.bar",
Labels: map[string]string{
"foo": "bar",
},
Regex: &re,
},
})
})
t.Run("non-graphite-superfluous-match", func(t *testing.T) {
f([]RelabelConfig{
{
Action: "uppercase",
SourceLabels: []string{"foo"},
TargetLabel: "foo",
Match: "aaa",
},
})
})
t.Run("non-graphite-superfluous-labels", func(t *testing.T) {
f([]RelabelConfig{
{
Action: "uppercase",
SourceLabels: []string{"foo"},
TargetLabel: "foo",
Labels: map[string]string{
"foo": "Bar",
},
},
})
})
} }

lib/promrelabel/graphite.go Normal file

@ -0,0 +1,212 @@
package promrelabel
import (
"fmt"
"strconv"
"strings"
"sync"
)
var graphiteMatchesPool = &sync.Pool{
New: func() interface{} {
return &graphiteMatches{}
},
}
type graphiteMatches struct {
a []string
}
type graphiteMatchTemplate struct {
sOrig string
parts []string
}
func (gmt *graphiteMatchTemplate) String() string {
return gmt.sOrig
}
type graphiteLabelRule struct {
grt *graphiteReplaceTemplate
targetLabel string
}
func (glr graphiteLabelRule) String() string {
return fmt.Sprintf("replaceTemplate=%s, targetLabel=%s", glr.grt, glr.targetLabel)
}
func newGraphiteLabelRules(m map[string]string) []graphiteLabelRule {
a := make([]graphiteLabelRule, 0, len(m))
for labelName, replaceTemplate := range m {
a = append(a, graphiteLabelRule{
grt: newGraphiteReplaceTemplate(replaceTemplate),
targetLabel: labelName,
})
}
return a
}
func newGraphiteMatchTemplate(s string) *graphiteMatchTemplate {
sOrig := s
var parts []string
for {
n := strings.IndexByte(s, '*')
if n < 0 {
parts = appendGraphiteMatchTemplateParts(parts, s)
break
}
parts = appendGraphiteMatchTemplateParts(parts, s[:n])
parts = appendGraphiteMatchTemplateParts(parts, "*")
s = s[n+1:]
}
return &graphiteMatchTemplate{
sOrig: sOrig,
parts: parts,
}
}
func appendGraphiteMatchTemplateParts(dst []string, s string) []string {
if len(s) == 0 {
// Skip empty part
return dst
}
return append(dst, s)
}
// Match matches s against gmt.
//
// On success it appends the matched captures to dst and returns the result together with true.
// On failure it returns false.
func (gmt *graphiteMatchTemplate) Match(dst []string, s string) ([]string, bool) {
dst = append(dst, s)
parts := gmt.parts
if len(parts) > 0 {
if p := parts[len(parts)-1]; p != "*" && !strings.HasSuffix(s, p) {
// fast path - suffix mismatch
return dst, false
}
}
for i := 0; i < len(parts); i++ {
p := parts[i]
if p != "*" {
if !strings.HasPrefix(s, p) {
// Cannot match the current part
return dst, false
}
s = s[len(p):]
continue
}
// Search for the matching substring for '*' part.
if i+1 >= len(parts) {
// Matching the last part.
if strings.IndexByte(s, '.') >= 0 {
// The '*' cannot match a string with dots.
return dst, false
}
dst = append(dst, s)
return dst, true
}
// Search for the start of the next part.
p = parts[i+1]
i++
n := strings.Index(s, p)
if n < 0 {
// Cannot match the next part
return dst, false
}
tmp := s[:n]
if strings.IndexByte(tmp, '.') >= 0 {
// The '*' cannot match a string with dots.
return dst, false
}
dst = append(dst, tmp)
s = s[n+len(p):]
}
return dst, len(s) == 0
}
type graphiteReplaceTemplate struct {
sOrig string
parts []graphiteReplaceTemplatePart
}
func (grt *graphiteReplaceTemplate) String() string {
return grt.sOrig
}
type graphiteReplaceTemplatePart struct {
n int
s string
}
func newGraphiteReplaceTemplate(s string) *graphiteReplaceTemplate {
sOrig := s
var parts []graphiteReplaceTemplatePart
for {
n := strings.IndexByte(s, '$')
if n < 0 {
parts = appendGraphiteReplaceTemplateParts(parts, s, -1)
break
}
if n > 0 {
parts = appendGraphiteReplaceTemplateParts(parts, s[:n], -1)
}
s = s[n+1:]
if len(s) > 0 && s[0] == '{' {
// The index in the form ${123}
n = strings.IndexByte(s, '}')
if n < 0 {
parts = appendGraphiteReplaceTemplateParts(parts, "$"+s, -1)
break
}
idxStr := s[1:n]
s = s[n+1:]
idx, err := strconv.Atoi(idxStr)
if err != nil {
parts = appendGraphiteReplaceTemplateParts(parts, "${"+idxStr+"}", -1)
} else {
parts = appendGraphiteReplaceTemplateParts(parts, "${"+idxStr+"}", idx)
}
} else {
// The index in the form $123
n := 0
for n < len(s) && s[n] >= '0' && s[n] <= '9' {
n++
}
idxStr := s[:n]
s = s[n:]
idx, err := strconv.Atoi(idxStr)
if err != nil {
parts = appendGraphiteReplaceTemplateParts(parts, "$"+idxStr, -1)
} else {
parts = appendGraphiteReplaceTemplateParts(parts, "$"+idxStr, idx)
}
}
}
return &graphiteReplaceTemplate{
sOrig: sOrig,
parts: parts,
}
}
// Expand expands grt with the given matches into dst and returns it.
func (grt *graphiteReplaceTemplate) Expand(dst []byte, matches []string) []byte {
for _, part := range grt.parts {
if n := part.n; n >= 0 && n < len(matches) {
dst = append(dst, matches[n]...)
} else {
dst = append(dst, part.s...)
}
}
return dst
}
func appendGraphiteReplaceTemplateParts(dst []graphiteReplaceTemplatePart, s string, n int) []graphiteReplaceTemplatePart {
if len(s) > 0 {
dst = append(dst, graphiteReplaceTemplatePart{
s: s,
n: n,
})
}
return dst
}
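Together, the match and replace templates above implement a small wildcard language: each `*` captures one dot-free fragment of the metric name, `$0` refers to the whole name, and `$N`/`${N}` refer to the captures. Since these helpers are unexported, here is a deliberately simplified standalone sketch of the semantics (whole-component wildcards only; the real implementation above also matches `*` inside a component, e.g. `foo*`):

```go
package main

import (
	"fmt"
	"strings"
)

// matchGraphite matches name against a dot-separated template such as
// "test.*.*.counter". Each "*" captures exactly one dot-free component.
// captures[0] holds the full metric name, mirroring $0 in replace templates.
func matchGraphite(tpl, name string) ([]string, bool) {
	parts := strings.Split(tpl, ".")
	comps := strings.Split(name, ".")
	if len(parts) != len(comps) {
		return nil, false
	}
	captures := []string{name}
	for i, p := range parts {
		if p == "*" {
			captures = append(captures, comps[i])
		} else if p != comps[i] {
			return nil, false
		}
	}
	return captures, true
}

// expandGraphite substitutes $N and ${N} references with captures.
// Indexes are replaced in descending order so "$12" is handled before "$1".
func expandGraphite(tpl string, captures []string) string {
	s := tpl
	for n := len(captures) - 1; n >= 0; n-- {
		s = strings.ReplaceAll(s, fmt.Sprintf("${%d}", n), captures[n])
		s = strings.ReplaceAll(s, fmt.Sprintf("$%d", n), captures[n])
	}
	return s
}

func main() {
	caps, ok := matchGraphite("test.*.*.counter", "test.foo.bar.counter")
	fmt.Println(ok, caps)                           // true [test.foo.bar.counter foo bar]
	fmt.Println(expandGraphite("${2}_total", caps)) // bar_total
	fmt.Println(expandGraphite("$1-$2", caps))      // foo-bar

	_, ok = matchGraphite("test.*.counter", "test.foo.bar.counter")
	fmt.Println(ok) // false: "*" never matches across a dot
}
```

For the full semantics, including partial-component wildcards and unparseable `$...` sequences being kept verbatim, see `newGraphiteMatchTemplate` and `newGraphiteReplaceTemplate` above.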


@ -0,0 +1,93 @@
package promrelabel
import (
"reflect"
"testing"
)
func TestGraphiteTemplateMatchExpand(t *testing.T) {
f := func(matchTpl, s, replaceTpl, resultExpected string) {
t.Helper()
gmt := newGraphiteMatchTemplate(matchTpl)
matches, ok := gmt.Match(nil, s)
if !ok {
matches = nil
}
grt := newGraphiteReplaceTemplate(replaceTpl)
result := grt.Expand(nil, matches)
if string(result) != resultExpected {
t.Fatalf("unexpected result; got %q; want %q", result, resultExpected)
}
}
f("", "", "", "")
f("test.*.*.counter", "test.foo.bar.counter", "${2}_total", "bar_total")
f("test.*.*.counter", "test.foo.bar.counter", "$1_total", "foo_total")
f("test.*.*.counter", "test.foo.bar.counter", "total_$0", "total_test.foo.bar.counter")
f("test.dispatcher.*.*.*", "test.dispatcher.foo.bar.baz", "$3-$2-$1", "baz-bar-foo")
f("*.signup.*.*", "foo.signup.bar.baz", "$1-${3}_$2_total", "foo-baz_bar_total")
}
func TestGraphiteMatchTemplateMatch(t *testing.T) {
f := func(tpl, s string, matchesExpected []string, okExpected bool) {
t.Helper()
gmt := newGraphiteMatchTemplate(tpl)
tplGot := gmt.String()
if tplGot != tpl {
t.Fatalf("unexpected template; got %q; want %q", tplGot, tpl)
}
matches, ok := gmt.Match(nil, s)
if ok != okExpected {
t.Fatalf("unexpected ok result for tpl=%q, s=%q; got %v; want %v", tpl, s, ok, okExpected)
}
if okExpected {
if !reflect.DeepEqual(matches, matchesExpected) {
t.Fatalf("unexpected matches for tpl=%q, s=%q; got\n%q\nwant\n%q\ngraphiteMatchTemplate=%v", tpl, s, matches, matchesExpected, gmt)
}
}
}
f("", "", []string{""}, true)
f("", "foobar", nil, false)
f("foo", "foo", []string{"foo"}, true)
f("foo", "", nil, false)
f("foo.bar.baz", "foo.bar.baz", []string{"foo.bar.baz"}, true)
f("*", "foobar", []string{"foobar", "foobar"}, true)
f("**", "foobar", nil, false)
f("*", "foo.bar", nil, false)
f("*foo", "barfoo", []string{"barfoo", "bar"}, true)
f("*foo", "foo", []string{"foo", ""}, true)
f("*foo", "bar.foo", nil, false)
f("foo*", "foobar", []string{"foobar", "bar"}, true)
f("foo*", "foo", []string{"foo", ""}, true)
f("foo*", "foo.bar", nil, false)
f("foo.*", "foobar", nil, false)
f("foo.*", "foo.bar", []string{"foo.bar", "bar"}, true)
f("foo.*", "foo.bar.baz", nil, false)
f("*.*.baz", "foo.bar.baz", []string{"foo.bar.baz", "foo", "bar"}, true)
f("*.bar", "foo.bar.baz", nil, false)
f("*.bar", "foo.baz", nil, false)
}
func TestGraphiteReplaceTemplateExpand(t *testing.T) {
f := func(tpl string, matches []string, resultExpected string) {
t.Helper()
grt := newGraphiteReplaceTemplate(tpl)
tplGot := grt.String()
if tplGot != tpl {
t.Fatalf("unexpected template; got %q; want %q", tplGot, tpl)
}
result := grt.Expand(nil, matches)
if string(result) != resultExpected {
t.Fatalf("unexpected result for tpl=%q; got\n%q\nwant\n%q\ngraphiteReplaceTemplate=%v", tpl, result, resultExpected, grt)
}
}
f("", nil, "")
f("foo", nil, "foo")
f("$", nil, "$")
f("$1", nil, "$1")
f("${123", nil, "${123")
f("${123}", nil, "${123}")
f("${foo}45$sdf$3", nil, "${foo}45$sdf$3")
f("$1", []string{"foo", "bar"}, "bar")
f("$0-$1", []string{"foo", "bar"}, "foo-bar")
f("x-${0}-$1", []string{"foo", "bar"}, "x-foo-bar")
}


@ -0,0 +1,93 @@
package promrelabel
import (
"fmt"
"testing"
)
func BenchmarkGraphiteMatchTemplateMatch(b *testing.B) {
b.Run("match-short", func(b *testing.B) {
tpl := "*.bar.baz"
s := "foo.bar.baz"
benchmarkGraphiteMatchTemplateMatch(b, tpl, s, true)
})
b.Run("mismatch-short", func(b *testing.B) {
tpl := "*.bar.baz"
s := "foo.aaa"
benchmarkGraphiteMatchTemplateMatch(b, tpl, s, false)
})
b.Run("match-long", func(b *testing.B) {
tpl := "*.*.*.bar.*.baz"
s := "foo.bar.baz.bar.aa.baz"
benchmarkGraphiteMatchTemplateMatch(b, tpl, s, true)
})
b.Run("mismatch-long", func(b *testing.B) {
tpl := "*.*.*.bar.*.baz"
s := "foo.bar.baz.bar.aa.bb"
benchmarkGraphiteMatchTemplateMatch(b, tpl, s, false)
})
}
func benchmarkGraphiteMatchTemplateMatch(b *testing.B, tpl, s string, okExpected bool) {
gmt := newGraphiteMatchTemplate(tpl)
b.ReportAllocs()
b.SetBytes(1)
b.RunParallel(func(pb *testing.PB) {
var matches []string
for pb.Next() {
var ok bool
matches, ok = gmt.Match(matches[:0], s)
if ok != okExpected {
panic(fmt.Errorf("unexpected ok=%v for tpl=%q, s=%q", ok, tpl, s))
}
}
})
}
func BenchmarkGraphiteReplaceTemplateExpand(b *testing.B) {
b.Run("one-replacement", func(b *testing.B) {
tpl := "$1"
matches := []string{"", "foo"}
resultExpected := "foo"
benchmarkGraphiteReplaceTemplateExpand(b, tpl, matches, resultExpected)
})
b.Run("one-replacement-with-prefix", func(b *testing.B) {
tpl := "x-$1"
matches := []string{"", "foo"}
resultExpected := "x-foo"
benchmarkGraphiteReplaceTemplateExpand(b, tpl, matches, resultExpected)
})
b.Run("one-replacement-with-prefix-suffix", func(b *testing.B) {
tpl := "x-$1-y"
matches := []string{"", "foo"}
resultExpected := "x-foo-y"
benchmarkGraphiteReplaceTemplateExpand(b, tpl, matches, resultExpected)
})
b.Run("two-replacements", func(b *testing.B) {
tpl := "$1$2"
matches := []string{"", "foo", "bar"}
resultExpected := "foobar"
benchmarkGraphiteReplaceTemplateExpand(b, tpl, matches, resultExpected)
})
b.Run("two-replacements-with-delimiter", func(b *testing.B) {
tpl := "$1-$2"
matches := []string{"", "foo", "bar"}
resultExpected := "foo-bar"
benchmarkGraphiteReplaceTemplateExpand(b, tpl, matches, resultExpected)
})
}
func benchmarkGraphiteReplaceTemplateExpand(b *testing.B, tpl string, matches []string, resultExpected string) {
grt := newGraphiteReplaceTemplate(tpl)
b.ReportAllocs()
b.SetBytes(1)
b.RunParallel(func(pb *testing.PB) {
var b []byte
for pb.Next() {
b = grt.Expand(b[:0], matches)
if string(b) != resultExpected {
panic(fmt.Errorf("unexpected result; got\n%q\nwant\n%q", b, resultExpected))
}
}
})
}


@ -18,6 +18,14 @@ type IfExpression struct {
lfs []*labelFilter lfs []*labelFilter
} }
// String returns string representation of ie.
func (ie *IfExpression) String() string {
if ie == nil {
return ""
}
return ie.s
}
// Parse parses `if` expression from s and stores it to ie. // Parse parses `if` expression from s and stores it to ie.
func (ie *IfExpression) Parse(s string) error { func (ie *IfExpression) Parse(s string) error {
expr, err := metricsql.Parse(s) expr, err := metricsql.Parse(s)


@ -2,6 +2,7 @@ package promrelabel
import ( import (
"bytes" "bytes"
"encoding/json"
"fmt" "fmt"
"testing" "testing"
@ -36,6 +37,36 @@ func TestIfExpressionParseSuccess(t *testing.T) {
f(`foo{bar=~"baz", x!="y"}`) f(`foo{bar=~"baz", x!="y"}`)
} }
func TestIfExpressionMarshalUnmarshalJSON(t *testing.T) {
f := func(s, jsonExpected string) {
t.Helper()
var ie IfExpression
if err := ie.Parse(s); err != nil {
t.Fatalf("cannot parse ifExpression %q: %s", s, err)
}
data, err := json.Marshal(&ie)
if err != nil {
t.Fatalf("cannot marshal ifExpression %q: %s", s, err)
}
if string(data) != jsonExpected {
t.Fatalf("unexpected value after json marshaling;\ngot\n%s\nwant\n%s", data, jsonExpected)
}
var ie2 IfExpression
if err := json.Unmarshal(data, &ie2); err != nil {
t.Fatalf("cannot unmarshal ifExpression from json %q: %s", data, err)
}
data2, err := json.Marshal(&ie2)
if err != nil {
t.Fatalf("cannot marshal ifExpression2: %s", err)
}
if string(data2) != jsonExpected {
t.Fatalf("unexpected data after unmarshal/marshal cycle;\ngot\n%s\nwant\n%s", data2, jsonExpected)
}
}
f("foo", `"foo"`)
f(`{foo="bar",baz=~"x.*"}`, `"{foo=\"bar\",baz=~\"x.*\"}"`)
}
func TestIfExpressionUnmarshalFailure(t *testing.T) { func TestIfExpressionUnmarshalFailure(t *testing.T) {
f := func(s string) { f := func(s string) {
t.Helper() t.Helper()


@ -25,6 +25,9 @@ type parsedRelabelConfig struct {
Action string Action string
If *IfExpression If *IfExpression
graphiteMatchTemplate *graphiteMatchTemplate
graphiteLabelRules []graphiteLabelRule
regexOriginal *regexp.Regexp regexOriginal *regexp.Regexp
hasCaptureGroupInTargetLabel bool hasCaptureGroupInTargetLabel bool
hasCaptureGroupInReplacement bool hasCaptureGroupInReplacement bool
@ -32,8 +35,8 @@ type parsedRelabelConfig struct {
// String returns human-readable representation for prc. // String returns human-readable representation for prc.
func (prc *parsedRelabelConfig) String() string { func (prc *parsedRelabelConfig) String() string {
return fmt.Sprintf("SourceLabels=%s, Separator=%s, TargetLabel=%s, Regex=%s, Modulus=%d, Replacement=%s, Action=%s", return fmt.Sprintf("SourceLabels=%s, Separator=%s, TargetLabel=%s, Regex=%s, Modulus=%d, Replacement=%s, Action=%s, If=%s, graphiteMatchTemplate=%s, graphiteLabelRules=%s",
prc.SourceLabels, prc.Separator, prc.TargetLabel, prc.Regex.String(), prc.Modulus, prc.Replacement, prc.Action) prc.SourceLabels, prc.Separator, prc.TargetLabel, prc.Regex, prc.Modulus, prc.Replacement, prc.Action, prc.If, prc.graphiteMatchTemplate, prc.graphiteLabelRules)
} }
// Apply applies pcs to labels starting from the labelsOffset. // Apply applies pcs to labels starting from the labelsOffset.
@ -147,6 +150,26 @@ func (prc *parsedRelabelConfig) apply(labels []prompbmarshal.Label, labelsOffset
return labels return labels
} }
switch prc.Action { switch prc.Action {
case "graphite":
metricName := GetLabelValueByName(src, "__name__")
gm := graphiteMatchesPool.Get().(*graphiteMatches)
var ok bool
gm.a, ok = prc.graphiteMatchTemplate.Match(gm.a[:0], metricName)
if !ok {
// Fast path - name mismatch
graphiteMatchesPool.Put(gm)
return labels
}
// Slow path - extract labels from graphite metric name
bb := relabelBufPool.Get()
for _, gl := range prc.graphiteLabelRules {
bb.B = gl.grt.Expand(bb.B[:0], gm.a)
valueStr := string(bb.B)
labels = setLabelValue(labels, labelsOffset, gl.targetLabel, valueStr)
}
relabelBufPool.Put(bb)
graphiteMatchesPool.Put(gm)
return labels
case "replace": case "replace":
// Store `replacement` at `target_label` if the `regex` matches `source_labels` joined with `separator` // Store `replacement` at `target_label` if the `regex` matches `source_labels` joined with `separator`
bb := relabelBufPool.Get() bb := relabelBufPool.Get()


@@ -1580,7 +1580,6 @@ func TestApplyRelabelConfigs(t *testing.T) {
 			},
 		})
 	})
-
 	t.Run("upper-lower-case", func(t *testing.T) {
 		f(`
 - action: uppercase
@@ -1618,7 +1617,6 @@ func TestApplyRelabelConfigs(t *testing.T) {
 				Value: "bar;foo",
 			},
 		})
-	})
 		f(`
 - action: lowercase
   source_labels: ["foo"]
@@ -1637,6 +1635,49 @@ func TestApplyRelabelConfigs(t *testing.T) {
 				Value: "quux",
 			},
 		})
+	})
+	t.Run("graphite-match", func(t *testing.T) {
+		f(`
+- action: graphite
+  match: foo.*.baz
+  labels:
+    __name__: aaa
+    job: ${1}-zz
+`, []prompbmarshal.Label{
+			{
+				Name:  "__name__",
+				Value: "foo.bar.baz",
+			},
+		}, true, []prompbmarshal.Label{
+			{
+				Name:  "__name__",
+				Value: "aaa",
+			},
+			{
+				Name:  "job",
+				Value: "bar-zz",
+			},
+		})
+	})
+	t.Run("graphite-mismatch", func(t *testing.T) {
+		f(`
+- action: graphite
+  match: foo.*.baz
+  labels:
+    __name__: aaa
+    job: ${1}-zz
+`, []prompbmarshal.Label{
+			{
+				Name:  "__name__",
+				Value: "foo.bar.bazz",
+			},
+		}, true, []prompbmarshal.Label{
+			{
+				Name:  "__name__",
+				Value: "foo.bar.bazz",
+			},
+		})
+	})
 }

 func TestFinalizeLabels(t *testing.T) {


@@ -39,3 +39,11 @@
 - source_labels: [__tmp_uppercase]
   target_label: lower_aaa
   action: lowercase
+- if: '{foo=~"bar.*",baz="aa"}'
+  target_label: aaa
+  replacement: foobar
+- action: graphite
+  match: 'foo.*.bar'
+  labels:
+    instance: 'foo-$1'
+    job: '${1}-bar'


@@ -390,13 +390,13 @@ func (db *indexDB) putMetricNameToCache(metricID uint64, metricName []byte) {
 	db.s.metricNameCache.Set(key[:], metricName)
 }

-// maybeCreateIndexes probabilistically creates indexes for the given (tsid, metricNameRaw) at db.
+// maybeCreateIndexes probabilistically creates global and per-day indexes for the given (tsid, metricNameRaw, date) at db.
 //
 // The probability increases from 0 to 100% during the first hour since db rotation.
 //
 // It returns true if new index entry was created, and false if it was skipped.
-func (db *indexDB) maybeCreateIndexes(tsid *TSID, metricNameRaw []byte) (bool, error) {
-	pMin := float64(fasttime.UnixTimestamp()-db.rotationTimestamp) / 3600
+func (is *indexSearch) maybeCreateIndexes(tsid *TSID, metricNameRaw []byte, date uint64) (bool, error) {
+	pMin := float64(fasttime.UnixTimestamp()-is.db.rotationTimestamp) / 3600
 	if pMin < 1 {
 		p := float64(uint32(fastHashUint64(tsid.MetricID))) / (1 << 32)
 		if p > pMin {
@@ -410,11 +410,14 @@ func (db *indexDB) maybeCreateIndexes(tsid *TSID, metricNameRaw []byte) (bool, e
 		return false, fmt.Errorf("cannot unmarshal metricNameRaw %q: %w", metricNameRaw, err)
 	}
 	mn.sortTags()
-	if err := db.createIndexes(tsid, mn); err != nil {
-		return false, err
+	if err := is.createGlobalIndexes(tsid, mn); err != nil {
+		return false, fmt.Errorf("cannot create global indexes: %w", err)
+	}
+	if err := is.createPerDayIndexes(date, tsid.MetricID, mn); err != nil {
+		return false, fmt.Errorf("cannot create per-day indexes for date=%d: %w", date, err)
 	}
 	PutMetricName(mn)
-	atomic.AddUint64(&db.timeseriesRepopulated, 1)
+	atomic.AddUint64(&is.db.timeseriesRepopulated, 1)
 	return true, nil
 }
@@ -515,7 +518,10 @@ type indexSearch struct {
 }

 // GetOrCreateTSIDByName fills the dst with TSID for the given metricName.
-func (is *indexSearch) GetOrCreateTSIDByName(dst *TSID, metricName []byte) error {
+//
+// It also registers the metricName in global and per-day indexes
+// for the given date if the metricName->TSID entry is missing in the index.
+func (is *indexSearch) GetOrCreateTSIDByName(dst *TSID, metricName []byte, date uint64) error {
 	// A hack: skip searching for the TSID after many serial misses.
 	// This should improve insertion performance for big batches
 	// of new time series.
@@ -540,7 +546,7 @@ func (is *indexSearch) GetOrCreateTSIDByName(dst *TSID, metricName []byte) error
 	// TSID for the given name wasn't found. Create it.
 	// It is OK if duplicate TSID for mn is created by concurrent goroutines.
 	// Metric results will be merged by mn after TableSearch.
-	if err := is.db.createTSIDByName(dst, metricName); err != nil {
+	if err := is.createTSIDByName(dst, metricName, date); err != nil {
 		return fmt.Errorf("cannot create TSID by MetricName %q: %w", metricName, err)
 	}
 	return nil
@@ -571,19 +577,25 @@ func (db *indexDB) putIndexSearch(is *indexSearch) {
 	db.indexSearchPool.Put(is)
 }

-func (db *indexDB) createTSIDByName(dst *TSID, metricName []byte) error {
+func (is *indexSearch) createTSIDByName(dst *TSID, metricName []byte, date uint64) error {
 	mn := GetMetricName()
 	defer PutMetricName(mn)
 	if err := mn.Unmarshal(metricName); err != nil {
 		return fmt.Errorf("cannot unmarshal metricName %q: %w", metricName, err)
 	}

-	created, err := db.getOrCreateTSID(dst, metricName, mn)
+	created, err := is.db.getOrCreateTSID(dst, metricName, mn)
 	if err != nil {
 		return fmt.Errorf("cannot generate TSID: %w", err)
 	}
-	if err := db.createIndexes(dst, mn); err != nil {
-		return fmt.Errorf("cannot create indexes: %w", err)
+	if !is.db.s.registerSeriesCardinality(dst.MetricID, mn) {
+		return errSeriesCardinalityExceeded
+	}
+	if err := is.createGlobalIndexes(dst, mn); err != nil {
+		return fmt.Errorf("cannot create global indexes: %w", err)
+	}
+	if err := is.createPerDayIndexes(date, dst.MetricID, mn); err != nil {
+		return fmt.Errorf("cannot create per-day indexes for date=%d: %w", date, err)
 	}

 	// There is no need in invalidating tag cache, since it is invalidated
@@ -591,7 +603,7 @@ func (db *indexDB) createTSIDByName(dst *TSID, metricName []byte) error {
 	if created {
 		// Increase the newTimeseriesCreated counter only if tsid wasn't found in indexDB
-		atomic.AddUint64(&db.newTimeseriesCreated, 1)
+		atomic.AddUint64(&is.db.newTimeseriesCreated, 1)
 		if logNewSeries {
 			logger.Infof("new series created: %s", mn.String())
 		}
@@ -599,6 +611,8 @@ func (db *indexDB) createTSIDByName(dst *TSID, metricName []byte) error {
 	return nil
 }

+var errSeriesCardinalityExceeded = fmt.Errorf("cannot create series because series cardinality limit exceeded")
+
 // SetLogNewSeries updates new series logging.
 //
 // This function must be called before any calling any storage functions.
@@ -648,7 +662,7 @@ func generateTSID(dst *TSID, mn *MetricName) {
 	dst.MetricID = generateUniqueMetricID()
 }

-func (db *indexDB) createIndexes(tsid *TSID, mn *MetricName) error {
+func (is *indexSearch) createGlobalIndexes(tsid *TSID, mn *MetricName) error {
 	// The order of index items is important.
 	// It guarantees index consistency.
@@ -679,7 +693,7 @@ func (db *indexDB) createIndexes(tsid *TSID, mn *MetricName) error {
 	ii.registerTagIndexes(prefix.B, mn, tsid.MetricID)
 	kbPool.Put(prefix)

-	return db.tb.AddItems(ii.Items)
+	return is.db.tb.AddItems(ii.Items)
 }

 type indexItems struct {
@@ -2681,11 +2695,11 @@ const (
 	int64Max = int64((1 << 63) - 1)
 )

-func (is *indexSearch) storeDateMetricID(date, metricID uint64, mn *MetricName) error {
+func (is *indexSearch) createPerDayIndexes(date, metricID uint64, mn *MetricName) error {
 	ii := getIndexItems()
 	defer putIndexItems(ii)
-	ii.B = is.marshalCommonPrefix(ii.B, nsPrefixDateToMetricID)
+	ii.B = marshalCommonPrefix(ii.B, nsPrefixDateToMetricID)
 	ii.B = encoding.MarshalUint64(ii.B, date)
 	ii.B = encoding.MarshalUint64(ii.B, metricID)
 	ii.Next()
@@ -2693,7 +2707,7 @@ func (is *indexSearch) storeDateMetricID(date, metricID uint64, mn *MetricName)
 	// Create per-day inverted index entries for metricID.
 	kb := kbPool.Get()
 	defer kbPool.Put(kb)
-	kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixDateTagToMetricIDs)
+	kb.B = marshalCommonPrefix(kb.B[:0], nsPrefixDateTagToMetricIDs)
 	kb.B = encoding.MarshalUint64(kb.B, date)
 	ii.registerTagIndexes(kb.B, mn, metricID)
 	if err := is.db.tb.AddItems(ii.Items); err != nil {
@@ -2812,7 +2826,7 @@ func reverseBytes(dst, src []byte) []byte {
 func (is *indexSearch) hasDateMetricID(date, metricID uint64) (bool, error) {
 	ts := &is.ts
 	kb := &is.kb
-	kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixDateToMetricID)
+	kb.B = marshalCommonPrefix(kb.B[:0], nsPrefixDateToMetricID)
 	kb.B = encoding.MarshalUint64(kb.B, date)
 	kb.B = encoding.MarshalUint64(kb.B, metricID)
 	if err := ts.FirstItemWithPrefix(kb.B); err != nil {


@@ -604,7 +604,7 @@ func testIndexDBBigMetricName(db *indexDB) error {
 	mn.MetricGroup = append(mn.MetricGroup[:0], bigBytes...)
 	mn.sortTags()
 	metricName := mn.Marshal(nil)
-	if err := is.GetOrCreateTSIDByName(&tsid, metricName); err == nil {
+	if err := is.GetOrCreateTSIDByName(&tsid, metricName, 0); err == nil {
 		return fmt.Errorf("expecting non-nil error on an attempt to insert metric with too big MetricGroup")
 	}
@@ -617,7 +617,7 @@ func testIndexDBBigMetricName(db *indexDB) error {
 	}}
 	mn.sortTags()
 	metricName = mn.Marshal(nil)
-	if err := is.GetOrCreateTSIDByName(&tsid, metricName); err == nil {
+	if err := is.GetOrCreateTSIDByName(&tsid, metricName, 0); err == nil {
 		return fmt.Errorf("expecting non-nil error on an attempt to insert metric with too big tag key")
 	}
@@ -630,7 +630,7 @@ func testIndexDBBigMetricName(db *indexDB) error {
 	}}
 	mn.sortTags()
 	metricName = mn.Marshal(nil)
-	if err := is.GetOrCreateTSIDByName(&tsid, metricName); err == nil {
+	if err := is.GetOrCreateTSIDByName(&tsid, metricName, 0); err == nil {
 		return fmt.Errorf("expecting non-nil error on an attempt to insert metric with too big tag value")
 	}
@@ -645,7 +645,7 @@ func testIndexDBBigMetricName(db *indexDB) error {
 	}
 	mn.sortTags()
 	metricName = mn.Marshal(nil)
-	if err := is.GetOrCreateTSIDByName(&tsid, metricName); err == nil {
+	if err := is.GetOrCreateTSIDByName(&tsid, metricName, 0); err == nil {
 		return fmt.Errorf("expecting non-nil error on an attempt to insert metric with too many tags")
 	}
@@ -679,7 +679,7 @@ func testIndexDBGetOrCreateTSIDByName(db *indexDB, metricGroups int) ([]MetricNa
 		// Create tsid for the metricName.
 		var tsid TSID
-		if err := is.GetOrCreateTSIDByName(&tsid, metricNameBuf); err != nil {
+		if err := is.GetOrCreateTSIDByName(&tsid, metricNameBuf, 0); err != nil {
 			return nil, nil, fmt.Errorf("unexpected error when creating tsid for mn:\n%s: %w", &mn, err)
 		}
@@ -691,8 +691,8 @@ func testIndexDBGetOrCreateTSIDByName(db *indexDB, metricGroups int) ([]MetricNa
 	date := uint64(timestampFromTime(time.Now())) / msecPerDay
 	for i := range tsids {
 		tsid := &tsids[i]
-		if err := is.storeDateMetricID(date, tsid.MetricID, &mns[i]); err != nil {
-			return nil, nil, fmt.Errorf("error in storeDateMetricID(%d, %d): %w", date, tsid.MetricID, err)
+		if err := is.createPerDayIndexes(date, tsid.MetricID, &mns[i]); err != nil {
+			return nil, nil, fmt.Errorf("error in createPerDayIndexes(%d, %d): %w", date, tsid.MetricID, err)
 		}
 	}
@@ -1662,7 +1662,7 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
 			metricNameBuf = mn.Marshal(metricNameBuf[:0])
 			var tsid TSID
-			if err := is.GetOrCreateTSIDByName(&tsid, metricNameBuf); err != nil {
+			if err := is.GetOrCreateTSIDByName(&tsid, metricNameBuf, 0); err != nil {
 				t.Fatalf("unexpected error when creating tsid for mn:\n%s: %s", &mn, err)
 			}
 			mns = append(mns, mn)
@@ -1675,8 +1675,8 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
 		for i := range tsids {
 			tsid := &tsids[i]
 			metricIDs.Add(tsid.MetricID)
-			if err := is.storeDateMetricID(date, tsid.MetricID, &mns[i]); err != nil {
-				t.Fatalf("error in storeDateMetricID(%d, %d): %s", date, tsid.MetricID, err)
+			if err := is.createPerDayIndexes(date, tsid.MetricID, &mns[i]); err != nil {
+				t.Fatalf("error in createPerDayIndexes(%d, %d): %s", date, tsid.MetricID, err)
 			}
 		}
 		allMetricIDs.Union(&metricIDs)


@@ -93,7 +93,7 @@ func benchmarkIndexDBAddTSIDs(db *indexDB, tsid *TSID, mn *MetricName, startOffs
 		}
 		mn.sortTags()
 		metricName = mn.Marshal(metricName[:0])
-		if err := is.GetOrCreateTSIDByName(tsid, metricName); err != nil {
+		if err := is.GetOrCreateTSIDByName(tsid, metricName, 0); err != nil {
 			panic(fmt.Errorf("cannot insert record: %w", err))
 		}
 	}
@@ -122,6 +122,8 @@ func BenchmarkHeadPostingForMatchers(b *testing.B) {
 	var mn MetricName
 	var metricName []byte
 	var tsid TSID
+	is := db.getIndexSearch(noDeadline)
+	defer db.putIndexSearch(is)
 	addSeries := func(kvs ...string) {
 		mn.Reset()
 		for i := 0; i < len(kvs); i += 2 {
@@ -129,20 +131,20 @@ func BenchmarkHeadPostingForMatchers(b *testing.B) {
 		}
 		mn.sortTags()
 		metricName = mn.Marshal(metricName[:0])
-		if err := db.createTSIDByName(&tsid, metricName); err != nil {
+		if err := is.createTSIDByName(&tsid, metricName, 0); err != nil {
 			b.Fatalf("cannot insert record: %s", err)
 		}
 	}
 	for n := 0; n < 10; n++ {
 		ns := strconv.Itoa(n)
 		for i := 0; i < 100000; i++ {
-			is := strconv.Itoa(i)
-			addSeries("i", is, "n", ns, "j", "foo")
+			ix := strconv.Itoa(i)
+			addSeries("i", ix, "n", ns, "j", "foo")
 			// Have some series that won't be matched, to properly test inverted matches.
-			addSeries("i", is, "n", ns, "j", "bar")
-			addSeries("i", is, "n", "0_"+ns, "j", "bar")
-			addSeries("i", is, "n", "1_"+ns, "j", "bar")
-			addSeries("i", is, "n", "2_"+ns, "j", "foo")
+			addSeries("i", ix, "n", ns, "j", "bar")
+			addSeries("i", ix, "n", "0_"+ns, "j", "bar")
+			addSeries("i", ix, "n", "1_"+ns, "j", "bar")
+			addSeries("i", ix, "n", "2_"+ns, "j", "foo")
 		}
 	}
@@ -313,7 +315,7 @@ func BenchmarkIndexDBGetTSIDs(b *testing.B) {
 	for i := 0; i < recordsCount; i++ {
 		mn.sortTags()
 		metricName = mn.Marshal(metricName[:0])
-		if err := is.GetOrCreateTSIDByName(&tsid, metricName); err != nil {
+		if err := is.GetOrCreateTSIDByName(&tsid, metricName, 0); err != nil {
 			b.Fatalf("cannot insert record: %s", err)
 		}
 	}
@@ -331,7 +333,7 @@ func BenchmarkIndexDBGetTSIDs(b *testing.B) {
 			for i := 0; i < recordsPerLoop; i++ {
 				mnLocal.sortTags()
 				metricNameLocal = mnLocal.Marshal(metricNameLocal[:0])
-				if err := is.GetOrCreateTSIDByName(&tsidLocal, metricNameLocal); err != nil {
+				if err := is.GetOrCreateTSIDByName(&tsidLocal, metricNameLocal, 0); err != nil {
 					panic(fmt.Errorf("cannot obtain tsid: %w", err))
 				}
 			}


@@ -2,6 +2,7 @@ package storage

 import (
 	"bytes"
+	"errors"
 	"fmt"
 	"io"
 	"io/ioutil"
@@ -365,7 +366,7 @@ func (s *Storage) CreateSnapshot() (string, error) {
 	srcMetadataDir := srcDir + "/metadata"
 	dstMetadataDir := dstDir + "/metadata"
 	if err := fs.CopyDirectory(srcMetadataDir, dstMetadataDir); err != nil {
-		return "", fmt.Errorf("cannot copy metadata: %s", err)
+		return "", fmt.Errorf("cannot copy metadata: %w", err)
 	}
 	fs.MustSyncPath(dstDir)
@@ -745,6 +746,27 @@ func (s *Storage) mustRotateIndexDB() {
 	// and slowly re-populate new idb with entries from the cache via maybeCreateIndexes().
 	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1401

+	// Flush metric id caches for the current and the previous hour,
+	// since they may contain entries missing in idbNew.
+	// This should prevent from missing data in queries when
+	// the following steps are performed for short -retentionPeriod (e.g. 1 day):
+	//
+	// 1. Add samples for some series between 3-4 UTC. These series are registered in currHourMetricIDs.
+	// 2. The indexdb rotation is performed at 4 UTC. currHourMetricIDs is moved to prevHourMetricIDs.
+	// 3. Continue adding samples for series from step 1 during time range 4-5 UTC.
+	//    These series are already registered in prevHourMetricIDs, so VM doesn't add per-day entries to the current indexdb.
+	// 4. Stop adding new samples for these series just before 5 UTC.
+	// 5. The next indexdb rotation is performed at 4 UTC next day.
+	//    The information about the series from step 1 disappears from indexdb, since the old indexdb from step 1 is deleted,
+	//    while the current indexdb doesn't contain information about the series.
+	//    So queries for the last 24 hours stop returning samples added at step 3.
+	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2698
+	s.pendingHourEntriesLock.Lock()
+	s.pendingHourEntries = &uint64set.Set{}
+	s.pendingHourEntriesLock.Unlock()
+	s.currHourMetricIDs.Store(&hourMetricIDs{})
+	s.prevHourMetricIDs.Store(&hourMetricIDs{})
+
 	// Flush dateMetricIDCache, so idbNew can be populated with fresh data.
 	s.dateMetricIDCache.Reset()
@@ -1644,10 +1666,7 @@ var (
 // The the MetricRow.Timestamp is used for registering the metric name starting from the given timestamp.
 // Th MetricRow.Value field is ignored.
 func (s *Storage) RegisterMetricNames(mrs []MetricRow) error {
-	var (
-		metricName []byte
-	)
+	var metricName []byte
 	var genTSID generationTSID
 	mn := GetMetricName()
 	defer PutMetricName(mn)
@@ -1658,64 +1677,35 @@ func (s *Storage) RegisterMetricNames(mrs []MetricRow) error {
 	for i := range mrs {
 		mr := &mrs[i]
 		if s.getTSIDFromCache(&genTSID, mr.MetricNameRaw) {
-			if genTSID.generation != idb.generation {
-				// The found entry is from the previous cache generation
-				// so attempt to re-populate the current generation with this entry.
-				// This is needed for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1401
-				created, err := idb.maybeCreateIndexes(&genTSID.TSID, mr.MetricNameRaw)
-				if err != nil {
-					return fmt.Errorf("cannot create indexes in the current indexdb: %w", err)
-				}
-				if created {
-					genTSID.generation = idb.generation
-					s.putTSIDToCache(&genTSID, mr.MetricNameRaw)
-				}
+			if genTSID.generation == idb.generation {
+				// Fast path - mr.MetricNameRaw has been already registered in the current idb.
+				continue
 			}
-			// Fast path - mr.MetricNameRaw has been already registered.
-			continue
 		}

 		// Slow path - register mr.MetricNameRaw.
 		if err := mn.UnmarshalRaw(mr.MetricNameRaw); err != nil {
-			return fmt.Errorf("cannot register the metric because cannot unmarshal MetricNameRaw %q: %w", mr.MetricNameRaw, err)
+			return fmt.Errorf("cannot unmarshal MetricNameRaw %q: %w", mr.MetricNameRaw, err)
 		}
 		mn.sortTags()
 		metricName = mn.Marshal(metricName[:0])
-		if err := is.GetOrCreateTSIDByName(&genTSID.TSID, metricName); err != nil {
-			return fmt.Errorf("cannot register the metric because cannot create TSID for metricName %q: %w", metricName, err)
-		}
-		s.putTSIDToCache(&genTSID, mr.MetricNameRaw)
-
-		// Register the metric in per-day inverted index.
 		date := uint64(mr.Timestamp) / msecPerDay
-		metricID := genTSID.TSID.MetricID
-		if s.dateMetricIDCache.Has(date, metricID) {
-			// Fast path: the metric has been already registered in per-day inverted index
-			continue
-		}
-
-		// Slow path: acutally register the metric in per-day inverted index.
-		ok, err := is.hasDateMetricID(date, metricID)
-		if err != nil {
-			return fmt.Errorf("cannot register the metric in per-date inverted index because of error when locating (date=%d, metricID=%d) in database: %w",
-				date, metricID, err)
-		}
-		if !ok {
-			// The (date, metricID) entry is missing in the indexDB. Add it there.
-			if err := is.storeDateMetricID(date, metricID, mn); err != nil {
-				return fmt.Errorf("cannot register the metric in per-date inverted index because of error when storing (date=%d, metricID=%d) in database: %w",
-					date, metricID, err)
-			}
-		}
-		// The metric must be added to cache only after it has been successfully added to indexDB.
-		s.dateMetricIDCache.Set(date, metricID)
+		if err := is.GetOrCreateTSIDByName(&genTSID.TSID, metricName, date); err != nil {
+			if errors.Is(err, errSeriesCardinalityExceeded) {
+				continue
+			}
+			return fmt.Errorf("cannot create TSID for metricName %q: %w", metricName, err)
+		}
+		genTSID.generation = idb.generation
+		s.putTSIDToCache(&genTSID, mr.MetricNameRaw)
+		s.dateMetricIDCache.Set(date, genTSID.TSID.MetricID)
 	}
 	return nil
 }
 func (s *Storage) add(rows []rawRow, dstMrs []*MetricRow, mrs []MetricRow, precisionBits uint8) error {
 	idb := s.idb()
-	j := 0
+	is := idb.getIndexSearch(noDeadline)
+	defer idb.putIndexSearch(is)
 	var (
 		// These vars are used for speeding up bulk imports of multiple adjacent rows for the same metricName.
 		prevTSID          TSID
@@ -1728,6 +1718,7 @@ func (s *Storage) add(rows []rawRow, dstMrs []*MetricRow, mrs []MetricRow, preci
 	// Return only the first error, since it has no sense in returning all errors.
 	var firstWarn error

+	j := 0
 	for i := range mrs {
 		mr := &mrs[i]
 		if math.IsNaN(mr.Value) {
@@ -1772,11 +1763,6 @@ func (s *Storage) add(rows []rawRow, dstMrs []*MetricRow, mrs []MetricRow, preci
 		}
 		if s.getTSIDFromCache(&genTSID, mr.MetricNameRaw) {
 			r.TSID = genTSID.TSID
-			if s.isSeriesCardinalityExceeded(r.TSID.MetricID, mr.MetricNameRaw) {
-				// Skip the row, since the limit on the number of unique series has been exceeded.
-				j--
-				continue
-			}
 			// Fast path - the TSID for the given MetricNameRaw has been found in cache and isn't deleted.
 			// There is no need in checking whether r.TSID.MetricID is deleted, since tsidCache doesn't
 			// contain MetricName->TSID entries for deleted time series.
@@ -1785,16 +1771,18 @@ func (s *Storage) add(rows []rawRow, dstMrs []*MetricRow, mrs []MetricRow, preci
 			prevMetricNameRaw = mr.MetricNameRaw

 			if genTSID.generation != idb.generation {
-				// The found entry is from the previous cache generation
+				// The found entry is from the previous cache generation,
 				// so attempt to re-populate the current generation with this entry.
 				// This is needed for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1401
-				created, err := idb.maybeCreateIndexes(&genTSID.TSID, mr.MetricNameRaw)
+				date := uint64(r.Timestamp) / msecPerDay
+				created, err := is.maybeCreateIndexes(&genTSID.TSID, mr.MetricNameRaw, date)
 				if err != nil {
-					return fmt.Errorf("cannot create indexes in the current indexdb: %w", err)
+					return fmt.Errorf("cannot create indexes: %w", err)
 				}
 				if created {
 					genTSID.generation = idb.generation
 					s.putTSIDToCache(&genTSID, mr.MetricNameRaw)
+					s.dateMetricIDCache.Set(date, genTSID.TSID.MetricID)
 				}
 			}
 			continue
@@ -1822,7 +1810,6 @@ func (s *Storage) add(rows []rawRow, dstMrs []*MetricRow, mrs []MetricRow, preci
 		sort.Slice(pendingMetricRows, func(i, j int) bool {
 			return string(pendingMetricRows[i].MetricName) < string(pendingMetricRows[j].MetricName)
 		})
-		is := idb.getIndexSearch(noDeadline)
 		prevMetricNameRaw = nil
 		var slowInsertsCount uint64
 		for i := range pendingMetricRows {
@@ -1838,36 +1825,31 @@ func (s *Storage) add(rows []rawRow, dstMrs []*MetricRow, mrs []MetricRow, preci
 				// Fast path - the current mr contains the same metric name as the previous mr, so it contains the same TSID.
 				// This path should trigger on bulk imports when many rows contain the same MetricNameRaw.
 				r.TSID = prevTSID
-				if s.isSeriesCardinalityExceeded(r.TSID.MetricID, mr.MetricNameRaw) {
-					// Skip the row, since the limit on the number of unique series has been exceeded.
-					j--
-					continue
-				}
 				continue
 			}
 			slowInsertsCount++
-			if err := is.GetOrCreateTSIDByName(&r.TSID, pmr.MetricName); err != nil {
+			date := uint64(r.Timestamp) / msecPerDay
+			if err := is.GetOrCreateTSIDByName(&r.TSID, pmr.MetricName, date); err != nil {
+				j--
+				if errors.Is(err, errSeriesCardinalityExceeded) {
+					continue
+				}
 				// Do not stop adding rows on error - just skip invalid row.
 				// This guarantees that invalid rows don't prevent
 				// from adding valid rows into the storage.
 				if firstWarn == nil {
 					firstWarn = fmt.Errorf("cannot obtain or create TSID for MetricName %q: %w", pmr.MetricName, err)
 				}
-				j--
 				continue
 			}
 			genTSID.generation = idb.generation
 			genTSID.TSID = r.TSID
 			s.putTSIDToCache(&genTSID, mr.MetricNameRaw)
+			s.dateMetricIDCache.Set(date, genTSID.TSID.MetricID)

 			prevTSID = r.TSID
 			prevMetricNameRaw = mr.MetricNameRaw
-			if s.isSeriesCardinalityExceeded(r.TSID.MetricID, mr.MetricNameRaw) {
-				// Skip the row, since the limit on the number of unique series has been exceeded.
-				j--
-				continue
-			}
 		}
-		idb.putIndexSearch(is)
 		putPendingMetricRows(pmrs)
 		atomic.AddUint64(&s.slowRowInserts, slowInsertsCount)
 	}
@@ -1877,39 +1859,41 @@ func (s *Storage) add(rows []rawRow, dstMrs []*MetricRow, mrs []MetricRow, preci
 	dstMrs = dstMrs[:j]
 	rows = rows[:j]

-	var firstError error
-	if err := s.tb.AddRows(rows); err != nil {
-		firstError = fmt.Errorf("cannot add rows to table: %w", err)
+	err := s.updatePerDateData(rows, dstMrs)
+	if err != nil {
+		err = fmt.Errorf("cannot update per-date data: %w", err)
+	} else {
+		err = s.tb.AddRows(rows)
+		if err != nil {
+			err = fmt.Errorf("cannot add rows to table: %w", err)
+		}
 	}
-	if err := s.updatePerDateData(rows, dstMrs); err != nil && firstError == nil {
-		firstError = fmt.Errorf("cannot update per-date data: %w", err)
-	}
-	if firstError != nil {
-		return fmt.Errorf("error occurred during rows addition: %w", firstError)
+	if err != nil {
+		return fmt.Errorf("error occurred during rows addition: %w", err)
 	}
 	return nil
 }
-func (s *Storage) isSeriesCardinalityExceeded(metricID uint64, metricNameRaw []byte) bool {
+func (s *Storage) registerSeriesCardinality(metricID uint64, mn *MetricName) bool {
 	if sl := s.hourlySeriesLimiter; sl != nil && !sl.Add(metricID) {
 		atomic.AddUint64(&s.hourlySeriesLimitRowsDropped, 1)
-		logSkippedSeries(metricNameRaw, "-storage.maxHourlySeries", sl.MaxItems())
-		return true
+		logSkippedSeries(mn, "-storage.maxHourlySeries", sl.MaxItems())
+		return false
 	}
 	if sl := s.dailySeriesLimiter; sl != nil && !sl.Add(metricID) {
 		atomic.AddUint64(&s.dailySeriesLimitRowsDropped, 1)
-		logSkippedSeries(metricNameRaw, "-storage.maxDailySeries", sl.MaxItems())
-		return true
+		logSkippedSeries(mn, "-storage.maxDailySeries", sl.MaxItems())
+		return false
 	}
-	return false
+	return true
 }
-func logSkippedSeries(metricNameRaw []byte, flagName string, flagValue int) {
+func logSkippedSeries(mn *MetricName, flagName string, flagValue int) {
 	select {
 	case <-logSkippedSeriesTicker.C:
 		// Do not use logger.WithThrottler() here, since this will result in increased CPU load
 		// because of getUserReadableMetricName() calls per each logSkippedSeries call.
-		logger.Warnf("skip series %s because %s=%d reached", getUserReadableMetricName(metricNameRaw), flagName, flagValue)
+		logger.Warnf("skip series %s because %s=%d reached", mn, flagName, flagValue)
 	default:
 	}
 }
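Note the semantics flip in the rename above: `isSeriesCardinalityExceeded` returned `true` when the series had to be dropped, while `registerSeriesCardinality` returns `true` when the series may be registered. The sketch below models that new contract with a toy in-memory limiter (a hypothetical stand-in, not the real hourly/daily limiter implementation):

```go
package main

import "fmt"

// toySeriesLimiter is a simplified stand-in for the hourly/daily series
// limiters: Add reports whether metricID fits under the cardinality cap,
// always accepting IDs it has already seen.
type toySeriesLimiter struct {
	seen     map[uint64]struct{}
	maxItems int
}

func (sl *toySeriesLimiter) Add(metricID uint64) bool {
	if _, ok := sl.seen[metricID]; ok {
		return true
	}
	if len(sl.seen) >= sl.maxItems {
		return false
	}
	sl.seen[metricID] = struct{}{}
	return true
}

// registerSeriesCardinality mirrors the new control flow: true means the
// series is accepted for registration, false means it must be dropped.
func registerSeriesCardinality(sl *toySeriesLimiter, metricID uint64) bool {
	if sl != nil && !sl.Add(metricID) {
		return false
	}
	return true
}

func main() {
	sl := &toySeriesLimiter{seen: map[uint64]struct{}{}, maxItems: 2}
	fmt.Println(registerSeriesCardinality(sl, 1)) // true
	fmt.Println(registerSeriesCardinality(sl, 2)) // true
	fmt.Println(registerSeriesCardinality(sl, 3)) // false: limit reached
	fmt.Println(registerSeriesCardinality(sl, 1)) // true: already-registered ID
}
```

A `nil` limiter disables the check entirely, matching the `sl != nil` guard in the diff.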
@@ -2111,7 +2095,7 @@ func (s *Storage) updatePerDateData(rows []rawRow, mrs []*MetricRow) error {
 			continue
 		}
 		if !ok {
-			// The (date, metricID) entry is missing in the indexDB. Add it there.
+			// The (date, metricID) entry is missing in the indexDB. Add it there together with per-day indexes.
 			// It is OK if the (date, metricID) entry is added multiple times to db
 			// by concurrent goroutines.
 			if err := mn.UnmarshalRaw(dmid.mr.MetricNameRaw); err != nil {
@@ -2121,9 +2105,9 @@ func (s *Storage) updatePerDateData(rows []rawRow, mrs []*MetricRow) error {
 				continue
 			}
 			mn.sortTags()
-			if err := is.storeDateMetricID(date, metricID, mn); err != nil {
+			if err := is.createPerDayIndexes(date, metricID, mn); err != nil {
 				if firstError == nil {
-					firstError = fmt.Errorf("error when storing (date=%d, metricID=%d) in database: %w", date, metricID, err)
+					firstError = fmt.Errorf("error when storing per-date inverted index for (date=%d, metricID=%d): %w", date, metricID, err)
 				}
 				continue
 			}
@@ -2374,6 +2358,7 @@ func (s *Storage) updateCurrHourMetricIDs() {
 	newMetricIDs := s.pendingHourEntries
 	s.pendingHourEntries = &uint64set.Set{}
 	s.pendingHourEntriesLock.Unlock()
+
 	hour := fasttime.UnixHour()
 	if newMetricIDs.Len() == 0 && hm.hour == hour {
 		// Fast path: nothing to update.

View file

@@ -10,7 +10,7 @@ Install snapcraft or docker
 build snap package with command
-```bash
+```console
 make build-snap
 ```
@@ -21,7 +21,7 @@ You can install it with command: `snap install victoriametrics_v1.46.0+git1.1beb
 installation and configuration:
-```bash
+```console
 # install
 snap install victoriametrics
 # logs
@@ -34,7 +34,7 @@ Configuration management:
 Prometheus scrape config can be edited with your favorite editor, its located at
-```bash
+```console
 vi /var/snap/victoriametrics/current/etc/victoriametrics-scrape-config.yaml
 ```
@@ -42,7 +42,7 @@ after changes, you can trigger config reread with `curl localhost:8248/-/reload`
 Configuration tuning is possible with editing extra_flags:
-```bash
+```console
 echo 'FLAGS="-selfScrapeInterval=10s -search.logSlowQueryDuration=20s"' > /var/snap/victoriametrics/current/extra_flags
 snap restart victoriametrics
 ```