docs: follow-up after 491287ed15

* port un-synced changes from docs/readme to readme
* consistently use `sh` instead of the `console` highlight, as it gives
more appropriate syntax highlighting
* consistently use `sh` instead of `bash`, as it is shorter
* consistently use `yaml` instead of `yml`

See the supported syntax codes at https://gohugo.io/content-management/syntax-highlighting/
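
The renames are mechanical, so a bulk replacement along these lines could apply them. This is only a sketch: it assumes GNU `sed` and a git checkout, and the actual command used for the commit is not recorded.

```sh
# Hypothetical helper, not part of this commit: bulk-rewrite the highlight
# code on fence-opening lines in all tracked markdown files.
# Assumes GNU sed (-i) and allows fences indented with spaces.
git ls-files -z '*.md' | xargs -0 sed -i \
  -e 's/^\( *\)```console$/\1```sh/' \
  -e 's/^\( *\)```bash$/\1```sh/' \
  -e 's/^\( *\)```yml$/\1```yaml/'
```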

Signed-off-by: hagen1778 <roman@victoriametrics.com>
hagen1778 2024-01-27 19:29:11 +01:00
parent c20d68e28d
commit 6c6c2c185f
37 changed files with 424 additions and 436 deletions

README.md

@ -258,7 +258,7 @@ and then install it as a service according to the following guide:
1. Install VictoriaMetrics as a service by running the following from elevated PowerShell:
```console
```sh
winsw install VictoriaMetrics.xml
Get-Service VictoriaMetrics | Start-Service
```
@ -271,7 +271,7 @@ See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3781)
Add the following lines to Prometheus config file (it is usually located at `/etc/prometheus/prometheus.yml`) in order to send data to VictoriaMetrics:
```yml
```yaml
remote_write:
- url: http://<victoriametrics-addr>:8428/api/v1/write
```
@ -281,7 +281,7 @@ Substitute `<victoriametrics-addr>` with hostname or IP address of VictoriaMetri
Then apply new config via the following command:
```console
```sh
kill -HUP `pidof prometheus`
```
@ -293,7 +293,7 @@ even if remote storage is unavailable.
If you plan sending data to VictoriaMetrics from multiple Prometheus instances, then add the following lines into `global` section
of [Prometheus config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file):
```yml
```yaml
global:
external_labels:
datacenter: dc-123
@ -548,21 +548,17 @@ sending via ENV variable `DD_ADDITIONAL_ENDPOINTS` or via configuration file `ad
Run DataDog using the following ENV variable with VictoriaMetrics as additional metrics receiver:
```
```sh
DD_ADDITIONAL_ENDPOINTS='{\"http://victoriametrics:8428/datadog\": [\"apikey\"]}'
```
_Choose correct URL for VictoriaMetrics [here](https://docs.victoriametrics.com/url-examples.html#datadog)._
To configure DataDog Dual Shipping via [configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files)
add the following line:
```
```yaml
additional_endpoints:
"http://victoriametrics:8428/datadog":
- apikey
@ -637,7 +633,7 @@ Example for writing data with [InfluxDB line protocol](https://docs.influxdata.c
to local VictoriaMetrics using `curl`:
```console
```sh
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
```
@ -646,7 +642,7 @@ An arbitrary number of lines delimited by '\n' (aka newline char) can be sent in
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
```
@ -677,7 +673,7 @@ VictoriaMetrics exposes endpoint for InfluxDB v2 HTTP API at `/influx/api/v2/wri
In order to write data with InfluxDB line protocol to local VictoriaMetrics using `curl`:
```console
```sh
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/api/v2/write'
```
@ -694,7 +690,7 @@ The `/api/v1/export` endpoint should return the following response:
Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` command line flag. For instance,
the following command will enable Graphite receiver in VictoriaMetrics on TCP and UDP port `2003`:
```console
```sh
/path/to/victoria-metrics-prod -graphiteListenAddr=:2003
```
@ -703,7 +699,7 @@ to the VictoriaMetrics host in `StatsD` configs.
Example for writing data with Graphite plaintext protocol to local VictoriaMetrics using `nc`:
```console
```sh
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003
```
@ -712,7 +708,7 @@ An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
@ -752,7 +748,7 @@ The same protocol is used for [ingesting data in KairosDB](https://kairosdb.gith
Enable OpenTSDB receiver in VictoriaMetrics by setting `-opentsdbListenAddr` command line flag. For instance,
the following command enables OpenTSDB receiver in VictoriaMetrics on TCP and UDP port `4242`:
```console
```sh
/path/to/victoria-metrics-prod -opentsdbListenAddr=:4242
```
@ -761,7 +757,7 @@ Send data to the given address from OpenTSDB-compatible agents.
Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `nc`:
```console
```sh
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
```
@ -770,7 +766,7 @@ An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
@ -786,7 +782,7 @@ The `/api/v1/export` endpoint should return the following response:
Enable HTTP server for OpenTSDB `/api/put` requests by setting `-opentsdbHTTPListenAddr` command line flag. For instance,
the following command enables OpenTSDB HTTP server on port `4242`:
```console
```sh
/path/to/victoria-metrics-prod -opentsdbHTTPListenAddr=:4242
```
@ -795,14 +791,14 @@ Send data to the given address from OpenTSDB-compatible agents.
Example for writing a single data point:
```console
```sh
curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put
```
Example for writing multiple data points in a single request:
```console
```sh
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
```
@ -810,7 +806,7 @@ curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"m
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
```
@ -839,7 +835,7 @@ The `COLLECTOR_URL` must point to `/newrelic` HTTP endpoint at VictoriaMetrics,
which can be obtained [here](https://newrelic.com/signup).
For example, if VictoriaMetrics runs at `localhost:8428`, then the following command can be used for running NewRelic infrastructure agent:
```console
```sh
COLLECTOR_URL="http://localhost:8428/newrelic" NRIA_LICENSE_KEY="NEWRELIC_LICENSE_KEY" ./newrelic-infra
```
@ -880,13 +876,13 @@ For example, let's import the following NewRelic Events request to VictoriaMetri
Save this JSON into `newrelic.json` file and then use the following command in order to import it into VictoriaMetrics:
```console
```sh
curl -X POST -H 'Content-Type: application/json' --data-binary @newrelic.json http://localhost:8428/newrelic/infra/v2/metrics/events/bulk
```
Let's fetch the ingested data via [data export API](#how-to-export-data-in-json-line-format):
```console
```sh
curl http://localhost:8428/api/v1/export -d 'match={eventType="SystemSample"}'
{"metric":{"__name__":"cpuStealPercent","entityKey":"macbook-pro.local","eventType":"SystemSample"},"values":[0],"timestamps":[1697407970000]}
{"metric":{"__name__":"loadAverageFiveMinute","entityKey":"macbook-pro.local","eventType":"SystemSample"},"values":[4.099609375],"timestamps":[1697407970000]}
@ -1094,7 +1090,7 @@ The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is pos
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```console
```sh
ROOT_IMAGE=scratch make package-victoria-metrics
```
@ -1237,7 +1233,7 @@ Optional `start` and `end` args may be added to the request in order to limit th
See [allowed formats](#timestamp-formats) for these args.
For example:
```console
```sh
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07'
```
@ -1250,7 +1246,7 @@ Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in o
of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:
```console
```sh
curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
```
@ -1284,7 +1280,7 @@ Optional `start` and `end` args may be added to the request in order to limit th
See [allowed formats](#timestamp-formats) for these args.
For example:
```console
```sh
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07'
```
@ -1301,7 +1297,7 @@ for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time
On large databases you may experience problems with limit on the number of time series, which can be exported. In this case you need to adjust `-search.maxExportSeries` command-line flag:
```console
```sh
# count unique time series in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
@ -1312,7 +1308,7 @@ Optional `start` and `end` args may be added to the request in order to limit th
See [allowed formats](#timestamp-formats) for these args.
For example:
```console
```sh
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07'
```
@ -1357,7 +1353,7 @@ VictoriaMetrics accepts metrics data in JSON line format at `/api/v1/import` end
Example for importing data obtained via [/api/v1/export](#how-to-export-data-in-json-line-format):
```console
```sh
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl
@ -1367,7 +1363,7 @@ curl -X POST http://destination-victoriametrics:8428/api/v1/import -T exported_d
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import` for importing gzipped data:
```console
```sh
# Export gzipped data from <source-victoriametrics>:
curl -H 'Accept-Encoding: gzip' http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl.gz
@ -1393,7 +1389,7 @@ The specification of VictoriaMetrics' native format may yet change and is not fo
If you have a native format file obtained via [/api/v1/export/native](#how-to-export-data-in-native-format) however this is the most efficient protocol for importing data in.
```console
```sh
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin
@ -1411,7 +1407,7 @@ Note that it could be required to flush response cache after importing historica
Arbitrary CSV data can be imported via `/api/v1/import/csv`. The CSV data is imported according to the provided `format` query arg.
The `format` query arg must contain comma-separated list of parsing rules for CSV fields. Each rule consists of three parts delimited by a colon:
```
```text
<column_pos>:<type>:<context>
```
@ -1434,14 +1430,14 @@ Each request to `/api/v1/import/csv` may contain arbitrary number of CSV lines.
Example for importing CSV data via `/api/v1/import/csv`:
```console
```sh
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
```
@ -1468,7 +1464,7 @@ and in [Pushgateway format](https://github.com/prometheus/pushgateway#url) via `
For example, the following command imports a single line in Prometheus exposition format into VictoriaMetrics:
```console
```sh
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
```
@ -1476,7 +1472,7 @@ curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/promet
The following command may be used for verifying the imported data:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
```
@ -1490,7 +1486,7 @@ It should return something like the following:
The following command imports a single metric via [Pushgateway format](https://github.com/prometheus/pushgateway#url) with `{job="my_app",instance="host123"}` labels:
```console
```sh
curl -d 'metric{label="abc"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus/metrics/job/my_app/instance/host123'
```
@ -1498,7 +1494,7 @@ curl -d 'metric{label="abc"} 123' -X POST 'http://localhost:8428/api/v1/import/p
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data:
```console
```sh
# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
```
@ -1530,7 +1526,7 @@ and exports data in this format at [/api/v1/export](#how-to-export-data-in-json-
The format follows [JSON streaming concept](http://ndjson.org/), e.g. each line contains JSON object with metrics data in the following format:
```
```json
{
// metric contains metric name plus labels for a particular time series
"metric":{
@ -1582,7 +1578,7 @@ The `-relabelConfig` files can contain special placeholders in the form `%{ENV_V
Example contents for `-relabelConfig` file:
```yml
```yaml
# Add {cluster="dev"} label.
- target_label: cluster
replacement: dev
@ -1610,7 +1606,7 @@ Optional `start` and `end` args may be added to the request in order to scrape t
See [allowed formats](#timestamp-formats) for these args.
For example:
```console
```sh
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07'
```
@ -1687,7 +1683,7 @@ then it can be configured with multiple `-remoteWrite.url` command-line flags, w
instance in a particular availability zone, in order to replicate the collected data to all the VictoriaMetrics instances.
For example, the following command instructs `vmagent` to replicate data to `vm-az1` and `vm-az2` instances of VictoriaMetrics:
```console
```sh
/path/to/vmagent \
-remoteWrite.url=http://<vm-az1>:8428/api/v1/write \
-remoteWrite.url=http://<vm-az2>:8428/api/v1/write
@ -1697,7 +1693,7 @@ If you use Prometheus for collecting and writing the data to VictoriaMetrics,
then the following [`remote_write`](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) section
in Prometheus config can be used for replicating the collected data to `vm-az1` and `vm-az2` VictoriaMetrics instances:
```yml
```yaml
remote_write:
- url: http://<vm-az1>:8428/api/v1/write
- url: http://<vm-az2>:8428/api/v1/write
@ -1871,7 +1867,7 @@ command-line flag is applied to it. If series matches multiple configured retent
For example, the following config sets 3 days retention for time series with `team="juniors"` label,
30 days retention for time series with `env="dev"` or `env="staging"` label and 1 year retention for the remaining time series:
```
```sh
-retentionFilter='{team="juniors"}:3d' -retentionFilter='{env=~"dev|staging"}:30d' -retentionPeriod=1y
```
@ -1999,7 +1995,7 @@ and [the general security page at VictoriaMetrics website](https://victoriametri
If you plan to store more than 1TB of data on `ext4` partition or plan extending it to more than 16TB,
then the following options are recommended to pass to `mkfs.ext4`:
```console
```sh
mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
```
@ -2057,7 +2053,7 @@ In this case VictoriaMetrics puts query trace into `trace` field in the output J
For example, the following command:
```console
```sh
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq '.trace'
```
@ -2256,7 +2252,7 @@ For example, the following command instructs VictoriaMetrics to push metrics fro
with `user:pass` [Basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication). The `instance="foobar"` and `job="vm"` labels
are added to all the metrics before sending them to the remote storage:
```console
```sh
/path/to/victoria-metrics \
-pushmetrics.url=https://user:pass@maas.victoriametrics.com/api/v1/import/prometheus \
-pushmetrics.extraLabel='instance="foobar"' \
@ -2400,7 +2396,7 @@ VictoriaMetrics provides handlers for collecting the following [Go profiles](htt
* Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
```console
```sh
curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
```
@ -2408,7 +2404,7 @@ curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
```console
```sh
curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof
```
@ -2479,7 +2475,7 @@ If the page needs to have many images, consider using WEB-optimized image format
When adding a new doc with many images use `webp` format right away. Or use a Makefile command below to
convert already existing images at `docs` folder automatically to `web` format:
```console
```sh
make docs-images-to-webp
```
@ -2519,7 +2515,7 @@ Files included in each folder:
Pass `-help` to VictoriaMetrics in order to see the list of supported command-line flags with their description:
```
```sh
-bigMergeConcurrency int
Deprecated: this flag does nothing
-blockcache.missesBeforeCaching int


@ -96,7 +96,7 @@ Released at 2020-11-26
* FEATURE: added [Snap package for single-node VictoriaMetrics](https://snapcraft.io/victoriametrics). This simplifies installation under Ubuntu to a single command:
```console
```sh
snap install victoriametrics
```


@ -34,7 +34,7 @@ Released at 2022-12-19
* FEATURE: allow changing field names in JSON logs if VictoriaMetrics components are started with `-loggerFormat=json` command-line flags. The field names can be changed with the `-loggerJSONFields` command-line flag. For example `-loggerJSONFields=ts:timestamp,msg:message` would rename `ts` and `msg` fields on the output JSON to `timestamp` and `message` fields. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2348). Thanks to @michal-kralik for [the pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/3488).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): expose `__meta_consul_tag_<tagname>` and `__meta_consul_tagpresent_<tagname>` labels for targets discovered via [consul_sd_configs](https://docs.victoriametrics.com/sd_configs.html#consul_sd_configs). This simplifies converting [Consul service tags](https://developer.hashicorp.com/consul/docs/services/discovery/dns-overview) to target labels with a simple [relabeling rule](https://docs.victoriametrics.com/vmagent.html#relabeling):
```yml
```yaml
- action: labelmap
regex: __meta_consul_tag_(.+)
```
@ -194,7 +194,7 @@ Released at 2022-10-29
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): improve the performance for metric-level [relabeling](https://docs.victoriametrics.com/vmagent.html#relabeling), which can be applied via `metric_relabel_configs` section at [scrape_configs](https://docs.victoriametrics.com/sd_configs.html#scrape_configs), via `-remoteWrite.relabelConfig` or via `-remoteWrite.urlRelabelConfig` command-line options.
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): allow specifying full url in scrape target addresses (aka `__address__` label). This makes valid the following `-promscrape.config`:
```yml
```yaml
scrape_configs:
- job_name: abc
metrics_path: /foo/bar
@ -652,7 +652,7 @@ scrape_configs:
* BUGFIX: consistently name binaries at [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) in the form `$(APP_NAME)-$(GOOS)-$(GOARCH)-$(VERSION).tar.gz`. For example, `victoria-metrics-linux-amd64-v1.79.0.tar.gz`. Previously the `$(GOOS)` part was missing in binaries for Linux.
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): allow using `__name__` label (aka [metric name](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors)) in alerting annotations. For example:
```console
```sh
{{ $labels.__name__ }}: Too high connection number for "{{ $labels.instance }}
```
@ -885,14 +885,14 @@ Released at 2022-03-03
* FEATURE: add support for conditional relabeling via `if` filter. The `if` filter can contain arbitrary [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). For example, the following rule drops targets matching `foo{bar="baz"}` series selector:
```yml
```yaml
- action: drop
if: 'foo{bar="baz"}'
```
This rule is equivalent to less clear traditional one:
```yml
```yaml
- action: drop
source_labels: [__name__, bar]
regex: 'foo;baz'


@ -174,7 +174,7 @@ By default, images are built on top of [alpine](https://hub.docker.com/_/scratch
It is possible to build an image on top of any other base image by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds images on top of [scratch](https://hub.docker.com/_/scratch) image:
```console
```sh
ROOT_IMAGE=scratch make package
```
@ -859,7 +859,7 @@ All the cluster components provide the following handlers for [profiling](https:
Example command for collecting cpu profile from `vmstorage` (replace `0.0.0.0` with `vmstorage` hostname if needed):
```console
```sh
curl http://0.0.0.0:8482/debug/pprof/profile > cpu.pprof
```
@ -867,7 +867,7 @@ curl http://0.0.0.0:8482/debug/pprof/profile > cpu.pprof
Example command for collecting memory profile from `vminsert` (replace `0.0.0.0` with `vminsert` hostname if needed):
```console
```sh
curl http://0.0.0.0:8480/debug/pprof/heap > mem.pprof
```


@ -49,7 +49,7 @@ and start it at port 8428, while storing the ingested data at `victoria-metrics-
under the current directory:
```console
```sh
docker pull victoriametrics/victoria-metrics:latest
docker run -it --rm -v `pwd`/victoria-metrics-data:/victoria-metrics-data -p 8428:8428 victoriametrics/victoria-metrics:latest
```
@ -70,7 +70,7 @@ the [docker-compose-cluster.yml](https://github.com/VictoriaMetrics/VictoriaMetr
file.
```console
```sh
git clone https://github.com/VictoriaMetrics/VictoriaMetrics && cd VictoriaMetrics
make docker-cluster-up
```


@ -261,7 +261,7 @@ and then install it as a service according to the following guide:
1. Install VictoriaMetrics as a service by running the following from elevated PowerShell:
```console
```sh
winsw install VictoriaMetrics.xml
Get-Service VictoriaMetrics | Start-Service
```
@ -274,7 +274,7 @@ See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3781)
Add the following lines to Prometheus config file (it is usually located at `/etc/prometheus/prometheus.yml`) in order to send data to VictoriaMetrics:
```yml
```yaml
remote_write:
- url: http://<victoriametrics-addr>:8428/api/v1/write
```
@ -284,7 +284,7 @@ Substitute `<victoriametrics-addr>` with hostname or IP address of VictoriaMetri
Then apply new config via the following command:
```console
```sh
kill -HUP `pidof prometheus`
```
@ -296,7 +296,7 @@ even if remote storage is unavailable.
If you plan sending data to VictoriaMetrics from multiple Prometheus instances, then add the following lines into `global` section
of [Prometheus config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file):
```yml
```yaml
global:
external_labels:
datacenter: dc-123
@ -551,21 +551,17 @@ sending via ENV variable `DD_ADDITIONAL_ENDPOINTS` or via configuration file `ad
Run DataDog using the following ENV variable with VictoriaMetrics as additional metrics receiver:
```
```sh
DD_ADDITIONAL_ENDPOINTS='{\"http://victoriametrics:8428/datadog\": [\"apikey\"]}'
```
_Choose correct URL for VictoriaMetrics [here](https://docs.victoriametrics.com/url-examples.html#datadog)._
To configure DataDog Dual Shipping via [configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files)
add the following line:
```
```yaml
additional_endpoints:
"http://victoriametrics:8428/datadog":
- apikey
@ -640,7 +636,7 @@ Example for writing data with [InfluxDB line protocol](https://docs.influxdata.c
to local VictoriaMetrics using `curl`:
```console
```sh
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
```
@ -649,7 +645,7 @@ An arbitrary number of lines delimited by '\n' (aka newline char) can be sent in
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
```
@ -680,7 +676,7 @@ VictoriaMetrics exposes endpoint for InfluxDB v2 HTTP API at `/influx/api/v2/wri
In order to write data with InfluxDB line protocol to local VictoriaMetrics using `curl`:
```console
```sh
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/api/v2/write'
```
@ -697,7 +693,7 @@ The `/api/v1/export` endpoint should return the following response:
Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` command line flag. For instance,
the following command will enable Graphite receiver in VictoriaMetrics on TCP and UDP port `2003`:
```console
```sh
/path/to/victoria-metrics-prod -graphiteListenAddr=:2003
```
@ -706,7 +702,7 @@ to the VictoriaMetrics host in `StatsD` configs.
Example for writing data with Graphite plaintext protocol to local VictoriaMetrics using `nc`:
```console
```sh
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003
```
@ -715,7 +711,7 @@ An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
@ -755,7 +751,7 @@ The same protocol is used for [ingesting data in KairosDB](https://kairosdb.gith
Enable OpenTSDB receiver in VictoriaMetrics by setting `-opentsdbListenAddr` command line flag. For instance,
the following command enables OpenTSDB receiver in VictoriaMetrics on TCP and UDP port `4242`:
```console
```sh
/path/to/victoria-metrics-prod -opentsdbListenAddr=:4242
```
@ -764,7 +760,7 @@ Send data to the given address from OpenTSDB-compatible agents.
Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `nc`:
```console
```sh
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
```
@ -773,7 +769,7 @@ An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
@ -789,7 +785,7 @@ The `/api/v1/export` endpoint should return the following response:
Enable HTTP server for OpenTSDB `/api/put` requests by setting `-opentsdbHTTPListenAddr` command line flag. For instance,
the following command enables OpenTSDB HTTP server on port `4242`:
```console
```sh
/path/to/victoria-metrics-prod -opentsdbHTTPListenAddr=:4242
```
@ -798,14 +794,14 @@ Send data to the given address from OpenTSDB-compatible agents.
Example for writing a single data point:
```console
```sh
curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put
```
Example for writing multiple data points in a single request:
```console
```sh
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
```
@ -813,7 +809,7 @@ curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"m
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
```
@ -842,7 +838,7 @@ The `COLLECTOR_URL` must point to `/newrelic` HTTP endpoint at VictoriaMetrics,
which can be obtained [here](https://newrelic.com/signup).
For example, if VictoriaMetrics runs at `localhost:8428`, then the following command can be used for running NewRelic infrastructure agent:
```console
```sh
COLLECTOR_URL="http://localhost:8428/newrelic" NRIA_LICENSE_KEY="NEWRELIC_LICENSE_KEY" ./newrelic-infra
```
@ -883,13 +879,13 @@ For example, let's import the following NewRelic Events request to VictoriaMetri
Save this JSON into `newrelic.json` file and then use the following command in order to import it into VictoriaMetrics:
```console
```sh
curl -X POST -H 'Content-Type: application/json' --data-binary @newrelic.json http://localhost:8428/newrelic/infra/v2/metrics/events/bulk
```
Let's fetch the ingested data via [data export API](#how-to-export-data-in-json-line-format):
```console
```sh
curl http://localhost:8428/api/v1/export -d 'match={eventType="SystemSample"}'
{"metric":{"__name__":"cpuStealPercent","entityKey":"macbook-pro.local","eventType":"SystemSample"},"values":[0],"timestamps":[1697407970000]}
{"metric":{"__name__":"loadAverageFiveMinute","entityKey":"macbook-pro.local","eventType":"SystemSample"},"values":[4.099609375],"timestamps":[1697407970000]}
@ -1097,7 +1093,7 @@ The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is pos
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```console
```sh
ROOT_IMAGE=scratch make package-victoria-metrics
```
@ -1240,7 +1236,7 @@ Optional `start` and `end` args may be added to the request in order to limit th
See [allowed formats](#timestamp-formats) for these args.
For example:
```console
```sh
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07'
```
@ -1253,7 +1249,7 @@ Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in o
of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:
```console
```sh
curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
```
@ -1287,7 +1283,7 @@ Optional `start` and `end` args may be added to the request in order to limit th
See [allowed formats](#timestamp-formats) for these args.
For example:
```console
```sh
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07'
```
@ -1304,7 +1300,7 @@ for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time
On large databases you may experience problems with limit on the number of time series, which can be exported. In this case you need to adjust `-search.maxExportSeries` command-line flag:
```console
```sh
# count unique time series in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
@ -1315,7 +1311,7 @@ Optional `start` and `end` args may be added to the request in order to limit th
See [allowed formats](#timestamp-formats) for these args.
For example:
```console
```sh
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07'
```
@ -1360,7 +1356,7 @@ VictoriaMetrics accepts metrics data in JSON line format at `/api/v1/import` end
Example for importing data obtained via [/api/v1/export](#how-to-export-data-in-json-line-format):
```console
```sh
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl
@ -1370,7 +1366,7 @@ curl -X POST http://destination-victoriametrics:8428/api/v1/import -T exported_d
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import` for importing gzipped data:
```console
```sh
# Export gzipped data from <source-victoriametrics>:
curl -H 'Accept-Encoding: gzip' http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl.gz
@ -1396,7 +1392,7 @@ The specification of VictoriaMetrics' native format may yet change and is not fo
If you have a native format file obtained via [/api/v1/export/native](#how-to-export-data-in-native-format) however this is the most efficient protocol for importing data in.
```console
```sh
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin
@ -1414,7 +1410,7 @@ Note that it could be required to flush response cache after importing historica
Arbitrary CSV data can be imported via `/api/v1/import/csv`. The CSV data is imported according to the provided `format` query arg.
The `format` query arg must contain comma-separated list of parsing rules for CSV fields. Each rule consists of three parts delimited by a colon:
```
```text
<column_pos>:<type>:<context>
```
@ -1437,14 +1433,14 @@ Each request to `/api/v1/import/csv` may contain arbitrary number of CSV lines.
Example for importing CSV data via `/api/v1/import/csv`:
```console
```sh
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
```
@ -1471,7 +1467,7 @@ and in [Pushgateway format](https://github.com/prometheus/pushgateway#url) via `
For example, the following command imports a single line in Prometheus exposition format into VictoriaMetrics:
```console
```sh
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
```
@ -1479,7 +1475,7 @@ curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/promet
The following command may be used for verifying the imported data:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
```
@ -1493,7 +1489,7 @@ It should return something like the following:
The following command imports a single metric via [Pushgateway format](https://github.com/prometheus/pushgateway#url) with `{job="my_app",instance="host123"}` labels:
```console
```sh
curl -d 'metric{label="abc"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus/metrics/job/my_app/instance/host123'
```
@ -1501,7 +1497,7 @@ curl -d 'metric{label="abc"} 123' -X POST 'http://localhost:8428/api/v1/import/p
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data:
```console
```sh
# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
```
@ -1533,7 +1529,7 @@ and exports data in this format at [/api/v1/export](#how-to-export-data-in-json-
The format follows [JSON streaming concept](http://ndjson.org/), e.g. each line contains JSON object with metrics data in the following format:
```
```json
{
// metric contains metric name plus labels for a particular time series
"metric":{
@ -1585,7 +1581,7 @@ The `-relabelConfig` files can contain special placeholders in the form `%{ENV_V
Example contents for `-relabelConfig` file:
```yml
```yaml
# Add {cluster="dev"} label.
- target_label: cluster
replacement: dev
@ -1613,7 +1609,7 @@ Optional `start` and `end` args may be added to the request in order to scrape t
See [allowed formats](#timestamp-formats) for these args.
For example:
```console
```sh
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07'
```
@ -1690,7 +1686,7 @@ then it can be configured with multiple `-remoteWrite.url` command-line flags, w
instance in a particular availability zone, in order to replicate the collected data to all the VictoriaMetrics instances.
For example, the following command instructs `vmagent` to replicate data to `vm-az1` and `vm-az2` instances of VictoriaMetrics:
```console
```sh
/path/to/vmagent \
-remoteWrite.url=http://<vm-az1>:8428/api/v1/write \
-remoteWrite.url=http://<vm-az2>:8428/api/v1/write
@ -1700,7 +1696,7 @@ If you use Prometheus for collecting and writing the data to VictoriaMetrics,
then the following [`remote_write`](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) section
in Prometheus config can be used for replicating the collected data to `vm-az1` and `vm-az2` VictoriaMetrics instances:
```yml
```yaml
remote_write:
- url: http://<vm-az1>:8428/api/v1/write
- url: http://<vm-az2>:8428/api/v1/write
@ -1874,7 +1870,7 @@ command-line flag is applied to it. If series matches multiple configured retent
For example, the following config sets 3 days retention for time series with `team="juniors"` label,
30 days retention for time series with `env="dev"` or `env="staging"` label and 1 year retention for the remaining time series:
```
```sh
-retentionFilter='{team="juniors"}:3d' -retentionFilter='{env=~"dev|staging"}:30d' -retentionPeriod=1y
```
@ -2002,7 +1998,7 @@ and [the general security page at VictoriaMetrics website](https://victoriametri
If you plan to store more than 1TB of data on `ext4` partition or plan extending it to more than 16TB,
then the following options are recommended to pass to `mkfs.ext4`:
```console
```sh
mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
```
@ -2060,7 +2056,7 @@ In this case VictoriaMetrics puts query trace into `trace` field in the output J
For example, the following command:
```console
```sh
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq '.trace'
```
@ -2259,7 +2255,7 @@ For example, the following command instructs VictoriaMetrics to push metrics fro
with `user:pass` [Basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication). The `instance="foobar"` and `job="vm"` labels
are added to all the metrics before sending them to the remote storage:
```console
```sh
/path/to/victoria-metrics \
-pushmetrics.url=https://user:pass@maas.victoriametrics.com/api/v1/import/prometheus \
-pushmetrics.extraLabel='instance="foobar"' \
@ -2403,7 +2399,7 @@ VictoriaMetrics provides handlers for collecting the following [Go profiles](htt
* Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
```console
```sh
curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
```
@ -2411,7 +2407,7 @@ curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
```console
```sh
curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof
```
@ -2482,7 +2478,7 @@ If the page needs to have many images, consider using WEB-optimized image format
When adding a new doc with many images use `webp` format right away. Or use a Makefile command below to
convert already existing images at `docs` folder automatically to `web` format:
```console
```sh
make docs-images-to-webp
```
@ -2522,7 +2518,7 @@ Files included in each folder:
Pass `-help` to VictoriaMetrics in order to see the list of supported command-line flags with their description:
```
```sh
-bigMergeConcurrency int
Deprecated: this flag does nothing
-blockcache.missesBeforeCaching int


@ -26,14 +26,14 @@ git remote add enterprise <url>
### For MacOS users
Make sure you have GNU version of utilities `zip`, `tar`, `sha256sum`. To install them run the following commands:
```bash
```sh
brew install coreutils
brew install gnu-tar
export PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
```
Docker may need additional configuration changes:
```bash
```sh
docker buildx create --use --name=qemu
docker buildx inspect --bootstrap
```


@ -269,7 +269,7 @@ and then install it as a service according to the following guide:
1. Install VictoriaMetrics as a service by running the following from elevated PowerShell:
```console
```sh
winsw install VictoriaMetrics.xml
Get-Service VictoriaMetrics | Start-Service
```
@ -282,7 +282,7 @@ See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3781)
Add the following lines to Prometheus config file (it is usually located at `/etc/prometheus/prometheus.yml`) in order to send data to VictoriaMetrics:
```yml
```yaml
remote_write:
- url: http://<victoriametrics-addr>:8428/api/v1/write
```
@ -292,7 +292,7 @@ Substitute `<victoriametrics-addr>` with hostname or IP address of VictoriaMetri
Then apply new config via the following command:
```console
```sh
kill -HUP `pidof prometheus`
```
@ -304,7 +304,7 @@ even if remote storage is unavailable.
If you plan sending data to VictoriaMetrics from multiple Prometheus instances, then add the following lines into `global` section
of [Prometheus config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file):
```yml
```yaml
global:
external_labels:
datacenter: dc-123
@ -559,21 +559,17 @@ sending via ENV variable `DD_ADDITIONAL_ENDPOINTS` or via configuration file `ad
Run DataDog using the following ENV variable with VictoriaMetrics as additional metrics receiver:
```
```sh
DD_ADDITIONAL_ENDPOINTS='{\"http://victoriametrics:8428/datadog\": [\"apikey\"]}'
```
_Choose correct URL for VictoriaMetrics [here](https://docs.victoriametrics.com/url-examples.html#datadog)._
To configure DataDog Dual Shipping via [configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files)
add the following line:
```
```yaml
additional_endpoints:
"http://victoriametrics:8428/datadog":
- apikey
@ -648,7 +644,7 @@ Example for writing data with [InfluxDB line protocol](https://docs.influxdata.c
to local VictoriaMetrics using `curl`:
```console
```sh
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
```
@ -657,7 +653,7 @@ An arbitrary number of lines delimited by '\n' (aka newline char) can be sent in
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
```
@ -688,7 +684,7 @@ VictoriaMetrics exposes endpoint for InfluxDB v2 HTTP API at `/influx/api/v2/wri
In order to write data with InfluxDB line protocol to local VictoriaMetrics using `curl`:
```console
```sh
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/api/v2/write'
```
@ -705,7 +701,7 @@ The `/api/v1/export` endpoint should return the following response:
Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` command line flag. For instance,
the following command will enable Graphite receiver in VictoriaMetrics on TCP and UDP port `2003`:
```console
```sh
/path/to/victoria-metrics-prod -graphiteListenAddr=:2003
```
@ -714,7 +710,7 @@ to the VictoriaMetrics host in `StatsD` configs.
Example for writing data with Graphite plaintext protocol to local VictoriaMetrics using `nc`:
```console
```sh
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003
```
@ -723,7 +719,7 @@ An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
@ -763,7 +759,7 @@ The same protocol is used for [ingesting data in KairosDB](https://kairosdb.gith
Enable OpenTSDB receiver in VictoriaMetrics by setting `-opentsdbListenAddr` command line flag. For instance,
the following command enables OpenTSDB receiver in VictoriaMetrics on TCP and UDP port `4242`:
```console
```sh
/path/to/victoria-metrics-prod -opentsdbListenAddr=:4242
```
@ -772,7 +768,7 @@ Send data to the given address from OpenTSDB-compatible agents.
Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `nc`:
```console
```sh
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
```
@ -781,7 +777,7 @@ An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
@ -797,7 +793,7 @@ The `/api/v1/export` endpoint should return the following response:
Enable HTTP server for OpenTSDB `/api/put` requests by setting `-opentsdbHTTPListenAddr` command line flag. For instance,
the following command enables OpenTSDB HTTP server on port `4242`:
```console
```sh
/path/to/victoria-metrics-prod -opentsdbHTTPListenAddr=:4242
```
@ -806,14 +802,14 @@ Send data to the given address from OpenTSDB-compatible agents.
Example for writing a single data point:
```console
```sh
curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put
```
Example for writing multiple data points in a single request:
```console
```sh
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
```
@ -821,7 +817,7 @@ curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"m
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
```
@ -850,7 +846,7 @@ The `COLLECTOR_URL` must point to `/newrelic` HTTP endpoint at VictoriaMetrics,
which can be obtained [here](https://newrelic.com/signup).
For example, if VictoriaMetrics runs at `localhost:8428`, then the following command can be used for running NewRelic infrastructure agent:
```console
```sh
COLLECTOR_URL="http://localhost:8428/newrelic" NRIA_LICENSE_KEY="NEWRELIC_LICENSE_KEY" ./newrelic-infra
```
@ -891,13 +887,13 @@ For example, let's import the following NewRelic Events request to VictoriaMetri
Save this JSON into `newrelic.json` file and then use the following command in order to import it into VictoriaMetrics:
```console
```sh
curl -X POST -H 'Content-Type: application/json' --data-binary @newrelic.json http://localhost:8428/newrelic/infra/v2/metrics/events/bulk
```
Let's fetch the ingested data via [data export API](#how-to-export-data-in-json-line-format):
```console
```sh
curl http://localhost:8428/api/v1/export -d 'match={eventType="SystemSample"}'
{"metric":{"__name__":"cpuStealPercent","entityKey":"macbook-pro.local","eventType":"SystemSample"},"values":[0],"timestamps":[1697407970000]}
{"metric":{"__name__":"loadAverageFiveMinute","entityKey":"macbook-pro.local","eventType":"SystemSample"},"values":[4.099609375],"timestamps":[1697407970000]}
@ -1105,7 +1101,7 @@ The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is pos
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```console
```sh
ROOT_IMAGE=scratch make package-victoria-metrics
```
@ -1248,7 +1244,7 @@ Optional `start` and `end` args may be added to the request in order to limit th
See [allowed formats](#timestamp-formats) for these args.
For example:
```console
```sh
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07'
```
@ -1261,7 +1257,7 @@ Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in o
of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:
```console
```sh
curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
```
@ -1295,7 +1291,7 @@ Optional `start` and `end` args may be added to the request in order to limit th
See [allowed formats](#timestamp-formats) for these args.
For example:
```console
```sh
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07'
```
@ -1312,7 +1308,7 @@ for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time
On large databases you may experience problems with limit on the number of time series, which can be exported. In this case you need to adjust `-search.maxExportSeries` command-line flag:
```console
```sh
# count unique time series in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
@ -1323,7 +1319,7 @@ Optional `start` and `end` args may be added to the request in order to limit th
See [allowed formats](#timestamp-formats) for these args.
For example:
```console
```sh
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07'
```
@ -1368,7 +1364,7 @@ VictoriaMetrics accepts metrics data in JSON line format at `/api/v1/import` end
Example for importing data obtained via [/api/v1/export](#how-to-export-data-in-json-line-format):
```console
```sh
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl
@ -1378,7 +1374,7 @@ curl -X POST http://destination-victoriametrics:8428/api/v1/import -T exported_d
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import` for importing gzipped data:
```console
```sh
# Export gzipped data from <source-victoriametrics>:
curl -H 'Accept-Encoding: gzip' http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl.gz
@ -1404,7 +1400,7 @@ The specification of VictoriaMetrics' native format may yet change and is not fo
If you have a native format file obtained via [/api/v1/export/native](#how-to-export-data-in-native-format) however this is the most efficient protocol for importing data in.
```console
```sh
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin
@ -1422,7 +1418,7 @@ Note that it could be required to flush response cache after importing historica
Arbitrary CSV data can be imported via `/api/v1/import/csv`. The CSV data is imported according to the provided `format` query arg.
The `format` query arg must contain comma-separated list of parsing rules for CSV fields. Each rule consists of three parts delimited by a colon:
```
```text
<column_pos>:<type>:<context>
```
@ -1445,14 +1441,14 @@ Each request to `/api/v1/import/csv` may contain arbitrary number of CSV lines.
Example for importing CSV data via `/api/v1/import/csv`:
```console
```sh
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
```
@ -1479,7 +1475,7 @@ and in [Pushgateway format](https://github.com/prometheus/pushgateway#url) via `
For example, the following command imports a single line in Prometheus exposition format into VictoriaMetrics:
```console
```sh
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
```
@ -1487,7 +1483,7 @@ curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/promet
The following command may be used for verifying the imported data:
```console
```sh
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
```
@ -1501,7 +1497,7 @@ It should return something like the following:
The following command imports a single metric via [Pushgateway format](https://github.com/prometheus/pushgateway#url) with `{job="my_app",instance="host123"}` labels:
```console
```sh
curl -d 'metric{label="abc"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus/metrics/job/my_app/instance/host123'
```
@ -1509,7 +1505,7 @@ curl -d 'metric{label="abc"} 123' -X POST 'http://localhost:8428/api/v1/import/p
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data:
```console
```sh
# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
```
@ -1541,7 +1537,7 @@ and exports data in this format at [/api/v1/export](#how-to-export-data-in-json-
The format follows [JSON streaming concept](http://ndjson.org/), e.g. each line contains JSON object with metrics data in the following format:
```
```json
{
// metric contains metric name plus labels for a particular time series
"metric":{
@ -1593,7 +1589,7 @@ The `-relabelConfig` files can contain special placeholders in the form `%{ENV_V
Example contents for `-relabelConfig` file:
```yml
```yaml
# Add {cluster="dev"} label.
- target_label: cluster
replacement: dev
@ -1621,7 +1617,7 @@ Optional `start` and `end` args may be added to the request in order to scrape t
See [allowed formats](#timestamp-formats) for these args.
For example:
```console
```sh
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48' -d 'end=2022-06-06T19:29:07'
```
@ -1698,7 +1694,7 @@ then it can be configured with multiple `-remoteWrite.url` command-line flags, w
instance in a particular availability zone, in order to replicate the collected data to all the VictoriaMetrics instances.
For example, the following command instructs `vmagent` to replicate data to `vm-az1` and `vm-az2` instances of VictoriaMetrics:
```console
```sh
/path/to/vmagent \
-remoteWrite.url=http://<vm-az1>:8428/api/v1/write \
-remoteWrite.url=http://<vm-az2>:8428/api/v1/write
@ -1708,7 +1704,7 @@ If you use Prometheus for collecting and writing the data to VictoriaMetrics,
then the following [`remote_write`](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) section
in Prometheus config can be used for replicating the collected data to `vm-az1` and `vm-az2` VictoriaMetrics instances:
```yml
```yaml
remote_write:
- url: http://<vm-az1>:8428/api/v1/write
- url: http://<vm-az2>:8428/api/v1/write
@ -1882,7 +1878,7 @@ command-line flag is applied to it. If series matches multiple configured retent
For example, the following config sets 3 days retention for time series with `team="juniors"` label,
30 days retention for time series with `env="dev"` or `env="staging"` label and 1 year retention for the remaining time series:
```
```sh
-retentionFilter='{team="juniors"}:3d' -retentionFilter='{env=~"dev|staging"}:30d' -retentionPeriod=1y
```
@ -2010,7 +2006,7 @@ and [the general security page at VictoriaMetrics website](https://victoriametri
If you plan to store more than 1TB of data on an `ext4` partition or plan to extend it to more than 16TB,
then it is recommended to pass the following options to `mkfs.ext4`:
```console
```sh
mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
```
@ -2068,7 +2064,7 @@ In this case VictoriaMetrics puts query trace into `trace` field in the output J
For example, the following command:
```console
```sh
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq '.trace'
```
@ -2267,7 +2263,7 @@ For example, the following command instructs VictoriaMetrics to push metrics fro
with `user:pass` [Basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication). The `instance="foobar"` and `job="vm"` labels
are added to all the metrics before sending them to the remote storage:
```console
```sh
/path/to/victoria-metrics \
-pushmetrics.url=https://user:pass@maas.victoriametrics.com/api/v1/import/prometheus \
-pushmetrics.extraLabel='instance="foobar"' \
@ -2411,7 +2407,7 @@ VictoriaMetrics provides handlers for collecting the following [Go profiles](htt
* Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
```console
```sh
curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
```
@ -2419,7 +2415,7 @@ curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
```console
```sh
curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof
```
@ -2490,7 +2486,7 @@ If the page needs to have many images, consider using WEB-optimized image format
When adding a new doc with many images, use the `webp` format right away, or use the Makefile command below to
automatically convert existing images in the `docs` folder to `webp` format:
```console
```sh
make docs-images-to-webp
```
@ -2530,7 +2526,7 @@ Files included in each folder:
Pass `-help` to VictoriaMetrics in order to see the list of supported command-line flags with their description:
```
```sh
-bigMergeConcurrency int
Deprecated: this flag does nothing
-blockcache.missesBeforeCaching int

View file

@ -131,7 +131,7 @@ If you see unexpected or unreliable query results from VictoriaMetrics, then try
of raw unprocessed samples for this query via [/api/v1/export](https://docs.victoriametrics.com/#how-to-export-data-in-json-line-format)
on the given `[start..end]` time range and check whether they are expected:
```console
```sh
single-node: curl http://victoriametrics:8428/api/v1/export -d 'match[]=http_requests_total' -d 'start=...' -d 'end=...'
cluster: curl http://<vmselect>:8481/select/<tenantID>/prometheus/api/v1/export -d 'match[]=http_requests_total' -d 'start=...' -d 'end=...'

View file

@ -33,7 +33,7 @@ Just download archive for the needed Operating system and architecture, unpack i
For example, the following commands download the VictoriaLogs archive for Linux/amd64, unpack it and run it:
```bash
```sh
curl -L -O https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v0.4.2-victorialogs/victoria-logs-linux-amd64-v0.4.2-victorialogs.tar.gz
tar xzf victoria-logs-linux-amd64-v0.4.2-victorialogs.tar.gz
./victoria-logs-prod
@ -57,7 +57,7 @@ See also:
Running VictoriaLogs in a Docker container is the easiest way to start using it.
Here is the command:
```bash
```sh
docker run --rm -it -p 9428:9428 -v ./victoria-logs-data:/victoria-logs-data \
docker.io/victoriametrics/victoria-logs:v0.4.2-victorialogs
```
@ -79,20 +79,20 @@ Follow the following steps in order to build VictoriaLogs from source code:
- Check out the VictoriaLogs source code. It is located in the VictoriaMetrics repository:
```bash
```sh
git clone https://github.com/VictoriaMetrics/VictoriaMetrics
cd VictoriaMetrics
```
- Build VictoriaLogs. The build command requires [Go 1.20](https://golang.org/doc/install).
```bash
```sh
make victoria-logs
```
- Run the built binary:
```bash
```sh
bin/victoria-logs
```
@ -117,7 +117,7 @@ without additional configuration.
Pass `-help` to VictoriaLogs in order to see the list of supported command-line flags with their description and default values:
```bash
```sh
/path/to/victoria-logs -help
```

View file

@ -72,7 +72,7 @@ for the supported duration formats.
For example, the following command starts VictoriaLogs with the retention of 8 weeks:
```bash
```sh
/path/to/victoria-logs -retentionPeriod=8w
```
@ -96,7 +96,7 @@ for the supported duration formats.
For example, the following command starts VictoriaLogs, which accepts logs with timestamps up to a year in the future:
```bash
```sh
/path/to/victoria-logs -futureRetention=1y
```
@ -105,7 +105,7 @@ For example, the following command starts VictoriaLogs, which accepts logs with
VictoriaLogs stores all its data in a single directory - `victoria-logs-data`. The path to the directory can be changed via `-storageDataPath` command-line flag.
For example, the following command starts VictoriaLogs, which stores the data at `/var/lib/victoria-logs`:
```bash
```sh
/path/to/victoria-logs -storageDataPath=/var/lib/victoria-logs
```

View file

@ -15,7 +15,7 @@ aliases:
Specify the [`output.elasticsearch`](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) section in the `filebeat.yml`
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):
```yml
```yaml
output.elasticsearch:
hosts: ["http://localhost:9428/insert/elasticsearch/"]
parameters:
@ -33,7 +33,7 @@ and uses the correct [stream fields](https://docs.victoriametrics.com/VictoriaLo
This can be done by specifying `debug` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters)
and then inspecting the VictoriaLogs logs:
```yml
```yaml
output.elasticsearch:
hosts: ["http://localhost:9428/insert/elasticsearch/"]
parameters:
@ -47,7 +47,7 @@ If some [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.h
during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters).
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
```yml
```yaml
output.elasticsearch:
hosts: ["http://localhost:9428/insert/elasticsearch/"]
parameters:
@ -60,7 +60,7 @@ output.elasticsearch:
If Filebeat ingests logs into VictoriaLogs at a high rate, it may be necessary to tune the `worker` and `bulk_max_size` options.
For example, the following config is optimized for a higher than usual ingestion rate:
```yml
```yaml
output.elasticsearch:
hosts: ["http://localhost:9428/insert/elasticsearch/"]
parameters:
@ -74,7 +74,7 @@ output.elasticsearch:
If Filebeat sends logs to VictoriaLogs in another datacenter, then it may be useful to enable data compression via the `compression_level` option.
This usually reduces network bandwidth usage and costs by up to 5 times:
```yml
```yaml
output.elasticsearch:
hosts: ["http://localhost:9428/insert/elasticsearch/"]
parameters:
@ -88,7 +88,7 @@ By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [te
If you need to store logs in another tenant, then specify the needed tenant via `headers` in the `output.elasticsearch` section.
For example, the following `filebeat.yml` config instructs Filebeat to store the data in the `(AccountID=12, ProjectID=34)` tenant:
```yml
```yaml
output.elasticsearch:
hosts: ["http://localhost:9428/insert/elasticsearch/"]
headers:
@ -103,7 +103,7 @@ output.elasticsearch:
Filebeat checks the Elasticsearch version on startup and refuses to start sending logs if the version is not compatible.
To bypass this check, add `allow_older_versions: true` to the `output.elasticsearch` section:
```yml
```yaml
output.elasticsearch:
hosts: [ "http://localhost:9428/insert/elasticsearch/" ]
parameters:

View file

@ -48,7 +48,7 @@ at `http://localhost:9428/insert/elasticsearch/_bulk` endpoint.
The following command pushes a single log line to VictoriaLogs:
```bash
```sh
echo '{"create":{}}
{"_msg":"cannot open file","_time":"0","host.name":"host123"}
' | curl -X POST -H 'Content-Type: application/json' --data-binary @- http://localhost:9428/insert/elasticsearch/_bulk
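# Multiple log lines can be pushed in a single request by repeating the
# {"create":{}} action line followed by a log document for each line.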
@ -69,13 +69,13 @@ The API accepts various http parameters, which can change the data ingestion beh
The following command verifies that the data has been successfully ingested to VictoriaLogs by [querying](https://docs.victoriametrics.com/VictoriaLogs/querying/) it:
```bash
```sh
curl http://localhost:9428/select/logsql/query -d 'query=host.name:host123'
```
The command should return the following response:
```bash
```sh
{"_msg":"cannot open file","_stream":"{}","_time":"2023-06-21T04:24:24Z","host.name":"host123"}
```
@ -98,7 +98,7 @@ VictoriaLogs accepts JSON line stream aka [ndjson](http://ndjson.org/) at `http:
The following command pushes multiple log lines to VictoriaLogs:
```bash
```sh
echo '{ "log": { "level": "info", "message": "hello world" }, "date": "0", "stream": "stream1" }
{ "log": { "level": "error", "message": "oh no!" }, "date": "0", "stream": "stream1" }
{ "log": { "level": "info", "message": "hello world" }, "date": "0", "stream": "stream2" }
@ -121,13 +121,13 @@ The API accepts various http parameters, which can change the data ingestion beh
The following command verifies that the data has been successfully ingested into VictoriaLogs by [querying](https://docs.victoriametrics.com/VictoriaLogs/querying/) it:
```bash
```sh
curl http://localhost:9428/select/logsql/query -d 'query=log.level:*'
```
The command should return the following response:
```bash
```sh
{"_msg":"hello world","_stream":"{stream=\"stream2\"}","_time":"2023-06-20T13:35:11.56789Z","log.level":"info"}
{"_msg":"hello world","_stream":"{stream=\"stream1\"}","_time":"2023-06-20T15:31:23Z","log.level":"info"}
{"_msg":"oh no!","_stream":"{stream=\"stream1\"}","_time":"2023-06-20T15:32:10.567Z","log.level":"error"}
@ -152,7 +152,7 @@ VictoriaLogs accepts logs in [Loki JSON API](https://grafana.com/docs/loki/lates
The following command pushes a single log line to Loki JSON API at VictoriaLogs:
```bash
```sh
curl -H "Content-Type: application/json" -XPOST "http://localhost:9428/insert/loki/api/v1/push?_stream_fields=instance,job" --data-raw \
'{"streams": [{ "stream": { "instance": "host123", "job": "app42" }, "values": [ [ "0", "foo fizzbuzz bar" ] ] }]}'
```
@ -164,13 +164,13 @@ There is no need in specifying `_msg_field` and `_time_field` query args, since
The following command verifies that the data has been successfully ingested into VictoriaLogs by [querying](https://docs.victoriametrics.com/VictoriaLogs/querying/) it:
```bash
```sh
curl http://localhost:9428/select/logsql/query -d 'query=fizzbuzz'
```
The command should return the following response:
```bash
```sh
{"_msg":"foo fizzbuzz bar","_stream":"{instance=\"host123\",job=\"app42\"}","_time":"2023-07-20T23:01:19.288676497Z"}
```
@ -223,7 +223,7 @@ These headers may contain the needed tenant to ingest data to. See [multitenancy
The following command can be used for verifying whether the data is successfully ingested into VictoriaLogs:
```bash
```sh
curl http://localhost:9428/select/logsql/query -d 'query=*' | head
```

View file

@ -27,7 +27,7 @@ VictoriaLogs can be queried at the `/select/logsql/query` HTTP endpoint.
The [LogsQL](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.html) query must be passed via `query` argument.
For example, the following query returns all the log entries with the `error` word:
```bash
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error'
```
@ -69,7 +69,7 @@ By default the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametric
If you need to query another tenant, then specify the needed tenant via HTTP request headers. For example, the following query searches
for log messages in the `(AccountID=12, ProjectID=34)` tenant:
```bash
```sh
curl http://localhost:9428/select/logsql/query -H 'AccountID: 12' -H 'ProjectID: 34' -d 'query=error'
```
@ -119,7 +119,7 @@ without the risk of high resource usage (CPU, RAM, disk IO) at VictoriaLogs serv
For example, the following query can return a very big number of matching log entries (e.g. billions) if VictoriaLogs contains
many log messages with the `error` [word](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.html#word):
```bash
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error'
```
@ -128,7 +128,7 @@ VictoriaLogs notices that the response stream is closed, so it cancels the query
Then just use the `head` command to investigate the returned log messages and narrow down the query:
```bash
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error' | head -10
```
@ -137,7 +137,7 @@ This automatically cancels the query at VictoriaLogs side, so it stops consuming
Sometimes it may be more convenient to use the `less` command instead of `head` when investigating the returned response:
```bash
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error' | less
```
@ -152,7 +152,7 @@ Then the query can be narrowed down to `error AND "cannot open file"`
(see [these docs](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.html#logical-filter) about `AND` operator).
Run the updated command to continue the investigation:
```bash
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error AND "cannot open file"' | head
```
@ -170,7 +170,7 @@ with the `error` [word](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.htm
received from [streams](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields) with `app="nginx"` field
during the last 5 minutes:
```bash
```sh
curl http://localhost:9428/select/logsql/query -d 'query=_stream:{app="nginx"} AND _time:5m AND error' | wc -l
```
@ -180,7 +180,7 @@ and [these docs](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.html#logic
The following example shows how to sort query results by the [`_time` field](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#time-field):
```bash
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error' | jq -r '._time + " " + ._msg' | sort | less
```
@ -196,7 +196,7 @@ on how to narrow down query results.
The following example calculates stats on the number of log messages received during the last 5 minutes
grouped by `log.level` [field](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model):
```bash
```sh
curl http://localhost:9428/select/logsql/query -d 'query=_time:5m log.level:*' | jq -r '."log.level"' | sort | uniq -c
```
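To rank the levels by count, the same pipeline can be extended with a reverse numeric sort (a small variation on the command above):

```sh
curl http://localhost:9428/select/logsql/query -d 'query=_time:5m log.level:*' | jq -r '."log.level"' | sort | uniq -c | sort -rn
```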

View file

@ -16,7 +16,7 @@ aliases:
Please find the changelog for VictoriaMetrics Anomaly Detection below.
The following `tip` changes can be tested by building from the `latest` tag:
```bash
```sh
docker pull us-docker.pkg.dev/victoriametrics-test/public/vmanomaly-trial:latest
```

View file

@ -300,7 +300,7 @@ For additional licensing options, please refer to the [VictoriaMetrics Anomaly D
Let's create an `alertmanager.yml` file for the `alertmanager` configuration.
```yml
```yaml
route:
receiver: blackhole

View file

@ -100,7 +100,7 @@ The `-eula` command-line flag is deprecated starting from `v1.94.0` release in f
For example, the following command runs VictoriaMetrics Enterprise binary with the Enterprise license
obtained at [this page](https://victoriametrics.com/products/enterprise/trial/):
```console
```sh
wget https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v1.96.0/victoria-metrics-linux-amd64-v1.96.0-enterprise.tar.gz
tar -xzf victoria-metrics-linux-amd64-v1.96.0-enterprise.tar.gz
./victoria-metrics-prod -license=BASE64_ENCODED_LICENSE_KEY
@ -108,7 +108,7 @@ tar -xzf victoria-metrics-linux-amd64-v1.96.0-enterprise.tar.gz
Alternatively, the VictoriaMetrics Enterprise license can be stored in a file and then referenced via the `-licenseFile` command-line flag:
```console
```sh
./victoria-metrics-prod -licenseFile=/path/to/vm-license
```
@ -126,13 +126,13 @@ Enterprise license key can be obtained at [this page](https://victoriametrics.co
For example, the following command runs VictoriaMetrics Enterprise Docker image with the specified license key:
```console
```sh
docker run --name=victoria-metrics victoriametrics/victoria-metrics:v1.96.0-enterprise -license=BASE64_ENCODED_LICENSE_KEY
```
Alternatively, the license key can be stored in a file and then referenced via the `-licenseFile` command-line flag:
```console
```sh
docker run --name=victoria-metrics -v /vm-license:/vm-license victoriametrics/victoria-metrics:v1.96.0-enterprise -licenseFile=/vm-license
```
@ -206,7 +206,7 @@ data:
```
Or create secret via `kubectl`:
```console
```sh
kubectl create secret generic vm-license --from-literal=license={BASE64_ENCODED_LICENSE_KEY}
```
@ -265,7 +265,7 @@ data:
```
Or create secret via `kubectl`:
```console
```sh
kubectl create secret generic vm-license --from-literal=license={BASE64_ENCODED_LICENSE_KEY}
```

View file

@ -31,14 +31,14 @@ See how to work with a [VictoriaMetrics Helm repository in previous guide](https
## 2. Install the VM Operator from the Helm chart
```console
```sh
helm install vmoperator vm/victoria-metrics-operator
```
The expected output is:
```console
```sh
NAME: vmoperator
LAST DEPLOYED: Thu Sep 30 17:30:30 2021
NAMESPACE: default
@ -56,12 +56,12 @@ See "Getting started guide for VM Operator" on https://docs.victoriametrics.com/
Run the following command to check that VM Operator is up and running:
```console
```sh
kubectl --namespace default get pods -l "app.kubernetes.io/instance=vmoperator"
```
The expected output:
```console
```sh
NAME READY STATUS RESTARTS AGE
vmoperator-victoria-metrics-operator-67cff44cd6-s47n6 1/1 Running 0 77s
```
@ -74,7 +74,7 @@ Run the following command to install [VictoriaMetrics Cluster](https://docs.vict
<p id="example-cluster-config"></p>
```console
```sh
cat << EOF | kubectl apply -f -
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMCluster
@ -94,7 +94,7 @@ EOF
The expected output:
```console
```sh
vmcluster.operator.victoriametrics.com/example-vmcluster-persistent created
```
@ -143,7 +143,7 @@ kubectl get svc | grep vminsert
The expected output:
```console
```sh
vminsert-example-vmcluster-persistent ClusterIP 10.107.47.136 <none> 8480/TCP 5m58s
```
@ -226,13 +226,13 @@ See [how to install and connect Grafana to VictoriaMetrics](https://docs.victori
To get the new service name, please run the following command:
```console
```sh
kubectl get svc | grep vmselect
```
The expected output:
```console
```sh
vmselect-example-vmcluster-persistent ClusterIP None <none> 8481/TCP 7m
```

View file

@ -107,7 +107,7 @@ After restarting Grafana with the new config you should be able to log in using
Starting vmgateway with authentication enabled is as simple as adding the `-enable.auth=true` flag.
To enable multi-tenant access, you must also specify the `-clusterMode=true` flag.
```console
```sh
./bin/vmgateway -eula \
-enable.auth=true \
-clusterMode=true \
@ -132,7 +132,7 @@ For example, if the JWT token contains the following `vm_access` claim:
Then vmgateway will proxy the request to an endpoint with the following path:
```console
```sh
http://localhost:8480/select/0:0/
```
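For reference, the `vm_access` claim that produces this path carries the tenant IDs; its shape is roughly the following (a sketch, worth verifying against the vmgateway docs):

```sh
# JWT payload fragment: account_id and project_id map to the
# /select/<account_id>:<project_id>/ part of the proxied path
# {"vm_access": {"tenant_id": {"account_id": 0, "project_id": 0}}}
```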
@ -173,7 +173,7 @@ It is also possible to enable [JWT token signature verification](https://docs.vi
vmgateway.
To do this using the OpenID Connect discovery endpoint, specify the `-auth.oidcDiscoveryEndpoints` flag. For example:
```console
```sh
./bin/vmgateway -eula \
-enable.auth=true \
-clusterMode=true \
@ -184,7 +184,7 @@ To do this by using OpenID Connect discovery endpoint you need to specify the `-
Now vmgateway will print the following message on startup:
```console
```sh
2023-03-13T14:45:31.552Z info VictoriaMetrics/app/vmgateway/main.go:154 using 2 keys for JWT token signature verification
```

View file

@ -151,7 +151,7 @@ For us it's important to remember the url for the datasource (copy lines from
Verify that [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html) pods are up and running by executing the following command:
```console
```sh
kubectl get pods
```

View file

@ -84,7 +84,7 @@ supports [InfluxDB line protocol](https://docs.victoriametrics.com/#how-to-send-
for data ingestion. For example, to write a measurement to VictoriaMetrics we need to send an HTTP POST request with
payload in a line protocol format:
```console
```sh
curl -d 'census,location=klamath,scientist=anderson bees=23 1566079200000' -X POST 'http://<victoriametric-addr>:8428/write'
```
@ -95,7 +95,7 @@ Please note, an arbitrary number of lines delimited by `\n` (aka newline char) c
To get the written data back, let's export all series matching the `location="klamath"` filter:
```console
```sh
curl -G 'http://<victoriametric-addr>:8428/api/v1/export' -d 'match={location="klamath"}'
```

View file

@ -33,7 +33,7 @@ Using this schema, you can achieve:
* You need to pass two `-remoteWrite.url` command-line options to `vmagent`:
```console
```sh
/path/to/vmagent-prod \
-remoteWrite.url=<ground-control-1-remote-write> \
-remoteWrite.url=<ground-control-2-remote-write>

View file

@ -402,7 +402,7 @@ for [InfluxDB line protocol](https://docs.victoriametrics.com/Single-server-Vict
Creating custom clients or instrumenting the application for metrics writing is as easy as sending a POST request:
```console
```sh
curl -d '{"metric":{"__name__":"foo","job":"node_exporter"},"values":[0,1,2],"timestamps":[1549891472010,1549891487724,1549891503438]}' -X POST 'http://localhost:8428/api/v1/import'
```
@ -542,7 +542,7 @@ ranging from 1m to 3m. If we plot this data sample on the graph, it will have th
To get the value of the `foo_bar` series at some specific moment of time, for example `2022-05-10 10:03:00`, in
VictoriaMetrics we need to issue an **instant query**:
```console
```sh
curl "http://<victoria-metrics-addr>/api/v1/query?query=foo_bar&time=2022-05-10T10:03:00.000Z"
```
@ -607,7 +607,7 @@ Params:
For example, to get the values of `foo_bar` during the time range from `2022-05-10 09:59:00` to `2022-05-10 10:17:00`,
we need to issue a range query:
```console
```sh
curl "http://<victoria-metrics-addr>/api/v1/query_range?query=foo_bar&step=1m&start=2022-05-10T09:59:00.000Z&end=2022-05-10T10:17:00.000Z"
```

View file

@ -70,7 +70,7 @@ For instructions on how to create tokens, please refer to this section of the [d
##### Binary
```console
```sh
export TOKEN=81e8226e-****-****-****-************
export MANAGED_VM_URL=https://gw-c15-1c.cloud.victoriametrics.com
export ALERTMANAGER_URL=http://localhost:9093
@ -79,7 +79,7 @@ export ALERTMANAGER_URL=http://localhost:9093
##### Docker
```console
```sh
export TOKEN=81e8226e-****-****-****-************
export MANAGED_VM_URL=https://gw-c15-1c.cloud.victoriametrics.com
export ALERTMANAGER_URL=http://alertmanager:9093
@ -88,7 +88,7 @@ docker run -it -p 8080:8080 -v $(pwd)/alerts.yml:/etc/alerts/alerts.yml victoria
##### Helm Chart
```console
```sh
export TOKEN=81e8226e-****-****-****-************
export MANAGED_VM_URL=https://gw-c15-1c.cloud.victoriametrics.com
export ALERTMANAGER=http://alertmanager:9093
@ -128,7 +128,7 @@ EOF
##### VMalert CRD for vmoperator
```console
```sh
export TOKEN=81e8226e-****-****-****-************
export MANAGED_VM_URL=https://gw-c15-1c.cloud.victoriametrics.com
export ALERTMANAGER=http://alertmanager:9093
@ -173,7 +173,7 @@ EOF
You can ingest a metric that will raise an alert:
```console
```sh
export TOKEN=81e8226e-****-****-****-*************
export MANAGED_VM_URL=https://gw-c15-1c.cloud.victoriametrics.com/
curl -H "Authorization: Bearer $TOKEN" -X POST "$MANAGED_VM_URLapi/v1/import/prometheus" -d 'up{job="vmalert-test", instance="localhost"} 0'
@ -183,7 +183,7 @@ curl -H "Authorization: Bearer $TOKEN" -X POST "$MANAGED_VM_URLapi/v1/import/pro
##### Binary
```console
```sh
export TOKEN=76bc5470-****-****-****-************
export MANAGED_VM_READ_URL=https://gw-c15-1a.cloud.victoriametrics.com/select/0/prometheus/
export MANAGED_VM_WRITE_URL=https://gw-c15-1a.cloud.victoriametrics.com/insert/0/prometheus/
@ -193,7 +193,7 @@ export ALERTMANAGER_URL=http://localhost:9093
##### Docker
```console
```sh
export TOKEN=76bc5470-****-****-****-************
export MANAGED_VM_READ_URL=https://gw-c15-1a.cloud.victoriametrics.com/select/0/prometheus/
export MANAGED_VM_WRITE_URL=https://gw-c15-1a.cloud.victoriametrics.com/insert/0/prometheus/
@ -203,7 +203,7 @@ docker run -it -p 8080:8080 -v $(pwd)/alerts.yml:/etc/alerts/alerts.yml victoria
##### Helm Chart
```console
```sh
export TOKEN=76bc5470-****-****-****-************
export MANAGED_VM_READ_URL=https://gw-c15-1a.cloud.victoriametrics.com/select/0/prometheus/
export MANAGED_VM_WRITE_URL=https://gw-c15-1a.cloud.victoriametrics.com/insert/0/prometheus/
@ -244,7 +244,7 @@ EOF
##### VMalert CRD for vmoperator
```console
```sh
export TOKEN=76bc5470-****-****-****-************
export MANAGED_VM_READ_URL=https://gw-c15-1a.cloud.victoriametrics.com/select/0/prometheus/
export MANAGED_VM_WRITE_URL=https://gw-c15-1a.cloud.victoriametrics.com/insert/0/prometheus/
@ -290,7 +290,7 @@ EOF
You can ingest a metric that will raise an alert:
```console
```sh
export TOKEN=76bc5470-****-****-****-************
export MANAGED_VM_WRITE_URL=https://gw-c15-1a.cloud.victoriametrics.com/insert/0/prometheus/
curl -H "Authorization: Bearer $TOKEN" -X POST "$MANAGED_VM_WRITE_URLapi/v1/import/prometheus" -d 'up{job="vmalert-test", instance="localhost"} 0'

View file

@ -20,7 +20,7 @@ It defines default configuration options, like images for components, timeouts,
In addition, the operator has a special startup mode that outputs all variables, their types and default values.
For instance, this mode shows which versions of VM components are used by default:
```console
```sh
./operator --printDefaults
# This application is configured via the environment. The following environment variables can be used:
@ -38,7 +38,7 @@ For instance, with this mode you can know versions of VM components, which are u
You can choose the output format for variables with the `--printFormat` flag; possible values are `json`, `yaml`, `list` and `table` (default):
```console
```sh
./operator --printDefaults --printFormat=json
# {
@ -239,7 +239,7 @@ This should reduce errors and simplify debugging.
Validation hooks on the operator side must be enabled with the following flags:
```console
```sh
./operator
--webhook.enable
# optional configuration for certDir and tls names.
@ -251,7 +251,7 @@ Validation hooks at operator side must be enabled with flags:
You have to mount the correct certificates at the given directory.
This can be simplified with cert-manager and the kustomize command:
```console
```sh
kustomize build config/deployments/webhook/
```

View file

@ -49,7 +49,7 @@ i.e. creates resources of VictoriaMetrics similar to Prometheus resources in the
You can control this behaviour by setting an env variable for the operator:
```console
```sh
# disable conversion for each object
VM_ENABLEDPROMETHEUSCONVERTER_PODMONITOR=false
VM_ENABLEDPROMETHEUSCONVERTER_SERVICESCRAPE=false
@ -80,7 +80,7 @@ For more information about the operator's workflow, see [this doc](./README.md).
By default, the operator doesn't delete converted objects when the original ones are deleted. To change this behaviour,
configure adding `OwnerReferences` to converted objects with the following [operator parameter](./setup.md#settings):
```console
```sh
VM_ENABLEDPROMETHEUSCONVERTEROWNERREFERENCES=true
```
@ -177,7 +177,7 @@ and [VMNodeScrape](./resources/vmnodescrape.md) because these objects are not cr
You can filter labels for syncing
with [operator parameter](./setup.md#settings) `VM_FILTERPROMETHEUSCONVERTERLABELPREFIXES`:
```console
```sh
# it excludes all labels that start with "helm.sh" or "argoproj.io" from synchronization
VM_FILTERPROMETHEUSCONVERTERLABELPREFIXES=helm.sh,argoproj.io
```
@ -185,7 +185,7 @@ VM_FILTERPROMETHEUSCONVERTERLABELPREFIXES=helm.sh,argoproj.io
In the same way, annotations with specified prefixes can be excluded from synchronization
with [operator parameter](./setup.md#settings) `VM_FILTERPROMETHEUSCONVERTERANNOTATIONPREFIXES`:
```console
```sh
# it excludes all annotations that start with "helm.sh" or "argoproj.io" from synchronization
VM_FILTERPROMETHEUSCONVERTERANNOTATIONPREFIXES=helm.sh,argoproj.io
```
@ -197,7 +197,7 @@ with [operator parameter](./setup.md#settings) `VM_PROMETHEUSCONVERTERADDARGOCDI
It helps to properly use the converter with ArgoCD and should help prevent out-of-sync issues with argo-cd based deployments:
```console
```sh
# adds compare-options and sync-options for prometheus objects converted by operator
VM_PROMETHEUSCONVERTERADDARGOCDIGNOREANNOTATIONS=true
```

View file

@ -34,7 +34,7 @@ Obtain release from releases page:
We suggest using the latest release.
```console
```sh
# Get latest release version from https://github.com/VictoriaMetrics/operator/releases/latest
export VM_VERSION=`basename $(curl -fs -o/dev/null -w %{redirect_url} https://github.com/VictoriaMetrics/operator/releases/latest)`
wget https://github.com/VictoriaMetrics/operator/releases/download/$VM_VERSION/bundle_crd.zip
@ -43,13 +43,13 @@ unzip bundle_crd.zip
The operator uses the `monitoring-system` namespace, but you can install it to a specific namespace with the following command:
```console
```sh
sed -i "s/namespace: monitoring-system/namespace: YOUR_NAMESPACE/g" release/operator/*
```
First of all, you have to create [custom resource definitions](https://github.com/VictoriaMetrics/operator):
```console
```sh
kubectl apply -f release/crds
```
@ -58,13 +58,13 @@ Then you need RBAC for operator, relevant configuration for the release can be f
Change the operator configuration at `release/operator/manager.yaml` (possible settings: [operator-settings](/operator/vars.html))
and apply it:
```console
```sh
kubectl apply -f release/operator/
```
Check the status of the operator:
```console
```sh
kubectl get pods -n monitoring-system
#NAME READY STATUS RESTARTS AGE
@ -75,7 +75,7 @@ kubectl get pods -n monitoring-system
You can install the operator using [Kustomize](https://kustomize.io/) by pointing to the remote kustomization file.
```console
```sh
# Get latest release version from https://github.com/VictoriaMetrics/operator/releases/latest
export VM_VERSION=`basename $(curl -fs -o/dev/null -w %{redirect_url} https://github.com/VictoriaMetrics/operator/releases/latest)`
@ -95,19 +95,19 @@ You can change [operator configuration](#configuring), or use your custom namesp
Build template
```console
```sh
kustomize build . -o monitoring.yaml
```
Apply manifests
```console
```sh
kubectl apply -f monitoring.yaml
```
Check the status of the operator:
```console
```sh
kubectl get pods -n monitoring-system
#NAME READY STATUS RESTARTS AGE

View file

@ -215,7 +215,7 @@ For example, if an advertising server generates `hits{some="labels"} N` and `clic
at irregular intervals, then the following [stream aggregation config](#stream-aggregation-config)
can be used for summing these metrics every minute:
```yml
```yaml
- match: '{__name__=~"hits|clicks"}'
interval: 1m
outputs: [sum_samples]
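# The aggregated results are written back as new series whose names follow
# the <name>:<interval>_<output> pattern (assumed from the stream aggregation
# docs), e.g. hits:1m_sum_samples and clicks:1m_sum_samples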
@ -694,7 +694,7 @@ support the following approaches for hot reloading stream aggregation configs fr
* By sending `SIGHUP` signal to `vmagent` or `victoria-metrics` process:
```bash
```sh
kill -SIGHUP `pidof vmagent`
```

View file

@ -18,7 +18,7 @@ Note that handler accepts any HTTP method, so sending a `GET` request to `/api/v
Single-node VictoriaMetrics:
```console
```sh
curl -v http://localhost:8428/api/v1/admin/tsdb/delete_series -d 'match[]=vm_http_request_errors_total'
```
@ -26,7 +26,7 @@ curl -v http://localhost:8428/api/v1/admin/tsdb/delete_series -d 'match[]=vm_htt
The expected output should return [HTTP Status 204](https://datatracker.ietf.org/doc/html/rfc7231#page-53) and will look like:
```console
```sh
* Trying 127.0.0.1:8428...
* Connected to 127.0.0.1 (127.0.0.1) port 8428 (#0)
> GET /api/v1/admin/tsdb/delete_series?match[]=vm_http_request_errors_total HTTP/1.1
@ -45,7 +45,7 @@ The expected output should return [HTTP Status 204](https://datatracker.ietf.org
Cluster version of VictoriaMetrics:
```console
```sh
curl -v http://<vmselect>:8481/delete/0/prometheus/api/v1/admin/tsdb/delete_series -d 'match[]=vm_http_request_errors_total'
```
@ -53,7 +53,7 @@ curl -v http://<vmselect>:8481/delete/0/prometheus/api/v1/admin/tsdb/delete_seri
The expected output should return [HTTP Status 204](https://datatracker.ietf.org/doc/html/rfc7231#page-53) and will look like:
```console
```sh
* Trying 127.0.0.1:8481...
* Connected to 127.0.0.1 (127.0.0.1) port 8481 (#0)
> GET /delete/0/prometheus/api/v1/admin/tsdb/delete_series?match[]=vm_http_request_errors_total HTTP/1.1
@ -81,14 +81,14 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl http://localhost:8428/api/v1/export -d 'match[]=vm_http_request_errors_total' > filename.json
```
Cluster version of VictoriaMetrics:
```console
```sh
curl http://<vmselect>:8481/select/0/prometheus/api/v1/export -d 'match[]=vm_http_request_errors_total' > filename.json
```
@ -106,14 +106,14 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl http://localhost:8428/api/v1/export/csv -d 'format=__name__,__value__,__timestamp__:unix_s' -d 'match[]=vm_http_request_errors_total' > filename.csv
```
Cluster version of VictoriaMetrics:
```console
```sh
curl http://<vmselect>:8481/select/0/prometheus/api/v1/export/csv -d 'format=__name__,__value__,__timestamp__:unix_s' -d 'match[]=vm_http_request_errors_total' > filename.csv
```
@ -130,14 +130,14 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl http://localhost:8428/api/v1/export/native -d 'match[]=vm_http_request_errors_total' > filename.bin
```
Cluster version of VictoriaMetrics:
```console
```sh
curl http://<vmselect>:8481/select/0/prometheus/api/v1/export/native -d 'match[]=vm_http_request_errors_total' > filename.bin
```
@ -154,14 +154,14 @@ More information:
Single-node VictoriaMetrics:
```console
```sh
curl -H 'Content-Type: application/json' --data-binary "@filename.json" -X POST http://localhost:8428/api/v1/import
```
Cluster version of VictoriaMetrics:
```console
```sh
curl -H 'Content-Type: application/json' --data-binary "@filename.json" -X POST http://<vminsert>:8480/insert/0/prometheus/api/v1/import
```
@ -178,14 +178,14 @@ More information:
Single-node VictoriaMetrics:
```console
```sh
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```
Cluster version of VictoriaMetrics:
```console
```sh
curl -d "GOOG,1.23,4.56,NYSE" 'http://<vminsert>:8480/insert/0/prometheus/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```
@ -202,13 +202,13 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl -X POST http://localhost:8428/api/v1/import/native -T filename.bin
```
Cluster version of VictoriaMetrics:
```console
```sh
curl -X POST http://<vminsert>:8480/insert/0/prometheus/api/v1/import/native -T filename.bin
```
@ -224,14 +224,14 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl -d 'metric_name{foo="bar"} 123' -X POST http://localhost:8428/api/v1/import/prometheus
```
Cluster version of VictoriaMetrics:
```console
```sh
curl -d 'metric_name{foo="bar"} 123' -X POST http://<vminsert>:8480/insert/0/prometheus/api/v1/import/prometheus
```
@ -247,14 +247,14 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl http://localhost:8428/prometheus/api/v1/labels
```
Cluster version of VictoriaMetrics:
```console
```sh
curl http://<vmselect>:8481/select/0/prometheus/api/v1/labels
```
@ -273,14 +273,14 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl http://localhost:8428/prometheus/api/v1/label/job/values
```
Cluster version of VictoriaMetrics:
```console
```sh
curl http://<vmselect>:8481/select/0/prometheus/api/v1/label/job/values
```
@ -299,14 +299,14 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl http://localhost:8428/prometheus/api/v1/query -d 'query=vm_http_request_errors_total'
```
Cluster version of VictoriaMetrics:
```console
```sh
curl http://<vmselect>:8481/select/0/prometheus/api/v1/query -d 'query=vm_http_request_errors_total'
```
@ -323,14 +323,14 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl http://localhost:8428/prometheus/api/v1/query_range -d 'query=sum(increase(vm_http_request_errors_total{job="foo"}[5m]))' -d 'start=-1d' -d 'step=1h'
```
Cluster version of VictoriaMetrics:
```console
```sh
curl http://<vmselect>:8481/select/0/prometheus/api/v1/query_range -d 'query=sum(increase(vm_http_request_errors_total{job="foo"}[5m]))' -d 'start=-1d' -d 'step=1h'
```
@ -347,14 +347,14 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl http://localhost:8428/prometheus/api/v1/series -d 'match[]=vm_http_request_errors_total'
```
Cluster version of VictoriaMetrics:
```console
```sh
curl http://<vmselect>:8481/select/0/prometheus/api/v1/series -d 'match[]=vm_http_request_errors_total'
```
@ -374,14 +374,14 @@ VictoriaMetrics accepts `limit` query arg for `/api/v1/series` handlers for limi
Single-node VictoriaMetrics:
```console
```sh
curl http://localhost:8428/prometheus/api/v1/status/tsdb
```
Cluster version of VictoriaMetrics:
```console
```sh
curl http://<vmselect>:8481/select/0/prometheus/api/v1/status/tsdb
```
@ -415,7 +415,7 @@ http://vminsert:8480/insert/0/datadog
Single-node VictoriaMetrics:
```console
```sh
echo '
{
"series": [
@ -440,7 +440,7 @@ echo '
Cluster version of VictoriaMetrics:
```console
```sh
echo '
{
"series": [
@ -475,7 +475,7 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
echo '
{
"series": [
@ -504,7 +504,7 @@ echo '
Cluster version of VictoriaMetrics:
```console
```sh
echo '
{
"series": [
@ -542,14 +542,14 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl http://localhost:8428/federate -d 'match[]=vm_http_request_errors_total'
```
Cluster version of VictoriaMetrics:
```console
```sh
curl http://<vmselect>:8481/select/0/prometheus/federate -d 'match[]=vm_http_request_errors_total'
```
@ -566,14 +566,14 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl http://localhost:8428/graphite/metrics/find -d 'query=vm_http_request_errors_total'
```
Cluster version of VictoriaMetrics:
```console
```sh
curl http://<vmselect>:8481/select/0/graphite/metrics/find -d 'query=vm_http_request_errors_total'
```
@ -591,14 +591,14 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST http://localhost:8428/write
```
Cluster version of VictoriaMetrics:
```console
```sh
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST http://<vminsert>:8480/insert/0/influx/write
```
@ -614,7 +614,7 @@ Additional information:
Single-node VictoriaMetrics:
```console
```sh
curl -Is http://localhost:8428/internal/resetRollupResultCache
```
@ -622,7 +622,7 @@ curl -Is http://localhost:8428/internal/resetRollupResultCache
Cluster version of VictoriaMetrics:
```console
```sh
curl -Is http://<vmselect>:8481/select/internal/resetRollupResultCache
```
@ -640,14 +640,14 @@ Turned off by default. Enable OpenTSDB receiver in VictoriaMetrics by setting `-
Single-node VictoriaMetrics:
```console
```sh
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
```
Cluster version of VictoriaMetrics:
```console
```sh
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N http://<vminsert> 4242
```
@ -656,14 +656,14 @@ Enable HTTP server for OpenTSDB /api/put requests by setting `-opentsdbHTTPListe
Single-node VictoriaMetrics:
```console
```sh
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
```
Cluster version of VictoriaMetrics:
```console
```sh
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://<vminsert>:8480/insert/42/opentsdb/api/put
```
@ -679,14 +679,14 @@ Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` com
Single-node VictoriaMetrics:
```console
```sh
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003
```
Cluster version of VictoriaMetrics:
```console
```sh
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N http://<vminsert> 2003
```

View file

@ -71,7 +71,7 @@ and sending the data to the Prometheus-compatible remote storage:
Example command for writing the data received via [supported push-based protocols](#how-to-push-data-to-vmagent)
to [single-node VictoriaMetrics](https://docs.victoriametrics.com/) located at `victoria-metrics-host:8428`:
```bash
```sh
/path/to/vmagent -remoteWrite.url=https://victoria-metrics-host:8428/api/v1/write
```
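With this setup, push-based clients can write to `vmagent` itself on its own HTTP port; a minimal sketch, assuming the default `:8429` listen address and InfluxDB line protocol support on `/write`:

```sh
curl -d 'measurement,tag1=value1 field1=123' -X POST 'http://localhost:8429/write'
```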
@ -80,7 +80,7 @@ the data to [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-V
Example command for scraping Prometheus targets and writing the data to single-node VictoriaMetrics:
```bash
```sh
/path/to/vmagent -promscrape.config=/path/to/prometheus.yml -remoteWrite.url=https://victoria-metrics-host:8428/api/v1/write
```
@ -122,7 +122,7 @@ additionally to pull-based Prometheus-compatible targets' scraping:
* Sending `SIGHUP` signal to `vmagent` process:
```bash
```sh
kill -SIGHUP `pidof vmagent`
```
@ -223,7 +223,7 @@ To route metrics `env=dev` to destination `dev` and metrics with `env=prod` to d
```
1. Configure `vmagent` with 2 `-remoteWrite.url` flags pointing to destinations `dev` and `prod` with corresponding
`-remoteWrite.urlRelabelConfig` configs:
```console
```sh
./vmagent \
-remoteWrite.url=http://<dev-url> -remoteWrite.urlRelabelConfig=relabelDev.yml \
-remoteWrite.url=http://<prod-url> -remoteWrite.urlRelabelConfig=relabelProd.yml
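# relabelDev.yml might contain a rule like this (a sketch; keeps only
# series carrying the env="dev" label):
#   - action: keep
#     source_labels: [env]
#     regex: dev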
@ -407,7 +407,7 @@ Extra labels can be added to metrics collected by `vmagent` via the following me
For example, the following command starts `vmagent`, which adds `{datacenter="foobar"}` label to all the metrics pushed
to all the configured remote storage systems (all the `-remoteWrite.url` flag values):
```bash
```sh
/path/to/vmagent -remoteWrite.label=datacenter=foobar ...
```
@ -1161,7 +1161,7 @@ take into account the following attributes:
For example, if `vmagent` should be able to buffer the data for at least 6 hours, then the following query
can be used for estimating the needed amount of disk space in gigabytes:
```
```metricsql
sum(rate(vmagent_remotewrite_bytes_sent_total[1h])) by(instance,url) * 6h / 1Gi
```
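To evaluate the estimate against a running VictoriaMetrics instance, the same expression can be passed to the query API (a sketch; substitute your VictoriaMetrics address):

```sh
curl http://localhost:8428/api/v1/query --data-urlencode 'query=sum(rate(vmagent_remotewrite_bytes_sent_total[1h])) by(instance,url) * 6h / 1Gi'
```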
@ -1204,7 +1204,7 @@ Multiple topics can be specified by passing multiple `-gcp.pubsub.subscribe.topi
For example, the following command starts `vmagent`, which reads metrics in [InfluxDB line protocol format](https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/)
from PubSub `projects/victoriametrics-vmagent-pub-sub-test/subscriptions/telegraf-testing` and sends them to remote storage at `http://localhost:8428/api/v1/write`:
```bash
```sh
./bin/vmagent -remoteWrite.url=http://localhost:8428/api/v1/write \
-gcp.pubsub.subscribe.topicSubscription=projects/victoriametrics-vmagent-pub-sub-test/subscriptions/telegraf-testing \
-gcp.pubsub.subscribe.topicSubscription.messageFormat=influx
@ -1232,7 +1232,7 @@ See also [how to write metrics to multiple distinct tenants](https://docs.victor
[Influx](https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/) messages from `telegraf-testing` topic
and gzipped [JSON line](https://docs.victoriametrics.com/#json-line-format) messages from `json-line-testing` topic:
```bash
```sh
./bin/vmagent -remoteWrite.url=http://localhost:8428/api/v1/write \
-gcp.pubsub.subscribe.topicSubscription=projects/victoriametrics-vmagent-pub-sub-test/subscriptions/telegraf-testing \
-gcp.pubsub.subscribe.topicSubscription.messageFormat=influx \
@ -1248,7 +1248,7 @@ These command-line flags are available only in [enterprise](https://docs.victori
which can be downloaded for evaluation from [releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest) page
(see `vmutils-...-enterprise.tar.gz` archives) and from [docker images](https://hub.docker.com/r/victoriametrics/vmagent/tags) with tags containing `enterprise` suffix.
```console
```sh
-gcp.pubsub.subscribe.credentialsFile string
Path to file with GCP credentials to use for PubSub client. If not set, default credentials are used (see Workload Identity for K8S or https://cloud.google.com/docs/authentication/application-default-credentials ). See https://docs.victoriametrics.com/vmagent.html#reading-metrics-from-pubsub . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
-gcp.pubsub.subscribe.defaultMessageFormat string
@ -1282,7 +1282,7 @@ These command-line flags are available only in [enterprise](https://docs.victori
which can be downloaded for evaluation from [releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest) page
(see `vmutils-...-enterprise.tar.gz` archives) and from [docker images](https://hub.docker.com/r/victoriametrics/vmagent/tags) with tags containing `enterprise` suffix.
```console
```sh
-gcp.pubsub.publish.byteThreshold int
Publish a batch when its size in bytes reaches this value. See https://docs.victoriametrics.com/vmagent.html#writing-metrics-to-pubsub . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html (default 1000000)
-gcp.pubsub.publish.countThreshold int
@ -1337,7 +1337,7 @@ For example, `-kafka.consumer.topic.brokers='host1:9092;host2:9092'`.
The following command starts `vmagent`, which reads metrics in InfluxDB line protocol format from Kafka broker at `localhost:9092`
from the topic `metrics-by-telegraf` and sends them to remote storage at `http://localhost:8428/api/v1/write`:
```bash
```sh
./bin/vmagent -remoteWrite.url=http://localhost:8428/api/v1/write \
-kafka.consumer.topic.brokers=localhost:9092 \
-kafka.consumer.topic.format=influx \
@ -1366,7 +1366,7 @@ These command-line flags are available only in [enterprise](https://docs.victori
which can be downloaded for evaluation from [releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest) page
(see `vmutils-...-enterprise.tar.gz` archives) and from [docker images](https://hub.docker.com/r/victoriametrics/vmagent/tags) with tags containing `enterprise` suffix.
```console
```sh
-kafka.consumer.topic array
Kafka topic names for data consumption. See https://docs.victoriametrics.com/vmagent.html#reading-metrics-from-kafka . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
Supports an array of values separated by comma or specified via multiple flags.
@ -1414,7 +1414,7 @@ Two types of auth are supported:
* sasl with username and password:
```bash
```sh
./bin/vmagent -remoteWrite.url='kafka://localhost:9092/?topic=prom-rw&security.protocol=SASL_SSL&sasl.mechanisms=PLAIN' \
-remoteWrite.basicAuth.username=user \
-remoteWrite.basicAuth.password=password
@ -1422,7 +1422,7 @@ Two types of auth are supported:
* tls certificates:
```bash
```sh
./bin/vmagent -remoteWrite.url='kafka://localhost:9092/?topic=prom-rw&security.protocol=SSL' \
-remoteWrite.tlsCAFile=/opt/ca.pem \
-remoteWrite.tlsCertFile=/opt/cert.pem \
@ -1456,7 +1456,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmagent`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```bash
```sh
ROOT_IMAGE=scratch make package-vmagent
```
@ -1483,7 +1483,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
* Memory profile can be collected with the following command (replace `0.0.0.0` with hostname if needed):
```bash
```sh
curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof
```
@ -1491,7 +1491,7 @@ curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof
* CPU profile can be collected with the following command (replace `0.0.0.0` with hostname if needed):
```bash
```sh
curl http://0.0.0.0:8429/debug/pprof/profile > cpu.pprof
```
@ -1506,7 +1506,7 @@ It is safe to share the collected profiles from a security point of view, since the
`vmagent` can be fine-tuned with various command-line flags. Run `./vmagent -help` in order to see the full list of these flags with their descriptions and default values:
```console
```sh
./vmagent -help
vmagent collects metrics data via popular data ingestion protocols and routes them to VictoriaMetrics.

View file

@ -56,7 +56,7 @@ Use this feature for the following cases:
To build `vmalert` from sources:
```console
```sh
git clone https://github.com/VictoriaMetrics/VictoriaMetrics
cd VictoriaMetrics
make vmalert
@ -78,7 +78,7 @@ To start using `vmalert` you will need the following things:
Then configure `vmalert` accordingly:
```console
```sh
./bin/vmalert -rule=alert.rules \ # Path to the file with rules configuration. Supports wildcard
-datasource.url=http://localhost:8428 \ # Prometheus HTTP API compatible datasource
-notifier.url=http://localhost:9093 \ # AlertManager URL (required if alerting rules are used)
@ -915,7 +915,7 @@ To disable stripping of such info pass `-datasource.showURL` cmd-line flag to vm
* Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
```console
```sh
curl http://0.0.0.0:8880/debug/pprof/heap > mem.pprof
```
@ -923,7 +923,7 @@ curl http://0.0.0.0:8880/debug/pprof/heap > mem.pprof
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
```console
```sh
curl http://0.0.0.0:8880/debug/pprof/profile > cpu.pprof
```
@ -942,7 +942,7 @@ command-line flags with their descriptions.
The shortlist of configuration flags is the following:
```console
```sh
-clusterMode
If clusterMode is enabled, then vmalert automatically adds the tenant specified in config groups to -datasource.url, -remoteWrite.url and -remoteRead.url. See https://docs.victoriametrics.com/vmalert.html#multitenancy . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
-configCheckInterval duration
@ -1547,7 +1547,7 @@ It is recommended using
You can build `vmalert` docker image from source and push it to your own docker repository.
Run the following commands from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics):
```console
```sh
make package-vmalert
docker tag victoria-metrics/vmalert:version my-repo:my-version-name
docker push my-repo:my-version-name

View file

@ -22,7 +22,7 @@ The `-auth.config` can point to either local file or to http url.
Just download `vmutils-*` archive from [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest), unpack it
and pass the following flag to `vmauth` binary in order to start authorizing and proxying requests:
```console
```sh
/path/to/vmauth -auth.config=/path/to/auth/config.yml
```
@ -65,7 +65,7 @@ accounting and rate limiting such as [vmgateway](https://docs.victoriametrics.co
The following [`-auth.config`](#auth-config) instructs `vmauth` to proxy all the incoming requests to the given backend.
For example, requests to `http://vmauth:8427/foo/bar` are proxied to `http://backend/foo/bar`:
```yml
```yaml
unauthorized_user:
url_prefix: "http://backend/"
```
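With this config running, any request to `vmauth` is forwarded verbatim to the backend; an illustrative request, assuming `vmauth` listens on the default `:8427` port:

```sh
curl http://localhost:8427/foo/bar   # proxied to http://backend/foo/bar
```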
@ -84,7 +84,7 @@ For example, the following [`-auth.config`](#auth-config) instructs `vmauth` to
- Other requests are proxied to `http://some-backend/404-page.html`, while the requested path is passed via `request_path` query arg.
For example, the request to `http://vmauth:8427/foo/bar?baz=qwe` is proxied to `http://some-backend/404-page.html?request_path=%2Ffoo%2Fbar%3Fbaz%3Dqwe`.
```yml
```yaml
unauthorized_user:
url_map:
- src_paths:
@ -100,7 +100,7 @@ unauthorized_user:
The following config routes requests to host `app1.my-host.com` to `http://app1-backend`, while routing requests to `app2.my-host.com` to `http://app2-backend`:
```yml
```yaml
unauthorized_user:
url_map:
- src_hosts:
@ -121,7 +121,7 @@ in the corresponding lists.
`vmauth` can balance load among multiple HTTP backends in least-loaded round-robin mode.
For example, the following [`-auth.config`](#auth-config) instructs `vmauth` to spread the load among multiple application instances:
```yml
```yaml
unauthorized_user:
url_prefix:
- "http://app-instance-1/"
@ -137,7 +137,7 @@ If [vmagent](https://docs.victoriametrics.com/vmagent.html) is used for processi
then it is possible to scale the performance of data processing at `vmagent` by spreading load among multiple identically configured `vmagent` instances.
This can be done with the following [config](#auth-config) for `vmagent`:
```yml
```yaml
unauthorized_user:
url_map:
- src_paths:
@ -159,7 +159,7 @@ See [load balancing docs](#load-balancing) for more details.
and processes incoming requests via `vmselect` nodes according to [these docs](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#architecture-overview).
`vmauth` can be used for balancing both `insert` and `select` requests among `vminsert` and `vmselect` nodes, when the following [`-auth.config`](#auth-config) is used:
```yml
```yaml
unauthorized_user:
url_map:
- src_paths:
@ -185,7 +185,7 @@ of [`-auth.config`](#auth-config) via `load_balancing_policy` option. For exampl
If this backend becomes unavailable, then `vmauth` starts proxying requests to `http://victoria-metrics-standby1:8428/`.
If this backend also becomes unavailable, then requests are proxied to the last specified backend - `http://victoria-metrics-standby2:8428/`:
```yml
```yaml
unauthorized_user:
url_prefix:
- "http://victoria-metrics-main:8428/"
@ -215,7 +215,7 @@ See [load-balancing docs](#load-balancing) for more details.
For example, the following [config](#auth-config) proxies requests to [single-node VictoriaMetrics](https://docs.victoriametrics.com/)
if they contain Basic Auth header with the given `username` and `password`:
```yml
```yaml
users:
- username: foo
password: bar
@ -230,7 +230,7 @@ See also [security docs](#security).
For example, the following [config](#auth-config) proxies requests to [single-node VictoriaMetrics](https://docs.victoriametrics.com/)
if they contain the given `bearer_token`:
```yml
```yaml
users:
- bearer_token: ABCDEF
url_prefix: "http://victoria-metrics:8428/"
@ -244,7 +244,7 @@ The following [`-auth.config`](#auth-config) instructs proxying `insert` and `se
user `tenant1` to the [tenant](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#multitenancy) `1`,
while requests from the user `tenant2` are sent to tenant `2`:
```yml
```yaml
users:
- username: tenant1
password: "***"
@ -280,7 +280,7 @@ users:
For example, the following [config](#auth-config) adds [`extra_label`](https://docs.victoriametrics.com/#prometheus-querying-api-enhancements)
to all the requests, which are proxied to [single-node VictoriaMetrics](https://docs.victoriametrics.com/):
```yml
```yaml
unauthorized_user:
url_prefix: "http://victoria-metrics:8428/?extra_label=foo=bar"
```
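As an illustration, a query sent through `vmauth` (listening on its default port `8427` in this sketch; hostnames are taken from the example above) is restricted by the appended `extra_label`, while a direct query to the backend is not:

```sh
# proxied as http://victoria-metrics:8428/api/v1/query?query=up&extra_label=foo=bar,
# so only series with {foo="bar"} are returned
curl 'http://localhost:8427/api/v1/query?query=up'

# direct query to the backend for comparison (no extra_label applied)
curl 'http://victoria-metrics:8428/api/v1/query?query=up'
```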
@ -295,7 +295,7 @@ For example, if you need to serve requests to [vmalert](https://docs.victoriamet
while serving requests to [vmagent](https://docs.victoriametrics.com/vmagent.html) at `/vmagent/` path prefix for a particular user,
then the following [-auth.config](#auth-config) can be used:
```yml
```yaml
users:
- username: foo
url_map:
@ -323,7 +323,7 @@ Each `url_prefix` in the [-auth.config](#auth-config) can be specified in the fo
- A single URL. For example:
```yml
```yaml
unauthorized_user:
url_prefix: 'http://vminsert:8480/insert/0/prometheus/'
```
@ -332,7 +332,7 @@ Each `url_prefix` in the [-auth.config](#auth-config) can be specified in the fo
- A list of URLs. For example:
```yml
```yaml
unauthorized_user:
url_prefix:
- 'http://vminsert-1:8480/insert/0/prometheus/'
@ -351,7 +351,7 @@ Each `url_prefix` in the [-auth.config](#auth-config) can be specified in the fo
It is possible to customize the list of HTTP response status codes to retry via the `retry_status_codes` list at the `user` and `url_map` levels of [`-auth.config`](#auth-config).
For example, the following config retries requests on other backends if the current backend returns a response with `500` or `502` HTTP status code:
```yml
```yaml
unauthorized_user:
url_prefix:
- http://vmselect1:8481/
@ -367,7 +367,7 @@ Each `url_prefix` in the [-auth.config](#auth-config) can be specified in the fo
It is possible to customize the load balancing policy at the `user` and `url_map` level.
For example, the following config specifies `first_available` load balancing policy for unauthorized requests:
```yml
```yaml
unauthorized_user:
url_prefix:
- http://victoria-metrics-main:8428/
@ -381,7 +381,7 @@ Load balancing feature can be used in the following cases:
The following [`-auth.config`](#auth-config) can be used for spreading incoming requests among 3 vmselect nodes and retrying failed requests
or requests with 500 and 502 response status codes:
```yml
```yaml
unauthorized_user:
url_prefix:
- http://vmselect1:8481/
@ -396,7 +396,7 @@ Load balancing feature can be used in the following cases:
See [these docs](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#cluster-availability) for details about `deny_partial_response` query arg,
which is added to requests before they are proxied to backends.
```yml
```yaml
unauthorized_user:
url_prefix:
- https://vmselect-az1/?deny_partial_response=1
@ -414,7 +414,7 @@ This is done via `headers` option. For example, the following [`-auth.config`](#
to requests proxied to `http://backend:1234/`. It also overrides the `X-Forwarded-For` request header with an empty value, which effectively
removes the `X-Forwarded-For` header from requests proxied to `http://backend:1234/`:
```yml
```yaml
unauthorized_user:
url_prefix: "http://backend:1234/"
headers:
@ -426,7 +426,7 @@ unauthorized_user:
This is done via the `response_headers` option. For example, the following [`-auth.config`](#auth-config) adds the `Foo: bar` response header
and removes the `Server` response header before returning the response to the client:
```yml
```yaml
unauthorized_user:
url_prefix: "http://backend:1234/"
response_headers:
@ -471,7 +471,7 @@ in the [`-auth.config`](#auth-config). These settings can be overridden with the
This global setting can be overridden at per-user level inside [`-auth.config`](#auth-config)
via `tls_insecure_skip_verify` option. For example:
```yml
```yaml
- username: "foo"
url_prefix: "https://localhost"
tls_insecure_skip_verify: true
@ -482,7 +482,7 @@ in the [`-auth.config`](#auth-config). These settings can be overridden with the
This global setting can be overridden at per-user level inside [`-auth.config`](#auth-config)
via `tls_ca_file` option. For example:
```yml
```yaml
- username: "foo"
url_prefix: "https://localhost"
tls_ca_file: "/path/to/tls/root/ca"
@ -494,7 +494,7 @@ in the [`-auth.config`](#auth-config). These settings can be overridden with the
For example, the following config allows requests to `vmauth` from `10.0.0.0/24` network and from `1.2.3.4` IP address, while denying requests from `10.0.0.42` IP address:
```yml
```yaml
users:
# User configs here
@ -507,7 +507,7 @@ ip_filters:
The following config allows requests for the user 'foobar' only from the IP `127.0.0.1`:
```yml
```yaml
users:
- username: "foobar"
password: "***"
@ -522,7 +522,7 @@ See config example of using IP filters [here](https://github.com/VictoriaMetrics
`-auth.config` is represented in the following simple YAML format:
```yml
```yaml
# Arbitrary number of usernames may be put here.
# It is possible to set multiple identical usernames with different passwords.
# Such usernames can be differentiated by `name` option.
@ -671,7 +671,7 @@ It is expected that all the backend services protected by `vmauth` are located i
Do not transfer Basic Auth headers in plaintext over untrusted networks. Enable https at `-httpListenAddr`. This can be done by passing the following `-tls*` command-line flags to `vmauth`:
```console
```sh
-tls
Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
-tlsCertFile string
@ -711,7 +711,7 @@ By default, per-user metrics contain only `username` label. This label is set to
It is possible to override the `username` label value by specifying the `name` field in addition to the `username` field.
For example, the following config will result in `vmauth_user_requests_total{username="foobar"}` instead of `vmauth_user_requests_total{username="secret_user"}`:
```yml
```yaml
users:
- username: "secret_user"
name: "foobar"
@ -721,7 +721,7 @@ users:
Additional labels for per-user metrics can be specified via `metric_labels` section. For example, the following config
defines `{dc="eu",team="dev"}` labels additionally to `username="foobar"` label:
```yml
```yaml
users:
- username: "foobar"
metric_labels:
@ -766,7 +766,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmauth`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```console
```sh
ROOT_IMAGE=scratch make package-vmauth
```
@ -777,7 +777,7 @@ ROOT_IMAGE=scratch make package-vmauth
* Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
```console
```sh
curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof
```
@ -785,7 +785,7 @@ curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
```console
```sh
curl http://0.0.0.0:8427/debug/pprof/profile > cpu.pprof
```
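The collected profiles can then be inspected locally, for example with the standard Go pprof tooling (assuming the Go toolchain is installed):

```sh
# interactive inspection in the terminal
go tool pprof mem.pprof

# or serve an interactive web UI with graphs and flame graphs
go tool pprof -http=:8080 cpu.pprof
```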
@ -799,7 +799,7 @@ It is safe sharing the collected profiles from security point of view, since the
Pass `-help` command-line arg to `vmauth` in order to see all the configuration options:
```console
```sh
./vmauth -help
vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics.
View file
@ -42,7 +42,7 @@ creation of hourly, daily, weekly and monthly backups.
Regular backup can be performed with the following command:
```console
```sh
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/new/backup>
```
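The created backup can later be restored with [vmrestore](https://docs.victoriametrics.com/vmrestore.html). A minimal sketch (note that VictoriaMetrics must be stopped during the restore process):

```sh
# restore the backup created above into the local storage data path
./vmrestore -src=gs://<bucket>/<path/to/new/backup> -storageDataPath=</path/to/victoria-metrics-data>
```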
@ -57,7 +57,7 @@ Regular backup can be performed with the following command:
If the destination GCS bucket already contains the previous backup at the `-origin` path, then the new backup can be sped up
with the following command:
```console
```sh
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/new/backup> -origin=gs://<bucket>/<path/to/existing/backup>
```
@ -72,7 +72,7 @@ and make it very expensive.
Incremental backups are performed if `-dst` points to an already existing backup. In this case only new data is uploaded to remote storage.
It saves time and network bandwidth costs when working with big backups:
```console
```sh
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/existing/backup>
```
@ -82,7 +82,7 @@ Smart backups mean storing full daily backups into `YYYYMMDD` folders and creati
* Run the following command every hour:
```console
```sh
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/latest
```
@ -92,7 +92,7 @@ when backing up large amounts of data.
* Run the following command once a day:
```console
```sh
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -origin=gs://<bucket>/latest -dst=gs://<bucket>/<YYYYMMDD>
```
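One possible way to schedule both commands is plain cron. A minimal `/etc/crontab` sketch, assuming `vmbackup` is installed at `/usr/local/bin/vmbackup` (paths and schedule are illustrative):

```sh
# hourly incremental backup into the `latest` folder
0 * * * * root /usr/local/bin/vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/latest

# daily full backup into a YYYYMMDD folder via server-side copy from `latest` (note: `%` must be escaped in crontab)
30 0 * * * root /usr/local/bin/vmbackup -storageDataPath=</path/to/victoria-metrics-data> -origin=gs://<bucket>/latest -dst=gs://<bucket>/$(date +\%Y\%m\%d)
```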
@ -116,7 +116,7 @@ Sometimes it is needed to make server-side copy of the existing backup. This can
while the destination path for the backup copy must be specified via the `-dst` command-line flag. For example, the following command copies a backup
from `gs://bucket/foo` to `gs://bucket/bar`:
```console
```sh
./vmbackup -origin=gs://bucket/foo -dst=gs://bucket/bar
```
@ -176,7 +176,7 @@ Add flag `-credsFilePath=/etc/credentials` with the following content:
- for S3 (AWS, MinIO or other S3 compatible storages):
```console
```sh
[default]
aws_access_key_id=theaccesskey
aws_secret_access_key=thesecretaccesskeyvalue
@ -279,12 +279,12 @@ Usage with s3 custom url endpoint. It is possible to use `vmbackup` with s3 comp
You have to add a custom URL endpoint via a flag (a full invocation sketch follows the list):
- for MinIO
```console
```sh
-customS3Endpoint=http://localhost:9000
```
- for the AWS GovCloud region
```console
```sh
-customS3Endpoint=https://s3-fips.us-gov-west-1.amazonaws.com
```
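Putting it together, a full `vmbackup` invocation against a MinIO bucket might look like the following sketch (bucket name, credentials path and endpoint are assumptions):

```sh
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> \
  -snapshot.createURL=http://localhost:8428/snapshot/create \
  -customS3Endpoint=http://localhost:9000 \
  -credsFilePath=/etc/credentials \
  -dst=s3://<bucket>/<path/to/backup>
```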
@ -303,7 +303,7 @@ Refer to the respective documentation for your object storage provider for more
Run `vmbackup -help` in order to see all the available options:
```console
```sh
-concurrency int
The number of concurrent workers. Higher concurrency may reduce backup duration (default 10)
-configFilePath string
@ -475,6 +475,6 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmbackup`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```console
```sh
ROOT_IMAGE=scratch make package-vmbackup
```
View file
@ -51,7 +51,7 @@ The backup manager creates the following directory hierarchy at **-dst**:
To get the full list of supported flags, please run the following command:
```console
```sh
./vmbackupmanager --help
```
@ -93,7 +93,7 @@ credentials.json
The backup manager is launched with the following configuration:
```console
```sh
export NODE_IP=192.168.0.10
export VMSTORAGE_ENDPOINT=http://127.0.0.1:8428
./vmbackupmanager -dst=gs://vmstorage-data/$NODE_IP -credsFilePath=credentials.json -storageDataPath=/vmstorage-data -snapshot.createURL=$VMSTORAGE_ENDPOINT/snapshot/create -eula
@ -101,14 +101,14 @@ export VMSTORAGE_ENDPOINT=http://127.0.0.1:8428
Expected logs in vmbackupmanager:
```console
```sh
info lib/backup/actions/backup.go:131 server-side copied 81 out of 81 parts from GCS{bucket: "vmstorage-data", dir: "192.168.0.10//latest/"} to GCS{bucket: "vmstorage-data", dir: "192.168.0.10//weekly/2020-34/"} in 2.549833008s
info lib/backup/actions/backup.go:169 backed up 853315 bytes in 2.882 seconds; deleted 0 bytes; server-side copied 853315 bytes; uploaded 0 bytes
```
Expected logs in vmstorage:
```console
```sh
info VictoriaMetrics/lib/storage/table.go:146 creating table snapshot of "/vmstorage-data/data"...
info VictoriaMetrics/lib/storage/storage.go:311 deleting snapshot "/vmstorage-data/snapshots/20200818201959-162C760149895DDA"...
info VictoriaMetrics/lib/storage/storage.go:319 deleted snapshot "/vmstorage-data/snapshots/20200818201959-162C760149895DDA" in 0.169 seconds
@ -152,7 +152,7 @@ Lets assume we have a backup manager collecting daily backups for the past 10
We enable the backup retention policy for the backup manager by using the following configuration:
```console
```sh
export NODE_IP=192.168.0.10
export VMSTORAGE_ENDPOINT=http://127.0.0.1:8428
./vmbackupmanager -dst=gs://vmstorage-data/$NODE_IP -credsFilePath=credentials.json -storageDataPath=/vmstorage-data -snapshot.createURL=$VMSTORAGE_ENDPOINT/snapshot/create
@ -161,13 +161,13 @@ export VMSTORAGE_ENDPOINT=http://127.0.0.1:8428
Expected logs in backup manager on start:
```console
```sh
info lib/logger/flag.go:20 flag "keepLastDaily" = "3"
```
Expected logs in backup manager during retention cycle:
```console
```sh
info app/vmbackupmanager/retention.go:106 daily backups to delete [daily/2021-02-13 daily/2021-02-12 daily/2021-02-11 daily/2021-02-10 daily/2021-02-09 daily/2021-02-08 daily/2021-02-07]
```
@ -181,14 +181,14 @@ You can protect any backup against deletion by retention policy with the `vmback
For instance:
```console
```sh
./vmbackupmanager backup lock daily/2021-02-13 -dst=<DST_PATH> -storageDataPath=/vmstorage-data -eula
```
After that, the backup won't be deleted by the retention policy.
You can view the `locked` attribute in the backup list:
```console
```sh
./vmbackupmanager backup list -dst=<DST_PATH> -storageDataPath=/vmstorage-data -eula
```
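For illustration, a locked backup might then appear in the listing as follows (modeled on the `backup list` output shown below; the exact field set is an assumption and may differ between versions):

```sh
[{"name":"daily/2021-02-13","size_bytes":318837,"size":"311.4ki","created_at":"2021-02-13T16:15:07+00:00","locked":true}]
```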
@ -196,7 +196,7 @@ To remove protection, you can use the command `vmbackupmanager backups unlock`.
For example:
```console
```sh
./vmbackupmanager backup unlock daily/2021-02-13 -dst=<DST_PATH> -storageDataPath=/vmstorage-data -eula
```
@ -246,7 +246,7 @@ For example:
`vmbackupmanager` exposes CLI commands to work with [API methods](#api-methods) without external dependencies.
Supported commands:
```console
```sh
vmbackupmanager backup
vmbackupmanager backup list
@ -281,7 +281,7 @@ It can be changed by using flag:
### Backup commands
`vmbackupmanager backup list` lists backups in remote storage:
```console
```sh
$ ./vmbackupmanager backup list
[{"name":"daily/2023-04-07","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:07+00:00"},{"name":"hourly/2023-04-07:11","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:06+00:00"},{"name":"latest","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:04+00:00"},{"name":"monthly/2023-04","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:10+00:00"},{"name":"weekly/2023-14","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:09+00:00"}]
```
@ -293,23 +293,23 @@ Restore mark is used by `vmbackupmanager` to store backup name to restore when r
Create restore mark:
```console
```sh
$ ./vmbackupmanager restore create daily/2022-10-06
```
Get restore mark if it exists:
```console
```sh
$ ./vmbackupmanager restore get
{"backup":"daily/2022-10-06"}
```
Delete restore mark if it exists:
```console
```sh
$ ./vmbackupmanager restore delete
```
Perform restore:
```console
```sh
$ /vmbackupmanager-prod restore -dst=gs://vmstorage-data/$NODE_IP -credsFilePath=credentials.json -storageDataPath=/vmstorage-data
```
Note that `vmsingle` or `vmstorage` should be stopped before performing restore.
@ -319,22 +319,22 @@ If restore mark doesn't exist at `storageDataPath`(restore wasn't requested) `vm
### How to restore backup via CLI
1. Run `vmbackupmanager backup list` to get the list of available backups:
```console
```sh
$ /vmbackupmanager-prod backup list
[{"name":"daily/2023-04-07","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:07+00:00"},{"name":"hourly/2023-04-07:11","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:06+00:00"},{"name":"latest","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:04+00:00"},{"name":"monthly/2023-04","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:10+00:00"},{"name":"weekly/2023-14","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:09+00:00"}]
```
1. Run `vmbackupmanager restore create` to create a restore mark:
- Use a relative path to the backup to restore from the currently used remote storage:
```console
```sh
$ /vmbackupmanager-prod restore create daily/2023-04-07
```
- Use a full path to the backup to restore from any remote storage:
```console
```sh
$ /vmbackupmanager-prod restore create azblob://test1/vmbackupmanager/daily/2023-04-07
```
1. Stop `vmstorage` or `vmsingle` node
1. Run `vmbackupmanager restore` to restore backup:
```console
```sh
$ /vmbackupmanager-prod restore -credsFilePath=credentials.json -storageDataPath=/vmstorage-data
```
1. Start `vmstorage` or `vmsingle` node
@ -353,17 +353,17 @@ If restore mark doesn't exist at `storageDataPath`(restore wasn't requested) `vm
See operator `VMStorage` schema [here](https://docs.victoriametrics.com/operator/api.html#vmstorage) and `VMSingle` [here](https://docs.victoriametrics.com/operator/api.html#vmsinglespec).
1. Enter the container running `vmbackupmanager`
1. Use `vmbackupmanager backup list` to get the list of available backups:
```console
```sh
$ /vmbackupmanager-prod backup list
[{"name":"daily/2023-04-07","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:07+00:00"},{"name":"hourly/2023-04-07:11","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:06+00:00"},{"name":"latest","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:04+00:00"},{"name":"monthly/2023-04","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:10+00:00"},{"name":"weekly/2023-14","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:09+00:00"}]
```
1. Use `vmbackupmanager restore create` to create a restore mark:
- Use a relative path to the backup to restore from the currently used remote storage:
```console
```sh
$ /vmbackupmanager-prod restore create daily/2023-04-07
```
- Use a full path to the backup to restore from any remote storage:
```console
```sh
$ /vmbackupmanager-prod restore create azblob://test1/vmbackupmanager/daily/2023-04-07
```
1. Restart pod
@ -385,14 +385,14 @@ Clusters here are referred to as `source` and `destination`.
> Important! Use a different `-dst` for the *destination* cluster to avoid overwriting the backup data of the *source* cluster.
1. Enter the container running `vmbackupmanager` in the *source* cluster
1. Use `vmbackupmanager backup list` to get the list of available backups:
```console
```sh
$ /vmbackupmanager-prod backup list
[{"name":"daily/2023-04-07","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:07+00:00"},{"name":"hourly/2023-04-07:11","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:06+00:00"},{"name":"latest","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:04+00:00"},{"name":"monthly/2023-04","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:10+00:00"},{"name":"weekly/2023-14","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:09+00:00"}]
```
1. Use `vmbackupmanager restore create` to create a restore mark at each pod of the *destination* cluster.
Each pod in the *destination* cluster should be restored from the backup of the respective pod in the *source* cluster.
For example: `vmstorage-destination-0` in the *destination* cluster should be restored from the backup of `vmstorage-source-0` in the *source* cluster.
```console
```sh
$ /vmbackupmanager-prod restore create s3://source_cluster/vmstorage-source-0/daily/2023-04-07
```
1. Restart the `vmstorage` pods of the *destination* cluster. On pod start, `vmbackupmanager` will restore data from the specified backup.
View file
@ -30,7 +30,7 @@ Features:
To see the full list of supported modes
run the following command:
```console
```sh
$ ./vmctl --help
NAME:
vmctl - VictoriaMetrics command-line tool
@ -325,7 +325,7 @@ To migrate historical data from Promscale to VictoriaMetrics we recommend using
in [remote-read](https://docs.victoriametrics.com/vmctl.html#migrating-data-by-remote-read-protocol) mode.
See an example of the migration command below:
```console
```sh
# note: promscale doesn't support streaming, hence --remote-read-use-stream=false
./vmctl remote-read --remote-read-src-addr=http://<promscale>:9201/read \
--remote-read-step-interval=day \
--remote-read-use-stream=false \
@ -844,7 +844,7 @@ Importing tips:
if you already have `-dedup.minScrapeInterval` set to 1ms or higher values at destination.
1. When migrating data from one VM cluster to another, consider using [cluster-to-cluster mode](#cluster-to-cluster-migration-mode).
Or manually specify addresses according to [URL format](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format):
```console
```sh
# Migrating from cluster specific tenantID to single
--vm-native-src-addr=http://<src-vmselect>:8481/select/0/prometheus
--vm-native-dst-addr=http://<dst-vmsingle>:8428
@ -891,7 +891,7 @@ It is recommended using default `month` step when migrating the data over the lo
limits on `--vm-native-src-addr` and can't or don't want to change them, try lowering the step interval to `week`, `day` or `hour`.
Usage example:
```console
```sh
./vmctl vm-native \
--vm-native-src-addr=http://127.0.0.1:8481/select/0/prometheus \
--vm-native-dst-addr=http://localhost:8428 \
@ -925,7 +925,7 @@ Cluster-to-cluster uses `/admin/tenants` endpoint (available starting from [v1.8
To use this mode, set the `--vm-intercluster` flag to `true`, the `--vm-native-src-addr` flag to `http://vmselect:8481/`, and the `--vm-native-dst-addr` flag to `http://vminsert:8480/`:
```console
```sh
./vmctl vm-native --vm-native-src-addr=http://127.0.0.1:8481/ \
--vm-native-dst-addr=http://127.0.0.1:8480/ \
--vm-native-filter-match='{__name__="vm_app_uptime_seconds"}' \
@ -970,7 +970,7 @@ In this mode, `vmctl` allows verifying correctness and integrity of data exporte
from VictoriaMetrics.
You can verify exported data on disk before uploading it with the `vmctl verify-block` command:
```console
```sh
# export blocks from VictoriaMetrics
curl localhost:8428/api/v1/export/native -g -d 'match[]={__name__!=""}' -o exported_data_block
# verify block content
@ -1094,7 +1094,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmctl`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```console
```sh
ROOT_IMAGE=scratch make package-vmctl
```
View file
@ -67,7 +67,7 @@ Where:
Start the single version of VictoriaMetrics
```console
```sh
# single
# start node
./bin/victoria-metrics --selfScrapeInterval=10s
@ -75,19 +75,19 @@ Start the single version of VictoriaMetrics
Start vmgateway
```console
```sh
./bin/vmgateway -eula -enable.auth -read.url http://localhost:8428 --write.url http://localhost:8428
```
Retrieve data from the database
```console
```sh
curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2bV9hY2Nlc3MiOnsidGVuYW50X2lkIjp7fSwicm9sZSI6MX0sImV4cCI6MTkzOTM0NjIxMH0.5WUxEfdcV9hKo4CtQdtuZYOGpGXWwaqM9VuVivMMrVg'
```
A request with an incorrect token or without any token will be rejected:
```console
```sh
curl 'http://localhost:8431/api/v1/series/count'
curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer incorrect-token'
@ -137,7 +137,7 @@ limits:
cluster version of VictoriaMetrics is required for rate limiting.
```console
```sh
# start datasource for cluster metrics
cat << EOF > cluster.yaml
@ -199,7 +199,7 @@ The following flags are used to specify keys:
Note that both flags support passing multiple keys and can also be used together.
Example usage:
```console
```sh
./bin/vmgateway -eula \
-enable.auth \
-write.url=http://localhost:8480 \
@ -227,7 +227,7 @@ In order to enable [OpenID discovery](https://openid.net/specs/openid-connect-di
When `auth.oidcDiscoveryEndpoints` is specified, `vmgateway` will fetch JWKS keys from the specified endpoint and use them for JWT signature verification.
Example usage for tokens issued by Azure Active Directory:
```console
```sh
/bin/vmgateway -eula \
-enable.auth \
-write.url=http://localhost:8480 \
@ -236,7 +236,7 @@ Example usage for tokens issued by Azure Active Directory:
```
Example usage for tokens issued by Google:
```console
```sh
/bin/vmgateway -eula \
-enable.auth \
-write.url=http://localhost:8480 \
@ -252,7 +252,7 @@ In order to enable JWKS endpoint for JWT signature verification, you need to spe
When `auth.jwksEndpoints` is specified, `vmgateway` will fetch public keys from the specified endpoint and use them for JWT signature verification.
Example usage for tokens issued by Azure Active Directory:
```console
```sh
/bin/vmgateway -eula \
-enable.auth \
-write.url=http://localhost:8480 \
@ -261,7 +261,7 @@ Example usage for tokens issued by Azure Active Directory:
```
Example usage for tokens issued by Google:
```console
```sh
/bin/vmgateway -eula \
-enable.auth \
-write.url=http://localhost:8480 \
@ -273,7 +273,7 @@ Example usage for tokens issued by Google:
The shortlist of configuration flags includes the following:
```console
```sh
-auth.httpHeader string
HTTP header name to look for JWT authorization token (default "Authorization")
-auth.jwksEndpoints array
View file
@ -22,7 +22,7 @@ VictoriaMetrics must be stopped during the restore process.
Run the following command to restore backup from the given `-src` into the given `-storageDataPath`:
```console
```sh
./vmrestore -src=<storageType>://<path/to/backup> -storageDataPath=<local/path/to/restore>
```
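For instance, a hypothetical restore of the latest GCS backup into a local data directory (bucket name and paths are assumptions):

```sh
./vmrestore -src=gs://<bucket>/latest -storageDataPath=/victoria-metrics-data
```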
@ -55,7 +55,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
for S3 (AWS, MinIO or other S3 compatible storages):
```console
```sh
[default]
aws_access_key_id=theaccesskey
aws_secret_access_key=thesecretaccesskeyvalue
@ -81,7 +81,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
* Usage with an S3 custom URL endpoint. It is possible to use `vmrestore` with S3 API compatible storages, like MinIO, Cloudian and others.
You have to add a custom URL endpoint via a flag:
```console
```sh
# for minio:
-customS3Endpoint=http://localhost:9000
@ -91,7 +91,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
* Run `vmrestore -help` in order to see all the available options:
```console
```sh
-concurrency int
The number of concurrent workers. Higher concurrency may reduce restore duration (default 10)
-configFilePath string
@ -256,6 +256,6 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmrestore`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
```console
```sh
ROOT_IMAGE=scratch make package-vmrestore
```