all: replace bash with console blocks in all the *.md files

This is a follow-up for 954a7a6fc6
Author: Aliaksandr Valialkin
Date: 2022-06-19 23:00:37 +03:00
Commit: afc26c57cc (parent: 954a7a6fc6)
GPG key ID: A72BEC6CD3D0DED1 (no known key found for this signature in the database)
11 changed files with 82 additions and 82 deletions
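A sweep like this can be scripted rather than done by hand. A minimal sketch (assuming GNU `sed`, and assuming every fence opener whose info string is `bash` should become `console`):

```shell
# Rewrite every opening code fence from "bash" to "console" in all Markdown files.
# Caveat: fences that should genuinely stay "bash" are rewritten too,
# so review the resulting `git diff` before committing.
find . -name '*.md' -exec sed -i 's/^```bash$/```console/' {} +
```

With BSD sed (macOS), `-i` requires an explicit suffix argument, e.g. `sed -i ''`.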


@@ -164,7 +164,7 @@ Then apply new config via the following command:
<div class="with-copy" markdown="1">
-```bash
+```console
kill -HUP `pidof prometheus`
```
@@ -328,7 +328,7 @@ VictoriaMetrics doesn't check `DD_API_KEY` param, so it can be set to arbitrary
Example on how to send data to VictoriaMetrics via DataDog "submit metrics" API from command line:
-```bash
+```console
echo '
{
"series": [
@@ -354,7 +354,7 @@ The imported data can be read via [export API](https://docs.victoriametrics.com/
<div class="with-copy" markdown="1">
-```bash
+```console
curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
```
@@ -418,7 +418,7 @@ to local VictoriaMetrics using `curl`:
<div class="with-copy" markdown="1">
-```bash
+```console
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
```
@@ -429,7 +429,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
<div class="with-copy" markdown="1">
-```bash
+```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
```
@@ -457,7 +457,7 @@ Comma-separated list of expected databases can be passed to VictoriaMetrics via
Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` command line flag. For instance,
the following command will enable Graphite receiver in VictoriaMetrics on TCP and UDP port `2003`:
-```bash
+```console
/path/to/victoria-metrics-prod -graphiteListenAddr=:2003
```
@@ -466,7 +466,7 @@ to the VictoriaMetrics host in `StatsD` configs.
Example for writing data with Graphite plaintext protocol to local VictoriaMetrics using `nc`:
-```bash
+```console
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003
```
@@ -476,7 +476,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
<div class="with-copy" markdown="1">
-```bash
+```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
@@ -518,7 +518,7 @@ The same protocol is used for [ingesting data in KairosDB](https://kairosdb.gith
Enable OpenTSDB receiver in VictoriaMetrics by setting `-opentsdbListenAddr` command line flag. For instance,
the following command enables OpenTSDB receiver in VictoriaMetrics on TCP and UDP port `4242`:
-```bash
+```console
/path/to/victoria-metrics-prod -opentsdbListenAddr=:4242
```
@@ -528,7 +528,7 @@ Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `
<div class="with-copy" markdown="1">
-```bash
+```console
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
```
@@ -539,7 +539,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
<div class="with-copy" markdown="1">
-```bash
+```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
@@ -556,7 +556,7 @@ The `/api/v1/export` endpoint should return the following response:
Enable HTTP server for OpenTSDB `/api/put` requests by setting `-opentsdbHTTPListenAddr` command line flag. For instance,
the following command enables OpenTSDB HTTP server on port `4242`:
-```bash
+```console
/path/to/victoria-metrics-prod -opentsdbHTTPListenAddr=:4242
```
@@ -566,7 +566,7 @@ Example for writing a single data point:
<div class="with-copy" markdown="1">
-```bash
+```console
curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put
```
@@ -576,7 +576,7 @@ Example for writing multiple data points in a single request:
<div class="with-copy" markdown="1">
-```bash
+```console
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
```
@@ -586,7 +586,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
<div class="with-copy" markdown="1">
-```bash
+```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
```
@@ -756,7 +756,7 @@ The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is pos
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
-```bash
+```console
ROOT_IMAGE=scratch make package-victoria-metrics
```
@@ -870,7 +870,7 @@ Each JSON line contains samples for a single time series. An example output:
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
-```bash
+```console
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
@@ -884,7 +884,7 @@ of time series data. This enables gzip compression for the exported data. Exampl
<div class="with-copy" markdown="1">
-```bash
+```console
curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
```
@@ -918,7 +918,7 @@ for metrics to export.
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
-```bash
+```console
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
@@ -935,7 +935,7 @@ for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time
On large databases you may experience problems with the limit on the number of time series that can be exported. In this case you need to adjust `-search.maxExportSeries` command-line flag:
-```bash
+```console
# count unique timeseries in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
@@ -945,7 +945,7 @@ wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
-```bash
+```console
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
@@ -977,7 +977,7 @@ Time series data can be imported into VictoriaMetrics via any supported data ing
Example for importing data obtained via [/api/v1/export](#how-to-export-data-in-json-line-format):
-```bash
+```console
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl
@@ -987,7 +987,7 @@ curl -X POST http://destination-victoriametrics:8428/api/v1/import -T exported_d
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import` for importing gzipped data:
-```bash
+```console
# Export gzipped data from <source-victoriametrics>:
curl -H 'Accept-Encoding: gzip' http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl.gz
@@ -1008,7 +1008,7 @@ The specification of VictoriaMetrics' native format may yet change and is not fo
If you have a native format file obtained via [/api/v1/export/native](#how-to-export-data-in-native-format), it can be imported back; this is the most efficient protocol for importing data.
-```bash
+```console
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin
@@ -1049,14 +1049,14 @@ Each request to `/api/v1/import/csv` may contain arbitrary number of CSV lines.
Example for importing CSV data via `/api/v1/import/csv`:
-```bash
+```console
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
-```bash
+```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
```
@@ -1082,7 +1082,7 @@ via `/api/v1/import/prometheus` path. For example, the following line imports a
<div class="with-copy" markdown="1">
-```bash
+```console
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
```
@@ -1092,7 +1092,7 @@ The following command may be used for verifying the imported data:
<div class="with-copy" markdown="1">
-```bash
+```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
```
@@ -1108,7 +1108,7 @@ Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus`
<div class="with-copy" markdown="1">
-```bash
+```console
# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
```
@@ -1159,7 +1159,7 @@ at `http://<victoriametrics-addr>:8428/federate?match[]=<timeseries_selector_for
Optional `start` and `end` args may be added to the request in order to scrape the last point for each selected time series on the `[start ... end]` interval.
`start` and `end` may contain either unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
-```bash
+```console
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
@@ -1213,7 +1213,7 @@ See also [cardinality limiter](#cardinality-limiter) and [capacity planning docs
* Install multiple VictoriaMetrics instances in distinct datacenters (availability zones).
* Pass addresses of these instances to [vmagent](https://docs.victoriametrics.com/vmagent.html) via `-remoteWrite.url` command-line flag:
-```bash
+```console
/path/to/vmagent -remoteWrite.url=http://<victoriametrics-addr-1>:8428/api/v1/write -remoteWrite.url=http://<victoriametrics-addr-2>:8428/api/v1/write
```
@@ -1232,7 +1232,7 @@ remote_write:
* Apply the updated config:
-```bash
+```console
kill -HUP `pidof prometheus`
```
@@ -1413,7 +1413,7 @@ For example, substitute `-graphiteListenAddr=:2003` with `-graphiteListenAddr=<i
If you plan to store more than 1TB of data on `ext4` partition or plan extending it to more than 16TB,
then the following options are recommended to pass to `mkfs.ext4`:
-```bash
+```console
mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
```
@@ -1474,7 +1474,7 @@ In this case VictoriaMetrics puts query trace into `trace` field in the output J
For example, the following command:
-```bash
+```console
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq '.trace'
```
@@ -1735,7 +1735,7 @@ VictoriaMetrics provides handlers for collecting the following [Go profiles](htt
<div class="with-copy" markdown="1">
-```bash
+```console
curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
```
@@ -1745,7 +1745,7 @@ curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
<div class="with-copy" markdown="1">
-```bash
+```console
curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof
```


@@ -73,7 +73,7 @@ Pass `-help` to `vmagent` in order to see [the full list of supported command-li
* Sending `SIGHUP` signal to `vmagent` process:
-```bash
+```console
kill -SIGHUP `pidof vmagent`
```
@@ -593,7 +593,7 @@ Every Kafka message may contain multiple lines in `influx`, `prometheus`, `graph
The following command starts `vmagent`, which reads metrics in InfluxDB line protocol format from Kafka broker at `localhost:9092` from the topic `metrics-by-telegraf` and sends them to remote storage at `http://localhost:8428/api/v1/write`:
-```bash
+```console
./bin/vmagent -remoteWrite.url=http://localhost:8428/api/v1/write \
-kafka.consumer.topic.brokers=localhost:9092 \
-kafka.consumer.topic.format=influx \
@@ -655,13 +655,13 @@ Two types of auth are supported:
* sasl with username and password:
-```bash
+```console
./bin/vmagent -remoteWrite.url=kafka://localhost:9092/?topic=prom-rw&security.protocol=SASL_SSL&sasl.mechanisms=PLAIN -remoteWrite.basicAuth.username=user -remoteWrite.basicAuth.password=password
```
* tls certificates:
-```bash
+```console
./bin/vmagent -remoteWrite.url=kafka://localhost:9092/?topic=prom-rw&security.protocol=SSL -remoteWrite.tlsCAFile=/opt/ca.pem -remoteWrite.tlsCertFile=/opt/cert.pem -remoteWrite.tlsKeyFile=/opt/key.pem
```
@@ -690,7 +690,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmagent`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
-```bash
+```console
ROOT_IMAGE=scratch make package-vmagent
```
@@ -718,7 +718,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
<div class="with-copy" markdown="1">
-```bash
+```console
curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof
```
@@ -728,7 +728,7 @@ curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof
<div class="with-copy" markdown="1">
-```bash
+```console
curl http://0.0.0.0:8429/debug/pprof/profile > cpu.pprof
```


@@ -36,7 +36,7 @@ implementation and aims to be compatible with its syntax.
To build `vmalert` from sources:
-```bash
+```console
git clone https://github.com/VictoriaMetrics/VictoriaMetrics
cd VictoriaMetrics
make vmalert
@@ -58,7 +58,7 @@ To start using `vmalert` you will need the following things:
Then configure `vmalert` accordingly:
-```bash
+```console
./bin/vmalert -rule=alert.rules \ # Path to the file with rules configuration. Supports wildcard
-datasource.url=http://localhost:8428 \ # PromQL compatible datasource
-notifier.url=http://localhost:9093 \ # AlertManager URL (required if alerting rules are used)
@@ -1038,7 +1038,7 @@ It is recommended using
You can build `vmalert` docker image from source and push it to your own docker repository.
Run the following commands from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics):
-```bash
+```console
make package-vmalert
docker tag victoria-metrics/vmalert:version my-repo:my-version-name
docker push my-repo:my-version-name


@@ -10,7 +10,7 @@ The `-auth.config` can point to either local file or to http url.
Just download `vmutils-*` archive from [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases), unpack it
and pass the following flag to `vmauth` binary in order to start authorizing and routing requests:
-```bash
+```console
/path/to/vmauth -auth.config=/path/to/auth/config.yml
```
@@ -129,7 +129,7 @@ It is expected that all the backend services protected by `vmauth` are located i
Do not transfer Basic Auth headers in plaintext over untrusted networks. Enable https. This can be done by passing the following `-tls*` command-line flags to `vmauth`:
-```bash
+```console
-tls
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
-tlsCertFile string
@@ -181,7 +181,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmauth`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
-```bash
+```console
ROOT_IMAGE=scratch make package-vmauth
```
@@ -193,7 +193,7 @@ ROOT_IMAGE=scratch make package-vmauth
<div class="with-copy" markdown="1">
-```bash
+```console
curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof
```
@@ -203,7 +203,7 @@ curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof
<div class="with-copy" markdown="1">
-```bash
+```console
curl http://0.0.0.0:8427/debug/pprof/profile > cpu.pprof
```
@@ -217,7 +217,7 @@ The collected profiles may be analyzed with [go tool pprof](https://github.com/g
Pass `-help` command-line arg to `vmauth` in order to see all the configuration options:
-```bash
+```console
./vmauth -help
vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics.


@@ -28,7 +28,7 @@ creation of hourly, daily, weekly and monthly backups.
Regular backup can be performed with the following command:
-```bash
+```console
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/new/backup>
```
@@ -43,7 +43,7 @@ vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=h
If the destination GCS bucket already contains the previous backup at `-origin` path, then new backup can be sped up
with the following command:
-```bash
+```console
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/new/backup> -origin=gs://<bucket>/<path/to/existing/backup>
```
@@ -54,7 +54,7 @@ It saves time and network bandwidth costs by performing server-side copy for the
Incremental backups are performed if `-dst` points to an already existing backup. In this case only new data is uploaded to remote storage.
It saves time and network bandwidth costs when working with big backups:
-```bash
+```console
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/existing/backup>
```
@@ -64,7 +64,7 @@ Smart backups mean storing full daily backups into `YYYYMMDD` folders and creati
* Run the following command every hour:
-```bash
+```console
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/latest
```
@@ -73,7 +73,7 @@ The command will upload only changed data to `gs://<bucket>/latest`.
* Run the following command once a day:
-```bash
+```console
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<YYYYMMDD> -origin=gs://<bucket>/latest
```
@@ -129,7 +129,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
for s3 (aws, minio or other s3 compatible storages):
-```bash
+```console
[default]
aws_access_key_id=theaccesskey
aws_secret_access_key=thesecretaccesskeyvalue
@@ -155,7 +155,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
* Usage with s3 custom url endpoint. It is possible to use `vmbackup` with s3 compatible storages like minio, cloudian, etc.
You have to add a custom url endpoint via flag:
-```bash
+```console
# for minio
-customS3Endpoint=http://localhost:9000
@@ -165,7 +165,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
* Run `vmbackup -help` in order to see all the available options:
-```bash
+```console
-concurrency int
The number of concurrent workers. Higher concurrency may reduce backup duration (default 10)
-configFilePath string
@@ -280,6 +280,6 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmbackup`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
-```bash
+```console
ROOT_IMAGE=scratch make package-vmbackup
```


@@ -15,7 +15,7 @@ Features:
To see the full list of supported modes
run the following command:
-```bash
+```console
$ ./vmctl --help
NAME:
vmctl - VictoriaMetrics command-line tool
@@ -527,7 +527,7 @@ and specify `accountID` param.
In this mode, `vmctl` allows verifying correctness and integrity of data exported via [native format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-export-data-in-native-format) from VictoriaMetrics.
You can verify exported data on disk before uploading it with the `vmctl verify-block` command:
-```bash
+```console
# export blocks from VictoriaMetrics
curl localhost:8428/api/v1/export/native -g -d 'match[]={__name__!=""}' -o exported_data_block
# verify block content
@@ -650,7 +650,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmctl`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
-```bash
+```console
ROOT_IMAGE=scratch make package-vmctl
```


@@ -54,7 +54,7 @@ Where:
Start the single version of VictoriaMetrics
-```bash
+```console
# single
# start node
./bin/victoria-metrics --selfScrapeInterval=10s
@@ -62,19 +62,19 @@ Start the single version of VictoriaMetrics
Start vmgateway
-```bash
+```console
./bin/vmgateway -eula -enable.auth -read.url http://localhost:8428 --write.url http://localhost:8428
```
Retrieve data from the database
-```bash
+```console
curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2bV9hY2Nlc3MiOnsidGVuYW50X2lkIjp7fSwicm9sZSI6MX0sImV4cCI6MTkzOTM0NjIxMH0.5WUxEfdcV9hKo4CtQdtuZYOGpGXWwaqM9VuVivMMrVg'
```
A request with an incorrect token or without any token will be rejected:
-```bash
+```console
curl 'http://localhost:8431/api/v1/series/count'
curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer incorrect-token'
@@ -124,7 +124,7 @@ limits:
cluster version of VictoriaMetrics is required for rate limiting.
-```bash
+```console
# start datasource for cluster metrics
cat << EOF > cluster.yaml


@@ -10,7 +10,7 @@ when restarting `vmrestore` with the same args.
VictoriaMetrics must be stopped during the restore process.
-```bash
+```console
vmrestore -src=gs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore>
```
@@ -36,7 +36,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
for s3 (aws, minio or other s3 compatible storages):
-```bash
+```console
[default]
aws_access_key_id=theaccesskey
aws_secret_access_key=thesecretaccesskeyvalue
@@ -62,7 +62,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
* Usage with s3 custom url endpoint. It is possible to use `vmrestore` with s3 api compatible storages, like minio, cloudian and others.
You have to add a custom url endpoint via flag:
-```bash
+```console
# for minio:
-customS3Endpoint=http://localhost:9000
@@ -72,7 +72,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
* Run `vmrestore -help` in order to see all the available options:
-```bash
+```console
-concurrency int
The number of concurrent workers. Higher concurrency may reduce restore duration (default 10)
-configFilePath string
@@ -180,6 +180,6 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmrestore`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:
-```bash
+```console
ROOT_IMAGE=scratch make package-vmrestore
```


@@ -42,7 +42,7 @@ To check it, open the following in your browser `http://your_droplet_public_ipv4
Run the following command to query and retrieve a result from VictoriaMetrics Single with `curl`:
-```bash
+```console
curl -sg http://your_droplet_public_ipv4:8428/api/v1/query_range?query=vm_app_uptime_seconds | jq
```
@@ -50,6 +50,6 @@ curl -sg http://your_droplet_public_ipv4:8428/api/v1/query_range?query=vm_app_up
Once the Droplet is created, you can use DigitalOcean's web console to start a session or SSH directly to the server as root:
-```bash
+```console
ssh root@your_droplet_public_ipv4
```


@@ -6,13 +6,13 @@
2. API Token can be generated on [https://cloud.digitalocean.com/account/api/tokens](https://cloud.digitalocean.com/account/api/tokens) or use already generated from OnePassword.
3. Set variable `DIGITALOCEAN_API_TOKEN` for environment:
-```bash
+```console
export DIGITALOCEAN_API_TOKEN="your_token_here"
```
or set it with make:
-```bash
+```console
make release-victoria-metrics-digitalocean-oneclick-droplet DIGITALOCEAN_API_TOKEN="your_token_here"
```


@@ -10,7 +10,7 @@ Install snapcraft or docker
build snap package with command
-```bash
+```console
make build-snap
```
@@ -21,7 +21,7 @@ You can install it with command: `snap install victoriametrics_v1.46.0+git1.1beb
installation and configuration:
-```bash
+```console
# install
snap install victoriametrics
# logs
@@ -34,7 +34,7 @@ Configuration management:
Prometheus scrape config can be edited with your favorite editor; it's located at
-```bash
+```console
vi /var/snap/victoriametrics/current/etc/victoriametrics-scrape-config.yaml
```
@@ -42,7 +42,7 @@ after changes, you can trigger config reread with `curl localhost:8248/-/reload`
Configuration tuning is possible by editing extra_flags:
-```bash
+```console
echo 'FLAGS="-selfScrapeInterval=10s -search.logSlowQueryDuration=20s"' > /var/snap/victoriametrics/current/extra_flags
snap restart victoriametrics
```