From afc26c57ccf2923ee2ce1a4b51ef7ad5dbebf609 Mon Sep 17 00:00:00 2001 From: Aliaksandr Valialkin Date: Sun, 19 Jun 2022 23:00:37 +0300 Subject: [PATCH] all: replace `bash` with `console` blocks in all the *.md files This is a follow-up for 954a7a6fc6dd0dab3dcb296483a7310773e95a0a --- README.md | 72 +++++++++---------- app/vmagent/README.md | 14 ++-- app/vmalert/README.md | 6 +- app/vmauth/README.md | 12 ++-- app/vmbackup/README.md | 18 ++--- app/vmctl/README.md | 6 +- app/vmgateway/README.md | 10 +-- app/vmrestore/README.md | 10 +-- .../digitialocean/one-click-droplet/README.md | 4 +- .../one-click-droplet/RELEASE_GUIDE.md | 4 +- snap/local/README.md | 8 +-- 11 files changed, 82 insertions(+), 82 deletions(-) diff --git a/README.md b/README.md index c29be2f99..7ef7e15d7 100644 --- a/README.md +++ b/README.md @@ -164,7 +164,7 @@ Then apply new config via the following command:
-```bash +```console kill -HUP `pidof prometheus` ``` @@ -328,7 +328,7 @@ VictoriaMetrics doesn't check `DD_API_KEY` param, so it can be set to arbitrary Example on how to send data to VictoriaMetrics via DataDog "submit metrics" API from command line: -```bash +```console echo ' { "series": [ @@ -354,7 +354,7 @@ The imported data can be read via [export API](https://docs.victoriametrics.com/
-```bash +```console curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1' ``` @@ -418,7 +418,7 @@ to local VictoriaMetrics using `curl`:
-```bash +```console curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write' ``` @@ -429,7 +429,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
-```bash +```console curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}' ``` @@ -457,7 +457,7 @@ Comma-separated list of expected databases can be passed to VictoriaMetrics via Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` command line flag. For instance, the following command will enable Graphite receiver in VictoriaMetrics on TCP and UDP port `2003`: -```bash +```console /path/to/victoria-metrics-prod -graphiteListenAddr=:2003 ``` @@ -466,7 +466,7 @@ to the VictoriaMetrics host in `StatsD` configs. Example for writing data with Graphite plaintext protocol to local VictoriaMetrics using `nc`: -```bash +```console echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003 ``` @@ -476,7 +476,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
-```bash +```console curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz' ``` @@ -518,7 +518,7 @@ The same protocol is used for [ingesting data in KairosDB](https://kairosdb.gith Enable OpenTSDB receiver in VictoriaMetrics by setting `-opentsdbListenAddr` command line flag. For instance, the following command enables OpenTSDB receiver in VictoriaMetrics on TCP and UDP port `4242`: -```bash +```console /path/to/victoria-metrics-prod -opentsdbListenAddr=:4242 ``` @@ -528,7 +528,7 @@ Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `
-```bash +```console echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242 ``` @@ -539,7 +539,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
-```bash +```console curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz' ``` @@ -556,7 +556,7 @@ The `/api/v1/export` endpoint should return the following response: Enable HTTP server for OpenTSDB `/api/put` requests by setting `-opentsdbHTTPListenAddr` command line flag. For instance, the following command enables OpenTSDB HTTP server on port `4242`: -```bash +```console /path/to/victoria-metrics-prod -opentsdbHTTPListenAddr=:4242 ``` @@ -566,7 +566,7 @@ Example for writing a single data point:
-```bash +```console curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put ``` @@ -576,7 +576,7 @@ Example for writing multiple data points in a single request:
-```bash +```console curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put ``` @@ -586,7 +586,7 @@ After that the data may be read via [/api/v1/export](#how-to-export-data-in-json
-```bash
+```console
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
```

@@ -756,7 +756,7 @@ The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is pos
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

-```bash
+```console
ROOT_IMAGE=scratch make package-victoria-metrics
```

@@ -870,7 +870,7 @@ Each JSON line contains samples for a single time series. An example output:
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values. For example:

-```bash
+```console
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```

@@ -884,7 +884,7 @@ of time series data. This enables gzip compression for the exported data. Exampl
-```bash
+```console
curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
```

@@ -918,7 +918,7 @@ for metrics to export.
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values. For example:

-```bash
+```console
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```

@@ -935,7 +935,7 @@ for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time
On large databases you may experience problems with limit on the number of time series, which can be exported. In this case you need to adjust `-search.maxExportSeries` command-line flag:

-```bash
+```console
# count unique timeseries in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'

@@ -945,7 +945,7 @@ wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values. For example:

-```bash
+```console
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```

@@ -977,7 +977,7 @@ Time series data can be imported into VictoriaMetrics via any supported data ing

Example for importing data obtained via [/api/v1/export](#how-to-export-data-in-json-line-format):

-```bash
+```console
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl

# Import the data to <destination-victoriametrics>:
curl -X POST http://destination-victoriametrics:8428/api/v1/import -T exported_data.jsonl
```

Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import` for importing gzipped data:

-```bash
+```console
# Export gzipped data from <source-victoriametrics>:
curl -H 'Accept-Encoding: gzip' http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl.gz

@@ -1008,7 +1008,7 @@ The specification of VictoriaMetrics' native format may yet change and is not fo
If you have a native format file obtained via [/api/v1/export/native](#how-to-export-data-in-native-format), this is the most efficient protocol for importing the data.

-```bash
+```console
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin

@@ -1049,14 +1049,14 @@ Each request to `/api/v1/import/csv` may contain arbitrary number of CSV lines.
Example for importing CSV data via `/api/v1/import/csv`: -```bash +```console curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market' curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market' ``` After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint: -```bash +```console curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}' ``` @@ -1082,7 +1082,7 @@ via `/api/v1/import/prometheus` path. For example, the following line imports a
-```bash +```console curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus' ``` @@ -1092,7 +1092,7 @@ The following command may be used for verifying the imported data:
-```bash +```console curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}' ``` @@ -1108,7 +1108,7 @@ Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus`
-```bash
+```console
# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
```

@@ -1159,7 +1159,7 @@ at `http://<victoriametrics-addr>:8428/federate?match[]=<timeseries_selector_for_federation>`.

-```bash
+```console
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_federation>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_federation>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```

@@ -1213,7 +1213,7 @@ See also [cardinality limiter](#cardinality-limiter) and [capacity planning docs
* Install multiple VictoriaMetrics instances in distinct datacenters (availability zones).
* Pass addresses of these instances to [vmagent](https://docs.victoriametrics.com/vmagent.html) via `-remoteWrite.url` command-line flag:

-```bash
+```console
/path/to/vmagent -remoteWrite.url=http://<victoriametrics-addr-1>:8428/api/v1/write -remoteWrite.url=http://<victoriametrics-addr-2>:8428/api/v1/write
```

@@ -1232,7 +1232,7 @@ remote_write:

* Apply the updated config:

-```bash
+```console
kill -HUP `pidof prometheus`
```

@@ -1413,7 +1413,7 @@ For example, substitute `-graphiteListenAddr=:2003` with `-graphiteListenAddr=<internal_iface_ip>:2003`.

-```bash
+```console
curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
```

@@ -1745,7 +1745,7 @@ curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
-```bash
+```console
curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof
```
diff --git a/app/vmagent/README.md b/app/vmagent/README.md
index 30204f821..fc958b62e 100644
--- a/app/vmagent/README.md
+++ b/app/vmagent/README.md
@@ -73,7 +73,7 @@ Pass `-help` to `vmagent` in order to see [the full list of supported command-li

* Sending `SIGHUP` signal to `vmagent` process:

-  ```bash
+  ```console
  kill -SIGHUP `pidof vmagent`
  ```

@@ -593,7 +593,7 @@ Every Kafka message may contain multiple lines in `influx`, `prometheus`, `graph
The following command starts `vmagent`, which reads metrics in InfluxDB line protocol format from Kafka broker at `localhost:9092` from the topic `metrics-by-telegraf` and sends them to remote storage at `http://localhost:8428/api/v1/write`:

-```bash
+```console
./bin/vmagent -remoteWrite.url=http://localhost:8428/api/v1/write \
       -kafka.consumer.topic.brokers=localhost:9092 \
       -kafka.consumer.topic.format=influx \
@@ -655,13 +655,13 @@ Two types of auth are supported:

* sasl with username and password:

-```bash
+```console
./bin/vmagent -remoteWrite.url=kafka://localhost:9092/?topic=prom-rw&security.protocol=SASL_SSL&sasl.mechanisms=PLAIN -remoteWrite.basicAuth.username=user -remoteWrite.basicAuth.password=password
```

* tls certificates:

-```bash
+```console
./bin/vmagent -remoteWrite.url=kafka://localhost:9092/?topic=prom-rw&security.protocol=SSL -remoteWrite.tlsCAFile=/opt/ca.pem -remoteWrite.tlsCertFile=/opt/cert.pem -remoteWrite.tlsKeyFile=/opt/key.pem
```

@@ -690,7 +690,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmagent`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

-```bash
+```console
ROOT_IMAGE=scratch make package-vmagent
```

@@ -718,7 +718,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
-```bash +```console curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof ``` @@ -728,7 +728,7 @@ curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof
-```bash
+```console
curl http://0.0.0.0:8429/debug/pprof/profile > cpu.pprof
```
diff --git a/app/vmalert/README.md b/app/vmalert/README.md
index 384267a5a..9ad7f661e 100644
--- a/app/vmalert/README.md
+++ b/app/vmalert/README.md
@@ -36,7 +36,7 @@ implementation and aims to be compatible with its syntax.

To build `vmalert` from sources:

-```bash
+```console
git clone https://github.com/VictoriaMetrics/VictoriaMetrics
cd VictoriaMetrics
make vmalert
@@ -58,7 +58,7 @@ To start using `vmalert` you will need the following things:

Then configure `vmalert` accordingly:

-```bash
+```console
./bin/vmalert -rule=alert.rules \            # Path to the file with rules configuration. Supports wildcard
    -datasource.url=http://localhost:8428 \  # PromQL compatible datasource
    -notifier.url=http://localhost:9093 \    # AlertManager URL (required if alerting rules are used)
@@ -1038,7 +1038,7 @@ It is recommended using
You can build `vmalert` docker image from source and push it to your own docker repository.
Run the following commands from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics):

-```bash
+```console
make package-vmalert
docker tag victoria-metrics/vmalert:version my-repo:my-version-name
docker push my-repo:my-version-name
diff --git a/app/vmauth/README.md b/app/vmauth/README.md
index d14ee5551..6cf88ce96 100644
--- a/app/vmauth/README.md
+++ b/app/vmauth/README.md
@@ -10,7 +10,7 @@ The `-auth.config` can point to either local file or to http url.
Just download `vmutils-*` archive from [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases), unpack it
and pass the following flag to `vmauth` binary in order to start authorizing and routing requests:

-```bash
+```console
/path/to/vmauth -auth.config=/path/to/auth/config.yml
```

@@ -129,7 +129,7 @@ It is expected that all the backend services protected by `vmauth` are located i
Do not transfer Basic Auth headers in plaintext over untrusted networks. Enable https. This can be done by passing the following `-tls*` command-line flags to `vmauth`:

-```bash
+```console
  -tls
     Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
  -tlsCertFile string
@@ -181,7 +181,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmauth`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

-```bash
+```console
ROOT_IMAGE=scratch make package-vmauth
```

@@ -193,7 +193,7 @@ ROOT_IMAGE=scratch make package-vmauth
-```bash +```console curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof ``` @@ -203,7 +203,7 @@ curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof
-```bash
+```console
curl http://0.0.0.0:8427/debug/pprof/profile > cpu.pprof
```

@@ -217,7 +217,7 @@ The collected profiles may be analyzed with [go tool pprof](https://github.com/g

Pass `-help` command-line arg to `vmauth` in order to see all the configuration options:

-```bash
+```console
./vmauth -help

vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics.

diff --git a/app/vmbackup/README.md b/app/vmbackup/README.md
index a3f960b9f..a28c057d5 100644
--- a/app/vmbackup/README.md
+++ b/app/vmbackup/README.md
@@ -28,7 +28,7 @@ creation of hourly, daily, weekly and monthly backups.

Regular backup can be performed with the following command:

-```bash
+```console
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/new/backup>
```

@@ -43,7 +43,7 @@ vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=h
If the destination GCS bucket already contains the previous backup at `-origin` path, then new backup can be sped up
with the following command:

-```bash
+```console
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/new/backup> -origin=gs://<bucket>/<path/to/existing/backup>
```

@@ -54,7 +54,7 @@ It saves time and network bandwidth costs by performing server-side copy for the
Incremental backups are performed if `-dst` points to an already existing backup. In this case only new data is uploaded to remote storage.
It saves time and network bandwidth costs when working with big backups:

-```bash
+```console
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<path/to/existing/backup>
```

@@ -64,7 +64,7 @@ Smart backups mean storing full daily backups into `YYYYMMDD` folders and creati

* Run the following command every hour:

-```bash
+```console
./vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/latest
```

@@ -73,7 +73,7 @@ The command will upload only changed data to `gs://<bucket>/latest`.

* Run the following command once a day:

-```bash
+```console
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshot.createURL=http://localhost:8428/snapshot/create -dst=gs://<bucket>/<YYYYMMDD> -origin=gs://<bucket>/latest
```

@@ -129,7 +129,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-

  for s3 (aws, minio or other s3 compatible storages):

-  ```bash
+  ```console
  [default]
  aws_access_key_id=theaccesskey
  aws_secret_access_key=thesecretaccesskeyvalue

@@ -155,7 +155,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
* Usage with s3 custom url endpoint. It is possible to use `vmbackup` with s3 compatible storages like minio, cloudian, etc.
  You have to add a custom url endpoint via flag:

-```bash
+```console
  # for minio
  -customS3Endpoint=http://localhost:9000

@@ -165,7 +165,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
* Run `vmbackup -help` in order to see all the available options:

-```bash
+```console
  -concurrency int
	The number of concurrent workers. Higher concurrency may reduce backup duration (default 10)
  -configFilePath string

@@ -280,6 +280,6 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmbackup`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

-```bash
+```console
ROOT_IMAGE=scratch make package-vmbackup
```
diff --git a/app/vmctl/README.md b/app/vmctl/README.md
index 2fca7b184..8488f0cc1 100644
--- a/app/vmctl/README.md
+++ b/app/vmctl/README.md
@@ -15,7 +15,7 @@ Features:

To see the full list of supported modes run the following command:

-```bash
+```console
$ ./vmctl --help
NAME:
   vmctl - VictoriaMetrics command-line tool
@@ -527,7 +527,7 @@ and specify `accountID` param.
In this mode, `vmctl` allows verifying correctness and integrity of data exported via [native format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-export-data-in-native-format)
from VictoriaMetrics.
You can verify exported data at disk before uploading it by `vmctl verify-block` command:

-```bash
+```console
# export blocks from VictoriaMetrics
curl localhost:8428/api/v1/export/native -g -d 'match[]={__name__!=""}' -o exported_data_block
# verify block content
@@ -650,7 +650,7 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmctl`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

-```bash
+```console
ROOT_IMAGE=scratch make package-vmctl
```
diff --git a/app/vmgateway/README.md b/app/vmgateway/README.md
index 51e71b723..9214edb2e 100644
--- a/app/vmgateway/README.md
+++ b/app/vmgateway/README.md
@@ -54,7 +54,7 @@ Where:

Start the single version of VictoriaMetrics

-```bash
+```console
# single
# start node
./bin/victoria-metrics --selfScrapeInterval=10s
```

Start vmgateway

-```bash
+```console
./bin/vmgateway -eula -enable.auth -read.url http://localhost:8428 --write.url http://localhost:8428
```

Retrieve data from the database

-```bash
+```console
curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2bV9hY2Nlc3MiOnsidGVuYW50X2lkIjp7fSwicm9sZSI6MX0sImV4cCI6MTkzOTM0NjIxMH0.5WUxEfdcV9hKo4CtQdtuZYOGpGXWwaqM9VuVivMMrVg'
```

A request with an incorrect token or without any token will be rejected:

-```bash
+```console
curl 'http://localhost:8431/api/v1/series/count'

curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer incorrect-token'

@@ -124,7 +124,7 @@ limits:

cluster version of VictoriaMetrics is required for rate limiting.

-```bash
+```console
# start datasource for cluster metrics

cat << EOF > cluster.yaml

diff --git a/app/vmrestore/README.md b/app/vmrestore/README.md
index dae554c7b..c0ceb3f4a 100644
--- a/app/vmrestore/README.md
+++ b/app/vmrestore/README.md
@@ -10,7 +10,7 @@ when restarting `vmrestore` with the same args.

VictoriaMetrics must be stopped during the restore process.

-```bash
+```console
vmrestore -src=gs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore>
```

@@ -36,7 +36,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q

  for s3 (aws, minio or other s3 compatible storages):

-  ```bash
+  ```console
  [default]
  aws_access_key_id=theaccesskey
  aws_secret_access_key=thesecretaccesskeyvalue

@@ -62,7 +62,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
* Usage with s3 custom url endpoint. It is possible to use `vmrestore` with s3 api compatible storages, like minio, cloudian and others.
  You have to add a custom url endpoint via flag:

-```bash
+```console
  # for minio:
  -customS3Endpoint=http://localhost:9000

@@ -72,7 +72,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
* Run `vmrestore -help` in order to see all the available options:

-```bash
+```console
  -concurrency int
	The number of concurrent workers. Higher concurrency may reduce restore duration (default 10)
  -configFilePath string

@@ -180,6 +180,6 @@ The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmrestore`.
The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

-```bash
+```console
ROOT_IMAGE=scratch make package-vmrestore
```
diff --git a/deployment/marketplace/digitialocean/one-click-droplet/README.md b/deployment/marketplace/digitialocean/one-click-droplet/README.md
index 2a6126474..5b7229b10 100644
--- a/deployment/marketplace/digitialocean/one-click-droplet/README.md
+++ b/deployment/marketplace/digitialocean/one-click-droplet/README.md
@@ -42,7 +42,7 @@ To check it, open the following in your browser `http://your_droplet_public_ipv4

Run the following command to query and retrieve a result from VictoriaMetrics Single with `curl`:

-```bash
+```console
curl -sg http://your_droplet_public_ipv4:8428/api/v1/query_range?query=vm_app_uptime_seconds | jq
```

@@ -50,6 +50,6 @@ Once the Droplet is created, you can use DigitalOcean's web console to start a session
or SSH directly to the server as root:

-```bash
+```console
ssh root@your_droplet_public_ipv4
```
diff --git a/deployment/marketplace/digitialocean/one-click-droplet/RELEASE_GUIDE.md b/deployment/marketplace/digitialocean/one-click-droplet/RELEASE_GUIDE.md
index 0813eec44..d33216d2c 100644
--- a/deployment/marketplace/digitialocean/one-click-droplet/RELEASE_GUIDE.md
+++ b/deployment/marketplace/digitialocean/one-click-droplet/RELEASE_GUIDE.md
@@ -6,13 +6,13 @@
2. API Token can be generated on [https://cloud.digitalocean.com/account/api/tokens](https://cloud.digitalocean.com/account/api/tokens) or use an already generated one from OnePassword.
3. 
Set the `DIGITALOCEAN_API_TOKEN` environment variable:

-```bash
+```console
export DIGITALOCEAN_API_TOKEN="your_token_here"
```

or set it with make:

-```bash
+```console
make release-victoria-metrics-digitalocean-oneclick-droplet DIGITALOCEAN_API_TOKEN="your_token_here"
```
diff --git a/snap/local/README.md b/snap/local/README.md
index 101890a6b..fe55d355c 100644
--- a/snap/local/README.md
+++ b/snap/local/README.md
@@ -10,7 +10,7 @@ Install snapcraft or docker

  build snap package with command

-  ```bash
+  ```console
  make build-snap
  ```

@@ -21,7 +21,7 @@ You can install it with command: `snap install victoriametrics_v1.46.0+git1.1beb

installation and configuration:

-```bash
+```console
# install
snap install victoriametrics
# logs
@@ -34,7 +34,7 @@ Configuration management:

Prometheus scrape config can be edited with your favorite editor, it's located at

-```bash
+```console
vi /var/snap/victoriametrics/current/etc/victoriametrics-scrape-config.yaml
```

@@ -42,7 +42,7 @@ after changes, you can trigger config reread with `curl localhost:8248/-/reload`

Configuration tuning is possible by editing extra_flags:

-```bash
+```console
echo 'FLAGS="-selfScrapeInterval=10s -search.logSlowQueryDuration=20s"' > /var/snap/victoriametrics/current/extra_flags
snap restart victoriametrics
```
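---

For reference, a mechanical sweep like this patch can be produced with a one-liner. This is a minimal sketch, assuming GNU `sed`, a POSIX shell, and that only opening fences carry the `bash` tag; it is illustrative, not the exact command used for this commit:

```console
# Rewrite opening ```bash fences (including indented ones inside lists)
# to ```console across all markdown files tracked in the repository.
git ls-files '*.md' | xargs sed -i -E 's/^([[:space:]]*)```bash[[:space:]]*$/\1```console/'
```

Running `git diff` afterwards should yield a change equivalent to the hunks above, since closing fences are bare ``` and are left untouched.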