diff --git a/app/vmagent/README.md b/app/vmagent/README.md
index ebe8804a3c..7f2498ce40 100644
--- a/app/vmagent/README.md
+++ b/app/vmagent/README.md
@@ -1130,13 +1130,13 @@ It may be needed to build `vmagent` from source code when developing or testing
### Development build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmagent` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmagent` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds the `vmagent` binary and puts it into the `bin` folder.
### Production build
1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmagent-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmagent-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmagent-prod` binary and puts it into the `bin` folder.
### Building docker images
@@ -1159,13 +1159,13 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
### Development ARM build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmagent-linux-arm` or `make vmagent-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics)
+1. Run `make vmagent-linux-arm` or `make vmagent-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics)
It builds `vmagent-linux-arm` or `vmagent-linux-arm64` binary respectively and puts it into the `bin` folder.
### Production ARM build
1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmagent-linux-arm-prod` or `make vmagent-linux-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmagent-linux-arm-prod` or `make vmagent-linux-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmagent-linux-arm-prod` or `vmagent-linux-arm64-prod` binary respectively and puts it into the `bin` folder.
## Profiling
diff --git a/app/vmalert/README.md b/app/vmalert/README.md
index 71e0f15df8..43332d03d5 100644
--- a/app/vmalert/README.md
+++ b/app/vmalert/README.md
@@ -1735,13 +1735,13 @@ spec:
### Development build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmalert` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmalert` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmalert` binary and puts it into the `bin` folder.
### Production build
1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmalert-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmalert-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmalert-prod` binary and puts it into the `bin` folder.
### ARM build
@@ -1751,11 +1751,11 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
### Development ARM build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmalert-linux-arm` or `make vmalert-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmalert-linux-arm` or `make vmalert-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmalert-linux-arm` or `vmalert-linux-arm64` binary respectively and puts it into the `bin` folder.
### Production ARM build
1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmalert-linux-arm-prod` or `make vmalert-linux-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmalert-linux-arm-prod` or `make vmalert-linux-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmalert-linux-arm-prod` or `vmalert-linux-arm64-prod` binary respectively and puts it into the `bin` folder.
diff --git a/app/vmauth/README.md b/app/vmauth/README.md
index 648d535eeb..72daeffc24 100644
--- a/app/vmauth/README.md
+++ b/app/vmauth/README.md
@@ -277,13 +277,13 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic
### Development build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmauth` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmauth` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmauth` binary and puts it into the `bin` folder.
### Production build
1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmauth-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmauth-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmauth-prod` binary and puts it into the `bin` folder.
### Building docker images
diff --git a/app/vmbackup/README.md b/app/vmbackup/README.md
index 82f24b82e9..62c93c4fb3 100644
--- a/app/vmbackup/README.md
+++ b/app/vmbackup/README.md
@@ -94,13 +94,13 @@ See also [vmbackupmanager tool](https://docs.victoriametrics.com/vmbackupmanager
The backup algorithm is the following:
1. Create a snapshot by querying the provided `-snapshot.createURL`
-2. Collect information about files in the created snapshot, in the `-dst` and in the `-origin`.
-3. Determine which files in `-dst` are missing in the created snapshot, and delete them. These are usually small files, which are already merged into bigger files in the snapshot.
-4. Determine which files in the created snapshot are missing in `-dst`. These are usually small new files and bigger merged files.
-5. Determine which files from step 3 exist in the `-origin`, and perform server-side copy of these files from `-origin` to `-dst`.
+1. Collect information about files in the created snapshot, in the `-dst` and in the `-origin`.
+1. Determine which files in `-dst` are missing in the created snapshot, and delete them. These are usually small files, which are already merged into bigger files in the snapshot.
+1. Determine which files in the created snapshot are missing in `-dst`. These are usually small new files and bigger merged files.
+1. Determine which files from step 3 exist in the `-origin`, and perform server-side copy of these files from `-origin` to `-dst`.
These are usually the biggest and the oldest files, which are shared between backups.
-6. Upload the remaining files from step 3 from the created snapshot to `-dst`.
-7. Delete the created snapshot.
+1. Upload the remaining files from step 3 from the created snapshot to `-dst`.
+1. Delete the created snapshot.
The algorithm splits source files into 1 GiB chunks in the backup. Each chunk is stored as a separate file in the backup.
Such splitting balances between the number of files in the backup and the amounts of data that needs to be re-transferred after temporary errors.
@@ -302,13 +302,13 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic
### Development build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmbackup` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmbackup` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmbackup` binary and puts it into the `bin` folder.
### Production build
1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmbackup-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmbackup-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmbackup-prod` binary and puts it into the `bin` folder.
### Building docker images
diff --git a/app/vmbackupmanager/README.md b/app/vmbackupmanager/README.md
index 640631c6c8..4ed8b3a1e3 100644
--- a/app/vmbackupmanager/README.md
+++ b/app/vmbackupmanager/README.md
@@ -298,7 +298,7 @@ If restore mark doesn't exist at `storageDataPath`(restore wasn't requested) `vm
   $ /vmbackupmanager-prod backup list
   [{"name":"daily/2023-04-07","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:07+00:00"},{"name":"hourly/2023-04-07:11","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:06+00:00"},{"name":"latest","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:04+00:00"},{"name":"monthly/2023-04","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:10+00:00"},{"name":"weekly/2023-14","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:09+00:00"}]
   ```
-2. Run `vmbackupmanager restore create` to create restore mark:
+1. Run `vmbackupmanager restore create` to create restore mark:
   - Use relative path to backup to restore from currently used remote storage:
   ```console
   $ /vmbackupmanager-prod restore create daily/2023-04-07
   ```
@@ -307,12 +307,12 @@ If restore mark doesn't exist at `storageDataPath`(restore wasn't requested) `vm
   ```console
   $ /vmbackupmanager-prod restore create azblob://test1/vmbackupmanager/daily/2023-04-07
   ```
-3. Stop `vmstorage` or `vmsingle` node
-4. Run `vmbackupmanager restore` to restore backup:
+1. Stop `vmstorage` or `vmsingle` node
+1. Run `vmbackupmanager restore` to restore backup:
   ```console
   $ /vmbackupmanager-prod restore -credsFilePath=credentials.json -storageDataPath=/vmstorage-data
   ```
-5. Start `vmstorage` or `vmsingle` node
+1. Start `vmstorage` or `vmsingle` node
### How to restore in Kubernetes
@@ -326,13 +326,13 @@ If restore mark doesn't exist at `storageDataPath`(restore wasn't requested) `vm
      enabled: "true"
   ```
   See operator `VMStorage` schema [here](https://docs.victoriametrics.com/operator/api.html#vmstorage) and `VMSingle` [here](https://docs.victoriametrics.com/operator/api.html#vmsinglespec).
-2. Enter container running `vmbackupmanager`
-2. Use `vmbackupmanager backup list` to get list of available backups:
+1. Enter container running `vmbackupmanager`
+1. Use `vmbackupmanager backup list` to get list of available backups:
   ```console
   $ /vmbackupmanager-prod backup list
   [{"name":"daily/2023-04-07","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:07+00:00"},{"name":"hourly/2023-04-07:11","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:06+00:00"},{"name":"latest","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:04+00:00"},{"name":"monthly/2023-04","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:10+00:00"},{"name":"weekly/2023-14","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:09+00:00"}]
   ```
-3. Use `vmbackupmanager restore create` to create restore mark:
+1. Use `vmbackupmanager restore create` to create restore mark:
   - Use relative path to backup to restore from currently used remote storage:
   ```console
   $ /vmbackupmanager-prod restore create daily/2023-04-07
   ```
@@ -341,7 +341,7 @@ If restore mark doesn't exist at `storageDataPath`(restore wasn't requested) `vm
   ```console
   $ /vmbackupmanager-prod restore create azblob://test1/vmbackupmanager/daily/2023-04-07
   ```
-4. Restart pod
+1. Restart pod
#### Restore cluster into another cluster
@@ -358,13 +358,13 @@ Clusters here are referred to as `source` and `destination`.
   ```
   Note: it is safe to leave this section in the cluster configuration, since it will be ignored if restore mark doesn't exist.
   > Important! Use different `-dst` for *destination* cluster to avoid overwriting backup data of the *source* cluster.
-2. Enter container running `vmbackupmanager` in *source* cluster
-2. Use `vmbackupmanager backup list` to get list of available backups:
+1. Enter container running `vmbackupmanager` in *source* cluster
+1. Use `vmbackupmanager backup list` to get list of available backups:
   ```console
   $ /vmbackupmanager-prod backup list
   [{"name":"daily/2023-04-07","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:07+00:00"},{"name":"hourly/2023-04-07:11","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:06+00:00"},{"name":"latest","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:04+00:00"},{"name":"monthly/2023-04","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:10+00:00"},{"name":"weekly/2023-14","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:09+00:00"}]
   ```
-3. Use `vmbackupmanager restore create` to create restore mark at each pod of the *destination* cluster.
+1. Use `vmbackupmanager restore create` to create restore mark at each pod of the *destination* cluster.
   Each pod in *destination* cluster should be restored from backup of respective pod in *source* cluster.
   For example: `vmstorage-source-0` in *source* cluster should be restored from `vmstorage-destination-0` in *destination* cluster.
   ```console
diff --git a/app/vmctl/README.md b/app/vmctl/README.md
index ff8c917525..c5f008995f 100644
--- a/app/vmctl/README.md
+++ b/app/vmctl/README.md
@@ -85,24 +85,24 @@ See `./vmctl opentsdb --help` for details and full list of flags.
OpenTSDB migration works like so:
-1. Find metrics based on selected filters (or the default filter set `['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']`)
+1. Find metrics based on selected filters (or the default filter set `['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']`):
-- e.g. `curl -Ss "http://opentsdb:4242/api/suggest?type=metrics&q=sys"`
+ `curl -Ss "http://opentsdb:4242/api/suggest?type=metrics&q=sys"`
-2. Find series associated with each returned metric
+1. Find series associated with each returned metric:
-- e.g. `curl -Ss "http://opentsdb:4242/api/search/lookup?m=system.load5&limit=1000000"`
+ `curl -Ss "http://opentsdb:4242/api/search/lookup?m=system.load5&limit=1000000"`
-Here `results` return field should not be empty. Otherwise, it means that meta tables are absent and needs to be turned on previously.
+ Here `results` return field should not be empty. Otherwise, it means that meta tables are absent and needs to be turned on previously.
-3. Download data for each series in chunks defined in the CLI switches
+1. Download data for each series in chunks defined in the CLI switches:
-- e.g. `-retention=sum-1m-avg:1h:90d` means
- - `curl -Ss "http://opentsdb:4242/api/query?start=1h-ago&end=now&m=sum:1m-avg-none:system.load5\{host=host1\}"`
- - `curl -Ss "http://opentsdb:4242/api/query?start=2h-ago&end=1h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
- - `curl -Ss "http://opentsdb:4242/api/query?start=3h-ago&end=2h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
- - ...
- - `curl -Ss "http://opentsdb:4242/api/query?start=2160h-ago&end=2159h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
+ `-retention=sum-1m-avg:1h:90d` means:
+ - `curl -Ss "http://opentsdb:4242/api/query?start=1h-ago&end=now&m=sum:1m-avg-none:system.load5\{host=host1\}"`
+ - `curl -Ss "http://opentsdb:4242/api/query?start=2h-ago&end=1h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
+ - `curl -Ss "http://opentsdb:4242/api/query?start=3h-ago&end=2h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
+ - ...
+ - `curl -Ss "http://opentsdb:4242/api/query?start=2160h-ago&end=2159h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
This means that we must stream data from OpenTSDB to VictoriaMetrics in chunks. This is where concurrency for OpenTSDB comes in. We can query multiple chunks at once, but we shouldn't perform too many chunks at a time to avoid overloading the OpenTSDB cluster.
@@ -131,7 +131,7 @@ Starting with a relatively simple retention string (`sum-1m-avg:1h:30d`), let's
There are two essential parts of a retention string:
1. [aggregation](#aggregation)
-2. [windows/time ranges](#windows)
+1. [windows/time ranges](#windows)
#### Aggregation
@@ -163,7 +163,7 @@ We do not allow for defining the "null value" portion of the rollup window (e.g.
There are two important windows we define in a retention string:
1. the "chunk" range of each query
-2. The time range we will be querying on with that "chunk"
+1. The time range we will be querying on with that "chunk"
From our example, our windows are `1h:30d`.
@@ -445,11 +445,11 @@ See `./vmctl remote-read --help` for details and full list of flags.
To start the migration process configure the following flags:
1. `--remote-read-src-addr` - data source address to read from;
-2. `--vm-addr` - VictoriaMetrics address to write to. For single-node VM is usually equal to `--httpListenAddr`,
-and for cluster version is equal to `--httpListenAddr` flag of vminsert component (for example `http://:8480/insert//prometheus`);
-3. `--remote-read-filter-time-start` - the time filter in RFC3339 format to select time series with timestamp equal or higher than provided value. E.g. '2020-01-01T20:07:00Z';
-4. `--remote-read-filter-time-end` - the time filter in RFC3339 format to select time series with timestamp equal or smaller than provided value. E.g. '2020-01-01T20:07:00Z'. Current time is used when omitted.;
-5. `--remote-read-step-interval` - split export data into chunks. Valid values are `month, day, hour, minute`;
+1. `--vm-addr` - VictoriaMetrics address to write to. For single-node VM is usually equal to `--httpListenAddr`,
+ and for cluster version is equal to `--httpListenAddr` flag of vminsert component (for example `http://:8480/insert//prometheus`);
+1. `--remote-read-filter-time-start` - the time filter in RFC3339 format to select time series with timestamp equal or higher than provided value. E.g. '2020-01-01T20:07:00Z';
+1. `--remote-read-filter-time-end` - the time filter in RFC3339 format to select time series with timestamp equal or smaller than provided value. E.g. '2020-01-01T20:07:00Z'. Current time is used when omitted.;
+1. `--remote-read-step-interval` - split export data into chunks. Valid values are `month, day, hour, minute`;
The importing process example for local installation of Prometheus and single-node VictoriaMetrics(`http://localhost:8428`):
@@ -516,7 +516,7 @@ and that you have a separate Thanos Store installation.
   - url: http://victoria-metrics:8428/api/v1/write
   ```
-2. Make sure VM is running, of course. Now check the logs to make sure that Prometheus is sending and VM is receiving.
+1. Make sure VM is running, of course. Now check the logs to make sure that Prometheus is sending and VM is receiving.
   In Prometheus, make sure there are no errors. On the VM side, you should see messages like this:
   ```
@@ -524,7 +524,7 @@ and that you have a separate Thanos Store installation.
   2020-04-27T18:38:46.506Z info VictoriaMetrics/lib/storage/partition.go:222 partition "2020_04" has been created
   ```
-3. Now just wait. Within two hours, Prometheus should finish its current data file and hand it off to Thanos Store for long term
+1. Now just wait. Within two hours, Prometheus should finish its current data file and hand it off to Thanos Store for long term
   storage.
### Historical data
@@ -736,7 +736,7 @@ See `./vmctl vm-native --help` for details and full list of flags.
Migration in `vm-native` mode takes two steps:
1. Explore the list of the metrics to migrate via `api/v1/label/__name__/values` API;
-2. Migrate explored metrics one-by-one.
+1. Migrate explored metrics one-by-one.
```
./vmctl vm-native \
@@ -770,54 +770,54 @@ _To disable explore phase and switch to the old way of data migration via single
Importing tips:
1. Migrating big volumes of data may result in reaching the safety limits on `src` side.
-Please verify that `-search.maxExportDuration` and `-search.maxExportSeries` were set with
-proper values for `src`. If hitting the limits, follow the recommendations
-[here](https://docs.victoriametrics.com/#how-to-export-data-in-native-format).
-If hitting `the number of matching timeseries exceeds...` error, adjust filters to match less time series or
-update `-search.maxSeries` command-line flag on vmselect/vmsingle;
-2. Migrating all the metrics from one VM to another may collide with existing application metrics
-(prefixed with `vm_`) at destination and lead to confusion when using
-[official Grafana dashboards](https://grafana.com/orgs/victoriametrics/dashboards).
-To avoid such situation try to filter out VM process metrics via `--vm-native-filter-match='{__name__!~"vm_.*"}'` flag.
-3. Migrating data with overlapping time range or via unstable network can produce duplicates series at destination.
-To avoid duplicates set `-dedup.minScrapeInterval=1ms` for `vmselect`/`vmstorage` at the destination.
-This will instruct `vmselect`/`vmstorage` to ignore duplicates with identical timestamps.
-4. When migrating large volumes of data use `--vm-native-step-interval` flag to split migration [into steps](#using-time-based-chunking-of-migration).
-5. When migrating data from one VM cluster to another, consider using [cluster-to-cluster mode](#cluster-to-cluster-migration-mode).
-Or manually specify addresses according to [URL format](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format):
- ```console
- # Migrating from cluster specific tenantID to single
- --vm-native-src-addr=http://:8481/select/0/prometheus
- --vm-native-dst-addr=http://:8428
+ Please verify that `-search.maxExportDuration` and `-search.maxExportSeries` were set with
+ proper values for `src`. If hitting the limits, follow the recommendations
+ [here](https://docs.victoriametrics.com/#how-to-export-data-in-native-format).
+ If hitting `the number of matching timeseries exceeds...` error, adjust filters to match less time series or
+ update `-search.maxSeries` command-line flag on vmselect/vmsingle;
+1. Migrating all the metrics from one VM to another may collide with existing application metrics
+ (prefixed with `vm_`) at destination and lead to confusion when using
+ [official Grafana dashboards](https://grafana.com/orgs/victoriametrics/dashboards).
+ To avoid such situation try to filter out VM process metrics via `--vm-native-filter-match='{__name__!~"vm_.*"}'` flag.
+1. Migrating data with overlapping time range or via unstable network can produce duplicates series at destination.
+ To avoid duplicates set `-dedup.minScrapeInterval=1ms` for `vmselect`/`vmstorage` at the destination.
+ This will instruct `vmselect`/`vmstorage` to ignore duplicates with identical timestamps.
+1. When migrating large volumes of data use `--vm-native-step-interval` flag to split migration [into steps](#using-time-based-chunking-of-migration).
+1. When migrating data from one VM cluster to another, consider using [cluster-to-cluster mode](#cluster-to-cluster-migration-mode).
+ Or manually specify addresses according to [URL format](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format):
+ ```console
+ # Migrating from cluster specific tenantID to single
+ --vm-native-src-addr=http://:8481/select/0/prometheus
+ --vm-native-dst-addr=http://:8428
- # Migrating from single to cluster specific tenantID
- --vm-native-src-addr=http://:8428
- --vm-native-src-addr=http://:8480/insert/0/prometheus
+ # Migrating from single to cluster specific tenantID
+ --vm-native-src-addr=http://:8428
+ --vm-native-src-addr=http://:8480/insert/0/prometheus
- # Migrating single to single
- --vm-native-src-addr=http://:8428
- --vm-native-dst-addr=http://:8428
+ # Migrating single to single
+ --vm-native-src-addr=http://:8428
+ --vm-native-dst-addr=http://:8428
- # Migrating cluster to cluster for specific tenant ID
- --vm-native-src-addr=http://:8481/select/0/prometheus
- --vm-native-dst-addr=http://:8480/insert/0/prometheus
- ```
-6. Migrating data from VM cluster which had replication (`-replicationFactor` > 1) enabled won't produce the same amount
-of data copies for the destination database, and will result only in creating duplicates. To remove duplicates,
-destination database need to be configured with `-dedup.minScrapeInterval=1ms`. To restore the replication factor
-the destination `vminsert` component need to be configured with the according `-replicationFactor` value.
-See more about replication [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#replication-and-data-safety).
-7. Migration speed can be adjusted via `--vm-concurrency` cmd-line flag, which controls the number of concurrent
-workers busy with processing. Please note, that each worker can load up to a single vCPU core on VictoriaMetrics.
-So try to set it according to allocated CPU resources of your VictoriaMetrics destination installation.
-7. Migration is a backfilling process, so it is recommended to read
-[Backfilling tips](https://github.com/VictoriaMetrics/VictoriaMetrics#backfilling) section.
-8. `vmctl` doesn't provide relabeling or other types of labels management.
-Instead, use [relabeling in VictoriaMetrics](https://github.com/VictoriaMetrics/vmctl/issues/4#issuecomment-683424375).
-9. `vmctl` supports `--vm-native-src-headers` and `--vm-native-dst-headers` to define headers sent with each request
-to the corresponding source address.
-10. `vmctl` supports `--vm-native-disable-http-keep-alive` to allow `vmctl` to use non-persistent HTTP connections to avoid
-error `use of closed network connection` when run a longer export.
+ # Migrating cluster to cluster for specific tenant ID
+ --vm-native-src-addr=http://:8481/select/0/prometheus
+ --vm-native-dst-addr=http://:8480/insert/0/prometheus
+ ```
+1. Migrating data from VM cluster which had replication (`-replicationFactor` > 1) enabled won't produce the same amount
+ of data copies for the destination database, and will result only in creating duplicates. To remove duplicates,
+ destination database need to be configured with `-dedup.minScrapeInterval=1ms`. To restore the replication factor
+ the destination `vminsert` component need to be configured with the according `-replicationFactor` value.
+ See more about replication [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#replication-and-data-safety).
+1. Migration speed can be adjusted via `--vm-concurrency` cmd-line flag, which controls the number of concurrent
+ workers busy with processing. Please note, that each worker can load up to a single vCPU core on VictoriaMetrics.
+ So try to set it according to allocated CPU resources of your VictoriaMetrics destination installation.
+1. Migration is a backfilling process, so it is recommended to read
+ [Backfilling tips](https://github.com/VictoriaMetrics/VictoriaMetrics#backfilling) section.
+1. `vmctl` doesn't provide relabeling or other types of labels management.
+ Instead, use [relabeling in VictoriaMetrics](https://github.com/VictoriaMetrics/vmctl/issues/4#issuecomment-683424375).
+1. `vmctl` supports `--vm-native-src-headers` and `--vm-native-dst-headers` to define headers sent with each request
+ to the corresponding source address.
+1. `vmctl` supports `--vm-native-disable-http-keep-alive` to allow `vmctl` to use non-persistent HTTP connections to avoid
+ error `use of closed network connection` when run a longer export.
### Using time-based chunking of migration
@@ -1024,13 +1024,13 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic
### Development build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmctl` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmctl` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmctl` binary and puts it into the `bin` folder.
### Production build
1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmctl-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmctl-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmctl-prod` binary and puts it into the `bin` folder.
### Building docker images
@@ -1053,11 +1053,11 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
#### Development ARM build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmctl-linux-arm` or `make vmctl-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmctl-linux-arm` or `make vmctl-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmctl-linux-arm` or `vmctl-linux-arm64` binary respectively and puts it into the `bin` folder.
#### Production ARM build
1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmctl-linux-arm-prod` or `make vmctl-linux-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmctl-linux-arm-prod` or `make vmctl-linux-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmctl-linux-arm-prod` or `vmctl-linux-arm64-prod` binary respectively and puts it into the `bin` folder.
diff --git a/app/vmrestore/README.md b/app/vmrestore/README.md
index 722e1e4fea..4e77e60e68 100644
--- a/app/vmrestore/README.md
+++ b/app/vmrestore/README.md
@@ -202,13 +202,13 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic
### Development build
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmrestore` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmrestore` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmrestore` binary and puts it into the `bin` folder.
### Production build
1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmrestore-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmrestore-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmrestore-prod` binary and puts it into the `bin` folder.
### Building docker images
diff --git a/docs/guides/README.md b/docs/guides/README.md
index 7fd3cdc21f..a56b420f5c 100644
--- a/docs/guides/README.md
+++ b/docs/guides/README.md
@@ -11,13 +11,13 @@ menu:
# Guides
1. [K8s monitoring via VM Single](k8s-monitoring-via-vm-single.html)
-2. [K8s monitoring via VM Cluster](k8s-monitoring-via-vm-cluster.html)
-3. [HA monitoring setup in K8s via VM Cluster](k8s-ha-monitoring-via-vm-cluster.html)
-4. [Getting started with VM Operator](getting-started-with-vm-operator.html)
-5. [Multi Retention Setup within VictoriaMetrics Cluster](guide-vmcluster-multiple-retention-setup.html)
-6. [Migrate from InfluxDB to VictoriaMetrics](migrate-from-influx.html)
-7. [Multi-regional setup with VictoriaMetrics: Dedicated regions for monitoring](multi-regional-setup-dedicated-regions.html)
-8. [How to delete or replace metrics in VictoriaMetrics](guide-delete-or-replace-metrics.html)
-9. [How to monitor kubernetes cluster using Managed VictoriaMetrics](/managed-victoriametrics/how-to-monitor-k8s.html)
-10. [How to configure vmgateway for multi-tenant access using Grafana and OpenID Connect](grafana-vmgateway-openid-configuration.html)
-11. [How to setup vmanomaly together with vmalert](guide-vmanomaly-vmalert.html)
+1. [K8s monitoring via VM Cluster](k8s-monitoring-via-vm-cluster.html)
+1. [HA monitoring setup in K8s via VM Cluster](k8s-ha-monitoring-via-vm-cluster.html)
+1. [Getting started with VM Operator](getting-started-with-vm-operator.html)
+1. [Multi Retention Setup within VictoriaMetrics Cluster](guide-vmcluster-multiple-retention-setup.html)
+1. [Migrate from InfluxDB to VictoriaMetrics](migrate-from-influx.html)
+1. [Multi-regional setup with VictoriaMetrics: Dedicated regions for monitoring](multi-regional-setup-dedicated-regions.html)
+1. [How to delete or replace metrics in VictoriaMetrics](guide-delete-or-replace-metrics.html)
+1. [How to monitor kubernetes cluster using Managed VictoriaMetrics](/managed-victoriametrics/how-to-monitor-k8s.html)
+1. [How to configure vmgateway for multi-tenant access using Grafana and OpenID Connect](grafana-vmgateway-openid-configuration.html)
+1. [How to setup vmanomaly together with vmalert](guide-vmanomaly-vmalert.html)
diff --git a/docs/guides/grafana-vmgateway-openid-configuration.md b/docs/guides/grafana-vmgateway-openid-configuration.md
index 64bc8cdae9..63cc7206ae 100644
--- a/docs/guides/grafana-vmgateway-openid-configuration.md
+++ b/docs/guides/grafana-vmgateway-openid-configuration.md
@@ -44,35 +44,35 @@ See details about all supported options in the [vmgateway documentation](https:/
[Keycloak](https://www.keycloak.org/) is an open source identity service that can be used to issue JWT tokens.
1. Log in with admin credentials to your Keycloak instance
-2. Go to `Clients` -> `Create`.
+1. Go to `Clients` -> `Create`.
Use `OpenID Connect` as `Client Type`.
Specify `grafana` as `Client ID`.
Click `Next`.
-3. Enable `Client authentication`.
+1. Enable `Client authentication`.
Enable `Authorization`.

Click `Next`.
-4. Add Grafana URL as `Root URL`. For example, `http://localhost:3000/`.
+1. Add Grafana URL as `Root URL`. For example, `http://localhost:3000/`.

Click `Save`.
-5. Go to `Clients` -> `grafana` -> `Credentials`.
+1. Go to `Clients` -> `grafana` -> `Credentials`.

Copy the value of `Client secret`. It will be used later in Grafana configuration.
-6. Go to `Clients` -> `grafana` -> `Client scopes`.
+1. Go to `Clients` -> `grafana` -> `Client scopes`.
Click at `grafana-dedicated` -> `Add mapper` -> `By configuration` -> `User attribute`.


Configure the mapper as follows
-   - `Name` as `vm_access`.
-   - `Token Claim Name` as `vm_access`.
-   - `User Attribute` as `vm_access`.
-   - `Claim JSON Type` as `JSON`.
-   Enable `Add to ID token` and `Add to access token`.
+    - `Name` as `vm_access`.
+    - `Token Claim Name` as `vm_access`.
+    - `User Attribute` as `vm_access`.
+    - `Claim JSON Type` as `JSON`.
+    Enable `Add to ID token` and `Add to access token`.
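The mapper above copies the `vm_access` user attribute into issued tokens. A quick way to see what such a claim looks like on the wire is to round-trip it through base64url, the encoding used for JWT payload segments; a self-contained sketch (the claim value mirrors the example attribute configured for the first user later in this guide — no real Keycloak token is involved):

```shell
#!/bin/sh
# Round-trip the example `vm_access` claim through base64url, the encoding
# used for JWT payload segments (header and signature are omitted here).
claim='{"tenant_id":{"account_id":0,"project_id":0},"extra_labels":{"team":"admin"}}'
payload="{\"vm_access\":$claim}"
# encode: standard base64, unwrap, switch to the URL-safe alphabet, drop padding
seg=$(printf '%s' "$payload" | base64 | tr -d '\n' | tr -- '+/' '-_' | tr -d '=')
# decode: restore the standard alphabet and the stripped '=' padding
std=$(printf '%s' "$seg" | tr -- '-_' '+/')
while [ $(( ${#std} % 4 )) -ne 0 ]; do std="${std}="; done
decoded=$(printf '%s' "$std" | base64 -d)
[ "$decoded" = "$payload" ] && echo "vm_access claim round-trips OK"
```

Pasting the middle segment of a real token from Keycloak through the decode half of this sketch shows the same `vm_access` structure vmgateway reads.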
-
- Click `Save`.
-7. Go to `Users` -> select user to configure claims -> `Attributes`.
+
+ Click `Save`.
+1. Go to `Users` -> select user to configure claims -> `Attributes`.
Specify `vm_access` as `Key`.
For the purpose of this example, we will use 2 users:
- for the first user we will specify `{"tenant_id" : {"account_id": 0, "project_id": 0 },"extra_labels":{ "team": "admin" }}` as `Value`. diff --git a/docs/guides/guide-vmcluster-multiple-retention-setup.md b/docs/guides/guide-vmcluster-multiple-retention-setup.md index 53dec9f545..a00565eb8c 100644 --- a/docs/guides/guide-vmcluster-multiple-retention-setup.md +++ b/docs/guides/guide-vmcluster-multiple-retention-setup.md @@ -38,11 +38,12 @@ The diagram below shows a proposed solution
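Almost every hunk in this patch replaces explicit ordered-list markers (`2.`, `3.`, `10.`, …) with the lazy `1.` style. Markdown renderers number ordered-list items sequentially from the first marker regardless of the literal digits, so the rendered pages are unchanged while future insertions into a list no longer force renumbering every later item. A sketch of the mechanical rewrite (the file name and sample content are made up):

```shell
#!/bin/sh
# Rewrite explicit ordered-list markers to the lazy `1.` style: match an
# optionally indented number followed by ". " at the start of a line.
cat > sample.md <<'EOF'
1. first step
2. second step
10. tenth step
EOF
sed -E 's/^([[:space:]]*)[0-9]+\. /\11. /' sample.md
# every marker becomes `1.`; renderers still display 1, 2, 3
```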

**Implementation Details**

- 1. Groups of vminserts A know about only vmstorages A and this is explicitly specified via `-storageNode` [configuration](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#cluster-setup).
- 2. Groups of vminserts B know about only vmstorages B and this is explicitly specified via `-storageNode` [configuration](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#cluster-setup).
- 3. Groups of vminserts C know about only vmstorages A and this is explicitly specified via `-storageNode` [configuration](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#cluster-setup).
- 4. Vmselect reads data from all vmstorage nodes via `-storageNode` [configuration](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#cluster-setup).
- 5. Vmagent routes incoming metrics to the given set of `vminsert` nodes using relabeling rules specified at `-remoteWrite.urlRelabelConfig` [configuration](https://docs.victoriametrics.com/vmagent.html#relabeling).
+
+1. Groups of vminserts A know about only vmstorages A and this is explicitly specified via `-storageNode` [configuration](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#cluster-setup).
+1. Groups of vminserts B know about only vmstorages B and this is explicitly specified via `-storageNode` [configuration](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#cluster-setup).
+1. Groups of vminserts C know about only vmstorages A and this is explicitly specified via `-storageNode` [configuration](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#cluster-setup).
+1. Vmselect reads data from all vmstorage nodes via `-storageNode` [configuration](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#cluster-setup).
+1. Vmagent routes incoming metrics to the given set of `vminsert` nodes using relabeling rules specified at `-remoteWrite.urlRelabelConfig` [configuration](https://docs.victoriametrics.com/vmagent.html#relabeling).
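The vmagent routing in the last implementation detail can be expressed as per-URL relabel configs. A rough sketch, assuming incoming series carry a `retention` label; the label name, file names, and vminsert addresses are all illustrative, and `-remoteWrite.urlRelabelConfig` flags are matched to `-remoteWrite.url` flags by position:

```shell
#!/bin/sh
# Two relabel configs: group A drops long-retention series, group B keeps
# only those, so each vminsert group receives a disjoint share of the stream.
cat > relabel-a.yml <<'EOF'
- action: drop
  if: '{retention="long"}'
EOF
cat > relabel-b.yml <<'EOF'
- action: keep
  if: '{retention="long"}'
EOF
# Illustrative vmagent invocation (not executed here):
# vmagent -remoteWrite.url=http://vminsert-a:8480/insert/0/prometheus \
#         -remoteWrite.urlRelabelConfig=relabel-a.yml \
#         -remoteWrite.url=http://vminsert-b:8480/insert/0/prometheus \
#         -remoteWrite.urlRelabelConfig=relabel-b.yml
grep -q 'action: drop' relabel-a.yml && grep -q 'action: keep' relabel-b.yml \
  && echo "relabel configs written"
```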
**Multi-Tenant Setup** diff --git a/docs/guides/multi-regional-setup-dedicated-regions.md b/docs/guides/multi-regional-setup-dedicated-regions.md index 539bb12e17..fc61580b0a 100644 --- a/docs/guides/multi-regional-setup-dedicated-regions.md +++ b/docs/guides/multi-regional-setup-dedicated-regions.md @@ -55,11 +55,11 @@ You can use one of the following options: 1. Multi-level [vmselect setup](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#multi-level-cluster-setup) in cluster setup, top-level vmselect(s) reads data from cluster-level vmselects * Returns data in one of the clusters is unavailable * Merges data from both sources. You need to turn on [deduplication](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#deduplication) to remove duplicates -2. Regional endpoints - use one regional endpoint as default and switch to another if there is an issue. -3. Load balancer - that sends queries to a particular region. The benefit and disadvantage of this setup is that it's simple. -4. Promxy - proxy that reads data from multiple Prometheus-like sources. It allows reading data more intelligently to cover the region's unavailability out of the box. It doesn't support MetricsQL yet (please check this issue). -5. Global vmselect in cluster setup - you can set up an additional subset of vmselects that knows about all storages in all regions. - * The [deduplication](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#deduplication) in 1ms on the vmselect side must be turned on. This setup allows you to query data using MetricsQL. +1. Regional endpoints - use one regional endpoint as default and switch to another if there is an issue. +1. Load balancer - that sends queries to a particular region. The benefit and disadvantage of this setup is that it's simple. +1. Promxy - proxy that reads data from multiple Prometheus-like sources. It allows reading data more intelligently to cover the region's unavailability out of the box. 
It doesn't support MetricsQL yet (please check this issue). +1. Global vmselect in cluster setup - you can set up an additional subset of vmselects that knows about all storages in all regions. + * The [deduplication](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#deduplication) in 1ms on the vmselect side must be turned on. This setup allows you to query data using MetricsQL. * The downside is that vmselect waits for a response from all storages in all regions. @@ -94,4 +94,4 @@ Additional context ### What more can we do? Setup vmagents in Ground Control regions. That allows it to accept data close to storage and add more reliability if storage is temporarily offline. -g \ No newline at end of file +g diff --git a/docs/managed-victoriametrics/how-to-monitor-k8s.md b/docs/managed-victoriametrics/how-to-monitor-k8s.md index 82a255d08b..252338ef32 100644 --- a/docs/managed-victoriametrics/how-to-monitor-k8s.md +++ b/docs/managed-victoriametrics/how-to-monitor-k8s.md @@ -38,7 +38,7 @@ Install the Helm chart in a custom namespace kubectl create namespace monitoring ``` -2. Create kubernetes-secrets with token to access your dbaas deployment +1. Create kubernetes-secrets with token to access your dbaas deployment
```bash
kubectl --namespace monitoring create secret generic dbaas-write-access-token --from-literal=bearerToken=your-token
```
You can find your access token on the "Access" tab of your deployment
-
-3. Set up a Helm repository using the following commands:
+1. Set up a Helm repository using the following commands:
```bash
helm repo add grafana https://grafana.github.io/helm-charts

helm repo update
```
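Kubernetes stores secret values base64-encoded, so the `bearerToken` created above can later be inspected with `kubectl --namespace monitoring get secret dbaas-write-access-token -o jsonpath='{.data.bearerToken}' | base64 -d`. The decode step can be reproduced locally without a cluster; a sketch using the guide's `your-token` placeholder:

```shell
#!/bin/sh
# Mimic what the secret's .data.bearerToken field holds and how to decode it.
token='your-token'                       # placeholder from the guide
stored=$(printf '%s' "$token" | base64)  # what Kubernetes keeps in .data
printf '%s\n' "$stored" | base64 -d      # prints: your-token
```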
-4. Create a YAML file of Helm values called dbaas.yaml with following content
+1. Create a YAML file of Helm values called `dbaas.yaml` with the following content:
```yaml
externalVM:
@@ -97,7 +96,7 @@ Install the Helm chart in a custom namespace
     enabled: true
```
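Before running `helm install`, the values file can be stubbed out and checked for the keys the chart reads. A minimal stand-in (the `externalVM` endpoint URLs are placeholders — the real read/write URLs and access token come from the deployment's Access tab):

```shell
#!/bin/sh
# Write a minimal stand-in for dbaas.yaml and check the expected keys exist.
cat > dbaas.yaml <<'EOF'
externalVM:
  read:
    url: https://<your-deployment>/api/v1/query
  write:
    url: https://<your-deployment>/api/v1/write
EOF
grep -c 'url:' dbaas.yaml   # expect 2: one read and one write endpoint
```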
-5. Install VictoriaMetrics-k8s-stack helm chart
+1. Install the VictoriaMetrics-k8s-stack Helm chart:
```bash
helm --namespace monitoring install vm vm/victoria-metrics-k8s-stack -f dbaas.yaml -n monitoring
```
@@ -116,16 +115,16 @@ Connect to grafana and create your datasource
```bash
kubectl --namespace monitoring get secret vm-grafana -o jsonpath="{.data.admin-password}" | base64 -d
```
-2. Connect to grafana
+1. Connect to Grafana:
```bash
kubectl --namespace monitoring port-forward service/vm-grafana 3000:80
```
-3. Open grafana in your browser [http://localhost:3000/datasources](http://localhost:3000/datasources) +1. Open grafana in your browser [http://localhost:3000/datasources](http://localhost:3000/datasources) Use admin as username and password from previous step -4. Click on add datasource +1. Click on add datasource Choose VictoriaMetrics or Prometheus as datasource type. Make sure you made this datasource as default for dashboards to work. > You can find token and URL in your deployment, on Access tab diff --git a/docs/managed-victoriametrics/quickstart.md b/docs/managed-victoriametrics/quickstart.md index 363756c15c..b037efe9fa 100644 --- a/docs/managed-victoriametrics/quickstart.md +++ b/docs/managed-victoriametrics/quickstart.md @@ -18,10 +18,10 @@ monitoring, logs collection, access protection, software updates, backups, etc. The document covers the following topics 1. [How to register](#how-to-register) -2. [How to restore password](#how-to-restore-password) -3. [Creating deployment](#creating-deployment) -4. [Deployment access](#deployment-access) -5. [Modifying deployment](#modifying-deployment) +1. [How to restore password](#how-to-restore-password) +1. [Creating deployment](#creating-deployment) +1. [Deployment access](#deployment-access) +1. [Modifying deployment](#modifying-deployment) ## How to register @@ -77,34 +77,33 @@ If you forgot password, it can be restored in the following way: 1. Click `Forgot your password?` link at [this page](https://dbaas.victoriametrics.com/signIn): -

- -

+

+ +

-2. Enter your email in the field and click `Send Email` button: +1. Enter your email in the field and click `Send Email` button: -

- -

+

+ +

-3. Follow the instruction sent to your email in order to gain access to your VictoriaMetrics cloud account: +1. Follow the instruction sent to your email in order to gain access to your VictoriaMetrics cloud account: -

- -

+

+ +

-4. Navigate to the Profile page by clicking the corresponding link at the top right corner: +1. Navigate to the Profile page by clicking the corresponding link at the top right corner: -

- -

+

+ +

-5. Enter new password at the Profile page and press `Save` button: - -

- -

+1. Enter new password at the Profile page and press `Save` button: +

+ +

## Creating deployment diff --git a/docs/managed-victoriametrics/setup-notifications.md b/docs/managed-victoriametrics/setup-notifications.md index 75fa91539b..a5b04fde3b 100644 --- a/docs/managed-victoriametrics/setup-notifications.md +++ b/docs/managed-victoriametrics/setup-notifications.md @@ -15,8 +15,8 @@ The guide covers how to enable email and Slack notifications. Table of content: 1. [Setup Slack notifications](#setup-slack-notifications) -2. [Setup emails notifications](#setup-emails-notifications) -3. [Send test notification](#send-test-notification) +1. [Setup emails notifications](#setup-emails-notifications) +1. [Send test notification](#send-test-notification) When you enter the notification section, you will be able to fill in the channels in which you want to receive notifications @@ -28,23 +28,23 @@ want to receive notifications ## Setup Slack notifications 1. Setup Slack webhook -How to do this is indicated on the following link https://api.slack.com/messaging/webhooks + How to do this is indicated on the following link https://api.slack.com/messaging/webhooks -

- -

+

+ +

-2. Specify Slack channels +1. Specify Slack channels -Enter one or more channels into input and press enter or choose it after each input. + Enter one or more channels into input and press enter or choose it after each input. -

- -

+

+ +

-

- -

+

+ +

## Setup emails notifications diff --git a/docs/managed-victoriametrics/user-managment.md b/docs/managed-victoriametrics/user-managment.md index 1f25e77d49..bbae30a1d8 100644 --- a/docs/managed-victoriametrics/user-managment.md +++ b/docs/managed-victoriametrics/user-managment.md @@ -15,10 +15,10 @@ The user management system enables admins to control user access and onboard and The document covers the following topics 1. [User Roles](#user-roles) -2. [User List](#user-list) -3. [How to Add User](#how-to-add-user) -4. [How to Update User](#how-to-update-user) -5. [How to Delete User](#how-to-delete-user) +1. [User List](#user-list) +1. [How to Add User](#how-to-add-user) +1. [How to Update User](#how-to-update-user) +1. [How to Delete User](#how-to-delete-user) ## User roles diff --git a/docs/operator/FAQ.md b/docs/operator/FAQ.md index 6453794bd8..79ca2662d4 100644 --- a/docs/operator/FAQ.md +++ b/docs/operator/FAQ.md @@ -18,14 +18,14 @@ aliases: With Helm chart deployment: 1. Update the PVCs manually -2. Run `kubectl delete statefulset --cascade=orphan {vmstorage-sts}` which will delete the sts but keep the pods -3. Update helm chart with the new storage class in the volumeClaimTemplate -4. Run the helm chart to recreate the sts with the updated value +1. Run `kubectl delete statefulset --cascade=orphan {vmstorage-sts}` which will delete the sts but keep the pods +1. Update helm chart with the new storage class in the volumeClaimTemplate +1. Run the helm chart to recreate the sts with the updated value With Operator deployment: 1. Update the PVCs manually -2. Run `kubectl delete vmcluster --cascade=orphan {cluster-name}` -3. Run `kubectl delete statefulset --cascade=orphan {vmstorage-sts}` -4. Update VMCluster spec to use new storage class -5. Apply cluster configuration +1. Run `kubectl delete vmcluster --cascade=orphan {cluster-name}` +1. Run `kubectl delete statefulset --cascade=orphan {vmstorage-sts}` +1. Update VMCluster spec to use new storage class +1. 
Apply cluster configuration diff --git a/docs/operator/README.md b/docs/operator/README.md index c888b063d9..7658725935 100644 --- a/docs/operator/README.md +++ b/docs/operator/README.md @@ -7,15 +7,15 @@ disableToc: true # VictoriaMetrics Operator 1. [VictoriaMetrics Operator](VictoriaMetrics-Operator.html) -2. [Additional Scrape Configuration](additional-scrape.html) -3. [API Docs](api.html) -4. [Authorization and exposing components](auth.html) -5. [vmbackupmanager](backups.html) -6. [Design](design.html) -7. [High Availability](high-availability.html) -8. [VMAlert, VMAgent, VMAlertmanager, VMSingle version](managing-versions.html) -9. [Victoria Metrics Operator Quick Start](quick-start.html) -10. [VMAgent relabel](relabeling.html) -11. [CRD Validation](resources-validation.html) -12. [Security](security.html) -13. [Auto Generated vars for package config](vars.html) +1. [Additional Scrape Configuration](additional-scrape.html) +1. [API Docs](api.html) +1. [Authorization and exposing components](auth.html) +1. [vmbackupmanager](backups.html) +1. [Design](design.html) +1. [High Availability](high-availability.html) +1. [VMAlert, VMAgent, VMAlertmanager, VMSingle version](managing-versions.html) +1. [Victoria Metrics Operator Quick Start](quick-start.html) +1. [VMAgent relabel](relabeling.html) +1. [CRD Validation](resources-validation.html) +1. [Security](security.html) +1. [Auto Generated vars for package config](vars.html) diff --git a/docs/operator/backups.md b/docs/operator/backups.md index 6f1be19531..127177f0df 100644 --- a/docs/operator/backups.md +++ b/docs/operator/backups.md @@ -95,12 +95,12 @@ You have to stop `VMSingle` by scaling it replicas to zero and manually restore Steps: 1. edit `VMSingle` CRD, set replicaCount: 0 -2. wait until database stops -3. ssh to some server, where you can mount `VMSingle` disk and mount it manually -4. restore files with `vmrestore` -5. umount disk -6. edit `VMSingle` CRD, set replicaCount: 1 -7. wait database start +1. 
wait until database stops +1. ssh to some server, where you can mount `VMSingle` disk and mount it manually +1. restore files with `vmrestore` +1. umount disk +1. edit `VMSingle` CRD, set replicaCount: 1 +1. wait database start ### Using VMRestore init container @@ -140,9 +140,9 @@ Steps: name: remote-storage-keys key: credentials ``` -2. apply it, and db will be restored from s3 +1. apply it, and db will be restored from s3 -3. remove initContainers and apply crd. +1. remove initContainers and apply crd. Note that using `VMRestore` will require adjusting `src` for each pod because restore will be handled per-pod. diff --git a/docs/operator/design.md b/docs/operator/design.md index 1dbc27f483..b83eda9db9 100644 --- a/docs/operator/design.md +++ b/docs/operator/design.md @@ -54,9 +54,9 @@ For each `VMCluster` resource, the Operator creates `VMStorage` as `StatefulSet` as deployment. For `VMStorage` and `VMSelect` headless services are created. `VMInsert` is created as service with clusterIP. There is a strict order for these objects creation and reconciliation: - 1. `VMStorage` is synced - the Operator waits until all its pods are ready; - 2. Then it syncs `VMSelect` with the same manner; - 3. `VMInsert` is the last object to sync. +1. `VMStorage` is synced - the Operator waits until all its pods are ready; +1. Then it syncs `VMSelect` with the same manner; +1. `VMInsert` is the last object to sync. All statefulsets are created with [OnDelete](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#on-delete) update type. It allows to manually manage the rolling update process for Operator by deleting pods one by one and waiting diff --git a/docs/sd_configs.md b/docs/sd_configs.md index a5b9fa25da..9c0c18a5e8 100644 --- a/docs/sd_configs.md +++ b/docs/sd_configs.md @@ -771,8 +771,8 @@ scrape_configs: Credentials are discovered by looking in the following places, preferring the first location found: 1. 
a JSON file specified by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable -2. a JSON file in the well-known path `$HOME/.config/gcloud/application_default_credentials.json` -3. fetched from the GCE metadata server +1. a JSON file in the well-known path `$HOME/.config/gcloud/application_default_credentials.json` +1. fetched from the GCE metadata server Each discovered target has an [`__address__`](https://docs.victoriametrics.com/relabeling.html#how-to-modify-scrape-urls-in-targets) label set to `:`, where `` is private IP of the discovered instance, while `` is the `port` value diff --git a/docs/stream-aggregation.md b/docs/stream-aggregation.md index 8f9becf0ca..acbb9ecb71 100644 --- a/docs/stream-aggregation.md +++ b/docs/stream-aggregation.md @@ -302,7 +302,7 @@ The resulting histogram buckets can be queried with [MetricsQL](https://docs.vic This query uses [histogram_quantiles](https://docs.victoriametrics.com/MetricsQL.html#histogram_quantiles) function. -2. An estimated [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of the request duration over the last hour: +1. An estimated [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of the request duration over the last hour: ```metricsql histogram_stddev(sum(increase(request_duration_seconds:60s_histogram_bucket[1h])) by (vmrange)) @@ -310,7 +310,7 @@ The resulting histogram buckets can be queried with [MetricsQL](https://docs.vic This query uses [histogram_stddev](https://docs.victoriametrics.com/MetricsQL.html#histogram_stddev) function. -3. An estimated share of requests with the duration smaller than `0.5s` over the last hour: +1. 
An estimated share of requests with the duration smaller than `0.5s` over the last hour: ```metricsql histogram_share(0.5, sum(increase(request_duration_seconds:60s_histogram_bucket[1h])) by (vmrange)) diff --git a/docs/vmagent.md b/docs/vmagent.md index a3ca8aa6bb..c2011f2dce 100644 --- a/docs/vmagent.md +++ b/docs/vmagent.md @@ -1141,13 +1141,13 @@ It may be needed to build `vmagent` from source code when developing or testing ### Development build 1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19. -2. Run `make vmagent` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). +1. Run `make vmagent` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). It builds the `vmagent` binary and puts it into the `bin` folder. ### Production build 1. [Install docker](https://docs.docker.com/install/). -2. Run `make vmagent-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). +1. Run `make vmagent-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). It builds `vmagent-prod` binary and puts it into the `bin` folder. ### Building docker images @@ -1170,13 +1170,13 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b ### Development ARM build 1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19. -2. Run `make vmagent-linux-arm` or `make vmagent-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics) +1. Run `make vmagent-linux-arm` or `make vmagent-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics) It builds `vmagent-linux-arm` or `vmagent-linux-arm64` binary respectively and puts it into the `bin` folder. ### Production ARM build 1. [Install docker](https://docs.docker.com/install/). -2. 
Run `make vmagent-linux-arm-prod` or `make vmagent-linux-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). +1. Run `make vmagent-linux-arm-prod` or `make vmagent-linux-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). It builds `vmagent-linux-arm-prod` or `vmagent-linux-arm64-prod` binary respectively and puts it into the `bin` folder. ## Profiling diff --git a/docs/vmalert.md b/docs/vmalert.md index 84d212b6d4..8585af0e86 100644 --- a/docs/vmalert.md +++ b/docs/vmalert.md @@ -1746,13 +1746,13 @@ spec: ### Development build 1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19. -2. Run `make vmalert` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). +1. Run `make vmalert` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). It builds `vmalert` binary and puts it into the `bin` folder. ### Production build 1. [Install docker](https://docs.docker.com/install/). -2. Run `make vmalert-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). +1. Run `make vmalert-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). It builds `vmalert-prod` binary and puts it into the `bin` folder. ### ARM build @@ -1762,11 +1762,11 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b ### Development ARM build 1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19. -2. Run `make vmalert-linux-arm` or `make vmalert-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). +1. Run `make vmalert-linux-arm` or `make vmalert-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). 
It builds `vmalert-linux-arm` or `vmalert-linux-arm64` binary respectively and puts it into the `bin` folder. ### Production ARM build 1. [Install docker](https://docs.docker.com/install/). -2. Run `make vmalert-linux-arm-prod` or `make vmalert-linux-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). +1. Run `make vmalert-linux-arm-prod` or `make vmalert-linux-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). It builds `vmalert-linux-arm-prod` or `vmalert-linux-arm64-prod` binary respectively and puts it into the `bin` folder. diff --git a/docs/vmanomaly.md b/docs/vmanomaly.md index 607a56487f..aa383aa9ae 100644 --- a/docs/vmanomaly.md +++ b/docs/vmanomaly.md @@ -58,7 +58,7 @@ Currently, vmanomaly ships with a few common models: from time-series mean (straight line). Keeps only two model parameters internally: `mean` and `std` (standard deviation). -2. **Prophet** +1. **Prophet** _(simplest in configuration, recommended for getting starting)_ @@ -72,24 +72,24 @@ Currently, vmanomaly ships with a few common models: See [Prophet documentation](https://facebook.github.io/prophet/) -3. **Holt-Winters** +1. **Holt-Winters** Very popular forecasting algorithm. See [statsmodels.org documentation]( https://www.statsmodels.org/stable/generated/statsmodels.tsa.holtwinters.ExponentialSmoothing.html) for Holt-Winters exponential smoothing. -4. **Seasonal-Trend Decomposition** +1. **Seasonal-Trend Decomposition** Extracts three components: season, trend, and residual, that can be plotted individually for easier debugging. Uses LOESS (locally estimated scatterplot smoothing). See [statsmodels.org documentation](https://www.statsmodels.org/dev/examples/notebooks/generated/stl_decomposition.html) for LOESS STD. -5. **ARIMA** +1. **ARIMA** Commonly used forecasting model. 
See [statsmodels.org documentation](https://www.statsmodels.org/stable/generated/statsmodels.tsa.arima.model.ARIMA.html) for ARIMA. -6. **Rolling Quantile** +1. **Rolling Quantile** A simple moving window of quantiles. Easy to use, easy to understand, but not as powerful as other models. diff --git a/docs/vmauth.md b/docs/vmauth.md index 75ea5eea78..c7965aedb1 100644 --- a/docs/vmauth.md +++ b/docs/vmauth.md @@ -288,13 +288,13 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic ### Development build 1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19. -2. Run `make vmauth` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). +1. Run `make vmauth` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). It builds `vmauth` binary and puts it into the `bin` folder. ### Production build 1. [Install docker](https://docs.docker.com/install/). -2. Run `make vmauth-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). +1. Run `make vmauth-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics). It builds `vmauth-prod` binary and puts it into the `bin` folder. ### Building docker images diff --git a/docs/vmbackup.md b/docs/vmbackup.md index 979eb50540..f6ef69bb02 100644 --- a/docs/vmbackup.md +++ b/docs/vmbackup.md @@ -105,13 +105,13 @@ See also [vmbackupmanager tool](https://docs.victoriametrics.com/vmbackupmanager The backup algorithm is the following: 1. Create a snapshot by querying the provided `-snapshot.createURL` -2. Collect information about files in the created snapshot, in the `-dst` and in the `-origin`. -3. Determine which files in `-dst` are missing in the created snapshot, and delete them. These are usually small files, which are already merged into bigger files in the snapshot. -4. 
Determine which files in the created snapshot are missing in `-dst`. These are usually small new files and bigger merged files.
-5. Determine which files from step 3 exist in the `-origin`, and perform server-side copy of these files from `-origin` to `-dst`.
+1. Collect information about files in the created snapshot, in the `-dst` and in the `-origin`.
+1. Determine which files in `-dst` are missing in the created snapshot, and delete them. These are usually small files, which are already merged into bigger files in the snapshot.
+1. Determine which files in the created snapshot are missing in `-dst`. These are usually small new files and bigger merged files.
+1. Determine which files from step 3 exist in the `-origin`, and perform server-side copy of these files from `-origin` to `-dst`.
These are usually the biggest and the oldest files, which are shared between backups.
-6. Upload the remaining files from step 3 from the created snapshot to `-dst`.
-7. Delete the created snapshot.
+1. Upload the remaining files from step 3 from the created snapshot to `-dst`.
+1. Delete the created snapshot.

The algorithm splits source files into 1 GiB chunks in the backup. Each chunk is stored as a separate file in the backup.
Such splitting balances between the number of files in the backup and the amounts of data that needs to be re-transferred after temporary errors.

@@ -313,13 +313,13 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmbackup` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmbackup` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmbackup` binary and puts it into the `bin` folder.

### Production build

1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmbackup-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmbackup-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmbackup-prod` binary and puts it into the `bin` folder.

### Building docker images

diff --git a/docs/vmbackupmanager.md b/docs/vmbackupmanager.md
index 767375febb..335c293479 100644
--- a/docs/vmbackupmanager.md
+++ b/docs/vmbackupmanager.md
@@ -309,7 +309,7 @@ If restore mark doesn't exist at `storageDataPath`(restore wasn't requested) `vm
   $ /vmbackupmanager-prod backup list
   [{"name":"daily/2023-04-07","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:07+00:00"},{"name":"hourly/2023-04-07:11","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:06+00:00"},{"name":"latest","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:04+00:00"},{"name":"monthly/2023-04","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:10+00:00"},{"name":"weekly/2023-14","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:09+00:00"}]
   ```
-2. Run `vmbackupmanager restore create` to create restore mark:
+1. Run `vmbackupmanager restore create` to create restore mark:
   - Use relative path to backup to restore from currently used remote storage:
   ```console
   $ /vmbackupmanager-prod restore create daily/2023-04-07
   ```
   - Use full path to backup to restore from any remote storage:
   ```console
   $ /vmbackupmanager-prod restore create azblob://test1/vmbackupmanager/daily/2023-04-07
   ```
-3. Stop `vmstorage` or `vmsingle` node
-4. Run `vmbackupmanager restore` to restore backup:
+1. Stop `vmstorage` or `vmsingle` node
+1. Run `vmbackupmanager restore` to restore backup:
   ```console
   $ /vmbackupmanager-prod restore -credsFilePath=credentials.json -storageDataPath=/vmstorage-data
   ```
-5. Start `vmstorage` or `vmsingle` node
+1. Start `vmstorage` or `vmsingle` node

### How to restore in Kubernetes

@@ -337,13 +337,13 @@ If restore mark doesn't exist at `storageDataPath`(restore wasn't requested) `vm
      enabled: "true"
   ```
   See operator `VMStorage` schema [here](https://docs.victoriametrics.com/operator/api.html#vmstorage) and `VMSingle` [here](https://docs.victoriametrics.com/operator/api.html#vmsinglespec).
-2. Enter container running `vmbackupmanager`
-2. Use `vmbackupmanager backup list` to get list of available backups:
+1. Enter container running `vmbackupmanager`
+1. Use `vmbackupmanager backup list` to get list of available backups:
   ```console
   $ /vmbackupmanager-prod backup list
   [{"name":"daily/2023-04-07","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:07+00:00"},{"name":"hourly/2023-04-07:11","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:06+00:00"},{"name":"latest","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:04+00:00"},{"name":"monthly/2023-04","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:10+00:00"},{"name":"weekly/2023-14","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:09+00:00"}]
   ```
-3. Use `vmbackupmanager restore create` to create restore mark:
+1. Use `vmbackupmanager restore create` to create restore mark:
   - Use relative path to backup to restore from currently used remote storage:
   ```console
   $ /vmbackupmanager-prod restore create daily/2023-04-07
   ```
   - Use full path to backup to restore from any remote storage:
   ```console
   $ /vmbackupmanager-prod restore create azblob://test1/vmbackupmanager/daily/2023-04-07
   ```
-4. Restart pod
+1. Restart pod

#### Restore cluster into another cluster

@@ -369,13 +369,13 @@ Clusters here are referred to as `source` and `destination`.
   ```
   Note: it is safe to leave this section in the cluster configuration, since it will be ignored if restore mark doesn't exist.
   > Important! Use different `-dst` for *destination* cluster to avoid overwriting backup data of the *source* cluster.
-2. Enter container running `vmbackupmanager` in *source* cluster
-2. Use `vmbackupmanager backup list` to get list of available backups:
+1. Enter container running `vmbackupmanager` in *source* cluster
+1. Use `vmbackupmanager backup list` to get list of available backups:
   ```console
   $ /vmbackupmanager-prod backup list
   [{"name":"daily/2023-04-07","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:07+00:00"},{"name":"hourly/2023-04-07:11","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:06+00:00"},{"name":"latest","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:04+00:00"},{"name":"monthly/2023-04","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:10+00:00"},{"name":"weekly/2023-14","size_bytes":318837,"size":"311.4ki","created_at":"2023-04-07T16:15:09+00:00"}]
   ```
-3. Use `vmbackupmanager restore create` to create restore mark at each pod of the *destination* cluster.
+1. Use `vmbackupmanager restore create` to create restore mark at each pod of the *destination* cluster.
Each pod in *destination* cluster should be restored from backup of respective pod in *source* cluster.
For example: `vmstorage-destination-0` in *destination* cluster should be restored from the backup of `vmstorage-source-0` in *source* cluster.
   ```console

diff --git a/docs/vmctl.md b/docs/vmctl.md
index 0dae0a3ee7..6015c0ba51 100644
--- a/docs/vmctl.md
+++ b/docs/vmctl.md
@@ -96,24 +96,24 @@ See `./vmctl opentsdb --help` for details and full list of flags.

OpenTSDB migration works like so:

-1. Find metrics based on selected filters (or the default filter set `['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']`)
+1. Find metrics based on selected filters (or the default filter set `['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']`):

-- e.g. `curl -Ss "http://opentsdb:4242/api/suggest?type=metrics&q=sys"`
+   `curl -Ss "http://opentsdb:4242/api/suggest?type=metrics&q=sys"`

-2. Find series associated with each returned metric
+1. Find series associated with each returned metric:

-- e.g. `curl -Ss "http://opentsdb:4242/api/search/lookup?m=system.load5&limit=1000000"`
+   `curl -Ss "http://opentsdb:4242/api/search/lookup?m=system.load5&limit=1000000"`

-Here `results` return field should not be empty. Otherwise, it means that meta tables are absent and needs to be turned on previously.
+   Here the `results` field in the response should not be empty. Otherwise it means that meta tables are absent and need to be enabled beforehand.

-3. Download data for each series in chunks defined in the CLI switches
+1. Download data for each series in chunks defined in the CLI switches:

-- e.g. `-retention=sum-1m-avg:1h:90d` means
-  - `curl -Ss "http://opentsdb:4242/api/query?start=1h-ago&end=now&m=sum:1m-avg-none:system.load5\{host=host1\}"`
-  - `curl -Ss "http://opentsdb:4242/api/query?start=2h-ago&end=1h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
-  - `curl -Ss "http://opentsdb:4242/api/query?start=3h-ago&end=2h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
-  - ...
-  - `curl -Ss "http://opentsdb:4242/api/query?start=2160h-ago&end=2159h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
+   `-retention=sum-1m-avg:1h:90d` means:
+   - `curl -Ss "http://opentsdb:4242/api/query?start=1h-ago&end=now&m=sum:1m-avg-none:system.load5\{host=host1\}"`
+   - `curl -Ss "http://opentsdb:4242/api/query?start=2h-ago&end=1h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
+   - `curl -Ss "http://opentsdb:4242/api/query?start=3h-ago&end=2h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
+   - ...
+   - `curl -Ss "http://opentsdb:4242/api/query?start=2160h-ago&end=2159h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`

This means that we must stream data from OpenTSDB to VictoriaMetrics in chunks. This is where concurrency for OpenTSDB comes in. We can query multiple chunks at once, but we shouldn't perform too many chunks at a time to avoid overloading the OpenTSDB cluster.

@@ -142,7 +142,7 @@ Starting with a relatively simple retention string (`sum-1m-avg:1h:30d`), let's

There are two essential parts of a retention string:

1. [aggregation](#aggregation)
-2. [windows/time ranges](#windows)
+1. [windows/time ranges](#windows)

#### Aggregation

@@ -174,7 +174,7 @@ We do not allow for defining the "null value" portion of the rollup window (e.g.

There are two important windows we define in a retention string:

1. the "chunk" range of each query
-2. The time range we will be querying on with that "chunk"
+1. The time range we will be querying on with that "chunk"

From our example, our windows are `1h:30d`.

@@ -456,11 +456,11 @@ See `./vmctl remote-read --help` for details and full list of flags.

To start the migration process configure the following flags:

1. `--remote-read-src-addr` - data source address to read from;
-2. `--vm-addr` - VictoriaMetrics address to write to. For single-node VM is usually equal to `--httpListenAddr`,
-and for cluster version is equal to `--httpListenAddr` flag of vminsert component (for example `http://:8480/insert//prometheus`);
-3. `--remote-read-filter-time-start` - the time filter in RFC3339 format to select time series with timestamp equal or higher than provided value. E.g. '2020-01-01T20:07:00Z';
-4. `--remote-read-filter-time-end` - the time filter in RFC3339 format to select time series with timestamp equal or smaller than provided value. E.g. '2020-01-01T20:07:00Z'. Current time is used when omitted.;
-5. `--remote-read-step-interval` - split export data into chunks. Valid values are `month, day, hour, minute`;
+1. `--vm-addr` - VictoriaMetrics address to write to. For single-node VM it is usually equal to `--httpListenAddr`,
+   and for the cluster version it is equal to the `--httpListenAddr` flag of the vminsert component (for example `http://:8480/insert//prometheus`);
+1. `--remote-read-filter-time-start` - the time filter in RFC3339 format to select time series with timestamp equal or higher than provided value. E.g. '2020-01-01T20:07:00Z';
+1. `--remote-read-filter-time-end` - the time filter in RFC3339 format to select time series with timestamp equal or smaller than provided value. E.g. '2020-01-01T20:07:00Z'. Current time is used when omitted;
+1. `--remote-read-step-interval` - split export data into chunks. Valid values are `month, day, hour, minute`;

The importing process example for local installation of Prometheus and single-node VictoriaMetrics(`http://localhost:8428`):

@@ -527,7 +527,7 @@ and that you have a separate Thanos Store installation.

    - url: http://victoria-metrics:8428/api/v1/write
   ```

-2. Make sure VM is running, of course. Now check the logs to make sure that Prometheus is sending and VM is receiving.
+1. Make sure VM is running, of course. Now check the logs to make sure that Prometheus is sending and VM is receiving.
In Prometheus, make sure there are no errors. On the VM side, you should see messages like this:

   ```
   2020-04-27T18:38:46.506Z info VictoriaMetrics/lib/storage/partition.go:222 partition "2020_04" has been created
   ```

-3. Now just wait. Within two hours, Prometheus should finish its current data file and hand it off to Thanos Store for long term
+1. Now just wait. Within two hours, Prometheus should finish its current data file and hand it off to Thanos Store for long term
storage.

### Historical data

@@ -747,7 +747,7 @@ See `./vmctl vm-native --help` for details and full list of flags.

Migration in `vm-native` mode takes two steps:

1. Explore the list of the metrics to migrate via `api/v1/label/__name__/values` API;
-2. Migrate explored metrics one-by-one.
+1. Migrate explored metrics one-by-one.

```
./vmctl vm-native \

@@ -781,54 +781,54 @@ _To disable explore phase and switch to the old way of data migration via single

Importing tips:

1. Migrating big volumes of data may result in reaching the safety limits on `src` side.
-Please verify that `-search.maxExportDuration` and `-search.maxExportSeries` were set with
-proper values for `src`. If hitting the limits, follow the recommendations
-[here](https://docs.victoriametrics.com/#how-to-export-data-in-native-format).
-If hitting `the number of matching timeseries exceeds...` error, adjust filters to match less time series or
-update `-search.maxSeries` command-line flag on vmselect/vmsingle;
-2. Migrating all the metrics from one VM to another may collide with existing application metrics
-(prefixed with `vm_`) at destination and lead to confusion when using
-[official Grafana dashboards](https://grafana.com/orgs/victoriametrics/dashboards).
-To avoid such situation try to filter out VM process metrics via `--vm-native-filter-match='{__name__!~"vm_.*"}'` flag.
-3. Migrating data with overlapping time range or via unstable network can produce duplicates series at destination.
-To avoid duplicates set `-dedup.minScrapeInterval=1ms` for `vmselect`/`vmstorage` at the destination.
-This will instruct `vmselect`/`vmstorage` to ignore duplicates with identical timestamps.
-4. When migrating large volumes of data use `--vm-native-step-interval` flag to split migration [into steps](#using-time-based-chunking-of-migration).
-5. When migrating data from one VM cluster to another, consider using [cluster-to-cluster mode](#cluster-to-cluster-migration-mode).
-Or manually specify addresses according to [URL format](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format):
-  ```console
-  # Migrating from cluster specific tenantID to single
-  --vm-native-src-addr=http://:8481/select/0/prometheus
-  --vm-native-dst-addr=http://:8428
+   Please verify that `-search.maxExportDuration` and `-search.maxExportSeries` were set with
+   proper values for `src`. If hitting the limits, follow the recommendations
+   [here](https://docs.victoriametrics.com/#how-to-export-data-in-native-format).
+   If hitting `the number of matching timeseries exceeds...` error, adjust filters to match fewer time series or
+   update the `-search.maxSeries` command-line flag on vmselect/vmsingle;
+1. Migrating all the metrics from one VM to another may collide with existing application metrics
+   (prefixed with `vm_`) at destination and lead to confusion when using
+   [official Grafana dashboards](https://grafana.com/orgs/victoriametrics/dashboards).
+   To avoid such a situation, try to filter out VM process metrics via the `--vm-native-filter-match='{__name__!~"vm_.*"}'` flag.
+1. Migrating data with overlapping time range or via unstable network can produce duplicate series at destination.
+   To avoid duplicates, set `-dedup.minScrapeInterval=1ms` for `vmselect`/`vmstorage` at the destination.
+   This will instruct `vmselect`/`vmstorage` to ignore duplicates with identical timestamps.
+1. When migrating large volumes of data use the `--vm-native-step-interval` flag to split migration [into steps](#using-time-based-chunking-of-migration).
+1. When migrating data from one VM cluster to another, consider using [cluster-to-cluster mode](#cluster-to-cluster-migration-mode).
+   Or manually specify addresses according to [URL format](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format):
+   ```console
+   # Migrating from cluster specific tenantID to single
+   --vm-native-src-addr=http://:8481/select/0/prometheus
+   --vm-native-dst-addr=http://:8428

-  # Migrating from single to cluster specific tenantID
-  --vm-native-src-addr=http://:8428
-  --vm-native-src-addr=http://:8480/insert/0/prometheus
+   # Migrating from single to cluster specific tenantID
+   --vm-native-src-addr=http://:8428
+   --vm-native-dst-addr=http://:8480/insert/0/prometheus

-  # Migrating single to single
-  --vm-native-src-addr=http://:8428
-  --vm-native-dst-addr=http://:8428
+   # Migrating single to single
+   --vm-native-src-addr=http://:8428
+   --vm-native-dst-addr=http://:8428

-  # Migrating cluster to cluster for specific tenant ID
-  --vm-native-src-addr=http://:8481/select/0/prometheus
-  --vm-native-dst-addr=http://:8480/insert/0/prometheus
-  ```
-6. Migrating data from VM cluster which had replication (`-replicationFactor` > 1) enabled won't produce the same amount
-of data copies for the destination database, and will result only in creating duplicates. To remove duplicates,
-destination database need to be configured with `-dedup.minScrapeInterval=1ms`. To restore the replication factor
-the destination `vminsert` component need to be configured with the according `-replicationFactor` value.
-See more about replication [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#replication-and-data-safety).
-7. Migration speed can be adjusted via `--vm-concurrency` cmd-line flag, which controls the number of concurrent
-workers busy with processing. Please note, that each worker can load up to a single vCPU core on VictoriaMetrics.
-So try to set it according to allocated CPU resources of your VictoriaMetrics destination installation.
-7. Migration is a backfilling process, so it is recommended to read
-[Backfilling tips](https://github.com/VictoriaMetrics/VictoriaMetrics#backfilling) section.
-8. `vmctl` doesn't provide relabeling or other types of labels management.
-Instead, use [relabeling in VictoriaMetrics](https://github.com/VictoriaMetrics/vmctl/issues/4#issuecomment-683424375).
-9. `vmctl` supports `--vm-native-src-headers` and `--vm-native-dst-headers` to define headers sent with each request
-to the corresponding source address.
-10. `vmctl` supports `--vm-native-disable-http-keep-alive` to allow `vmctl` to use non-persistent HTTP connections to avoid
-error `use of closed network connection` when run a longer export.
+   # Migrating cluster to cluster for specific tenant ID
+   --vm-native-src-addr=http://:8481/select/0/prometheus
+   --vm-native-dst-addr=http://:8480/insert/0/prometheus
+   ```
+1. Migrating data from VM cluster which had replication (`-replicationFactor` > 1) enabled won't produce the same amount
+   of data copies for the destination database, and will result only in creating duplicates. To remove duplicates,
+   the destination database needs to be configured with `-dedup.minScrapeInterval=1ms`. To restore the replication factor,
+   the destination `vminsert` component needs to be configured with the corresponding `-replicationFactor` value.
+   See more about replication [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#replication-and-data-safety).
+1. Migration speed can be adjusted via the `--vm-concurrency` cmd-line flag, which controls the number of concurrent
+   workers busy with processing. Please note that each worker can load up to a single vCPU core on VictoriaMetrics.
+   So try to set it according to allocated CPU resources of your VictoriaMetrics destination installation.
+1. Migration is a backfilling process, so it is recommended to read the
+   [Backfilling tips](https://github.com/VictoriaMetrics/VictoriaMetrics#backfilling) section.
+1. `vmctl` doesn't provide relabeling or other types of label management.
+   Instead, use [relabeling in VictoriaMetrics](https://github.com/VictoriaMetrics/vmctl/issues/4#issuecomment-683424375).
+1. `vmctl` supports `--vm-native-src-headers` and `--vm-native-dst-headers` to define headers sent with each request
+   to the corresponding source address.
+1. `vmctl` supports `--vm-native-disable-http-keep-alive` to allow `vmctl` to use non-persistent HTTP connections to avoid
+   the `use of closed network connection` error when running a longer export.

### Using time-based chunking of migration

@@ -1035,13 +1035,13 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmctl` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmctl` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmctl` binary and puts it into the `bin` folder.

### Production build

1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmctl-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmctl-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmctl-prod` binary and puts it into the `bin` folder.

### Building docker images

@@ -1064,11 +1064,11 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b

#### Development ARM build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmctl-linux-arm` or `make vmctl-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmctl-linux-arm` or `make vmctl-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmctl-linux-arm` or `vmctl-linux-arm64` binary respectively and puts it into the `bin` folder.

#### Production ARM build

1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmctl-linux-arm-prod` or `make vmctl-linux-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmctl-linux-arm-prod` or `make vmctl-linux-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmctl-linux-arm-prod` or `vmctl-linux-arm64-prod` binary respectively and puts it into the `bin` folder.

diff --git a/docs/vmrestore.md b/docs/vmrestore.md
index b699d40522..e3c41e3fed 100644
--- a/docs/vmrestore.md
+++ b/docs/vmrestore.md
@@ -213,13 +213,13 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
-2. Run `make vmrestore` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmrestore` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmrestore` binary and puts it into the `bin` folder.

### Production build

1. [Install docker](https://docs.docker.com/install/).
-2. Run `make vmrestore-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
+1. Run `make vmrestore-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
It builds `vmrestore-prod` binary and puts it into the `bin` folder.

### Building docker images