mirror of
https://github.com/VictoriaMetrics/VictoriaMetrics.git
synced 2025-01-10 15:14:09 +00:00
app: sync Markdown changes from a8de1ab000
This commit is contained in:
parent
b421a1f57b
commit
c8f356a6a8
8 changed files with 669 additions and 659 deletions
|
@ -6,7 +6,6 @@ or any other Prometheus-compatible storage systems that support the `remote_writ
<img alt="vmagent" src="vmagent.png">
## Motivation
|
||||
|
||||
While VictoriaMetrics provides an efficient solution to store and observe metrics, our users needed something fast
|
||||
|
@ -14,7 +13,6 @@ and RAM friendly to scrape metrics from Prometheus-compatible exporters into Vic
|
|||
Also, we found that our users' infrastructures are like snowflakes in that no two are alike. Therefore we decided to add more flexibility
to `vmagent`, such as the ability to push metrics in addition to pulling them. We did our best and will continue to improve `vmagent`.
## Features
|
||||
|
||||
* Can be used as a drop-in replacement for Prometheus for scraping targets such as [node_exporter](https://github.com/prometheus/node_exporter). See [Quick Start](#quick-start) for details.
|
||||
|
@ -67,7 +65,6 @@ Then send InfluxDB data to `http://vmagent-host:8429`. See [these docs](https://
|
|||
|
||||
Pass `-help` to `vmagent` in order to see [the full list of supported command-line flags with their descriptions](#advanced-usage).
## Configuration update
|
||||
|
||||
`vmagent` should be restarted in order to update config options set via command-line args.
|
||||
|
@ -75,6 +72,7 @@ Pass `-help` to `vmagent` in order to see [the full list of supported command-li
|
|||
`vmagent` supports multiple approaches for reloading configs from updated config files such as `-promscrape.config`, `-remoteWrite.relabelConfig` and `-remoteWrite.urlRelabelConfig`:
|
||||
|
||||
* Sending the `SIGHUP` signal to the `vmagent` process:
|
||||
|
||||
```bash
|
||||
kill -SIGHUP `pidof vmagent`
|
||||
```
|
||||
|
@ -83,10 +81,8 @@ Pass `-help` to `vmagent` in order to see [the full list of supported command-li
|
|||
|
||||
There is also the `-promscrape.configCheckInterval` command-line option, which can be used for automatically reloading configs from an updated `-promscrape.config` file.
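For example, the following command (paths and the interval value are illustrative) makes `vmagent` re-read the scrape config every 60 seconds:

```bash
/path/to/vmagent -promscrape.config=/path/to/prometheus.yml -promscrape.configCheckInterval=60s
```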
## Use cases
|
||||
|
||||
|
||||
### IoT and Edge monitoring
|
||||
|
||||
`vmagent` can run and collect metrics in IoT and industrial networks with unreliable or scheduled connections to their remote storage.
|
||||
|
@ -97,28 +93,24 @@ The maximum buffer size can be limited with `-remoteWrite.maxDiskUsagePerURL`.
|
|||
`vmagent` works on various architectures from the IoT world - 32-bit arm, 64-bit arm, ppc64, 386, amd64.
|
||||
See [the corresponding Makefile rules](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmagent/Makefile) for details.
|
||||
|
||||
|
||||
### Drop-in replacement for Prometheus
|
||||
|
||||
If you use Prometheus only for scraping metrics from various targets and forwarding those metrics to remote storage
|
||||
then `vmagent` can replace Prometheus. Typically, `vmagent` requires lower amounts of RAM, CPU and network bandwidth compared with Prometheus.
|
||||
See [these docs](#how-to-collect-metrics-in-prometheus-format) for details.
### Replication and high availability
|
||||
|
||||
`vmagent` replicates the collected metrics among multiple remote storage instances configured via `-remoteWrite.url` args.
|
||||
If a single remote storage instance is temporarily out of service, then the collected data remains available in another remote storage instance.
|
||||
`vmagent` buffers the collected data in files at `-remoteWrite.tmpDataPath` until the remote storage becomes available again and then it sends the buffered data to the remote storage in order to prevent data gaps.
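For example, the following command (hostnames and paths are placeholders) replicates every collected sample to two remote storage instances, while buffering data on disk whenever either of them is unavailable:

```bash
/path/to/vmagent -remoteWrite.tmpDataPath=/path/to/vmagent-buffer \
  -remoteWrite.url=https://victoria-metrics-1:8428/api/v1/write \
  -remoteWrite.url=https://victoria-metrics-2:8428/api/v1/write
```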
### Relabeling and filtering
|
||||
|
||||
`vmagent` can add, remove or update labels on the collected data before sending it to the remote storage. Additionally,
|
||||
it can remove unwanted samples via Prometheus-like relabeling before sending the collected data to remote storage.
|
||||
Please see [these docs](#relabeling) for details.
|
||||
|
||||
|
||||
### Splitting data streams among multiple systems
|
||||
|
||||
`vmagent` supports splitting the collected data among multiple destinations with the help of `-remoteWrite.urlRelabelConfig`,
|
||||
|
@ -126,7 +118,6 @@ which is applied independently for each configured `-remoteWrite.url` destinatio
|
|||
data among long-term remote storage, short-term remote storage and a real-time analytical system [built on top of Kafka](https://github.com/Telefonica/prometheus-kafka-adapter).
|
||||
Note that each destination can receive its own subset of the collected data due to per-destination relabeling via `-remoteWrite.urlRelabelConfig`.
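A minimal sketch of such a setup (URLs and relabel config file names are illustrative); each `-remoteWrite.urlRelabelConfig` is applied to the `-remoteWrite.url` configured at the same position:

```bash
/path/to/vmagent \
  -remoteWrite.url=http://long-term-storage:8428/api/v1/write \
  -remoteWrite.urlRelabelConfig=long-term-relabel.yml \
  -remoteWrite.url=http://kafka-adapter:8080/receive \
  -remoteWrite.urlRelabelConfig=realtime-only-relabel.yml
```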
### Prometheus remote_write proxy
|
||||
|
||||
`vmagent` can be used as a proxy for Prometheus data sent via Prometheus `remote_write` protocol. It can accept data via the `remote_write` API
|
||||
|
@ -134,7 +125,6 @@ at the`/api/v1/write` endpoint. Then apply relabeling and filtering and proxy it
|
|||
`vmagent` can be configured to encrypt the incoming `remote_write` requests with the `-tls*` command-line flags.
|
||||
Also, Basic Auth can be enabled for the incoming `remote_write` requests with `-httpAuth.*` command-line flags.
|
||||
|
||||
|
||||
### remote_write for clustered version
|
||||
|
||||
While `vmagent` can accept data in several supported protocols (OpenTSDB, Influx, Prometheus, Graphite) and scrape data from various targets, writes are always performed in the Prometheus remote_write protocol. Therefore, for the [clustered version](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html), the `-remoteWrite.url` command-line flag should be configured as `<schema>://<vminsert-host>:8480/insert/<accountID>/prometheus/api/v1/write` according to [these docs](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format). There is also support for multitenant writes. See [these docs](#multitenancy).
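For instance, the following flag value (hostname and tenant `accountID=0` are placeholders) routes all the collected data to tenant `0` of a VictoriaMetrics cluster:

```bash
/path/to/vmagent -remoteWrite.url=http://vminsert:8480/insert/0/prometheus/api/v1/write
```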
@ -143,7 +133,6 @@ While `vmagent` can accept data in several supported protocols (OpenTSDB, Influx
|
|||
|
||||
By default `vmagent` collects the data without tenant identifiers and routes it to the configured `-remoteWrite.url`. But it can accept multitenant data if `-remoteWrite.multitenantURL` is set. In this case it accepts multitenant data at `http://vmagent:8429/insert/<accountID>/...` in the same way as cluster version of VictoriaMetrics does according to [these docs](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format) and routes it to `<-remoteWrite.multitenantURL>/insert/<accountID>/prometheus/api/v1/write`. If multiple `-remoteWrite.multitenantURL` command-line options are set, then `vmagent` replicates the collected data across all the configured urls. This allows using a single `vmagent` instance in front of VictoriaMetrics clusters for processing the data from all the tenants.
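A minimal sketch, assuming a cluster with `vminsert` reachable at `vminsert:8480`:

```bash
/path/to/vmagent -remoteWrite.multitenantURL=http://vminsert:8480
# Data pushed to http://vmagent:8429/insert/42/prometheus/api/v1/write
# is then routed to http://vminsert:8480/insert/42/prometheus/api/v1/write
```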
## How to collect metrics in Prometheus format
|
||||
|
||||
Specify the path to `prometheus.yml` file via `-promscrape.config` command-line flag. `vmagent` takes into account the following
|
||||
|
@ -211,7 +200,6 @@ entries to 60s. Run `vmagent -help` in order to see default values for the `-pro
|
|||
|
||||
The file pointed by `-promscrape.config` may contain `%{ENV_VAR}` placeholders which are substituted by the corresponding `ENV_VAR` environment variable values.
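For example, the following snippet (a hypothetical `CLUSTER_NAME` environment variable; `external_labels` shown only for illustration) would have the placeholder substituted before the config is applied:

```yaml
global:
  external_labels:
    cluster: "%{CLUSTER_NAME}"
```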
## Loading scrape configs from multiple files
|
||||
|
||||
`vmagent` supports loading scrape configs from multiple files specified in the `scrape_config_files` section of `-promscrape.config` file. For example, the following `-promscrape.config` instructs `vmagent` to load scrape configs from all the `*.yml` files under the `configs` directory, from the `single_scrape_config.yml` local file and from the `https://config-server/scrape_config.yml` URL:
|
||||
|
@ -236,7 +224,6 @@ Every referred file can contain arbitrary number of [supported scrape configs](#
|
|||
|
||||
`vmagent` dynamically reloads these files on `SIGHUP` signal or on the request to `http://vmagent:8429/-/reload`.
|
||||
|
||||
|
||||
## Unsupported Prometheus config sections
|
||||
|
||||
`vmagent` doesn't support the following sections in Prometheus config file passed to `-promscrape.config` command-line flag:
|
||||
|
@ -249,7 +236,6 @@ The list of supported service discovery types is available [here](#how-to-collec
|
|||
|
||||
Additionally, `vmagent` doesn't support the `refresh_interval` option in service discovery sections. This option is substituted with the `-promscrape.*CheckInterval` command-line options, which are specific to each service discovery type. See [the full list of command-line flags for vmagent](#advanced-usage).
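For example, instead of setting `refresh_interval` under `kubernetes_sd_configs`, the corresponding interval is set globally via a command-line flag (the value below is illustrative):

```bash
/path/to/vmagent -promscrape.config=prometheus.yml -promscrape.kubernetesSDCheckInterval=30s
```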
## Adding labels to metrics
|
||||
|
||||
Labels can be added to metrics by the following mechanisms:
|
||||
|
@ -261,7 +247,6 @@ Labels can be added to metrics by the following mechanisms:
|
|||
/path/to/vmagent -remoteWrite.label=datacenter=foobar ...
|
||||
```
|
||||
|
||||
|
||||
## Relabeling
|
||||
|
||||
VictoriaMetrics components (including `vmagent`) support Prometheus-compatible relabeling.
|
||||
|
@ -320,7 +305,6 @@ You can read more about relabeling in the following articles:
|
|||
* [Extracting labels from legacy metric names](https://www.robustperception.io/extracting-labels-from-legacy-metric-names)
|
||||
* [relabel_configs vs metric_relabel_configs](https://www.robustperception.io/relabel_configs-vs-metric_relabel_configs)
|
||||
|
||||
|
||||
## Prometheus staleness markers
|
||||
|
||||
`vmagent` sends [Prometheus staleness markers](https://www.robustperception.io/staleness-and-promql) to `-remoteWrite.url` in the following cases:
|
||||
|
@ -332,16 +316,15 @@ You can read more about relabeling in the following articles:
|
|||
|
||||
Prometheus staleness markers' tracking needs additional memory, since it must store the previous response body per each scrape target in order to compare it to the current response body. The memory usage may be reduced by passing `-promscrape.noStaleMarkers` command-line flag to `vmagent`. This disables staleness tracking. This also disables tracking the number of new time series per each scrape with the auto-generated `scrape_series_added` metric. See [these docs](https://prometheus.io/docs/concepts/jobs_instances/#automatically-generated-labels-and-time-series) for details.
|
||||
|
||||
|
||||
## Stream parsing mode
|
||||
|
||||
By default `vmagent` reads the full response body from the scrape target into memory, parses it, applies [relabeling](#relabeling) and then pushes the resulting metrics to the configured `-remoteWrite.url`. This mode works well for the majority of cases when the scrape target exposes a small number of metrics (e.g. less than 10 thousand). But this mode may require large amounts of memory when the scrape target exposes a large number of metrics. In this case it is recommended to enable stream parsing mode. When this mode is enabled, `vmagent` reads the response from the scrape target in chunks, immediately processes every chunk and pushes the processed metrics to remote storage. This allows saving memory when scraping targets that expose millions of metrics.
|
||||
|
||||
Stream parsing mode is automatically enabled for scrape targets returning response bodies with sizes bigger than the `-promscrape.minResponseSizeForStreamParse` command-line flag value. Additionally, the stream parsing mode can be explicitly enabled in the following places:
|
||||
|
||||
* Via `-promscrape.streamParse` command-line flag. In this case all the scrape targets defined in the file pointed by `-promscrape.config` are scraped in stream parsing mode.
|
||||
* Via `stream_parse: true` option at `scrape_configs` section. In this case all the scrape targets defined in this section are scraped in stream parsing mode.
|
||||
* Via `__stream_parse__=true` label, which can be set via [relabeling](#relabeling) at `relabel_configs` section. In this case stream parsing mode is enabled for the corresponding scrape targets. Typical use case: to set the label via [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) for targets exposing big number of metrics.
|
||||
|
||||
Examples:
|
||||
|
||||
|
@ -361,7 +344,6 @@ scrape_configs:
|
|||
|
||||
Note that `sample_limit` and `series_limit` options cannot be used in stream parsing mode because the parsed data is pushed to remote storage as soon as it is parsed.
|
||||
|
||||
|
||||
## Scraping big number of targets
|
||||
|
||||
A single `vmagent` instance can scrape tens of thousands of scrape targets. Sometimes this isn't enough due to limitations on CPU, network, RAM, etc.
|
||||
|
@ -389,7 +371,6 @@ start a cluster of three `vmagent` instances, where each target is scraped by tw
|
|||
If each target is scraped by multiple `vmagent` instances, then data deduplication must be enabled at remote storage pointed by `-remoteWrite.url`.
|
||||
See [these docs](https://docs.victoriametrics.com/#deduplication) for details.
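A hedged sketch of spreading scrape targets among a cluster of three `vmagent` instances with every target scraped by two of them, assuming the `-promscrape.cluster.*` command-line flags (member numbering starts from 0):

```bash
# Run this on each of the three hosts, changing -promscrape.cluster.memberNum to 0, 1 or 2.
/path/to/vmagent -promscrape.config=prometheus.yml \
  -promscrape.cluster.membersCount=3 \
  -promscrape.cluster.replicationFactor=2 \
  -promscrape.cluster.memberNum=0 \
  -remoteWrite.url=http://victoria-metrics:8428/api/v1/write
```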
## Scraping targets via a proxy
|
||||
|
||||
`vmagent` supports scraping targets via http, https and socks5 proxies. Proxy address must be specified in `proxy_url` option. For example, the following scrape config instructs
|
||||
|
@ -429,9 +410,9 @@ scrape_configs:
|
|||
|
||||
By default `vmagent` doesn't limit the number of time series each scrape target can expose. The limit can be enforced in the following places:
|
||||
|
||||
* Via `-promscrape.seriesLimitPerTarget` command-line option. This limit is applied individually to all the scrape targets defined in the file pointed by `-promscrape.config`.
|
||||
* Via `series_limit` config option at `scrape_config` section. This limit is applied individually to all the scrape targets defined in the given `scrape_config`.
|
||||
* Via `__series_limit__` label, which can be set with [relabeling](#relabeling) at `relabel_configs` section. This limit is applied to the corresponding scrape targets. Typical use case: to set the limit via [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) for targets, which may expose too high number of time series.
|
||||
|
||||
All the scraped metrics are dropped for time series exceeding the given limit. The exceeded limit can be [monitored](#monitoring) via `promscrape_series_limit_rows_dropped_total` metric.
|
||||
|
||||
|
@ -451,7 +432,6 @@ The exceeded limits can be [monitored](#monitoring) with the following metrics:
|
|||
|
||||
These limits are approximate, so `vmagent` can underflow/overflow the limit by a small percentage (usually less than 1%).
|
||||
|
||||
|
||||
## Monitoring
|
||||
|
||||
`vmagent` exports various metrics in Prometheus exposition format at `http://vmagent-host:8429/metrics` page. We recommend setting up regular scraping of this page
|
||||
|
@ -470,7 +450,6 @@ This information may be useful for debugging target relabeling.
|
|||
* `http://vmagent-host:8429/ready`. This handler returns an HTTP 200 status code when `vmagent` finishes its initialization for all `service_discovery` configs.
It may be useful for performing a `vmagent` rolling update without any scrape loss.
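For example, a Kubernetes readiness probe (a sketch, assuming the default `-httpListenAddr` port 8429) may poll this endpoint so that traffic is routed only to fully initialized instances:

```yaml
readinessProbe:
  httpGet:
    path: /ready
    port: 8429
  initialDelaySeconds: 5
  periodSeconds: 5
```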
## Troubleshooting
|
||||
|
||||
* We recommend you [set up the official Grafana dashboard](#monitoring) in order to monitor the state of `vmagent`.
|
||||
|
@ -534,12 +513,14 @@ It may be useful to perform `vmagent` rolling update without any scrape loss.
|
|||
See the available options below if you prefer fixing the root cause of the error:
|
||||
|
||||
The following relabeling rule may be added to `relabel_configs` section in order to filter out pods with unneeded ports:
|
||||
|
||||
```yml
|
||||
- action: keep_if_equal
|
||||
source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_container_port_number]
|
||||
```
|
||||
|
||||
The following relabeling rule may be added to `relabel_configs` section in order to filter out init container pods:
|
||||
|
||||
```yml
|
||||
- action: drop
|
||||
source_labels: [__meta_kubernetes_pod_container_init]
|
||||
|
@ -555,7 +536,6 @@ It may be useful to perform `vmagent` rolling update without any scrape loss.
|
|||
|
||||
The enterprise version of vmagent is available for evaluation on the [releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) page in `vmutils-*-enterprise.tar.gz` archives and in [docker images](https://hub.docker.com/r/victoriametrics/vmagent/tags) with tags containing the `enterprise` suffix.
|
||||
|
||||
|
||||
### Reading metrics from Kafka
|
||||
|
||||
[Enterprise version](https://victoriametrics.com/products/enterprise/) of `vmagent` can read metrics in various formats from Kafka messages. These formats can be configured with `-kafka.consumer.topic.defaultFormat` or `-kafka.consumer.topic.format` command-line options. The following formats are supported:
|
||||
|
@ -591,7 +571,6 @@ topic = "influx"
|
|||
data_format = "influx"
|
||||
```
|
||||
|
||||
|
||||
#### Command-line flags for Kafka consumer
|
||||
|
||||
These command-line flags are available only in the [enterprise](https://victoriametrics.com/products/enterprise/) version of `vmagent`, which can be downloaded for evaluation from the [releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) page (see `vmutils-*-enterprise.tar.gz` archives) and from [docker images](https://hub.docker.com/r/victoriametrics/vmagent/tags) with tags containing the `enterprise` suffix.
|
||||
|
@ -631,7 +610,6 @@ These command-line flags are available only in [enterprise](https://victoriametr
|
|||
|
||||
Additional Kafka options can be passed as query params to `-remoteWrite.url`. For instance, `kafka://localhost:9092/?topic=prom-rw&client.id=my-favorite-id` sets `client.id` Kafka option to `my-favorite-id`. The full list of Kafka options is available [here](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md).
|
||||
|
||||
|
||||
#### Kafka broker authorization and authentication
|
||||
|
||||
Two types of auth are supported:
|
||||
|
@ -648,12 +626,10 @@ Two types of auth are supported:
|
|||
./bin/vmagent -remoteWrite.url=kafka://localhost:9092/?topic=prom-rw&security.protocol=SSL -remoteWrite.tlsCAFile=/opt/ca.pem -remoteWrite.tlsCertFile=/opt/cert.pem -remoteWrite.tlsKeyFile=/opt/key.pem
|
||||
```
|
||||
|
||||
|
||||
## How to build from sources
|
||||
|
||||
We recommend using [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) - `vmagent` is located in the `vmutils-*` archives.
|
||||
|
||||
|
||||
### Development build
|
||||
|
||||
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.17.
|
||||
|
@ -695,7 +671,6 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
|
|||
2. Run `make vmagent-arm-prod` or `make vmagent-arm64-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
|
||||
It builds `vmagent-arm-prod` or `vmagent-arm64-prod` binary respectively and puts it into the `bin` folder.
|
||||
|
||||
|
||||
## Profiling
|
||||
|
||||
`vmagent` provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):
|
||||
|
@ -724,7 +699,6 @@ The command for collecting CPU profile waits for 30 seconds before returning.
|
|||
|
||||
The collected profiles may be analyzed with [go tool pprof](https://github.com/google/pprof).
|
||||
|
||||
|
||||
## Advanced usage
|
||||
|
||||
`vmagent` can be fine-tuned with various command-line flags. Run `./vmagent -help` in order to see the full list of these flags with their descriptions and default values:
|
||||
|
|
|
@ -10,6 +10,7 @@ Vmalert is heavily inspired by [Prometheus](https://prometheus.io/docs/alerting/
|
|||
implementation and aims to be compatible with its syntax.
|
||||
|
||||
## Features
|
||||
|
||||
* Integration with [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics) TSDB;
|
||||
* VictoriaMetrics [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html)
|
||||
support and expressions validation;
|
||||
|
@ -22,6 +23,7 @@ implementation and aims to be compatible with its syntax.
|
|||
* Lightweight without extra dependencies.
|
||||
|
||||
## Limitations
|
||||
|
||||
* `vmalert` executes queries against a remote datasource, which has reliability risks because of the network.
It is recommended to configure alert thresholds and rule expressions with the understanding that network
requests may fail;
|
||||
|
@ -32,14 +34,17 @@ recording rule is reused in the next one;
|
|||
## QuickStart
|
||||
|
||||
To build `vmalert` from sources:
|
||||
```
|
||||
|
||||
```bash
|
||||
git clone https://github.com/VictoriaMetrics/VictoriaMetrics
|
||||
cd VictoriaMetrics
|
||||
make vmalert
|
||||
```
|
||||
|
||||
The build binary will be placed in `VictoriaMetrics/bin` folder.
|
||||
|
||||
To start using `vmalert` you will need the following things:
|
||||
|
||||
* list of rules - PromQL/MetricsQL expressions to execute;
|
||||
* datasource address - reachable MetricsQL endpoint to run queries against;
|
||||
* notifier address [optional] - reachable [Alert Manager](https://github.com/prometheus/alertmanager) instance for processing,
|
||||
|
@ -50,7 +55,8 @@ aggregating alerts, and sending notifications. Please note, notifier address als
|
|||
* remote read address [optional] - MetricsQL compatible datasource to restore alerts state from.
|
||||
|
||||
Then configure `vmalert` accordingly:
|
||||
```
|
||||
|
||||
```bash
|
||||
./bin/vmalert -rule=alert.rules \ # Path to the file with rules configuration. Supports wildcard
|
||||
-datasource.url=http://localhost:8428 \ # PromQL compatible datasource
|
||||
-notifier.url=http://localhost:9093 \ # AlertManager URL (required if alerting rules are used)
|
||||
|
@ -77,6 +83,7 @@ and [alerting](https://prometheus.io/docs/prometheus/latest/configuration/alerti
|
|||
similar to Prometheus rules and configured using YAML. Configuration examples may be found
|
||||
in [testdata](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmalert/config/testdata) folder.
|
||||
Every `rule` belongs to a `group` and every configuration file may contain arbitrary number of groups:
|
||||
|
||||
```yaml
|
||||
groups:
|
||||
[ - <rule_group> ]
|
||||
|
@ -85,6 +92,7 @@ groups:
|
|||
### Groups
|
||||
|
||||
Each group has the following attributes:
|
||||
|
||||
```yaml
|
||||
# The name of the group. Must be unique within a file.
|
||||
name: <string>
|
||||
|
@ -136,6 +144,7 @@ or [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html) expression. Vmal
|
|||
expression and then act according to the Rule type.
|
||||
|
||||
There are two types of Rules:
|
||||
|
||||
* [alerting](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) -
|
||||
Alerting rules allow defining alert conditions via `expr` field and to send notifications to
|
||||
[Alertmanager](https://github.com/prometheus/alertmanager) if execution result is not empty.
|
||||
|
@ -150,6 +159,7 @@ within one group.
|
|||
#### Alerting rules
|
||||
|
||||
The syntax for alerting rule is the following:
|
||||
|
||||
```yaml
|
||||
# The name of the alert. Must be a valid metric name.
|
||||
alert: <string>
|
||||
|
@ -182,6 +192,7 @@ listed [here](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app
|
|||
#### Recording rules
|
||||
|
||||
The syntax for recording rules is following:
|
||||
|
||||
```yaml
|
||||
# The name of the time series to output to. Must be a valid metric name.
|
||||
record: <string>
|
||||
|
@ -198,11 +209,11 @@ labels:
|
|||
|
||||
For recording rules to work `-remoteWrite.url` must be specified.
|
||||
|
||||
|
||||
### Alerts state on restarts
|
||||
|
||||
`vmalert` has no local storage, so alerts state is stored in the process memory. Hence, after a restart of the `vmalert`
process the alerts state will be lost. To avoid this situation, `vmalert` should be configured via the following flags:
|
||||
|
||||
* `-remoteWrite.url` - URL to VictoriaMetrics (Single) or vminsert (Cluster). `vmalert` will persist alerts state
|
||||
into the configured address in the form of time series named `ALERTS` and `ALERTS_FOR_STATE` via remote-write protocol.
|
||||
These are regular time series and may be queried from VM just as any other time series.
|
||||
|
@ -214,7 +225,6 @@ Both flags are required for proper state restoration. Restore process may fail i
|
|||
in configured `-remoteRead.url`, weren't updated in the last `1h` (controlled by `-remoteRead.lookback`)
|
||||
or received state doesn't match current `vmalert` rules configuration.
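A minimal sketch with both flags pointing at the same single-node VictoriaMetrics instance (addresses are placeholders):

```bash
./bin/vmalert -rule=alert.rules \
  -datasource.url=http://victoriametrics:8428 \
  -remoteWrite.url=http://victoriametrics:8428 \
  -remoteRead.url=http://victoriametrics:8428 \
  -notifier.url=http://alertmanager:9093
```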
### Multitenancy
|
||||
|
||||
The following approaches exist for alerting and recording rules across
|
||||
|
@ -263,6 +273,7 @@ The following sections are showing how `vmalert` may be used and configured
|
|||
for different scenarios.
|
||||
|
||||
Please note, not all flags in examples are required:
|
||||
|
||||
* `-remoteWrite.url` and `-remoteRead.url` are optional and are needed only if
|
||||
you have recording rules or want to store [alerts state](#alerts-state-on-restarts) on `vmalert` restarts;
|
||||
* `-notifier.url` is optional and is needed only if you have alerting rules.
|
||||
|
@ -273,6 +284,7 @@ The simplest configuration where one single-node VM server is used for
|
|||
rules execution, storing recording rules results and alerts state.
|
||||
|
||||
`vmalert` configuration flags:
|
||||
|
||||
```
|
||||
./bin/vmalert -rule=rules.yml \ # Path to the file with rules configuration. Supports wildcard
|
||||
-datasource.url=http://victoriametrics:8428 \ # VM-single addr for executing rules expressions
|
||||
|
@ -283,7 +295,6 @@ rules execution, storing recording rules results and alerts state.
|
|||
|
||||
<img alt="vmalert single" width="500" src="vmalert_single.png">
|
||||
|
||||
|
||||
#### Cluster VictoriaMetrics
|
||||
|
||||
In [cluster mode](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html)
|
||||
|
@ -293,6 +304,7 @@ and `vminsert` is used to persist recording rules results and alerts state.
|
|||
Cluster mode could have multiple `vminsert` and `vmselect` components.
|
||||
|
||||
`vmalert` configuration flags:
|
||||
|
||||
```
|
||||
./bin/vmalert -rule=rules.yml \ # Path to the file with rules configuration. Supports wildcard
|
||||
-datasource.url=http://vmselect:8481/select/0/prometheus # vmselect addr for executing rules expressions
|
||||
|
@ -315,6 +327,7 @@ the same destinations, and send alert notifications to multiple configured
|
|||
Alertmanagers.
|
||||
|
||||
`vmalert` configuration flags:
|
||||
|
||||
```
|
||||
./bin/vmalert -rule=rules.yml \ # Path to the file with rules configuration. Supports wildcard
|
||||
-datasource.url=http://victoriametrics:8428 \ # VM-single addr for executing rules expressions
|
||||
|
@ -338,7 +351,6 @@ for Alertmanagers for better reliability.
|
|||
This example uses single-node VM server for the sake of simplicity.
|
||||
Check how to replace it with [cluster VictoriaMetrics](#cluster-victoriametrics) if needed.
|
||||
|
||||
|
||||
#### Downsampling and aggregation via vmalert
|
||||
|
||||
The following example shows how to build a topology where `vmalert` will process data from one cluster
|
||||
|
@ -349,6 +361,7 @@ recording rules to process raw data from "hot" cluster (by applying additional t
|
|||
or reducing resolution) and push results to "cold" cluster.
|
||||
|
||||
`vmalert` configuration flags:
|
||||
|
||||
```
|
||||
./bin/vmalert -rule=downsampling-rules.yml \ # Path to the file with rules configuration. Supports wildcard
|
||||
-datasource.url=http://raw-cluster-vmselect:8481/select/0/prometheus # vmselect addr for executing recording rules expressions
|
||||
|
@ -363,19 +376,18 @@ Flags `-remoteRead.url` and `-notifier.url` are omitted since we assume only rec
|
|||
|
||||
See also [downsampling docs](https://docs.victoriametrics.com/#downsampling).
|
||||
|
||||
|
||||
### Web
|
||||
|
||||
`vmalert` runs a web-server (`-httpListenAddr`) for serving metrics and alerts endpoints:
|
||||
|
||||
* `http://<vmalert-addr>` - UI;
|
||||
* `http://<vmalert-addr>/api/v1/groups` - list of all loaded groups and rules;
|
||||
* `http://<vmalert-addr>/api/v1/alerts` - list of all active alerts;
|
||||
* `http://<vmalert-addr>/api/v1/<groupID>/<alertID>/status` - get alert status by ID.
|
||||
Used as alert source in AlertManager.
|
||||
* `http://<vmalert-addr>/metrics` - application metrics.
|
||||
* `http://<vmalert-addr>/-/reload` - hot configuration reload.
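For instance, the list of loaded groups can be fetched and a hot reload triggered as follows (host and port are placeholders for the configured `-httpListenAddr`):

```bash
curl http://vmalert-host:8880/api/v1/groups
curl http://vmalert-host:8880/-/reload
```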
## Graphite
|
||||
|
||||
vmalert sends requests to `<-datasource.url>/render?format=json` during evaluation of alerting and recording rules
|
||||
|
@ -395,6 +407,7 @@ data source for backfilling.
|
|||
|
||||
In `replay` mode vmalert works as a cli-tool and exits immediately after work is done.
|
||||
To run vmalert in `replay` mode:
|
||||
|
||||
```
|
||||
./bin/vmalert -rule=path/to/your.rules \ # path to files with rules you usually use with vmalert
|
||||
-datasource.url=http://localhost:8428 \ # PromQL/MetricsQL compatible datasource
|
||||
|
@ -404,6 +417,7 @@ To run vmalert in `replay` mode:
|
|||
```
|
||||
|
||||
The output of the command will look like the following:
|
||||
|
||||
```
|
||||
Replay mode:
|
||||
from: 2021-05-11 07:21:43 +0000 UTC # set by -replay.timeFrom
|
||||
|
@ -447,9 +461,11 @@ The result of recording rules `replay` should match with results of normal rules
|
|||
|
||||
The result of alerting rules `replay` is time series reflecting [alert's state](#alerts-state-on-restarts).
|
||||
To see if `replayed` alert has fired in the past use the following PromQL/MetricsQL expression:
|
||||
|
||||
```
|
||||
ALERTS{alertname="your_alertname", alertstate="firing"}
|
||||
```
|
||||
|
||||
Execute the query against storage which was used for `-remoteWrite.url` during the `replay`.
|
||||
|
||||
### Additional configuration
|
||||
|
@ -473,7 +489,6 @@ See full description for these flags in `./vmalert --help`.
|
|||
* Graphite engine isn't supported yet;
|
||||
* `query` template function is disabled for performance reasons (might be changed in future);
|
||||
|
||||
|
||||
## Monitoring
|
||||
|
||||
`vmalert` exports various metrics in Prometheus exposition format at `http://vmalert-host:8880/metrics` page.
|
||||
|
@ -484,7 +499,6 @@ Use the official [Grafana dashboard](https://grafana.com/grafana/dashboards/1495
|
|||
If you have suggestions for improvements or have found a bug - please open an issue on github or add
|
||||
a review to the dashboard.
|
||||
|
||||
|
||||
## Configuration
|
||||
|
||||
### Flags
|
||||
|
@ -493,6 +507,7 @@ Pass `-help` to `vmalert` in order to see the full list of supported
|
|||
command-line flags with their descriptions.
|
||||
|
||||
The shortlist of configuration flags is the following:
|
||||
|
||||
```
|
||||
-clusterMode
|
||||
If clusterMode is enabled, then vmalert automatically adds the tenant specified in config groups to -datasource.url, -remoteWrite.url and -remoteRead.url. See https://docs.victoriametrics.com/vmalert.html#multitenancy
|
||||
|
@ -791,7 +806,9 @@ The shortlist of configuration flags is the following:
|
|||
```
|
||||
|
||||
### Hot config reload
|
||||
|
||||
`vmalert` supports "hot" config reload via the following methods:
|
||||
|
||||
* send SIGHUP signal to `vmalert` process;
|
||||
* send GET request to `/-/reload` endpoint;
|
||||
* configure `-configCheckInterval` flag for periodic reload
|
||||
|
@ -804,6 +821,7 @@ just add them in address: `-datasource.url=http://localhost:8428?nocache=1`.
|
|||
|
||||
To set additional URL params for specific [group of rules](#Groups) modify
|
||||
the `params` group:
|
||||
|
||||
```yaml
|
||||
groups:
|
||||
- name: TestGroup
|
||||
|
@ -811,6 +829,7 @@ groups:
|
|||
denyPartialResponse: ["true"]
|
||||
extra_label: ["env=dev"]
|
||||
```
|
||||
|
||||
Please note, `params` are used only for executing rules expressions (requests to `datasource.url`).
|
||||
If there would be a conflict between URL params set in `datasource.url` flag and params in group definition
|
||||
the latter will have higher priority.
|
||||
|
@ -818,6 +837,7 @@ the latter will have higher priority.
|
|||
### Notifier configuration file
|
||||
|
||||
Notifier also supports configuration via file specified with flag `notifier.config`:
|
||||
|
||||
```
|
||||
./bin/vmalert -rule=app/vmalert/config/testdata/rules.good.rules \
|
||||
-datasource.url=http://localhost:8428 \
|
||||
|
@ -827,6 +847,7 @@ Notifier also supports configuration via file specified with flag `notifier.conf
|
|||
The configuration file allows to configure static notifiers or discover notifiers via
|
||||
[Consul](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config).
|
||||
For example:
|
||||
|
||||
```
|
||||
static_configs:
|
||||
- targets:
|
||||
|
@ -843,6 +864,7 @@ The list of configured or discovered Notifiers can be explored via [UI](#Web).
|
|||
|
||||
The configuration file [specification](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmalert/notifier/config.go)
|
||||
is the following:
|
||||
|
||||
```
|
||||
# Per-target Notifier timeout when pushing alerts.
|
||||
[ timeout: <duration> | default = 10s ]
|
||||
|
@ -897,7 +919,6 @@ relabel_configs:
|
|||
|
||||
The configuration file can be [hot-reloaded](#hot-config-reload).
|
||||
|
||||
|
||||
## Contributing
|
||||
|
||||
`vmalert` is mostly designed and built by the VictoriaMetrics community.
|
||||
|
@ -908,8 +929,8 @@ software. Please keep simplicity as the main priority.
|
|||
|
||||
It is recommended using
|
||||
[binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases)
|
||||
* `vmalert` is located in `vmutils-*` archives there.
|
||||
|
||||
### Development build
|
||||
|
||||
|
@ -923,7 +944,6 @@ It is recommended using
|
|||
2. Run `make vmalert-prod` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
|
||||
It builds `vmalert-prod` binary and puts it into the `bin` folder.
|
||||
|
||||
|
||||
### ARM build
|
||||
|
||||
ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://blog.cloudflare.com/arm-takes-wing/).
|
||||
|
|
|
@ -10,7 +10,7 @@ The `-auth.config` can point to either local file or to http url.
|
|||
Just download `vmutils-*` archive from [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases), unpack it
|
||||
and pass the following flag to `vmauth` binary in order to start authorizing and routing requests:
|
||||
|
||||
```
|
||||
```bash
|
||||
/path/to/vmauth -auth.config=/path/to/auth/config.yml
|
||||
```
|
||||
|
||||
|
@ -129,7 +129,7 @@ It is expected that all the backend services protected by `vmauth` are located i
|
|||
|
||||
Do not transfer Basic Auth headers in plaintext over untrusted networks. Enable https. This can be done by passing the following `-tls*` command-line flags to `vmauth`:
|
||||
|
||||
```
|
||||
```bash
|
||||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
|
@ -217,7 +217,7 @@ The collected profiles may be analyzed with [go tool pprof](https://github.com/g
|
|||
|
||||
Pass `-help` command-line arg to `vmauth` in order to see all the configuration options:
|
||||
|
||||
```
|
||||
```bash
|
||||
./vmauth -help
|
||||
|
||||
vmauth authenticates and authorizes incoming requests and proxies them to VictoriaMetrics.
|
||||
|
|
|
@ -22,14 +22,13 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
|
|||
See also [vmbackupmanager](https://docs.victoriametrics.com/vmbackupmanager.html) tool built on top of `vmbackup`. This tool simplifies
|
||||
creation of hourly, daily, weekly and monthly backups.
|
||||
|
||||
|
||||
## Use cases
|
||||
|
||||
### Regular backups
|
||||
|
||||
Regular backup can be performed with the following command:
|
||||
|
||||
```
|
||||
```bash
|
||||
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gs://<bucket>/<path/to/new/backup>
|
||||
```
|
||||
|
||||
|
@ -39,36 +38,33 @@ vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-
|
|||
* `<bucket>` is an already existing name for [GCS bucket](https://cloud.google.com/storage/docs/creating-buckets).
|
||||
* `<path/to/new/backup>` is the destination path where new backup will be placed.
|
||||
|
||||
|
||||
### Regular backups with server-side copy from existing backup
|
||||
|
||||
If the destination GCS bucket already contains the previous backup at `-origin` path, then new backup can be sped up
|
||||
with the following command:
|
||||
|
||||
```
|
||||
```bash
|
||||
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gs://<bucket>/<path/to/new/backup> -origin=gs://<bucket>/<path/to/existing/backup>
|
||||
```
|
||||
|
||||
It saves time and network bandwidth costs by performing server-side copy for the shared data from the `-origin` to `-dst`.
|
||||
|
||||
|
||||
### Incremental backups
|
||||
|
||||
Incremental backups are performed if `-dst` points to an already existing backup. In this case only new data is uploaded to remote storage.
|
||||
It saves time and network bandwidth costs when working with big backups:
|
||||
|
||||
```
|
||||
```bash
|
||||
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gs://<bucket>/<path/to/existing/backup>
|
||||
```
|
||||
|
||||
|
||||
### Smart backups
|
||||
|
||||
Smart backups mean storing full daily backups into `YYYYMMDD` folders and creating incremental hourly backup into `latest` folder:
|
||||
|
||||
* Run the following command every hour:
|
||||
|
||||
```
|
||||
```bash
|
||||
vmbackup -snapshotName=<latest-snapshot> -dst=gs://<bucket>/latest
|
||||
```
|
||||
|
||||
|
@ -77,13 +73,12 @@ The command will upload only changed data to `gs://<bucket>/latest`.
|
|||
|
||||
* Run the following command once a day:
|
||||
|
||||
```
|
||||
```bash
|
||||
vmbackup -snapshotName=<daily-snapshot> -dst=gs://<bucket>/<YYYYMMDD> -origin=gs://<bucket>/latest
|
||||
```
|
||||
|
||||
Where `<daily-snapshot>` is the snapshot for the last day `<YYYYMMDD>`.
|
||||
|
||||
|
||||
This approach saves network bandwidth costs on hourly backups (since they are incremental) and allows recovering data from either the last hour (`latest` backup)
or from any day (`YYYYMMDD` backups). Note that the hourly backup shouldn't run while the daily backup is being created.
|
||||
|
||||
|
@ -91,7 +86,6 @@ Do not forget to remove old snapshots and backups when they are no longer needed
|
|||
|
||||
See also [vmbackupmanager tool](https://docs.victoriametrics.com/vmbackupmanager.html) for automating smart backups.
|
||||
|
||||
|
||||
## How does it work?
|
||||
|
||||
The backup algorithm is the following:
|
||||
|
@ -108,16 +102,15 @@ Such splitting minimizes the amounts of data to re-transfer after temporary erro
|
|||
|
||||
`vmbackup` relies on [instant snapshot](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282) properties:
|
||||
|
||||
* All the files in the snapshot are immutable.
|
||||
* Old files are periodically merged into new files.
|
||||
* Smaller files have higher probability to be merged.
|
||||
* Consecutive snapshots share many identical files.
|
||||
|
||||
These properties allow performing fast and cheap incremental backups and server-side copying from `-origin` paths.
|
||||
See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-series-databases-533c1a927883) for more details.
|
||||
`vmbackup` can work improperly or slowly when these properties are violated.
|
||||
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
* If the backup is slow, then try setting a higher value for the `-concurrency` flag. This will increase the number of concurrent workers that upload data to backup storage.
|
||||
|
@ -126,15 +119,14 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
|
|||
* Backups created from [single-node VictoriaMetrics](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html) cannot be restored
|
||||
at [cluster VictoriaMetrics](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html) and vice versa.
|
||||
|
||||
|
||||
## Advanced usage
|
||||
|
||||
|
||||
* Obtaining credentials from a file.
|
||||
|
||||
Add flag `-credsFilePath=/etc/credentials` with the following content:
|
||||
|
||||
for s3 (aws, minio or other s3 compatible storages):
|
||||
|
||||
```bash
|
||||
[default]
|
||||
aws_access_key_id=theaccesskey
|
||||
|
@ -142,6 +134,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
|
|||
```
|
||||
|
||||
for gce cloud storage:
|
||||
|
||||
```json
|
||||
{
|
||||
"type": "service_account",
|
||||
|
@ -159,7 +152,8 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
|
|||
|
||||
* Usage with s3 custom url endpoint. It is possible to use `vmbackup` with s3 compatible storages like minio, cloudian, etc.
|
||||
You have to add a custom url endpoint via flag:
|
||||
```
|
||||
|
||||
```bash
|
||||
# for minio
|
||||
-customS3Endpoint=http://localhost:9000
|
||||
|
||||
|
@ -169,7 +163,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
|
|||
|
||||
* Run `vmbackup -help` in order to see all the available options:
|
||||
|
||||
```
|
||||
```bash
|
||||
-concurrency int
|
||||
The number of concurrent workers. Higher concurrency may reduce backup duration (default 10)
|
||||
-configFilePath string
|
||||
|
@ -259,12 +253,10 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-
|
|||
Show VictoriaMetrics version
|
||||
```
|
||||
|
||||
|
||||
## How to build from sources
|
||||
|
||||
It is recommended using [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) - see `vmutils-*` archives there.
|
||||
|
||||
|
||||
### Development build
|
||||
|
||||
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.17.
|
||||
|
|
|
@ -9,11 +9,10 @@ The required flags for running the service are as follows:
|
|||
|
||||
* -eula - should be true and means that you have the legal right to run a backup manager. That can either be a signed contract or an email with confirmation to run the service in a trial period
|
||||
* -storageDataPath - path to VictoriaMetrics or vmstorage data path to make backup from
|
||||
* -snapshot.createURL - VictoriaMetrics snapshot create URL; the snapshot will automatically be created during backup. Example: <http://victoriametrics:8428/snapshot/create>
|
||||
* -dst - backup destination at s3, gcs or local filesystem
|
||||
* -credsFilePath - path to file with GCS or S3 credentials. Credentials are loaded from default locations if not set. See [https://cloud.google.com/iam/docs/creating-managing-service-account-keys](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) and [https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html](https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html)
|
||||
|
||||
|
||||
Backup schedule is controlled by the following flags:
|
||||
|
||||
* -disableHourly - disable hourly run. Default false
|
||||
|
@ -23,7 +22,6 @@ Backup schedule is controlled by the following flags:
|
|||
|
||||
By default, all flags are turned on and Backup Manager backs up data every hour for every interval (hourly, daily, weekly and monthly).
|
||||
|
||||
|
||||
The backup manager creates the following directory hierarchy at **-dst**:
|
||||
|
||||
* /latest/ - contains the latest backup
|
||||
|
@ -32,7 +30,6 @@ The backup manager creates the following directory hierarchy at **-dst**:
|
|||
* /weekly/ - contains weekly backups. Each backup is named as *YYYY-WW*
|
||||
* /monthly/ - contains monthly backups. Each backup is named as *YYYY-MM*
|
||||
|
||||
|
||||
To get the full list of supported flags please run the following command:
|
||||
|
||||
```console
|
||||
|
@ -48,7 +45,6 @@ There are two flags which could help with performance tuning:
|
|||
* -maxBytesPerSecond - the maximum upload speed. There is no limit if it is set to 0
|
||||
* -concurrency - The number of concurrent workers. Higher concurrency may improve upload speed (default 10)
|
||||
|
||||
|
||||
## Example of Usage
|
||||
|
||||
GCS and cluster version. You need to have a credentials file in json format with following structure
|
||||
|
@ -96,11 +92,11 @@ info VictoriaMetrics/lib/storage/storage.go:319 deleted snapshot "/vmstora
|
|||
|
||||
The result on the GCS bucket
|
||||
|
||||
* The root folder
|
||||
|
||||
![root](vmbackupmanager_root_folder.png)
|
||||
|
||||
* The latest folder
|
||||
|
||||
![latest](vmbackupmanager_latest_folder.png)
|
||||
|
||||
|
@ -119,7 +115,6 @@ Let’s assume we have a backup manager collecting daily backups for the past 10
|
|||
|
||||
![daily](vmbackupmanager_rp_daily_1.png)
|
||||
|
||||
|
||||
We enable backup retention policy for backup manager by using following configuration:
|
||||
|
||||
```console
|
||||
|
|
|
@ -3,6 +3,7 @@
|
|||
VictoriaMetrics command-line tool
|
||||
|
||||
Features:
|
||||
|
||||
- [x] Prometheus: migrate data from Prometheus to VictoriaMetrics using snapshot API
|
||||
- [x] Thanos: migrate data from Thanos to VictoriaMetrics
|
||||
- [ ] ~~Prometheus: migrate data from Prometheus to VictoriaMetrics by query~~(discarded)
|
||||
|
@ -14,7 +15,8 @@ vmctl acts as a proxy between data source ([Prometheus](#migrating-data-from-pro
|
|||
[InfluxDB](#migrating-data-from-influxdb-1x), [VictoriaMetrics](#migrating-data-from-victoriametrics), etc.)
|
||||
and destination - VictoriaMetrics single or cluster version. To see the full list of supported modes
|
||||
run the following command:
|
||||
```
|
||||
|
||||
```bash
|
||||
./vmctl --help
|
||||
NAME:
|
||||
vmctl - VictoriaMetrics command-line tool
|
||||
|
@ -31,6 +33,7 @@ COMMANDS:
|
|||
|
||||
Each mode has its own unique set of flags specific to the data source (e.g. prefixed with `influx` for influx mode)
and a common list of flags for the destination (prefixed with `vm` for VictoriaMetrics):
|
||||
|
||||
```
|
||||
./vmctl influx --help
|
||||
OPTIONS:
|
||||
|
@ -50,6 +53,7 @@ destination (where to migrate data). Every mode has additional details and nuanc
|
|||
them below in corresponding sections.
|
||||
|
||||
For the destination flags see the full description by running the following command:
|
||||
|
||||
```
|
||||
./vmctl influx --help | grep vm-
|
||||
```
|
||||
|
@ -64,9 +68,8 @@ forget to specify the `--vm-account-id` flag. See more details for cluster versi
|
|||
|
||||
## Articles
|
||||
|
||||
- [How to migrate data from Prometheus](https://medium.com/@romanhavronenko/victoriametrics-how-to-migrate-data-from-prometheus-d44a6728f043)
|
||||
- [How to migrate data from Prometheus. Filtering and modifying time series](https://medium.com/@romanhavronenko/victoriametrics-how-to-migrate-data-from-prometheus-filtering-and-modifying-time-series-6d40cea4bf21)
|
||||
|
||||
## Migrating data from OpenTSDB
|
||||
|
||||
|
@ -79,16 +82,21 @@ See `./vmctl opentsdb --help` for details and full list of flags.
|
|||
OpenTSDB migration works like so:
|
||||
|
||||
1. Find metrics based on selected filters (or the default filter set ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z'])
|
||||
- e.g. `curl -Ss "http://opentsdb:4242/api/suggest?type=metrics&q=sys"`
|
||||
|
||||
2. Find series associated with each returned metric
|
||||
- e.g. `curl -Ss "http://opentsdb:4242/api/search/lookup?m=system.load5&limit=1000000"`
|
||||
|
||||
3. Download data for each series in chunks defined in the CLI switches
|
||||
- e.g. `-retention=sum-1m-avg:1h:90d` ==
|
||||
- `curl -Ss "http://opentsdb:4242/api/query?start=1h-ago&end=now&m=sum:1m-avg-none:system.load5\{host=host1\}"`
|
||||
- `curl -Ss "http://opentsdb:4242/api/query?start=2h-ago&end=1h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
|
||||
- `curl -Ss "http://opentsdb:4242/api/query?start=3h-ago&end=2h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
|
||||
- ...
|
||||
- `curl -Ss "http://opentsdb:4242/api/query?start=2160h-ago&end=2159h-ago&m=sum:1m-avg-none:system.load5\{host=host1\}"`
|
||||
|
||||
This means that we must stream data from OpenTSDB to VictoriaMetrics in chunks. This is where concurrency for OpenTSDB comes in. We can query multiple chunks at once, but we shouldn't query too many chunks at a time to avoid overloading the OpenTSDB cluster.
|
||||
|
||||
|
@ -107,6 +115,7 @@ Found 9 metrics to import. Continue? [Y/n]
|
|||
Starting with a relatively simple retention string (`sum-1m-avg:1h:30d`), let's describe how this is converted into actual queries.
|
||||
|
||||
There are two essential parts of a retention string:
|
||||
|
||||
1. [aggregation](#aggregation)
|
||||
2. [windows/time ranges](#windows)
|
||||
|
||||
|
@ -115,8 +124,9 @@ There are two essential parts of a retention string:
|
|||
Retention strings essentially define the two levels of aggregation for our collected series.
|
||||
|
||||
`sum-1m-avg` would become:
|
||||
- First order: `sum`
|
||||
- Second order: `1m-avg-none`
|
||||
|
||||
##### First Order Aggregations
|
||||
|
||||
|
@ -137,6 +147,7 @@ We do not allow for defining the "null value" portion of the rollup window (e.g.
|
|||
#### Windows
|
||||
|
||||
There are two important windows we define in a retention string:
|
||||
|
||||
1. the "chunk" range of each query
|
||||
2. The time range we will be querying on with that "chunk"
|
||||
|
||||
|
@ -193,6 +204,7 @@ VM importer then accumulates received samples in batches and sends import reques
|
|||
|
||||
The importing process example for local installation of InfluxDB(`http://localhost:8086`)
|
||||
and single-node VictoriaMetrics(`http://localhost:8428`):
|
||||
|
||||
```
|
||||
./vmctl influx --influx-database benchmark
|
||||
InfluxDB import mode
|
||||
|
@ -218,19 +230,21 @@ Found 40000 timeseries to import. Continue? [Y/n] y
|
|||
|
||||
Vmctl maps InfluxDB data the same way as VictoriaMetrics does by using the following rules:
|
||||
|
||||
- `influx-database` arg is mapped into `db` label value unless `db` tag exists in the InfluxDB line.
|
||||
- Field names are mapped to time series names prefixed with {measurement}{separator} value,
|
||||
where {separator} equals to _ by default.
|
||||
It can be changed with `--influx-measurement-field-separator` command-line flag.
|
||||
- Field values are mapped to time series values.
|
||||
- Tags are mapped to Prometheus labels format as-is.
|
||||
|
||||
For example, the following InfluxDB line:
|
||||
|
||||
```
|
||||
foo,tag1=value1,tag2=value2 field1=12,field2=40
|
||||
```
|
||||
|
||||
is converted into the following Prometheus format data points:
|
||||
|
||||
```
|
||||
foo_field1{tag1="value1", tag2="value2"} 12
|
||||
foo_field2{tag1="value1", tag2="value2"} 40
|
||||
|
@ -246,6 +260,7 @@ The filtering consists of two parts: timeseries and time.
|
|||
The first step is to select all available timeseries
for the given database and retention. The user may specify an additional filtering
condition via the `--influx-filter-series` flag. For example:
|
||||
|
||||
```
|
||||
./vmctl influx --influx-database benchmark \
|
||||
--influx-filter-series "on benchmark from cpu where hostname='host_1703'"
|
||||
|
@ -256,13 +271,15 @@ InfluxDB import mode
|
|||
2020/01/26 14:23:29 fetching series: command: "show series on benchmark from cpu where hostname='host_1703'"; database: "benchmark"; retention: "autogen"
|
||||
Found 10 timeseries to import. Continue? [Y/n]
|
||||
```
|
||||
|
||||
The timeseries select query would be the following:
|
||||
`fetching series: command: "show series on benchmark from cpu where hostname='host_1703'"; database: "benchmark"; retention: "autogen"`
|
||||
|
||||
The second step of filtering is a time filter and it applies when fetching the datapoints from Influx.
|
||||
Time filtering may be configured with two flags:
|
||||
- --influx-filter-time-start
|
||||
- --influx-filter-time-end
|
||||
Here's an example of importing timeseries for one day only:
|
||||
`./vmctl influx --influx-database benchmark --influx-filter-series "where hostname='host_1703'" --influx-filter-time-start "2020-01-01T10:07:00Z" --influx-filter-time-end "2020-01-01T15:07:00Z"`
|
||||
|
||||
|
@ -271,8 +288,7 @@ Please see more about time filtering [here](https://docs.influxdata.com/influxdb

## Migrating data from InfluxDB (2.x)

Migrating data from InfluxDB v2.x is not supported yet ([#32](https://github.com/VictoriaMetrics/vmctl/issues/32)).
You may find a 3rd party solution for this useful - <https://github.com/jonppe/influx_to_victoriametrics>.

## Migrating data from Prometheus

@ -301,6 +317,7 @@ The data processed in chunks and then sent to VM.

The importing process example for a local installation of Prometheus
and single-node VictoriaMetrics (`http://localhost:8428`):

```
./vmctl prometheus --prom-snapshot=/path/to/snapshot \
  --vm-concurrency=1 \
@ -347,6 +364,7 @@ in in RFC3339 format. This filter applied twice: to drop blocks out of range and
overlapping time range.

Example of applying the time filter:

```
./vmctl prometheus --prom-snapshot=/path/to/snapshot \
  --prom-filter-time-start=2020-02-07T00:07:01Z \
@ -366,12 +384,13 @@ Please notice, that total amount of blocks in provided snapshot is 14, but only
time range. So the other 12 blocks were marked as `skipped`. The amount of samples and series is not taken into account,
since this is a heavy operation and will be done during the import process.

Filtering by timeseries is configured with the following flags:

- `--prom-filter-label` - the label name, e.g. `__name__` or `instance`;
- `--prom-filter-label-value` - the regular expression to filter the label value. By default it matches all (`.*`).

For example:

```
./vmctl prometheus --prom-snapshot=/path/to/snapshot \
  --prom-filter-label="__name__" \
@ -413,16 +432,20 @@ and that you have a separate Thanos Store installation.

1. For now, keep your Thanos Sidecar and Thanos-related Prometheus configuration, but add this to also stream
   metrics to VictoriaMetrics:

   ```
   remote_write:
   - url: http://victoria-metrics:8428/api/v1/write
   ```

2. Make sure VM is running, of course. Now check the logs to make sure that Prometheus is sending and VM is receiving.
   In Prometheus, make sure there are no errors. On the VM side, you should see messages like this:

   ```
   2020-04-27T18:38:46.474Z info VictoriaMetrics/lib/storage/partition.go:207 creating a partition "2020_04" with smallPartsPath="/victoria-metrics-data/data/small/2020_04", bigPartsPath="/victoria-metrics-data/data/big/2020_04"
   2020-04-27T18:38:46.506Z info VictoriaMetrics/lib/storage/partition.go:222 partition "2020_04" has been created
   ```

3. Now just wait. Within two hours, Prometheus should finish its current data file and hand it off to Thanos Store for long term
   storage.

@ -430,6 +453,7 @@ and that you have a separate Thanos Store installation.

Let's assume your data is stored on S3 served by minio. You first need to copy that out to a local filesystem,
then import it into VM using `vmctl` in `prometheus` mode.

1. Copy data from minio.
   1. Run the `minio/mc` Docker container.
   1. `mc config host add minio http://minio:9000 accessKey secretKey`, substituting appropriate values for the last 3 items.
@ -437,6 +461,7 @@ then import it into VM using `vmctl` in `prometheus` mode.
1. Import using `vmctl`.
   1. Follow the [instructions](#how-to-build) to compile `vmctl` on your machine.
   1. Use [prometheus](#migrating-data-from-prometheus) mode to import data:

      ```
      vmctl prometheus --prom-snapshot thanos-data --vm-addr http://victoria-metrics:8428
      ```

@ -471,6 +496,7 @@ Total: 336.75 KiB ↖ Speed: 454.46 KiB p/s
```

Importing tips:

1. Migrating all the metrics from one VM to another may collide with existing application metrics
   (prefixed with `vm_`) at the destination and lead to confusion when using
   [official Grafana dashboards](https://grafana.com/orgs/victoriametrics/dashboards).

@ -514,7 +540,8 @@ than 4 chunks before sending the request.

After a successful import, `vmctl` prints some statistics.
The important numbers to watch are the following:

- `idle duration` - shows the time the importer spent waiting for data from InfluxDB/Prometheus
  to fill up the `--vm-batch-size` batch size. The value shows the total duration across all workers configured
  via `--vm-concurrency`. A high value may be a sign of slow InfluxDB/Prometheus fetches or of a too
  high `--vm-concurrency` value. Try to improve it by increasing the `--<mode>-concurrency` value or
@ -529,6 +556,7 @@ a sign of network issues or VM being overloaded. See the logs during import for

By default `vmctl` waits for confirmation from the user before starting the import. If this
behavior is unwanted and no user interaction is required, pass the `-s` flag to enable "silent" mode:

```
-s Whether to run in silent mode. If set to true no confirmation prompts will appear. (default: false)
```
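
For example, a non-interactive run of the InfluxDB import shown earlier might look like this (a sketch; the database name is a placeholder):

```
# no confirmation prompt is printed before the import starts
./vmctl influx --influx-database benchmark -s
```
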
@ -545,10 +573,10 @@ according to [information theory](https://en.wikipedia.org/wiki/Information_theo

`vmctl` provides the following flags for improving data compression (an example follows the list):

- `--vm-round-digits` flag for rounding processed values to the given number of decimal digits after the point.
  For example, `--vm-round-digits=2` would round `1.2345` to `1.23`. By default the rounding is disabled.

- `--vm-significant-figures` flag for limiting the number of significant figures in processed values. It takes no effect if set
  to 0 (by default); with `--vm-significant-figures=5`, `102.342305` will be rounded to `102.34`.
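
These are regular command-line flags, so they can be combined with any migration mode. A sketch for a Prometheus snapshot import with rounding enabled (the snapshot path is a placeholder):

```
# round all imported values to 2 decimal digits to improve compression
./vmctl prometheus --prom-snapshot=/path/to/snapshot \
  --vm-round-digits=2
```
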
The most common case for using these flags is to improve data compression for time series storing aggregation
@ -568,12 +596,10 @@ The rate limit may be set in bytes-per-second via `--vm-rate-limit` flag.

Please note, you can also use [vmagent](https://docs.victoriametrics.com/vmagent.html)
as a proxy between `vmctl` and the destination with the `-remoteWrite.rateLimit` flag enabled.
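
A sketch of such a setup, assuming `vmagent` listens on its default port `8429`, forwards data to a single-node VictoriaMetrics at `http://victoria-metrics:8428`, and caps the write rate at roughly 1 MB/s; `vmctl` is then pointed at the `vmagent` address via `--vm-addr`:

```
# vmagent buffers the data pushed by vmctl and rate-limits the remote write
./vmagent -remoteWrite.url=http://victoria-metrics:8428/api/v1/write \
  -remoteWrite.rateLimit=1000000

# vmctl sends the imported data to vmagent instead of writing to the storage directly
./vmctl prometheus --prom-snapshot=/path/to/snapshot --vm-addr=http://localhost:8429
```
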

## How to build

It is recommended to use [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) - `vmctl` is located in the `vmutils-*` archives there.

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.17.

@ -2,7 +2,6 @@

***vmgateway is a part of [enterprise package](https://victoriametrics.com/products/enterprise/). It is available for download and evaluation at [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases)***

<img alt="vmgateway" src="vmgateway-overview.jpeg">

`vmgateway` is a proxy for the VictoriaMetrics Time Series Database (TSDB). It provides the following features:

@ -16,7 +15,6 @@

`vmgateway` is included in our [enterprise packages](https://victoriametrics.com/products/enterprise/).

## Access Control

<img alt="vmgateway-ac" src="vmgateway-access-control.jpg">

@ -24,6 +22,7 @@

`vmgateway` supports JWT-based authentication. The JWT payload can be configured to grant access to specific tenants and labels, as well as read or write access.

The JWT token must be in the following format:

```json
{
  "exp": 1617304574,
@ -41,13 +40,15 @@ jwt token must be in following format:
  }
}
```

Where:

* `exp` - required, expiration time as a unix timestamp. If the token expires, then `vmgateway` rejects the request.
* `vm_access` - required, dict with claim info, minimum form: `{"vm_access": {"tenant_id": {}}}`
* `tenant_id` - optional, for cluster mode, routes requests to the corresponding tenant.
* `extra_labels` - optional, key-value pairs for label filters added to the ingested or selected metrics. Multiple filters are added with the `and` operation. If defined, the `extra_label` from the original request is removed.
* `extra_filters` - optional, [series selectors](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) added to the select query requests. Multiple selectors are added with the `or` operation. If defined, the `extra_filter` from the original request is removed.
* `mode` - optional, access mode for the API - read, write, or full. Supported values: 0 - full (the default), 1 - read, 2 - write.
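
For illustration, here is a minimal payload sketch that grants read-only access to a single tenant and pins one extra label; the tenant and label values are placeholders, and addressing the tenant via `account_id`/`project_id` assumes the cluster version:

```json
{
  "exp": 1617304574,
  "vm_access": {
    "tenant_id": {
      "account_id": 1,
      "project_id": 0
    },
    "extra_labels": {
      "team": "dev"
    },
    "mode": 1
  }
}
```
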
## QuickStart

@ -66,18 +67,19 @@ Start vmgateway
```

Retrieve data from the database

```bash
curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2bV9hY2Nlc3MiOnsidGVuYW50X2lkIjp7fSwicm9sZSI6MX0sImV4cCI6MTkzOTM0NjIxMH0.5WUxEfdcV9hKo4CtQdtuZYOGpGXWwaqM9VuVivMMrVg'
```

A request with an incorrect token or without any token will be rejected:

```bash
curl 'http://localhost:8431/api/v1/series/count'

curl 'http://localhost:8431/api/v1/series/count' -H 'Authorization: Bearer incorrect-token'
```

## Rate Limiter

<img alt="vmgateway-rl" src="vmgateway-rate-limiting.jpg">

@ -88,14 +90,16 @@ Limits incoming requests by given, pre-configured limits. It supports read and w

The metrics that you want to rate limit must be scraped from the cluster.

List of supported limit types:

* `queries` - count of API read requests made by a tenant, such as `/api/v1/query`, `/api/v1/series` and others.
* `active_series` - count of currently active series for a given tenant.
* `new_series` - count of created series, a.k.a. churn rate.
* `rows_inserted` - count of inserted rows per tenant.

List of supported time windows:

* `minute`
* `hour`

Limits can be specified per tenant or at a global level if you omit `project_id` and `account_id`.

@ -119,6 +123,7 @@ limits:

## QuickStart

The cluster version of VictoriaMetrics is required for rate limiting.

```bash
# start datasource for cluster metrics
|

## Configuration

The shortlist of configuration flags includes the following:

```console
  -clusterMode
    	enable this for the cluster version
@ -276,12 +282,11 @@ The shortlist of configuration flags include the following:

## Troubleshooting

* Access control:
  * incorrect `jwt` format, try <https://jwt.io/#debugger-io> with our tokens
  * expired token, check the `exp` field.
* Rate Limiting:
  * `scrape_interval` at the datasource, reduce it to apply limits faster.

## Limitations

* Access Control:

@ -6,12 +6,11 @@ VictoriaMetrics `v1.29.0` and newer versions must be used for working with the r

The restore process can be interrupted at any time. It is automatically resumed from the interruption point
when restarting `vmrestore` with the same args.

## Usage

VictoriaMetrics must be stopped during the restore process.

```bash
vmrestore -src=gs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore>
```

@ -24,13 +23,11 @@ vmrestore -src=gs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/re

The original `-storageDataPath` directory may contain old files. They will be substituted by the files from the backup,
i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/questions/476041/how-do-i-make-rsync-delete-files-that-have-been-deleted-from-the-source-folder).

## Troubleshooting

* If `vmrestore` eats all the network bandwidth, then set `-maxBytesPerSecond` to the desired value (see the example below).
* If `vmrestore` has been interrupted due to a temporary error, then just restart it with the same args. It will resume the restore process.
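
For instance, a sketch of the usage command from above with the download bandwidth capped at roughly 10 MB/s (the bucket and paths are placeholders):

```bash
vmrestore -src=gs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore> \
  -maxBytesPerSecond=10000000
```
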

## Advanced usage

* Obtaining credentials from a file.

@ -38,6 +35,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q

Add flag `-credsFilePath=/etc/credentials` with the following content:

for s3 (aws, minio or other s3-compatible storages):

```bash
[default]
aws_access_key_id=theaccesskey
@ -45,6 +43,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
```

for gce cloud storage:

```json
{
"type": "service_account",
@ -62,7 +61,8 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q

* Usage with s3 custom url endpoint. It is possible to use `vmrestore` with s3-api-compatible storages, like minio, cloudian and others.
You have to add a custom url endpoint with a flag:

```bash
# for minio:
-customS3Endpoint=http://localhost:9000
@ -72,7 +72,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q

* Run `vmrestore -help` in order to see all the available options:

```bash
  -concurrency int
    	The number of concurrent workers. Higher concurrency may reduce restore duration (default 10)
  -configFilePath string
@ -155,12 +155,10 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q
    	Show VictoriaMetrics version
```

## How to build from sources

It is recommended to use [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) - see `vmutils-*` archives there.

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.17.