Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git (synced 2024-11-21 14:44:00 +00:00)
commit 85110762b5 (parent 0bb0338bee)
fixed broken links
9 changed files with 66 additions and 65 deletions
@@ -33,7 +33,7 @@ See also [LTS releases](./LTS-releases.md).
 
 * SECURITY: upgrade base docker image (Alpine) from 3.20.1 to 3.20.2. See [alpine 3.20.2 release notes](https://alpinelinux.org/posts/Alpine-3.20.2-released.html).
 
-* FEATURE: [VictoriaMetrics Single-Node](https://docs.victoriametrics.com/) and [VictoriaMetrics Cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): Refactor the code located in the `MustAddRows` function of `vmstorage` to improve performance and readability.
+* FEATURE: [VictoriaMetrics Single-Node](./README.md) and [VictoriaMetrics Cluster](./Cluster-VictoriaMetrics.md): Refactor the code located in the `MustAddRows` function of `vmstorage` to improve performance and readability.
 * FEATURE: [vmauth](./vmauth.md): add `keep_original_host` option, which can be used for proxying the original `Host` header from client request to the backend. By default the backend host is used as `Host` header when proxying requests to the configured backends. See [these docs](./vmauth.md#host-http-header).
 * FEATURE: [vmauth](./vmauth.md) now returns HTTP 502 status code when all upstream backends are not available. Previously, it returned HTTP 503 status code. This change aligns vmauth behavior with other well-known reverse-proxies behavior.
 
@@ -35,7 +35,7 @@ Just download VictoriaMetrics and follow
 Then read [Prometheus setup](./Single-Server-VictoriaMetrics.md#prometheus-setup)
 and [Grafana setup](./Single-Server-VictoriaMetrics.md#grafana-setup) docs.
 
-VictoriaMetrics is developed at a fast pace, so it is recommended periodically checking the [CHANGELOG](./CHANGELOG.md) and performing [regular upgrades](how-to-upgrade-victoriametrics).
+VictoriaMetrics is developed at a fast pace, so it is recommended periodically checking the [CHANGELOG](./CHANGELOG.md) and performing [regular upgrades](./#how-to-upgrade-victoriametrics).
 
 
 ### Starting VictoriaMetrics Single Node via Docker {anchor="starting-vm-single-via-docker"}
@@ -53,7 +53,7 @@ docker run -it --rm -v `pwd`/victoria-metrics-data:/victoria-metrics-data -p 842
 
 
 Open <a href="http://localhost:8428">http://localhost:8428</a> in web browser
-and read [these docs](operation).
+and read [these docs](./#operation).
 
 There is also [VictoriaMetrics cluster](./Cluster-VictoriaMetrics.md)
 - horizontally scalable installation, which scales to multiple nodes.
@@ -128,7 +128,7 @@ WantedBy=multi-user.target
 END
 ```
 
-Extra [command-line flags](list-of-command-line-flags) can be added to `ExecStart` line.
+Extra [command-line flags](./#list-of-command-line-flags) can be added to `ExecStart` line.
 
 If you want to deploy VictoriaMetrics Single Node as a Windows Service review the [running as a Windows service docs](./Single-Server-VictoriaMetrics.md#running-as-windows-service).
 
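For context on the hunk above, extra flags can also be appended to `ExecStart` via a systemd drop-in instead of editing the unit in place. A hedged sketch: the binary path, data path, and `-retentionPeriod` value below are illustrative assumptions, not taken from the commit, so adapt them to the unit from the guide.

```shell
# Generate a systemd drop-in that replaces ExecStart with one carrying an
# extra -retentionPeriod flag. Written to a temp dir so the sketch runs offline.
dir="$(mktemp -d)"
cat > "$dir/override.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/local/bin/victoria-metrics-prod -storageDataPath=/var/lib/victoria-metrics -retentionPeriod=12
EOF
cat "$dir/override.conf"
# To install for real:
#   sudo mkdir -p /etc/systemd/system/victoriametrics.service.d
#   sudo cp "$dir/override.conf" /etc/systemd/system/victoriametrics.service.d/
#   sudo systemctl daemon-reload && sudo systemctl restart victoriametrics.service
```

The empty `ExecStart=` line is required by systemd to clear the original value before setting a new one.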
@@ -146,14 +146,14 @@ sudo systemctl daemon-reload && sudo systemctl enable --now victoriametrics.serv
 sudo systemctl status victoriametrics.service
 ```
 
-8. After VictoriaMetrics is in `Running` state, verify [vmui](vmui) is working
+8. After VictoriaMetrics is in `Running` state, verify [vmui](./#vmui) is working
 by going to `http://<ip_or_hostname>:8428/vmui`.
 
 
 ### Starting VictoriaMetrics Cluster from Binaries {anchor="starting-vm-cluster-from-binaries"}
 
 VictoriaMetrics cluster consists of [3 components](./Cluster-VictoriaMetrics.md#architecture-overview).
-It is recommended to run these components in the same private network (for [security reasons](security)),
+It is recommended to run these components in the same private network (for [security reasons](./#security)),
 but on the separate physical nodes for the best performance.
 
 On all nodes you will need to do the following:
@@ -327,7 +327,7 @@ sudo systemctl status vmselect.service
 ```
 
 5. After `vmselect` is in `Running` state, confirm the service is healthy by visiting `http://<ip_or_hostname>:8481/select/0/vmui` link.
-It should open [vmui](vmui) page.
+It should open [vmui](./#vmui) page.
 
 ## Write data
 
@@ -429,5 +429,5 @@ To avoid excessive resource usage or performance degradation limits must be in p
 
 ### Security recommendations
 
-* [Security recommendations for single-node VictoriaMetrics](security)
+* [Security recommendations for single-node VictoriaMetrics](./#security)
 * [Security recommendations for cluster version of VictoriaMetrics](./Cluster-VictoriaMetrics.md#security)
@@ -9,6 +9,7 @@ menu:
 weight: 5
 aliases:
 - /anomaly-detection/CHANGELOG.html
+- /anomaly-detection/CHANGELOG/
 ---
 Please find the changelog for VictoriaMetrics Anomaly Detection below.
 
@@ -21,7 +21,7 @@
 
 All the service parameters are defined in a config file.
 
-> **Note**: Starting from [1.10.0](../../CHANGELOG.md#v1100), each `vmanomaly` configuration file can support more that one model type. To utilize *different models* on your data, it is no longer necessary to run multiple instances of the `vmanomaly` process. Please refer to [model](../..//components/models.md) config section for more details.
+> **Note**: Starting from [1.10.0](../../CHANGELOG.md#v1100), each `vmanomaly` configuration file can support more that one model type. To utilize *different models* on your data, it is no longer necessary to run multiple instances of the `vmanomaly` process. Please refer to [model](../../components/models.md) config section for more details.
 
 > **Note**: Starting from [1.11.0](../../CHANGELOG.md#v1110), each `vmanomaly` configuration file can support more that one model type, each attached to one (or more) schedulers. To utilize *different models* with *different schedulers* on your data, it is no longer necessary to run multiple instances of the `vmanomaly` process. Please refer to [model](../../components/models.md#schedulers) and [scheduler](../../components/scheduler.md) config sections for more details.
 
@@ -108,7 +108,7 @@ Metrics could be sent to VictoriaMetrics via OpenTelemetry instrumentation libra
 In our example, we'll create a WEB server in [Golang](https://go.dev/) and instrument it with metrics.
 
 ### Building the Go application instrumented with metrics
-Copy the go file from [here](/guides/getting-started-with-opentelemetry-app.go-collector.example). This will give you a basic implementation of a dice roll WEB server with the urls for opentelemetry-collector pointing to localhost:4318.
+Copy the go file from [here](./getting-started-with-opentelemetry-app.go-collector.example). This will give you a basic implementation of a dice roll WEB server with the urls for opentelemetry-collector pointing to localhost:4318.
 In the same directory run the following command to create the `go.mod` file:
 ```sh
 go mod init vm/otel
@@ -175,7 +175,7 @@ In our example, we'll create a WEB server in [Golang](https://go.dev/) and instr
 
 ### Building the Go application instrumented with metrics
 
-See the full source code of the example [here](/guides/getting-started-with-opentelemetry-app.go.example).
+See the full source code of the example [here](./getting-started-with-opentelemetry-app.go.example).
 
 The list of OpenTelemetry dependencies for `go.mod` is the following:
 
@@ -322,7 +322,7 @@ func newMetricsController(ctx context.Context) (*controller.Controller, error) {
 
 This controller will collect and push collected metrics to VictoriaMetrics address with interval of `10s`.
 
-See the full source code of the example [here](/guides/getting-started-with-opentelemetry-app.go.example).
+See the full source code of the example [here](./getting-started-with-opentelemetry-app.go.example).
 
 ### Test metrics ingestion
 
@@ -318,22 +318,22 @@ vmagent will write data into VictoriaMetrics single-node and cluster(with tenant
 
 Grafana datasources configuration will be the following:
 
-[Test datasources](grafana-vmgateway-openid-configuration/grafana-test-datasources.webp)
+[Test datasources](grafana-test-datasources.webp)
 
 Let's login as user with `team=dev` labels limitation set via claims.
 
 Using `vmgateway-cluster` results into `No data` response as proxied request will go to tenant `0:1`.
 Since vmagent is only configured to write to `0:0` `No data` is an expected response.
 
-[Dev cluster nodata](grafana-vmgateway-openid-configuration/dev-cluster-nodata.webp)
+[Dev cluster nodata](dev-cluster-nodata.webp)
 
 Switching to `vmgateway-single` does have data. Note that it is limited to metrics with `team=dev` label.
 
-[Dev single data](grafana-vmgateway-openid-configuration/dev-single-data.webp)
+[Dev single data](dev-single-data.webp)
 
 Now lets login as user with `team=admin`.
 
 Both cluster and single node datasources now return metrics for `team=admin`.
 
-[Admin cluster data](grafana-vmgateway-openid-configuration/admin-cluster-data.webp)
-[Admin single data](grafana-vmgateway-openid-configuration/admin-single-data.webp)
+[Admin cluster data](admin-cluster-data.webp)
+[Admin single data](admin-single-data.webp)
@@ -18,8 +18,8 @@ sometimes old known solutions just can't keep up with the new expectations.
 VictoriaMetrics is a high-performance opensource time series database specifically designed to deal with huge volumes of
 monitoring data while remaining cost-efficient at the same time. Many companies are choosing to migrate from InfluxDB to
 VictoriaMetrics specifically for performance and scalability reasons. Along them see case studies provided by
-[ARNES](./CaseStudies.md#arnes)
-and [Brandwatch](./CaseStudies.md#brandwatch).
+[ARNES](../CaseStudies.md#arnes)
+and [Brandwatch](../CaseStudies.md#brandwatch).
 
 This guide will cover the differences between two solutions, most commonly asked questions, and approaches for migrating
 from InfluxDB to VictoriaMetrics.
@@ -28,13 +28,13 @@ from InfluxDB to VictoriaMetrics.
 
 While readers are likely familiar
 with [InfluxDB key concepts](https://docs.influxdata.com/influxdb/v2.2/reference/key-concepts/), the data model of
-VictoriaMetrics is something [new to explore](./keyConcepts.md#data-model). Let's start
+VictoriaMetrics is something [new to explore](../keyConcepts.md#data-model). Let's start
 with similarities and differences:
 
 * both solutions are **schemaless**, which means there is no need to define metrics or their tags in advance;
 * multidimensional data support is implemented
 via [tags](https://docs.influxdata.com/influxdb/v2.2/reference/key-concepts/data-elements/#tags)
-in InfluxDB and via [labels](./keyConcepts.md#structure-of-a-metric) in
+in InfluxDB and via [labels](../keyConcepts.md#structure-of-a-metric) in
 VictoriaMetrics. However, labels in VictoriaMetrics are always `strings`, while InfluxDB supports multiple data types;
 * timestamps are stored with nanosecond resolution in InfluxDB, while in VictoriaMetrics it is **milliseconds**;
 * in VictoriaMetrics metric value is always `float64`, while InfluxDB supports multiple data types.
@@ -47,8 +47,8 @@ with similarities and differences:
 [buckets](https://docs.influxdata.com/influxdb/v2.2/reference/key-concepts/data-elements/#bucket)
 or [organizations](https://docs.influxdata.com/influxdb/v2.2/reference/key-concepts/data-elements/#organization). All
 data in VictoriaMetrics is stored in a global namespace or within
-a [tenant](./Cluster-VictoriaMetrics.md#multitenancy).
-See more about multi-tenancy [here](./keyConcepts.md#multi-tenancy).
+a [tenant](../Cluster-VictoriaMetrics.md#multitenancy).
+See more about multi-tenancy [here](../keyConcepts.md#multi-tenancy).
 
 Let's consider the
 following [sample data](https://docs.influxdata.com/influxdb/v2.2/reference/key-concepts/data-elements/#sample-data)
@@ -78,7 +78,7 @@ VictoriaMetrics, so lookups by names or labels have the same query speed.
 ## Write data
 
 VictoriaMetrics
-supports [InfluxDB line protocol](./#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf)
+supports [InfluxDB line protocol](../#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf)
 for data ingestion. For example, to write a measurement to VictoriaMetrics we need to send an HTTP POST request with
 payload in a line protocol format:
 
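The ingestion path the hunk above documents accepts plain Influx line protocol over HTTP POST. A minimal sketch, assuming a local single-node instance on the default port 8428; the `curl` call is commented out so the snippet runs offline:

```shell
# Build an Influx line-protocol sample: measurement,tag=value field=value timestamp
measurement='foo'
tags='instance=localhost'
fields='bar=1.5'
ts="$(date +%s%N)"   # nanosecond timestamp, as line protocol expects on Linux
payload="${measurement},${tags} ${fields} ${ts}"
echo "$payload"
# To actually ingest (requires a running VictoriaMetrics instance):
# curl -X POST -d "$payload" 'http://localhost:8428/write'
```

VictoriaMetrics exposes the Influx-compatible `/write` endpoint, so agents such as Telegraf can point at it unchanged.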
@@ -116,7 +116,7 @@ The expected response is the following:
 ```
 
 Please note, VictoriaMetrics performed additional
-[data mapping](./#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf)
+[data mapping](../#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf)
 to the data ingested via InfluxDB line protocol.
 
 Support of InfluxDB line protocol also means VictoriaMetrics is compatible with
@@ -129,20 +129,20 @@ add `http://<victoriametric-addr>:8428` URL to Telegraf configs:
 ```
 
 In addition to InfluxDB line protocol, VictoriaMetrics supports many other ways for
-[metrics collection](./keyConcepts.md#write-data).
+[metrics collection](../keyConcepts.md#write-data).
 
 ## Query data
 
 VictoriaMetrics does not have a command-line interface (CLI). Instead, it provides
-an [HTTP API](./Single-Server-VictoriaMetrics.md#prometheus-querying-api-usage)
+an [HTTP API](../Single-server-VictoriaMetrics.md#prometheus-querying-api-usage)
 for serving read queries. This API is used in various integrations such as
-[Grafana](./Single-Server-VictoriaMetrics.md#grafana-setup). The same API is also used
-by [VMUI](./Single-Server-VictoriaMetrics.md#vmui) - a graphical User Interface for
+[Grafana](../Single-Server-VictoriaMetrics.md#grafana-setup). The same API is also used
+by [VMUI](../Single-Server-VictoriaMetrics.md#vmui) - a graphical User Interface for
 querying and visualizing metrics:
 
 ![Migrate from Influx](migrate-from-influx_vmui.webp)
 
-See more about [how to query data in VictoriaMetrics](./keyConcepts.md#query-data).
+See more about [how to query data in VictoriaMetrics](../keyConcepts.md#query-data).
 
 ### Basic concepts
 
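The read path the hunk above links to is the Prometheus-compatible HTTP API. A hedged sketch of composing a range query; the address and time range are illustrative assumptions (a local single-node on port 8428), and the live `curl` call is commented out so the snippet runs offline:

```shell
# Compose a range query for the Prometheus-compatible HTTP API.
base='http://localhost:8428'
query='foo_bar{instance="localhost"}'
start='2024-01-01T00:00:00Z'
end='2024-01-01T01:00:00Z'
step='1m'
echo "${base}/api/v1/query_range?query=${query}&start=${start}&end=${end}&step=${step}"
# With a running instance, let curl handle URL-encoding of the query:
# curl -sG "${base}/api/v1/query_range" --data-urlencode "query=${query}" \
#      -d "start=${start}" -d "end=${end}" -d "step=${step}"
```

The response is Prometheus-style JSON, which is why Grafana's Prometheus datasource works against it directly.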
@@ -186,19 +186,19 @@ Having this, let's import the same data sample in VictoriaMetrics and plot it in
 InfluxQL query might be translated to MetricsQL let's break it into components first:
 
 * `SELECT last("bar") FROM "foo"` - all requests
-to [instant](./keyConcepts.md#instant-query)
-or [range](./keyConcepts.md#range-query) VictoriaMetrics APIs are reads, so no need
+to [instant](../keyConcepts.md#instant-query)
+or [range](../keyConcepts.md#range-query) VictoriaMetrics APIs are reads, so no need
 to specify the `SELECT` statement. There are no `measurements` or `fields` in VictoriaMetrics, so the whole expression
 can be replaced with `foo_bar` in MetricsQL;
-* `WHERE ("instance" = 'localhost')`- [filtering by labels](./keyConcepts.md#filtering)
+* `WHERE ("instance" = 'localhost')`- [filtering by labels](../keyConcepts.md#filtering)
 in MetricsQL requires specifying the filter in curly braces next to the metric name. So in MetricsQL filter expression
 will be translated to `{instance="localhost"}`;
 * `WHERE $timeFilter` - filtering by time is done via request params sent along with query, so in MetricsQL no need to
 specify this filter;
 * `GROUP BY time(1m)` - grouping by time is done by default
-in [range](./keyConcepts.md#range-query) API according to specified `step` param.
+in [range](../keyConcepts.md#range-query) API according to specified `step` param.
 This param is also a part of params sent along with request. See how to perform additional
-[aggregations and grouping via MetricsQL](./keyConcepts.md#aggregation-and-grouping-functions)
+[aggregations and grouping via MetricsQL](../keyConcepts.md#aggregation-and-grouping-functions)
 .
 
 In result, executing the `foo_bar{instance="localhost"}` MetricsQL expression with `step=1m` for the same set of data in
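The translation the hunk above walks through can be restated side by side; nothing here is new syntax, it only collapses the InfluxQL components from the list into the resulting MetricsQL expression plus request params:

```shell
# InfluxQL components -> one MetricsQL expression + query args.
influxql='SELECT last("bar") FROM "foo" WHERE ("instance" = '\''localhost'\'') AND $timeFilter GROUP BY time(1m)'
# measurement "foo" + field "bar" become the metric name foo_bar;
# the tag filter moves into curly braces; time filtering and grouping
# move into the start/end/step args of /api/v1/query_range.
metricsql='foo_bar{instance="localhost"}'
step='1m'
printf '%s\n' "InfluxQL:  ${influxql}" "MetricsQL: ${metricsql} (step=${step})"
```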
@@ -208,13 +208,13 @@ Grafana will have the following form:
 
 Visualizations from both databases are a bit different - VictoriaMetrics shows some extra points
 filling the gaps in the graph. This behavior is described in more
-detail [here](./keyConcepts.md#range-query). In InfluxDB, we can achieve a similar
+detail [here](../keyConcepts.md#range-query). In InfluxDB, we can achieve a similar
 behavior by adding `fill(previous)` to the query.
 
 VictoriaMetrics fills the gaps on the graph assuming time series are always continuous and not discrete.
 To limit the interval on which VictoriaMetrics will try to fill the gaps, set `-search.setLookbackToStep`
 command-line flag. This limits the gap filling to a single `step` interval passed to
-[/api/v1/query_range](./keyConcepts.md#range-query).
+[/api/v1/query_range](../keyConcepts.md#range-query).
 This behavior is close to InfluxDB data model.
 
 
@@ -227,56 +227,56 @@ about 230 PromQL queries in it! But a closer look at those queries shows the fol
 
 * ~120 queries are just selecting a metric with label filters,
 e.g. `node_textfile_scrape_error{instance="$node",job="$job"}`;
-* ~80 queries are using [rate](./MetricsQL.md#rate) function for selected metric,
+* ~80 queries are using [rate](../MetricsQL.md#rate) function for selected metric,
 e.g. `rate(node_netstat_Tcp_InSegs{instance=\"$node\",job=\"$job\"})`
 * and the rest
-are [aggregation functions](./keyConcepts.md#aggregation-and-grouping-functions)
-like [sum](./MetricsQL.md#sum)
-or [count](./MetricsQL.md#count).
+are [aggregation functions](../keyConcepts.md#aggregation-and-grouping-functions)
+like [sum](../MetricsQL.md#sum)
+or [count](../MetricsQL.md#count).
 
 To get a better understanding of how MetricsQL works, see the following resources:
 
-* [MetricsQL concepts](./keyConcepts.md#metricsql);
-* [MetricsQL functions](./MetricsQL.md);
+* [MetricsQL concepts](../keyConcepts.md#metricsql);
+* [MetricsQL functions](../MetricsQL.md);
 * [PromQL tutorial for beginners](https://valyala.medium.com/promql-tutorial-for-beginners-9ab455142085).
 
 ## How to migrate current data from InfluxDB to VictoriaMetrics
 
 Migrating data from other TSDBs to VictoriaMetrics is as simple as importing data via any of
-[supported formats](./keyConcepts.md#push-model).
+[supported formats](../keyConcepts.md#push-model).
 
-But migration from InfluxDB might get easier when using [vmctl](./vmctl.md) -
+But migration from InfluxDB might get easier when using [vmctl](../vmctl.md) -
 VictoriaMetrics command-line tool. See more about
-migrating [from InfluxDB v1.x versions](./vmctl.md#migrating-data-from-influxdb-1x).
+migrating [from InfluxDB v1.x versions](../vmctl.md#migrating-data-from-influxdb-1x).
 Migrating data from InfluxDB v2.x is not supported yet. But there is
-useful [3rd party solution](./vmctl.md#migrating-data-from-influxdb-2x) for this.
+useful [3rd party solution](../vmctl.md#migrating-data-from-influxdb-2x) for this.
 
 Please note, that data migration is a backfilling process. So, please
-consider [backfilling tips](./Single-Server-VictoriaMetrics.md#backfilling).
+consider [backfilling tips](../Single-server-VictoriaMetrics.md#backfilling).
 
 ## Frequently asked questions
 
 * How does VictoriaMetrics compare to InfluxDB?
-  * _[Answer](./FAQ.md#how-does-victoriametrics-compare-to-influxdb)_
+  * _[Answer](../FAQ.md#how-does-victoriametrics-compare-to-influxdb)_
 * Why don't VictoriaMetrics support Remote Read API, so I don't need to learn MetricsQL?
-  * _[Answer](./FAQ.md#why-doesnt-victoriametrics-support-the-prometheus-remote-read-api)_
+  * _[Answer](../FAQ.md#why-doesnt-victoriametrics-support-the-prometheus-remote-read-api)_
 * The PromQL and MetricsQL are often mentioned together - why is that?
   * _MetricsQL - query language inspired by PromQL. MetricsQL is backward-compatible with PromQL, so Grafana
 dashboards backed by Prometheus datasource should work the same after switching from Prometheus to
 VictoriaMetrics. Both languages mostly share the same concepts with slight differences._
 * Query returns more data points than expected - why?
   * _VictoriaMetrics may return non-existing data points if `step` param is lower than the actual data resolution. See
-more about this [here](./keyConcepts.md#range-query)._
+more about this [here](../keyConcepts.md#range-query)._
 * How do I get the `real` last data point, not `ephemeral`?
-  * _[last_over_time](./MetricsQL.md#last_over_time) function can be used for
+  * _[last_over_time](../MetricsQL.md#last_over_time) function can be used for
 limiting the lookbehind window for calculated data. For example, `last_over_time(metric[10s])` would return
 calculated samples only if the real samples are located closer than 10 seconds to the calculated timestamps
 according to
 `start`, `end` and `step` query args passed
-to [range query](./keyConcepts.md#range-query)._
+to [range query](../keyConcepts.md#range-query)._
 * How do I get raw data points with MetricsQL?
   * _For getting raw data points specify the interval at which you want them in square brackets and send
-as [instant query](./keyConcepts.md#instant-query). For
+as [instant query](../keyConcepts.md#instant-query). For
 example, `GET api/v1/query?query=my_metric[5m]&time=<time>` will return raw samples for `my_metric` in interval
 from `<time>` to `<time>-5m`._
 * Can you have multiple aggregators in a MetricsQL query, e.g. `SELECT MAX(field), MIN(field) ...`?
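The raw-samples FAQ entry above boils down to sending the metric with a lookbehind window in square brackets to the instant-query endpoint. A small sketch; the commented `curl` assumes a local single-node instance, and `<time>` is left as a placeholder exactly as in the FAQ:

```shell
# Raw samples for the last 5 minutes come from an *instant* query
# (/api/v1/query), not a range query.
metric='my_metric'
window='5m'
query="${metric}[${window}]"
echo "GET /api/v1/query?query=${query}&time=<time>"
# With a running instance:
# curl -sG 'http://localhost:8428/api/v1/query' --data-urlencode "query=${query}"
```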
@@ -667,7 +667,7 @@ Please note, [replay](#rules-backfilling) feature may be used for transforming h
 
 Flags `-remoteRead.url` and `-notifier.url` are omitted since we assume only recording rules are used.
 
-See also [stream aggregation](./stream-aggregation.md) and [downsampling](downsampling).
+See also [stream aggregation](./stream-aggregation.md) and [downsampling](./#downsampling).
 
 #### Multiple remote writes
 
@@ -172,7 +172,7 @@ info app/vmbackupmanager/retention.go:106 daily backups to delete [daily/2
 
 The result on the GCS bucket. We see only 3 daily backups:
 
-[retention policy daily after retention cycle](vmbackupmanager_rp_daily_2.webp "retention policy daily after retention cycle")
+![retention policy daily after retention cycle](vmbackupmanager_rp_daily_2.webp "retention policy daily after retention cycle")
 
 ### Protection backups against deletion by retention policy
 
@@ -440,7 +440,7 @@ command-line flags:
  -customS3Endpoint string
     Custom S3 endpoint for use with S3-compatible storages (e.g. MinIO). S3 is used if not set
  -deleteAllObjectVersions
-    Whether to prune previous object versions when deleting an object. By default, when object storage has versioning enabled deleting the file removes only current version. This option forces removal of all previous versions. See: ./vmbackup.md#permanent-deletion-of-objects-in-s3-compatible-storages
+    Whether to prune previous object versions when deleting an object. By default, when object storage has versioning enabled deleting the file removes only current version. This option forces removal of all previous versions. See: {{% ref "./vmbackup.md#permanent-deletion-of-objects-in-s3-compatible-storages" %}}
  -disableDaily
     Disable daily run. Default false
  -disableHourly
@@ -454,11 +454,11 @@ command-line flags:
  -enableTCP6
     Whether to enable IPv6 for listening and dialing. By default, only IPv4 TCP and UDP are used
  -envflag.enable
-    Whether to enable reading flags from environment variables in addition to the command line. Command line flag values have priority over values from environment vars. Flags are read only from the command line if this flag isn't set. See ./#environment-variables for more details
+    Whether to enable reading flags from environment variables in addition to the command line. Command line flag values have priority over values from environment vars. Flags are read only from the command line if this flag isn't set. See {{% ref "./#environment-variables" %}} for more details
  -envflag.prefix string
     Prefix for environment variables if -envflag.enable is set
  -eula
-    Deprecated, please use -license or -licenseFile flags instead. By specifying this flag, you confirm that you have an enterprise license and accept the ESA https://victoriametrics.com/legal/esa/ . This flag is available only in Enterprise binaries. See ./enterprise.md
+    Deprecated, please use -license or -licenseFile flags instead. By specifying this flag, you confirm that you have an enterprise license and accept the ESA https://victoriametrics.com/legal/esa/ . This flag is available only in Enterprise binaries. See {{% ref "./enterprise.md" %}}
  -filestream.disableFadvise
     Whether to disable fadvise() syscall when reading large data files. The fadvise() syscall prevents from eviction of recently accessed data from OS page cache during background merges and backups. In some rare cases it is better to disable the syscall if it uses too much CPU
  -flagsAuthKey value
@@ -544,11 +544,11 @@ command-line flags:
     Auth key for /metrics endpoint. It must be passed via authKey query arg. It overrides -httpAuth.*
     Flag value can be read from the given file when using -metricsAuthKey=file:///abs/path/to/file or -metricsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -metricsAuthKey=http://host/path or -metricsAuthKey=https://host/path
  -mtls array
-    Whether to require valid client certificate for https requests to the corresponding -httpListenAddr . This flag works only if -tls flag is set. See also -mtlsCAFile . This flag is available only in Enterprise binaries. See ./enterprise.md
+    Whether to require valid client certificate for https requests to the corresponding -httpListenAddr . This flag works only if -tls flag is set. See also -mtlsCAFile . This flag is available only in Enterprise binaries. See {{% ref "./enterprise.md" %}}
     Supports array of values separated by comma or specified via multiple flags.
     Empty values are set to false.
  -mtlsCAFile array
-    Optional path to TLS Root CA for verifying client certificates at the corresponding -httpListenAddr when -mtls is enabled. By default the host system TLS Root CA is used for client certificate verification. This flag is available only in Enterprise binaries. See ./enterprise.md
+    Optional path to TLS Root CA for verifying client certificates at the corresponding -httpListenAddr when -mtls is enabled. By default the host system TLS Root CA is used for client certificate verification. This flag is available only in Enterprise binaries. See {{% ref "./enterprise.md" %}}
     Supports an array of values separated by comma or specified via multiple flags.
     Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
  -pprofAuthKey value
@@ -567,7 +567,7 @@ command-line flags:
  -pushmetrics.interval duration
     Interval for pushing metrics to every -pushmetrics.url (default 10s)
  -pushmetrics.url array
-    Optional URL to push metrics exposed at /metrics page. See ./#push-metrics . By default, metrics exposed at /metrics page aren't pushed to any remote storage
+    Optional URL to push metrics exposed at /metrics page. See {{% ref "./#push-metrics" %}}. By default, metrics exposed at /metrics page aren't pushed to any remote storage
     Supports an array of values separated by comma or specified via multiple flags.
     Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
  -runOnStart
@@ -590,11 +590,11 @@ command-line flags:
     Supports array of values separated by comma or specified via multiple flags.
     Empty values are set to false.
  -tlsAutocertCacheDir string
-    Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See ./enterprise.md
+    Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See {{% ref "./enterprise.md" %}}
  -tlsAutocertEmail string
-    Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See ./enterprise.md
+    Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir .This flag is available only in Enterprise binaries. See {{% ref "./enterprise.md" %}}
  -tlsAutocertHosts array
-    Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See ./enterprise.md
+    Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See {{% ref "./enterprise.md" %}}
     Supports an array of values separated by comma or specified via multiple flags.
     Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
  -tlsCertFile array