diff --git a/docs/CHANGELOG.md b/docs/CHANGELOG.md
index fb9c7f292..b6363bbad 100644
--- a/docs/CHANGELOG.md
+++ b/docs/CHANGELOG.md
@@ -33,7 +33,7 @@ See also [LTS releases](./LTS-releases.md).
* SECURITY: upgrade base docker image (Alpine) from 3.20.1 to 3.20.2. See [alpine 3.20.2 release notes](https://alpinelinux.org/posts/Alpine-3.20.2-released.html).
-* FEATURE: [VictoriaMetrics Single-Node](https://docs.victoriametrics.com/) and [VictoriaMetrics Cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): Refactor the code located in the `MustAddRows` function of `vmstorage` to improve performance and readability.
+* FEATURE: [VictoriaMetrics Single-Node](./README.md) and [VictoriaMetrics Cluster](./Cluster-VictoriaMetrics.md): Refactor the code located in the `MustAddRows` function of `vmstorage` to improve performance and readability.
* FEATURE: [vmauth](./vmauth.md): add `keep_original_host` option, which can be used for proxying the original `Host` header from client request to the backend. By default the backend host is used as `Host` header when proxying requests to the configured backends. See [these docs](./vmauth.md#host-http-header).
* FEATURE: [vmauth](./vmauth.md) now returns HTTP 502 status code when all upstream backends are not available. Previously, it returned HTTP 503 status code. This change aligns vmauth behavior with that of other well-known reverse proxies.
diff --git a/docs/Quick-Start.md b/docs/Quick-Start.md
index 404be75c7..ba5ec7fa6 100644
--- a/docs/Quick-Start.md
+++ b/docs/Quick-Start.md
@@ -35,7 +35,7 @@ Just download VictoriaMetrics and follow
Then read [Prometheus setup](./Single-Server-VictoriaMetrics.md#prometheus-setup)
and [Grafana setup](./Single-Server-VictoriaMetrics.md#grafana-setup) docs.
-VictoriaMetrics is developed at a fast pace, so it is recommended periodically checking the [CHANGELOG](./CHANGELOG.md) and performing [regular upgrades](how-to-upgrade-victoriametrics).
+VictoriaMetrics is developed at a fast pace, so it is recommended to periodically check the [CHANGELOG](./CHANGELOG.md) and perform [regular upgrades](./#how-to-upgrade-victoriametrics).
### Starting VictoriaMetrics Single Node via Docker {anchor="starting-vm-single-via-docker"}
@@ -53,7 +53,7 @@ docker run -it --rm -v `pwd`/victoria-metrics-data:/victoria-metrics-data -p 842
Open http://localhost:8428 in a web browser
-and read [these docs](operation).
+and read [these docs](./#operation).
There is also [VictoriaMetrics cluster](./Cluster-VictoriaMetrics.md)
- horizontally scalable installation, which scales to multiple nodes.
@@ -128,7 +128,7 @@ WantedBy=multi-user.target
END
```
-Extra [command-line flags](list-of-command-line-flags) can be added to `ExecStart` line.
+Extra [command-line flags](./#list-of-command-line-flags) can be added to the `ExecStart` line.
If you want to deploy VictoriaMetrics Single Node as a Windows Service review the [running as a Windows service docs](./Single-Server-VictoriaMetrics.md#running-as-windows-service).
@@ -146,14 +146,14 @@ sudo systemctl daemon-reload && sudo systemctl enable --now victoriametrics.serv
sudo systemctl status victoriametrics.service
```
-8. After VictoriaMetrics is in `Running` state, verify [vmui](vmui) is working
+8. After VictoriaMetrics is in the `Running` state, verify [vmui](./#vmui) is working
by going to `http://:8428/vmui`.
### Starting VictoriaMetrics Cluster from Binaries {anchor="starting-vm-cluster-from-binaries"}
VictoriaMetrics cluster consists of [3 components](./Cluster-VictoriaMetrics.md#architecture-overview).
-It is recommended to run these components in the same private network (for [security reasons](security)),
+It is recommended to run these components in the same private network (for [security reasons](./#security)),
but on separate physical nodes for the best performance.
On all nodes you will need to do the following:
@@ -327,7 +327,7 @@ sudo systemctl status vmselect.service
```
5. After `vmselect` is in the `Running` state, confirm the service is healthy by visiting the `http://:8481/select/0/vmui` link.
-It should open [vmui](vmui) page.
+It should open [vmui](./#vmui) page.
## Write data
@@ -429,5 +429,5 @@ To avoid excessive resource usage or performance degradation limits must be in p
### Security recommendations
-* [Security recommendations for single-node VictoriaMetrics](security)
+* [Security recommendations for single-node VictoriaMetrics](./#security)
* [Security recommendations for cluster version of VictoriaMetrics](./Cluster-VictoriaMetrics.md#security)
diff --git a/docs/anomaly-detection/CHANGELOG.md b/docs/anomaly-detection/CHANGELOG.md
index c9184f311..7af200757 100644
--- a/docs/anomaly-detection/CHANGELOG.md
+++ b/docs/anomaly-detection/CHANGELOG.md
@@ -9,6 +9,7 @@ menu:
weight: 5
aliases:
- /anomaly-detection/CHANGELOG.html
+- /anomaly-detection/CHANGELOG/
---
Please find the changelog for VictoriaMetrics Anomaly Detection below.
diff --git a/docs/anomaly-detection/guides/guide-vmanomaly-vmalert/README.md b/docs/anomaly-detection/guides/guide-vmanomaly-vmalert/README.md
index 8a518e9f3..fd03338e1 100644
--- a/docs/anomaly-detection/guides/guide-vmanomaly-vmalert/README.md
+++ b/docs/anomaly-detection/guides/guide-vmanomaly-vmalert/README.md
@@ -21,7 +21,7 @@
All the service parameters are defined in a config file.
-> **Note**: Starting from [1.10.0](../../CHANGELOG.md#v1100), each `vmanomaly` configuration file can support more that one model type. To utilize *different models* on your data, it is no longer necessary to run multiple instances of the `vmanomaly` process. Please refer to [model](../..//components/models.md) config section for more details.
+> **Note**: Starting from [1.10.0](../../CHANGELOG.md#v1100), each `vmanomaly` configuration file can support more than one model type. To utilize *different models* on your data, it is no longer necessary to run multiple instances of the `vmanomaly` process. Please refer to the [model](../../components/models.md) config section for more details.
> **Note**: Starting from [1.11.0](../../CHANGELOG.md#v1110), each `vmanomaly` configuration file can support more than one model type, each attached to one (or more) schedulers. To utilize *different models* with *different schedulers* on your data, it is no longer necessary to run multiple instances of the `vmanomaly` process. Please refer to the [model](../../components/models.md#schedulers) and [scheduler](../../components/scheduler.md) config sections for more details.
diff --git a/docs/guides/getting-started-with-opentelemetry.md b/docs/guides/getting-started-with-opentelemetry.md
index b23a918f5..d5bc160ff 100644
--- a/docs/guides/getting-started-with-opentelemetry.md
+++ b/docs/guides/getting-started-with-opentelemetry.md
@@ -108,7 +108,7 @@ Metrics could be sent to VictoriaMetrics via OpenTelemetry instrumentation libra
In our example, we'll create a WEB server in [Golang](https://go.dev/) and instrument it with metrics.
### Building the Go application instrumented with metrics
-Copy the go file from [here](/guides/getting-started-with-opentelemetry-app.go-collector.example). This will give you a basic implementation of a dice roll WEB server with the urls for opentelemetry-collector pointing to localhost:4318.
+Copy the Go file from [here](./getting-started-with-opentelemetry-app.go-collector.example). This will give you a basic implementation of a dice roll WEB server with the URLs for the opentelemetry-collector pointing to localhost:4318.
In the same directory run the following command to create the `go.mod` file:
```sh
go mod init vm/otel
@@ -175,7 +175,7 @@ In our example, we'll create a WEB server in [Golang](https://go.dev/) and instr
### Building the Go application instrumented with metrics
-See the full source code of the example [here](/guides/getting-started-with-opentelemetry-app.go.example).
+See the full source code of the example [here](./getting-started-with-opentelemetry-app.go.example).
The list of OpenTelemetry dependencies for `go.mod` is the following:
@@ -322,7 +322,7 @@ func newMetricsController(ctx context.Context) (*controller.Controller, error) {
This controller will collect metrics and push them to the VictoriaMetrics address at an interval of `10s`.
-See the full source code of the example [here](/guides/getting-started-with-opentelemetry-app.go.example).
+See the full source code of the example [here](./getting-started-with-opentelemetry-app.go.example).
### Test metrics ingestion
diff --git a/docs/guides/grafana-vmgateway-openid-configuration/README.md b/docs/guides/grafana-vmgateway-openid-configuration/README.md
index f523e62a3..407d0c49b 100644
--- a/docs/guides/grafana-vmgateway-openid-configuration/README.md
+++ b/docs/guides/grafana-vmgateway-openid-configuration/README.md
@@ -318,22 +318,22 @@ vmagent will write data into VictoriaMetrics single-node and cluster(with tenant
Grafana datasources configuration will be the following:
-[Test datasources](grafana-vmgateway-openid-configuration/grafana-test-datasources.webp)
+![Test datasources](grafana-test-datasources.webp)
Let's login as user with `team=dev` labels limitation set via claims.
Using `vmgateway-cluster` results in a `No data` response, as the proxied request will go to tenant `0:1`.
Since vmagent is only configured to write to `0:0`, `No data` is the expected response.
-[Dev cluster nodata](grafana-vmgateway-openid-configuration/dev-cluster-nodata.webp)
+![Dev cluster nodata](dev-cluster-nodata.webp)
Switching to `vmgateway-single` does return data. Note that it is limited to metrics with the `team=dev` label.
-[Dev single data](grafana-vmgateway-openid-configuration/dev-single-data.webp)
+![Dev single data](dev-single-data.webp)
Now let's log in as the user with `team=admin`.
Both cluster and single node datasources now return metrics for `team=admin`.
-[Admin cluster data](grafana-vmgateway-openid-configuration/admin-cluster-data.webp)
-[Admin single data](grafana-vmgateway-openid-configuration/admin-single-data.webp)
+![Admin cluster data](admin-cluster-data.webp)
+![Admin single data](admin-single-data.webp)
diff --git a/docs/guides/migrate-from-influx.md b/docs/guides/migrate-from-influx.md
index 441d1f30a..17880c62c 100644
--- a/docs/guides/migrate-from-influx.md
+++ b/docs/guides/migrate-from-influx.md
@@ -18,8 +18,8 @@ sometimes old known solutions just can't keep up with the new expectations.
VictoriaMetrics is a high-performance opensource time series database specifically designed to deal with huge volumes of
monitoring data while remaining cost-efficient at the same time. Many companies are choosing to migrate from InfluxDB to
VictoriaMetrics specifically for performance and scalability reasons. Among them are the case studies provided by
-[ARNES](./CaseStudies.md#arnes)
-and [Brandwatch](./CaseStudies.md#brandwatch).
+[ARNES](../CaseStudies.md#arnes)
+and [Brandwatch](../CaseStudies.md#brandwatch).
This guide will cover the differences between two solutions, most commonly asked questions, and approaches for migrating
from InfluxDB to VictoriaMetrics.
@@ -28,13 +28,13 @@ from InfluxDB to VictoriaMetrics.
While readers are likely familiar
with [InfluxDB key concepts](https://docs.influxdata.com/influxdb/v2.2/reference/key-concepts/), the data model of
-VictoriaMetrics is something [new to explore](./keyConcepts.md#data-model). Let's start
+VictoriaMetrics is something [new to explore](../keyConcepts.md#data-model). Let's start
with similarities and differences:
* both solutions are **schemaless**, which means there is no need to define metrics or their tags in advance;
* multidimensional data support is implemented
via [tags](https://docs.influxdata.com/influxdb/v2.2/reference/key-concepts/data-elements/#tags)
- in InfluxDB and via [labels](./keyConcepts.md#structure-of-a-metric) in
+ in InfluxDB and via [labels](../keyConcepts.md#structure-of-a-metric) in
VictoriaMetrics. However, labels in VictoriaMetrics are always `strings`, while InfluxDB supports multiple data types;
* timestamps are stored with nanosecond resolution in InfluxDB, while in VictoriaMetrics it is **milliseconds**;
* in VictoriaMetrics metric value is always `float64`, while InfluxDB supports multiple data types.
@@ -47,8 +47,8 @@ with similarities and differences:
[buckets](https://docs.influxdata.com/influxdb/v2.2/reference/key-concepts/data-elements/#bucket)
or [organizations](https://docs.influxdata.com/influxdb/v2.2/reference/key-concepts/data-elements/#organization). All
data in VictoriaMetrics is stored in a global namespace or within
- a [tenant](./Cluster-VictoriaMetrics.md#multitenancy).
- See more about multi-tenancy [here](./keyConcepts.md#multi-tenancy).
+ a [tenant](../Cluster-VictoriaMetrics.md#multitenancy).
+ See more about multi-tenancy [here](../keyConcepts.md#multi-tenancy).
Let's consider the
following [sample data](https://docs.influxdata.com/influxdb/v2.2/reference/key-concepts/data-elements/#sample-data)
@@ -78,7 +78,7 @@ VictoriaMetrics, so lookups by names or labels have the same query speed.
## Write data
VictoriaMetrics
-supports [InfluxDB line protocol](./#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf)
+supports [InfluxDB line protocol](../#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf)
for data ingestion. For example, to write a measurement to VictoriaMetrics we need to send an HTTP POST request with
payload in a line protocol format:
@@ -116,7 +116,7 @@ The expected response is the following:
```
Please note, VictoriaMetrics performed additional
-[data mapping](./#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf)
+[data mapping](../#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf)
to the data ingested via InfluxDB line protocol.
Support of InfluxDB line protocol also means VictoriaMetrics is compatible with
@@ -129,20 +129,20 @@ add `http://:8428` URL to Telegraf configs:
```
In addition to InfluxDB line protocol, VictoriaMetrics supports many other ways for
-[metrics collection](./keyConcepts.md#write-data).
+[metrics collection](../keyConcepts.md#write-data).
## Query data
VictoriaMetrics does not have a command-line interface (CLI). Instead, it provides
-an [HTTP API](./Single-Server-VictoriaMetrics.md#prometheus-querying-api-usage)
+an [HTTP API](../Single-Server-VictoriaMetrics.md#prometheus-querying-api-usage)
for serving read queries. This API is used in various integrations such as
-[Grafana](./Single-Server-VictoriaMetrics.md#grafana-setup). The same API is also used
-by [VMUI](./Single-Server-VictoriaMetrics.md#vmui) - a graphical User Interface for
+[Grafana](../Single-Server-VictoriaMetrics.md#grafana-setup). The same API is also used
+by [VMUI](../Single-Server-VictoriaMetrics.md#vmui) - a graphical User Interface for
querying and visualizing metrics:
![Migrate from Influx](migrate-from-influx_vmui.webp)
-See more about [how to query data in VictoriaMetrics](./keyConcepts.md#query-data).
+See more about [how to query data in VictoriaMetrics](../keyConcepts.md#query-data).
### Basic concepts
@@ -186,19 +186,19 @@ Having this, let's import the same data sample in VictoriaMetrics and plot it in
To see how an InfluxQL query might be translated to MetricsQL, let's break it into components first:
* `SELECT last("bar") FROM "foo"` - all requests
- to [instant](./keyConcepts.md#instant-query)
- or [range](./keyConcepts.md#range-query) VictoriaMetrics APIs are reads, so no need
+ to [instant](../keyConcepts.md#instant-query)
+ or [range](../keyConcepts.md#range-query) VictoriaMetrics APIs are reads, so no need
to specify the `SELECT` statement. There are no `measurements` or `fields` in VictoriaMetrics, so the whole expression
can be replaced with `foo_bar` in MetricsQL;
-* `WHERE ("instance" = 'localhost')`- [filtering by labels](./keyConcepts.md#filtering)
+* `WHERE ("instance" = 'localhost')`- [filtering by labels](../keyConcepts.md#filtering)
in MetricsQL requires specifying the filter in curly braces next to the metric name. So in MetricsQL filter expression
will be translated to `{instance="localhost"}`;
* `WHERE $timeFilter` - filtering by time is done via request params sent along with query, so in MetricsQL no need to
specify this filter;
* `GROUP BY time(1m)` - grouping by time is done by default
- in [range](./keyConcepts.md#range-query) API according to specified `step` param.
+ in [range](../keyConcepts.md#range-query) API according to specified `step` param.
This param is also a part of params sent along with request. See how to perform additional
- [aggregations and grouping via MetricsQL](./keyConcepts.md#aggregation-and-grouping-functions)
+ [aggregations and grouping via MetricsQL](../keyConcepts.md#aggregation-and-grouping-functions)
.
As a result, executing the `foo_bar{instance="localhost"}` MetricsQL expression with `step=1m` for the same set of data in
@@ -208,13 +208,13 @@ Grafana will have the following form:
Visualizations from both databases are a bit different - VictoriaMetrics shows some extra points
filling the gaps in the graph. This behavior is described in more
-detail [here](./keyConcepts.md#range-query). In InfluxDB, we can achieve a similar
+detail [here](../keyConcepts.md#range-query). In InfluxDB, we can achieve a similar
behavior by adding `fill(previous)` to the query.
VictoriaMetrics fills the gaps on the graph assuming time series are always continuous and not discrete.
To limit the interval on which VictoriaMetrics will try to fill the gaps, set the `-search.setLookbackToStep`
command-line flag. This limits the gap filling to a single `step` interval passed to
-[/api/v1/query_range](./keyConcepts.md#range-query).
+[/api/v1/query_range](../keyConcepts.md#range-query).
This behavior is close to the InfluxDB data model.
@@ -227,56 +227,56 @@ about 230 PromQL queries in it! But a closer look at those queries shows the fol
* ~120 queries are just selecting a metric with label filters,
e.g. `node_textfile_scrape_error{instance="$node",job="$job"}`;
-* ~80 queries are using [rate](./MetricsQL.md#rate) function for selected metric,
+* ~80 queries are using [rate](../MetricsQL.md#rate) function for selected metric,
e.g. `rate(node_netstat_Tcp_InSegs{instance=\"$node\",job=\"$job\"})`
* and the rest
- are [aggregation functions](./keyConcepts.md#aggregation-and-grouping-functions)
- like [sum](./MetricsQL.md#sum)
- or [count](./MetricsQL.md#count).
+ are [aggregation functions](../keyConcepts.md#aggregation-and-grouping-functions)
+ like [sum](../MetricsQL.md#sum)
+ or [count](../MetricsQL.md#count).
To get a better understanding of how MetricsQL works, see the following resources:
-* [MetricsQL concepts](./keyConcepts.md#metricsql);
-* [MetricsQL functions](./MetricsQL.md);
+* [MetricsQL concepts](../keyConcepts.md#metricsql);
+* [MetricsQL functions](../MetricsQL.md);
* [PromQL tutorial for beginners](https://valyala.medium.com/promql-tutorial-for-beginners-9ab455142085).
## How to migrate current data from InfluxDB to VictoriaMetrics
Migrating data from other TSDBs to VictoriaMetrics is as simple as importing data via any of
-[supported formats](./keyConcepts.md#push-model).
+[supported formats](../keyConcepts.md#push-model).
-But migration from InfluxDB might get easier when using [vmctl](./vmctl.md) -
+But migration from InfluxDB might get easier when using [vmctl](../vmctl.md) -
VictoriaMetrics command-line tool. See more about
-migrating [from InfluxDB v1.x versions](./vmctl.md#migrating-data-from-influxdb-1x).
+migrating [from InfluxDB v1.x versions](../vmctl.md#migrating-data-from-influxdb-1x).
Migrating data from InfluxDB v2.x is not supported yet. But there is
-useful [3rd party solution](./vmctl.md#migrating-data-from-influxdb-2x) for this.
+a useful [3rd party solution](../vmctl.md#migrating-data-from-influxdb-2x) for this.
Please note, that data migration is a backfilling process. So, please
-consider [backfilling tips](./Single-Server-VictoriaMetrics.md#backfilling).
+consider [backfilling tips](../Single-Server-VictoriaMetrics.md#backfilling).
## Frequently asked questions
* How does VictoriaMetrics compare to InfluxDB?
- * _[Answer](./FAQ.md#how-does-victoriametrics-compare-to-influxdb)_
+ * _[Answer](../FAQ.md#how-does-victoriametrics-compare-to-influxdb)_
* Why doesn't VictoriaMetrics support the Prometheus Remote Read API, so I don't need to learn MetricsQL?
- * _[Answer](./FAQ.md#why-doesnt-victoriametrics-support-the-prometheus-remote-read-api)_
+ * _[Answer](../FAQ.md#why-doesnt-victoriametrics-support-the-prometheus-remote-read-api)_
* The PromQL and MetricsQL are often mentioned together - why is that?
  * _MetricsQL is a query language inspired by PromQL. MetricsQL is backward-compatible with PromQL, so Grafana
dashboards backed by Prometheus datasource should work the same after switching from Prometheus to
VictoriaMetrics. Both languages mostly share the same concepts with slight differences._
* Query returns more data points than expected - why?
* _VictoriaMetrics may return non-existing data points if `step` param is lower than the actual data resolution. See
- more about this [here](./keyConcepts.md#range-query)._
+ more about this [here](../keyConcepts.md#range-query)._
* How do I get the `real` last data point, not `ephemeral`?
- * _[last_over_time](./MetricsQL.md#last_over_time) function can be used for
+ * _[last_over_time](../MetricsQL.md#last_over_time) function can be used for
limiting the lookbehind window for calculated data. For example, `last_over_time(metric[10s])` would return
calculated samples only if the real samples are located closer than 10 seconds to the calculated timestamps
according to
`start`, `end` and `step` query args passed
- to [range query](./keyConcepts.md#range-query)._
+ to [range query](../keyConcepts.md#range-query)._
* How do I get raw data points with MetricsQL?
  * _To get raw data points, specify the interval at which you want them in square brackets and send
- as [instant query](./keyConcepts.md#instant-query). For
+ as [instant query](../keyConcepts.md#instant-query). For
example, `GET api/v1/query?query=my_metric[5m]&time=