relative links

AndrewChubatiuk 2024-07-25 09:45:25 +03:00 committed by Andrii Chubatiuk
parent c9b913d201
commit 0bb0338bee
45 changed files with 709 additions and 709 deletions


@@ -14,21 +14,21 @@ If you like VictoriaMetrics and want to contribute, then it would be great:
- Joining VictoriaMetrics community Slack ([Slack inviter](https://slack.victoriametrics.com/) and [Slack channel](https://victoriametrics.slack.com/))
and helping other community members there.
- Filing issues, feature requests and questions [at VictoriaMetrics GitHub](https://github.com/VictoriaMetrics/VictoriaMetrics/issues).
- Improving [VictoriaMetrics docs](https://docs.victoriametrics.com/). See how to update docs [here](https://docs.victoriametrics.com/#documentation).
- Improving [VictoriaMetrics docs](./README.md). See how to update docs [here](./#documentation).
- Spreading the word about VictoriaMetrics via various channels:
- conference talks
- blogposts, articles and [case studies](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/CaseStudies.md)
- blogposts, articles and [case studies](./CaseStudies.md)
- comments at Hacker News, Twitter, LinkedIn, Reddit, Facebook, etc.
- experience sharing with colleagues.
- Convincing your management to sign an [Enterprise contract](https://docs.victoriametrics.com/enterprise/) with VictoriaMetrics.
- Convincing your management to sign an [Enterprise contract](./enterprise.md) with VictoriaMetrics.
## Pull request checklist
Before sending a pull request to [VictoriaMetrics repository](https://github.com/VictoriaMetrics/VictoriaMetrics/) please make sure it **conforms to all** the following checks:
- The pull request conforms to [VictoriaMetrics goals](https://docs.victoriametrics.com/goals/).
- The pull request conforms to [VictoriaMetrics goals](./goals.md).
- The pull request conforms to the [`KISS` principle](https://en.wikipedia.org/wiki/KISS_principle). See [these docs](#kiss-principle) for more details.
- The pull request contains a clear description of the change, with links to the related GitHub issues and [docs](https://docs.victoriametrics.com/), if needed.
- The pull request contains a clear description of the change, with links to the related GitHub issues and [docs](./README.md), if needed.
- Commit messages contain concise yet clear descriptions. Include links to related GitHub issues in commit messages, if such issues exist.
- All the commits are signed and include the `Signed-off-by` line. Use `git commit -s` to add the `Signed-off-by` line to your commits.
See [this doc](https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work) about how to sign git commits.
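For example, a minimal sketch (the commit message here is illustrative):

```sh
# Create a signed-off commit (adds the Signed-off-by line automatically):
git commit -s -m "docs: fix relative links"

# Add the Signed-off-by line to the latest commit if it was forgotten:
git commit --amend -s --no-edit
```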
@@ -44,7 +44,7 @@ Before sending a pull request to [VictoriaMetrics repository](https://github.com
Further checks are optional for external contributions:
- The change must be described in **clear user-readable** form at [docs/CHANGELOG.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/CHANGELOG.md),
- The change must be described in **clear user-readable** form at [docs/CHANGELOG.md](./CHANGELOG.md),
since it is read by **VictoriaMetrics users** who may not know implementation details of VictoriaMetrics products. The change description must **clearly** answer the following questions:
- What does this change do? There is no need to provide technical details for the change, since they may confuse VictoriaMetrics users, who do not know Go.
@@ -62,14 +62,14 @@ Further checks are optional for external contributions:
- After your pull request is merged, please add a message to the issue with instructions for how to test the change you added before the new release.
[Here is an example](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4048#issuecomment-1546453726).
- Do not close the original issue before the change is released. In some cases GitHub automatically closes the issue once the PR is merged. Re-open the issue in such cases.
- If the change introduces a new feature, this feature must be documented in **user-readable** form at the appropriate parts of [VictoriaMetrics docs](https://docs.victoriametrics.com/).
- If the change introduces a new feature, this feature must be documented in **user-readable** form at the appropriate parts of [VictoriaMetrics docs](./README.md).
The docs' sources are located in the [`docs` folder](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/docs).
Examples of good changelog messages:
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent/): add support for [VictoriaMetrics remote write protocol](https://docs.victoriametrics.com/vmagent/#victoriametrics-remote-write-protocol) when [sending / receiving data to / from Kafka](https://docs.victoriametrics.com/vmagent/#kafka-integration). This protocol allows saving egress network bandwidth costs when sending data from `vmagent` to `Kafka` located in another datacenter or availability zone. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1225).
* FEATURE: [vmagent](./vmagent.md): add support for [VictoriaMetrics remote write protocol](./vmagent.md#victoriametrics-remote-write-protocol) when [sending / receiving data to / from Kafka](./vmagent.md#kafka-integration). This protocol allows saving egress network bandwidth costs when sending data from `vmagent` to `Kafka` located in another datacenter or availability zone. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1225).
* BUGFIX: [stream aggregation](https://docs.victoriametrics.com/stream-aggregation/): suppress `series after dedup` error message in logs when `-remoteWrite.streamAggr.dedupInterval` command-line flag is set at [vmagent](https://docs.victoriametrics.com/vmagent/) or when `-streamAggr.dedupInterval` command-line flag is set at [single-node VictoriaMetrics](https://docs.victoriametrics.com/).
* BUGFIX: [stream aggregation](./stream-aggregation.md): suppress `series after dedup` error message in logs when `-remoteWrite.streamAggr.dedupInterval` command-line flag is set at [vmagent](./vmagent.md) or when `-streamAggr.dedupInterval` command-line flag is set at [single-node VictoriaMetrics](./README.md).
## KISS principle
@@ -78,7 +78,7 @@ We are open to third-party pull requests provided they follow [KISS design princ
- Prefer simple code and architecture.
- Avoid complex abstractions.
- Avoid magic code and fancy algorithms.
- Apply optimizations, which make the code harder to understand, only if [profiling](https://docs.victoriametrics.com/#profiling)
- Apply optimizations, which make the code harder to understand, only if [profiling](./#profiling)
shows significant improvements in performance and scalability or significant reduction in RAM usage.
Profiling must be performed on [Go benchmarks](https://pkg.go.dev/testing#hdr-Benchmarks) and on production workload.
- Avoid [big external dependencies](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d).
@@ -87,11 +87,11 @@ We are open to third-party pull requests provided they follow [KISS design princ
Adhering to the `KISS` principle simplifies the resulting code and architecture, so it can be reviewed, understood and debugged by a wider audience.
Due to `KISS`, [cluster version of VictoriaMetrics](https://docs.victoriametrics.com/cluster-victoriametrics/) lacks the following "features" popular in the distributed computing world:
Due to `KISS`, [cluster version of VictoriaMetrics](./Cluster-VictoriaMetrics.md) lacks the following "features" popular in the distributed computing world:
- Fragile gossip protocols. See [failed attempt in Thanos](https://github.com/improbable-eng/thanos/blob/030bc345c12c446962225221795f4973848caab5/docs/proposals/completed/201809_gossip-removal.md).
- Hard-to-understand-and-implement-properly [Paxos protocols](https://www.quora.com/In-distributed-systems-what-is-a-simple-explanation-of-the-Paxos-algorithm).
- Complex replication schemes, which may go nuts in unforeseen edge cases. See [replication docs](https://docs.victoriametrics.com/cluster-victoriametrics/#replication-and-data-safety) for details.
- Complex replication schemes, which may go nuts in unforeseen edge cases. See [replication docs](./Cluster-VictoriaMetrics.md#replication-and-data-safety) for details.
- Automatic data reshuffling between storage nodes, which may hurt cluster performance and availability.
- Automatic cluster resizing, which may cost you a lot of money if improperly configured.
- Automatic discovering and addition of new nodes in the cluster, which may mix data between dev and prod clusters :)


@@ -43,7 +43,7 @@ where you can chat with VictoriaMetrics users to get additional references, revi
- [Zerodha](#zerodha)
- [zhihu](#zhihu)
You can also read [articles about VictoriaMetrics from our users](https://docs.victoriametrics.com/articles/#third-party-articles-and-slides-about-victoriametrics).
You can also read [articles about VictoriaMetrics from our users](./Articles.md#third-party-articles-and-slides-about-victoriametrics).
## AbiosGaming
@@ -86,12 +86,12 @@ We ended up with the following configuration:
We learned that the remote write protocol generated too much traffic and too many connections, so after 8 months we started looking for alternatives.
Around the same time, VictoriaMetrics released [vmagent](https://docs.victoriametrics.com/vmagent/).
Around the same time, VictoriaMetrics released [vmagent](./vmagent.md).
We tried to scrape all the metrics via a single instance of vmagent but that didn't work because vmagent wasn't able to catch up with writes
into VictoriaMetrics. We tested different options and ended up with the following scheme:
- We removed Prometheus from our setup.
- VictoriaMetrics [can scrape targets](https://docs.victoriametrics.com/single-server-victoriametrics/#how-to-scrape-prometheus-exporters-such-as-node-exporter) as well
- VictoriaMetrics [can scrape targets](./Single-Server-VictoriaMetrics.md#how-to-scrape-prometheus-exporters-such-as-node-exporter) as well
so we removed vmagent. Now, VictoriaMetrics scrapes all the metrics from 110 jobs and 5531 targets.
- We use [Promxy](https://github.com/jacksontj/promxy) for alerting.
@@ -102,7 +102,7 @@ Such a scheme has generated the following benefits compared with Prometheus:
Cons are the following:
- VictoriaMetrics didn't support replication (it [supports replication now](https://docs.victoriametrics.com/cluster-victoriametrics/#replication-and-data-safety)) - we run an extra instance of VictoriaMetrics and Promxy in front of a VictoriaMetrics pair for high availability.
- VictoriaMetrics didn't support replication (it [supports replication now](./Cluster-VictoriaMetrics.md#replication-and-data-safety)) - we run an extra instance of VictoriaMetrics and Promxy in front of a VictoriaMetrics pair for high availability.
- VictoriaMetrics stores 1 extra month of data beyond the defined retention (if retention is set to N months, then VM stores N+1 months of data), but this is still better than other solutions.
Here are some numbers from our single-node VictoriaMetrics setup:
@@ -515,7 +515,7 @@ See more details [in this article](https://www.datanami.com/2023/05/30/why-roblo
> Our initial requirements for a monitoring solution: the metrics must be stored for 15 days, the solution must be scalable and must offer high availability of the metrics. It must be integrated into Grafana and allow the use of PromQL when creating/editing dashboards in Grafana to obtain metrics from the Prometheus datasource. The solution also needs to receive data from Prometheus using HTTPS and needs to request a login and password to write/read the metrics. Details are available [in this article](https://nordicapis.com/api-monitoring-with-prometheus-grafana-alertmanager-and-victoriametrics/).
> We evaluated VictoriaMetrics, InfluxDB OpenSource and Enterprise, Elasticsearch, Thanos, Cortex, TimescaleDB/PostgreSQL and M3DB. We selected VictoriaMetrics because it has [good community support](https://slack.victoriametrics.com/), [good documentation](https://docs.victoriametrics.com/) and it just works.
> We evaluated VictoriaMetrics, InfluxDB OpenSource and Enterprise, Elasticsearch, Thanos, Cortex, TimescaleDB/PostgreSQL and M3DB. We selected VictoriaMetrics because it has [good community support](https://slack.victoriametrics.com/), [good documentation](./README.md) and it just works.
> We started using VictoriaMetrics in the production environment days before the start of Black Friday in 2020, the period of greatest use of the Sensedia API-Platform by customers. Metric generation hit a record, and there was no instability with the monitoring stack.
@@ -569,7 +569,7 @@ Numbers:
## Wedos.com
> [Wedos](https://www.wedos.com/) is the biggest hosting provider in the Czech Republic. We have two private data centers of our own that hold our servers and technologies, such as cooling the servers in oil baths. We started using [cluster VictoriaMetrics](https://docs.victoriametrics.com/cluster-victoriametrics/) to store Prometheus metrics from all our infrastructure after receiving positive references from people who had successfully used VictoriaMetrics. We're using it throughout our services, including the new WEDOS Global Protection.
> [Wedos](https://www.wedos.com/) is the biggest hosting provider in the Czech Republic. We have two private data centers of our own that hold our servers and technologies, such as cooling the servers in oil baths. We started using [cluster VictoriaMetrics](./Cluster-VictoriaMetrics.md) to store Prometheus metrics from all our infrastructure after receiving positive references from people who had successfully used VictoriaMetrics. We're using it throughout our services, including the new WEDOS Global Protection.
Numbers:
@@ -584,7 +584,7 @@ Numbers:
[Wix.com](https://en.wikipedia.org/wiki/Wix.com) is the leading web development platform.
> We needed to redesign our metrics infrastructure from the ground up after the move to Kubernetes. We had tried out a few different options before landing on this solution which is working great. We have a Prometheus instance in every datacenter with 2 hours retention for local storage and remote write into [HA pair of single-node VictoriaMetrics instances](https://docs.victoriametrics.com/single-server-victoriametrics/#high-availability).
> We needed to redesign our metrics infrastructure from the ground up after the move to Kubernetes. We had tried out a few different options before landing on this solution which is working great. We have a Prometheus instance in every datacenter with 2 hours retention for local storage and remote write into [HA pair of single-node VictoriaMetrics instances](./Single-Server-VictoriaMetrics.md#high-availability).
Numbers:
@@ -607,7 +607,7 @@ Numbers:
- Enough headroom/scaling capacity for future growth which is planned to be up to 100M active time series.
- Ability to split DB replicas per workload. Alert queries go to one replica and user queries go to another (speed for users, effective cache).
> Optimizing for those points and our specific workload, VictoriaMetrics proved to be the best option. As icing on the cake, we've got [PromQL extensions](https://docs.victoriametrics.com/metricsql/) - `default 0` and `histogram` are my favorite ones. We really like having a lot of tsdb params easily available via config options, which makes tsdb easy to tune for each specific use case. We've also found a great community in [Slack channel](https://slack.victoriametrics.com/) and responsive and helpful maintainer support.
> Optimizing for those points and our specific workload, VictoriaMetrics proved to be the best option. As icing on the cake, we've got [PromQL extensions](./MetricsQL.md) - `default 0` and `histogram` are my favorite ones. We really like having a lot of tsdb params easily available via config options, which makes tsdb easy to tune for each specific use case. We've also found a great community in [Slack channel](https://slack.victoriametrics.com/) and responsive and helpful maintainer support.
Alex Ulstein, Head of Monitoring, Wix.com
@@ -644,7 +644,7 @@ Thanos, Cortex and VictoriaMetrics were evaluated as a long-term storage for Pro
- Blazingly fast benchmarks for a single node setup.
- Single binary mode. Easy to scale vertically with far fewer operational headaches.
- Considerable [improvements on creating Histograms](https://medium.com/@valyala/improving-histogram-usability-for-prometheus-and-grafana-bc7e5df0e350).
- [MetricsQL](https://docs.victoriametrics.com/metricsql/) gives us the ability to extend PromQL with more aggregation operators.
- [MetricsQL](./MetricsQL.md) gives us the ability to extend PromQL with more aggregation operators.
- The API is compatible with Prometheus and nearly all standard PromQL queries work well out of the box.
- Handles storage well, with periodic compaction which makes it easy to take snapshots.


@@ -9,14 +9,14 @@ menu:
aliases:
- /LTS-releases.html
---
[Enterprise version of VictoriaMetrics](https://docs.victoriametrics.com/enterprise/) provides long-term support lines of releases (aka LTS releases).
[Enterprise version of VictoriaMetrics](./enterprise.md) provides long-term support lines of releases (aka LTS releases).
Every LTS line receives bugfixes and [security fixes](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/SECURITY.md) for 12 months after
the initial release. New LTS lines are published every 6 months, so the latest two LTS lines are supported at any given moment. This gives up to 6 months
for the migration to new LTS lines for [VictoriaMetrics Enterprise](https://docs.victoriametrics.com/enterprise/) users.
for the migration to new LTS lines for [VictoriaMetrics Enterprise](./enterprise.md) users.
All the bugfixes and security fixes, which are included in LTS releases, are also available in [the latest release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest),
so non-enterprise users are advised to regularly [upgrade](https://docs.victoriametrics.com/#how-to-upgrade-victoriametrics) VictoriaMetrics products
to [the latest available releases](https://docs.victoriametrics.com/changelog/).
so non-enterprise users are advised to regularly [upgrade](./#how-to-upgrade-victoriametrics) VictoriaMetrics products
to [the latest available releases](./CHANGELOG.md).
## Currently supported LTS release lines


@@ -11,7 +11,7 @@ aliases:
---
![cluster-per-tenant-stat](PerTenantStatistic-stats.webp)
***The per-tenant statistic is a part of [enterprise package](https://docs.victoriametrics.com/enterprise/). It is available for download and evaluation at [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest).
***The per-tenant statistic is a part of [enterprise package](./enterprise.md). It is available for download and evaluation at [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest).
To get the license key you can request a [free trial license](https://victoriametrics.com/products/enterprise/trial/).***
VictoriaMetrics cluster for enterprise provides various usage metrics and statistics per tenant:
@@ -77,7 +77,7 @@ Check the Billing section of [Grafana Dashboard](#visualization), it contains bi
## Integration with vmgateway
`vmgateway` supports integration with Per Tenant Statistics data for rate limiting purposes.
More information can be found [here](https://docs.victoriametrics.com/vmgateway/).
More information can be found [here](./vmgateway.md).
## Integration with vmalert


@@ -2636,13 +2636,13 @@ Report bugs and propose new features [here](https://github.com/VictoriaMetrics/V
## Documentation
VictoriaMetrics documentation is available at [https://docs.victoriametrics.com/](https://docs.victoriametrics.com/).
VictoriaMetrics documentation is available at [{{% ref "/" %}}](/).
It is built from `*.md` files located in [docs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/docs) folder
and gets automatically updated once changes are merged to [master](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master) branch.
To update the documentation follow the steps below:
- [Fork](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/about-forks)
VictoriaMetrics repo and apply changes to the docs:
- To update [the main page](https://docs.victoriametrics.com/) modify [this file](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md).
- To update [the main page](/) modify [this file](./README.md).
- To update other pages, apply changes to the corresponding file in [docs folder](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/docs).
- If your changes contain an image then see [images in documentation](./README.md#images-in-documentation).
- Once changes are made, execute the command below to finalize and sync the changes:


@@ -40,7 +40,7 @@ Bumping the limits may significantly improve build speed.
## Release version and Docker images
1. Make sure all the changes are documented in [CHANGELOG.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/CHANGELOG.md).
1. Make sure all the changes are documented in [CHANGELOG.md](./CHANGELOG.md).
Ideally, every change must be documented in the commit that introduces it. Alternatively, the change must be documented immediately
after the commit which adds the change.
1. Make sure all the changes are synced between `master`, `cluster`, `enterprise-single-node` and `enterprise-cluster` branches.
@@ -48,7 +48,7 @@ Bumping the limits may significantly improve build speed.
1. Make sure that the release branches have no security issues.
1. Update release versions if needed in [SECURITY.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/SECURITY.md).
1. Add `(available starting from v1.xx.y)` line to feature docs introduced in the upcoming release.
1. Cut a new version in [CHANGELOG.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/CHANGELOG.md)
1. Cut a new version in [CHANGELOG.md](./CHANGELOG.md)
and get it merged. See an example in this [commit](https://github.com/VictoriaMetrics/VictoriaMetrics/commit/b771152039d23b5ccd637a23ea748bc44a9511a7).
1. Cherry-pick bug fixes relevant for [LTS releases](./LTS-releases.md).
1. Make sure all the changes are fetched: `git fetch --all`.
@@ -89,7 +89,7 @@ Bumping the limits may significantly improve build speed.
and all the needed assets are re-uploaded to it.
1. Go to <https://github.com/VictoriaMetrics/VictoriaMetrics/releases> and verify that draft release with the name `TAG` has been created
and this release contains all the needed binaries and checksums.
1. Update the release description with the content of [CHANGELOG](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/CHANGELOG.md) for this release.
1. Update the release description with the content of [CHANGELOG](./CHANGELOG.md) for this release.
1. Publish the release by pressing the green "Publish release" button in GitHub's UI.
1. Bump version of the VictoriaMetrics cluster in the [sandbox environment](https://github.com/VictoriaMetrics/ops/blob/main/gcp-test/sandbox/manifests/benchmark-vm/vmcluster.yaml)
by [opening and merging PR](https://github.com/VictoriaMetrics/ops/pull/58).


@@ -2647,13 +2647,13 @@ Report bugs and propose new features [here](https://github.com/VictoriaMetrics/V
## Documentation
VictoriaMetrics documentation is available at [https://docs.victoriametrics.com/](https://docs.victoriametrics.com/).
VictoriaMetrics documentation is available at [{{% ref "/" %}}](./README.md).
It is built from `*.md` files located in [docs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/docs) folder
and gets automatically updated once changes are merged to [master](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master) branch.
To update the documentation follow the steps below:
- [Fork](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/about-forks)
VictoriaMetrics repo and apply changes to the docs:
- To update [the main page](https://docs.victoriametrics.com/) modify [this file](./README.md).
- To update [the main page]({{% ref "/" %}}) modify [this file](./README.md).
- To update other pages, apply changes to the corresponding file in [docs folder](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/docs).
- If your changes contain an image then see [images in documentation](./#images-in-documentation).
- Once changes are made, execute the command below to finalize and sync the changes:


@@ -81,7 +81,7 @@ then please follow the steps below in order to quickly find the solution:
1. Pro tip 1: if you see that [VictoriaMetrics docs](./Single-server-VictoriaMetrics.md) contain incomplete or incorrect information,
then please create a pull request with the relevant changes. This will help the VictoriaMetrics community.
All the docs published at `https://docs.victoriametrics.com` are located in the [docs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/docs)
All the docs published at `{{% ref "/" %}}` are located in the [docs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/docs)
folder inside VictoriaMetrics repository.
1. Pro tip 2: please provide links to existing docs / GitHub issues / StackOverflow questions


@@ -1,13 +1,12 @@
---
sort: 7
weight: 7
title: VictoriaLogs changelog
title: CHANGELOG
menu:
docs:
identifier: "victorialogs-changelog"
parent: "victorialogs"
weight: 7
title: CHANGELOG
aliases:
- /VictoriaLogs/CHANGELOG.html
---


@@ -1,13 +1,12 @@
---
sort: 6
weight: 6
title: VictoriaLogs FAQ
title: FAQ
menu:
docs:
identifier: "victorialogs-faq"
parent: "victorialogs"
weight: 6
title: FAQ
aliases:
- /VictoriaLogs/FAQ.html
- /VictoriaLogs/faq.html
@@ -32,15 +31,15 @@ VictoriaLogs is optimized specifically for logs. So it provides the following fe
- Up to 30x less RAM usage than Elasticsearch for the same workload.
- Up to 15x less disk space usage than Elasticsearch for the same amounts of stored logs.
- Ability to work with hundreds of terabytes of logs on a single node.
- Very easy to use query language optimized for typical log analysis tasks - [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
- Fast full-text search over all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) out of the box.
- Good integration with traditional command-line tools for log analysis. See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#command-line).
- Very easy to use query language optimized for typical log analysis tasks - [LogsQL](./LogsQL.md).
- Fast full-text search over all the [log fields](./keyConcepts.md#data-model) out of the box.
- Good integration with traditional command-line tools for log analysis. See [these docs](./querying/#command-line).
## What is the difference between VictoriaLogs and Grafana Loki?
Both Grafana Loki and VictoriaLogs are designed for log management and processing.
Both systems support the [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) concept.
Both systems support the [log stream](./keyConcepts.md#stream-fields) concept.
VictoriaLogs and Grafana Loki have the following differences:
@@ -48,13 +47,13 @@ VictoriaLogs and Grafana Loki have the following differences:
It starts consuming huge amounts of RAM and works very slowly when logs with high-cardinality fields are ingested into it.
See [these docs](https://grafana.com/docs/loki/latest/best-practices/) for details.
VictoriaLogs supports high-cardinality [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
VictoriaLogs supports high-cardinality [log fields](./keyConcepts.md#data-model).
It automatically indexes all the ingested log fields and allows performing fast full-text search over any field.
- Grafana Loki provides a very inconvenient query language - [LogQL](https://grafana.com/docs/loki/latest/logql/).
This query language is hard to use for typical log analysis tasks.
VictoriaLogs provides an easy-to-use query language for typical log analysis tasks - [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
VictoriaLogs provides an easy-to-use query language for typical log analysis tasks - [LogsQL](./LogsQL.md).
- VictoriaLogs performs typical full-text queries up to 1000x faster than Grafana Loki.
@@ -68,7 +67,7 @@ VictoriaLogs and Grafana Loki have the following differences:
ClickHouse is an extremely fast and efficient analytical database. It can be used for log storage, analysis and processing.
VictoriaLogs is designed solely for logs. VictoriaLogs uses [similar design ideas as ClickHouse](#how-does-victorialogs-work) for achieving high performance.
- ClickHouse is good for logs if you know the set of [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) beforehand.
- ClickHouse is good for logs if you know the set of [log fields](./keyConcepts.md#data-model) beforehand.
Then you can create a table with a column per each log field and achieve the maximum possible query performance.
If the set of log fields isn't known beforehand, or if it can change at any time, then ClickHouse can still be used,
@@ -78,18 +77,18 @@ VictoriaLogs is designed solely for logs. VictoriaLogs uses [similar design idea
for achieving high efficiency and query performance.
VictoriaLogs works optimally with any log type out of the box - structured, unstructured and mixed.
It works optimally with any sets of [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model),
It works optimally with any sets of [log fields](./keyConcepts.md#data-model),
which can change in any way across different log sources.
- ClickHouse provides a SQL dialect with additional analytical functionality. It allows performing arbitrarily complex analytical queries
over the stored logs.
VictoriaLogs provides an easy-to-use query language with full-text search specifically optimized
for log analysis - [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
for log analysis - [LogsQL](./LogsQL.md).
LogsQL is usually much easier to use than SQL for typical log analysis tasks, while some
non-trivial analytics may require SQL power.
- VictoriaLogs accepts logs from popular log shippers out of the box - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
- VictoriaLogs accepts logs from popular log shippers out of the box - see [these docs](./data-ingestion/README.md).
ClickHouse needs an intermediate application for converting the ingested logs into `INSERT` SQL statements for the particular database schema.
This may increase the complexity of the system and, subsequently, increase its maintenance costs.
@@ -97,7 +96,7 @@ VictoriaLogs is designed solely for logs. VictoriaLogs uses [similar design idea
## How does VictoriaLogs work?
VictoriaLogs accepts logs as [JSON entries](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
VictoriaLogs accepts logs as [JSON entries](./keyConcepts.md#data-model).
It then stores every field value into a distinct data block. E.g. values for the same field across multiple log entries
are stored in a single data block. This allows reading data blocks only for the needed fields during querying.
@@ -116,18 +115,18 @@ This architecture is inspired by [ClickHouse architecture](https://clickhouse.co
On top of this, VictoriaLogs employs additional optimizations for achieving high query performance:
- It uses [bloom filters](https://en.wikipedia.org/wiki/Bloom_filter) for skipping blocks without the given
[word](https://docs.victoriametrics.com/victorialogs/logsql/#word-filter) or [phrase](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter).
[word](./LogsQL.md#word-filter) or [phrase](./LogsQL.md#phrase-filter).
- It uses custom encoding and compression for fields with different data types.
For example, it encodes IP addresses as 4-byte tuples. Custom fields' encoding reduces data size on disk and improves query performance.
- It physically groups logs for the same [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)
- It physically groups logs for the same [log stream](./keyConcepts.md#stream-fields)
close to each other. This improves compression ratio, which helps reduce disk space usage. This also improves query performance
by skipping blocks for unneeded streams when [stream filter](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter) is used.
- It maintains a sparse index for [log timestamps](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field),
which allows improving query performance when a [time filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter) is used.
by skipping blocks for unneeded streams when [stream filter](./LogsQL.md#stream-filter) is used.
- It maintains a sparse index for [log timestamps](./keyConcepts.md#time-field),
which allows improving query performance when a [time filter](./LogsQL.md#time-filter) is used.
## How to export logs from VictoriaLogs?
Just send the query with the needed [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters)
to [`/select/logsql/query`](https://docs.victoriametrics.com/victorialogs/querying/#querying-logs) - VictoriaLogs will return
the requested logs as a [stream of JSON lines](https://jsonlines.org/). It is recommended to specify a [time filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter)
Just send the query with the needed [filters](./LogsQL.md#filters)
to [`/select/logsql/query`](./querying/#querying-logs) - VictoriaLogs will return
the requested logs as a [stream of JSON lines](https://jsonlines.org/). It is recommended to specify a [time filter](./LogsQL.md#time-filter)
to limit the amount of exported logs.
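For example, a minimal sketch (the word filter and the time range are illustrative):

```sh
# Export logs containing the word "error" over the last 5 minutes
# as a stream of JSON lines:
curl http://localhost:9428/select/logsql/query -d 'query=error _time:5m'
```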


@@ -1,19 +1,19 @@
---
sort: 1
weight: 1
title: VictoriaLogs Quick Start
title: Quick Start
menu:
docs:
identifier: victorialogs-quick-start
parent: "victorialogs"
weight: 1
title: Quick Start
aliases:
- /VictoriaLogs/QuickStart.html
- /victorialogs/quick-start.html
- /victorialogs/quick-start/
---
It is recommended to read [README](https://docs.victoriametrics.com/victorialogs/)
and [Key Concepts](https://docs.victoriametrics.com/victorialogs/keyconcepts/)
It is recommended to read [README](./README.md)
and [Key Concepts](./keyConcepts.md)
before you start working with VictoriaLogs.
## How to install and run VictoriaLogs
@@ -38,17 +38,17 @@ tar xzf victoria-logs-linux-amd64-v0.28.0-victorialogs.tar.gz
./victoria-logs-prod
```
VictoriaLogs is ready for [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
and [querying](https://docs.victoriametrics.com/victorialogs/querying/) at the TCP port `9428` now!
VictoriaLogs is ready for [data ingestion](./data-ingestion/README.md)
and [querying](./querying/README.md) at the TCP port `9428` now!
It has no external dependencies, so it may run in various environments without additional setup and configuration.
VictoriaLogs automatically adapts to the available CPU and RAM resources. It also automatically sets up and creates
the needed indexes during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
the needed indexes during [data ingestion](./data-ingestion/README.md).
See also:
- [How to configure VictoriaLogs](#how-to-configure-victorialogs)
- [How to ingest logs into VictoriaLogs](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/)
- [How to ingest logs into VictoriaLogs](./data-ingestion/README.md)
- [How to query VictoriaLogs](./querying/README.md)
### Docker image
@@ -64,8 +64,8 @@ docker run --rm -it -p 9428:9428 -v ./victoria-logs-data:/victoria-logs-data \
See also:
- [How to configure VictoriaLogs](#how-to-configure-victorialogs)
- [How to ingest logs into VictoriaLogs](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/)
- [How to ingest logs into VictoriaLogs](./data-ingestion/README.md)
- [How to query VictoriaLogs](./querying/README.md)
### Helm charts
@@ -95,17 +95,17 @@ Follow the steps below in order to build VictoriaLogs from source code:
bin/victoria-logs
```
VictoriaLogs is ready for [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
and [querying](https://docs.victoriametrics.com/victorialogs/querying/) at the TCP port `9428` now!
VictoriaLogs is ready for [data ingestion](./data-ingestion/README.md)
and [querying](./querying/README.md) at the TCP port `9428` now!
It has no external dependencies, so it may run in various environments without additional setup and configuration.
VictoriaLogs automatically adapts to the available CPU and RAM resources. It also automatically sets up and creates
the needed indexes during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
the needed indexes during [data ingestion](./data-ingestion/README.md).
See also:
- [How to configure VictoriaLogs](#how-to-configure-victorialogs)
- [How to ingest logs into VictoriaLogs](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/)
- [How to ingest logs into VictoriaLogs](./data-ingestion/README.md)
- [How to query VictoriaLogs](./querying/README.md)
## How to configure VictoriaLogs
@@ -121,19 +121,19 @@ Pass `-help` to VictoriaLogs in order to see the list of supported command-line
```
VictoriaLogs stores the ingested data to the `victoria-logs-data` directory by default. The directory can be changed
via `-storageDataPath` command-line flag. See [these docs](https://docs.victoriametrics.com/victorialogs/#storage) for details.
via `-storageDataPath` command-line flag. See [these docs](./#storage) for details.
By default VictoriaLogs stores [log entries](https://docs.victoriametrics.com/victorialogs/keyconcepts/) with timestamps
By default VictoriaLogs stores [log entries](./keyConcepts.md) with timestamps
in the time range `[now-7d, now]`, while dropping logs outside the given time range.
I.e. it uses a retention of 7 days. Read [these docs](https://docs.victoriametrics.com/victorialogs/#retention) on how to control the retention
for the [ingested](https://docs.victoriametrics.com/victorialogs/data-ingestion/) logs.
I.e. it uses a retention of 7 days. Read [these docs](./#retention) on how to control the retention
for the [ingested](./data-ingestion/README.md) logs.
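For example, a minimal sketch combining both flags (the data path and the retention value are illustrative):

```sh
# Keep logs for 8 weeks under a custom data directory:
./victoria-logs-prod -storageDataPath=/var/lib/victoria-logs -retentionPeriod=8w
```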
It is recommended to set up monitoring of VictoriaLogs according to [these docs](https://docs.victoriametrics.com/victorialogs/#monitoring).
It is recommended to set up monitoring of VictoriaLogs according to [these docs](./#monitoring).
See also:
- [How to ingest logs into VictoriaLogs](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/)
- [How to ingest logs into VictoriaLogs](./data-ingestion/README.md)
- [How to query VictoriaLogs](./querying/README.md)
## Docker demos


@@ -3,48 +3,48 @@ from [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics/).
VictoriaLogs provides the following features:
- VictoriaLogs can accept logs from popular log collectors. See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
- VictoriaLogs can accept logs from popular log collectors. See [these docs](./data-ingestion/).
- VictoriaLogs is much easier to set up and operate compared to Elasticsearch and Grafana Loki.
See [these docs](https://docs.victoriametrics.com/victorialogs/quickstart/).
See [these docs](./QuickStart.md).
- VictoriaLogs provides easy yet powerful query language with full-text search across
all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
See [LogsQL docs](https://docs.victoriametrics.com/victorialogs/logsql/).
all the [log fields](../keyConcepts.md#data-model).
See [LogsQL docs](./LogsQL.md).
- VictoriaLogs can be seamlessly combined with good old Unix tools for log analysis such as `grep`, `less`, `sort`, `jq`, etc.
See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#command-line) for details.
See [these docs](./querying/README.md#command-line) for details.
- VictoriaLogs capacity and performance scales linearly with the available resources (CPU, RAM, disk IO, disk space).
It runs smoothly on both Raspberry Pi and a server with hundreds of CPU cores and terabytes of RAM.
- VictoriaLogs can handle up to 30x bigger data volumes than Elasticsearch and Grafana Loki when running on the same hardware.
See [these docs](#benchmarks).
- VictoriaLogs supports fast full-text search over high-cardinality [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
- VictoriaLogs supports fast full-text search over high-cardinality [log fields](../keyConcepts.md#data-model)
such as `trace_id`, `user_id` and `ip`.
- VictoriaLogs supports multitenancy - see [these docs](#multitenancy).
- VictoriaLogs supports out-of-order log ingestion, aka backfilling.
- VictoriaLogs supports live tailing for newly ingested logs. See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#live-tailing).
- VictoriaLogs supports selecting surrounding logs before and after the selected logs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#stream_context-pipe).
- VictoriaLogs provides web UI for querying logs - see [these docs](https://docs.victoriametrics.com/victorialogs/querying/#web-ui).
- VictoriaLogs supports live tailing for newly ingested logs. See [these docs](./querying/README.md#live-tailing).
- VictoriaLogs supports selecting surrounding logs before and after the selected logs. See [these docs](./LogsQL.md#stream_context-pipe).
- VictoriaLogs provides web UI for querying logs - see [these docs](./querying/README.md#web-ui).
If you have questions about VictoriaLogs, then read [this FAQ](https://docs.victoriametrics.com/victorialogs/faq/).
If you have questions about VictoriaLogs, then read [this FAQ](./FAQ.md).
Also feel free to ask any questions at [VictoriaMetrics community Slack chat](https://victoriametrics.slack.com/),
you can join it via [Slack Inviter](https://slack.victoriametrics.com/).
See [Quick start docs](https://docs.victoriametrics.com/victorialogs/quickstart/) to start working with VictoriaLogs.
See [Quick start docs](./QuickStart.md) to start working with VictoriaLogs.
## Monitoring
VictoriaLogs exposes internal metrics in Prometheus exposition format at the `http://localhost:9428/metrics` page.
It is recommended to set up monitoring of these metrics via VictoriaMetrics
(see [these docs](https://docs.victoriametrics.com/#how-to-scrape-prometheus-exporters-such-as-node-exporter)),
vmagent (see [these docs](https://docs.victoriametrics.com/vmagent/#how-to-collect-metrics-in-prometheus-format)) or via Prometheus.
(see [these docs](../#how-to-scrape-prometheus-exporters-such-as-node-exporter)),
vmagent (see [these docs](../vmagent.md#how-to-collect-metrics-in-prometheus-format)) or via Prometheus.
VictoriaLogs emits its own logs to stdout. It is recommended to investigate these logs during troubleshooting.
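For example, a quick sketch for inspecting these metrics manually before wiring up a scraper:

```sh
# Fetch the internal metrics and show the rows-related counters,
# e.g. vl_rows_dropped_total used by the retention alerting rule below:
curl -s http://localhost:9428/metrics | grep vl_rows
```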
## Upgrading
It is safe to upgrade VictoriaLogs to new versions unless [release notes](https://docs.victoriametrics.com/victorialogs/changelog/) say otherwise.
It is safe to skip multiple versions during the upgrade unless [release notes](https://docs.victoriametrics.com/victorialogs/changelog/) say otherwise.
It is safe to upgrade VictoriaLogs to new versions unless [release notes](./CHANGELOG.md) say otherwise.
It is safe to skip multiple versions during the upgrade unless [release notes](./CHANGELOG.md) say otherwise.
It is recommended to perform regular upgrades to the latest version, since it may contain important bug fixes, performance optimizations or new features.
It is also safe to downgrade to older versions unless [release notes](https://docs.victoriametrics.com/victorialogs/changelog/) say otherwise.
It is also safe to downgrade to older versions unless [release notes](./CHANGELOG.md) say otherwise.
The following steps must be performed during the upgrade / downgrade procedure:
@@ -68,13 +68,13 @@ For example, the following command starts VictoriaLogs with the retention of 8 w
See also [retention by disk space usage](#retention-by-disk-space-usage).
VictoriaLogs stores the [ingested](https://docs.victoriametrics.com/victorialogs/data-ingestion/) logs in per-day partition directories.
VictoriaLogs stores the [ingested](./data-ingestion/) logs in per-day partition directories.
It automatically drops partition directories outside the configured retention.
VictoriaLogs automatically drops logs at [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/) stage
VictoriaLogs automatically drops logs at [data ingestion](./data-ingestion/) stage
if they have timestamps outside the configured retention. A sample of dropped logs is logged with a `WARN` message in order to simplify troubleshooting.
The `vl_rows_dropped_total` [metric](#monitoring) is incremented each time an ingested log entry is dropped because of a timestamp outside the retention.
It is recommended to set up the following alerting rule at [vmalert](https://docs.victoriametrics.com/vmalert/) in order to be notified
It is recommended to set up the following alerting rule at [vmalert](../vmalert.md) in order to be notified
when logs with wrong timestamps are ingested into VictoriaLogs:
```metricsql
@@ -132,25 +132,25 @@ VictoriaLogs automatically creates the `-storageDataPath` directory on the first
## Multitenancy
VictoriaLogs supports multitenancy. A tenant is identified by `(AccountID, ProjectID)` pair, where `AccountID` and `ProjectID` are arbitrary 32-bit unsigned integers.
The `AccountID` and `ProjectID` fields can be set during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
and [querying](https://docs.victoriametrics.com/victorialogs/querying/) via `AccountID` and `ProjectID` request headers.
The `AccountID` and `ProjectID` fields can be set during [data ingestion](./data-ingestion/)
and [querying](./querying/README.md) via `AccountID` and `ProjectID` request headers.
If `AccountID` and/or `ProjectID` request headers aren't set, then the default `0` value is used.
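For example, a minimal sketch (the tenant IDs and the log entry are illustrative):

```sh
# Ingest a log entry for tenant (AccountID=12, ProjectID=34):
echo '{"_msg":"hello from tenant 12:34"}' | \
  curl -H 'AccountID: 12' -H 'ProjectID: 34' \
    --data-binary @- http://localhost:9428/insert/jsonline

# Query all logs of the same tenant:
curl -H 'AccountID: 12' -H 'ProjectID: 34' \
  http://localhost:9428/select/logsql/query -d 'query=*'
```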
VictoriaLogs has very low overhead for per-tenant management, so it is OK to have thousands of tenants in a single VictoriaLogs instance.
VictoriaLogs doesn't perform per-tenant authorization. Use [vmauth](https://docs.victoriametrics.com/vmauth/) or similar tools for per-tenant authorization.
VictoriaLogs doesn't perform per-tenant authorization. Use [vmauth](../vmauth.md) or similar tools for per-tenant authorization.
## Benchmarks
Here is a [benchmark suite](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/logs-benchmark) for comparing data ingestion performance
and resource usage between VictoriaLogs and Elasticsearch or Loki.
It is recommended to [set up VictoriaLogs](https://docs.victoriametrics.com/victorialogs/quickstart/) in production alongside the existing
It is recommended to [set up VictoriaLogs](./QuickStart.md) in production alongside the existing
log management systems and to compare resource usage and query performance between VictoriaLogs and your system, such as Elasticsearch or Grafana Loki.
Please share benchmark results and ideas on how to improve benchmarks / VictoriaLogs
via [VictoriaMetrics community channels](https://docs.victoriametrics.com/#community-and-contributions).
via [VictoriaMetrics community channels](../#community-and-contributions).
## List of command-line flags
@@ -166,7 +166,7 @@ Pass `-help` to VictoriaLogs in order to see the list of supported command-line
-enableTCP6
Whether to enable IPv6 for listening and dialing. By default, only IPv4 TCP and UDP are used
-envflag.enable
Whether to enable reading flags from environment variables in addition to the command line. Command line flag values have priority over values from environment vars. Flags are read only from the command line if this flag isn't set. See https://docs.victoriametrics.com/#environment-variables for more details
Whether to enable reading flags from environment variables in addition to the command line. Command line flag values have priority over values from environment vars. Flags are read only from the command line if this flag isn't set. See {{% ref "../#environment-variables" %}} for more details
-envflag.prefix string
Prefix for environment variables if -envflag.enable is set
-filestream.disableFadvise
@@ -177,7 +177,7 @@ Pass `-help` to VictoriaLogs in order to see the list of supported command-line
-fs.disableMmap
Whether to use pread() instead of mmap() for reading data files. By default, mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread()
-futureRetention value
Log entries with timestamps bigger than now+futureRetention are rejected during data ingestion; see https://docs.victoriametrics.com/victorialogs/#retention
Log entries with timestamps bigger than now+futureRetention are rejected during data ingestion; see {{% ref "./#retention" %}}
The following optional suffixes are supported: s (second), m (minute), h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 2d)
-http.connTimeout duration
Incoming connections to -httpListenAddr are closed after the configured timeout. This may help evenly spreading load among a cluster of services behind TCP-level load balancer. Zero value disables closing of incoming connections (default 2m0s)
@@ -226,9 +226,9 @@ Pass `-help` to VictoriaLogs in order to see the list of supported command-line
-internStringMaxLen int
The maximum length for strings to intern. A lower limit may save memory at the cost of higher CPU usage. See https://en.wikipedia.org/wiki/String_interning . See also -internStringDisableCache and -internStringCacheExpireDuration (default 500)
-logIngestedRows
Whether to log all the ingested log entries; this can be useful for debugging of data ingestion; see https://docs.victoriametrics.com/victorialogs/data-ingestion/ ; see also -logNewStreams
Whether to log all the ingested log entries; this can be useful for debugging of data ingestion; see {{% ref "./data-ingestion/README.md" %}} ; see also -logNewStreams
-logNewStreams
Whether to log creation of new streams; this can be useful for debugging of high cardinality issues with log streams; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields ; see also -logIngestedRows
Whether to log creation of new streams; this can be useful for debugging of high cardinality issues with log streams; see ../keyConcepts.md#stream-fields ; see also -logIngestedRows
-loggerDisableTimestamps
Whether to disable writing timestamps in logs
-loggerErrorsPerSecondLimit int
@@ -277,14 +277,14 @@ Pass `-help` to VictoriaLogs in order to see the list of supported command-line
-pushmetrics.interval duration
Interval for pushing metrics to every -pushmetrics.url (default 10s)
-pushmetrics.url array
Optional URL to push metrics exposed at /metrics page. See https://docs.victoriametrics.com/#push-metrics . By default, metrics exposed at /metrics page aren't pushed to any remote storage
Optional URL to push metrics exposed at /metrics page. See {{% ref "./#push-metrics" %}} . By default, metrics exposed at /metrics page aren't pushed to any remote storage
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-retention.maxDiskSpaceUsageBytes size
The maximum disk space usage at -storageDataPath before older per-day partitions are automatically dropped; see https://docs.victoriametrics.com/victorialogs/#retention-by-disk-space-usage ; see also -retentionPeriod
The maximum disk space usage at -storageDataPath before older per-day partitions are automatically dropped; see {{% ref "./#retention-by-disk-space-usage" %}} ; see also -retentionPeriod
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0)
-retentionPeriod value
Log entries with timestamps older than now-retentionPeriod are automatically deleted; log entries with timestamps outside the retention are also rejected during data ingestion; the minimum supported retention is 1d (one day); see https://docs.victoriametrics.com/victorialogs/#retention ; see also -retention.maxDiskSpaceUsageBytes
Log entries with timestamps older than now-retentionPeriod are automatically deleted; log entries with timestamps outside the retention are also rejected during data ingestion; the minimum supported retention is 1d (one day); see {{% ref "./#retention" %}} ; see also -retention.maxDiskSpaceUsageBytes
The following optional suffixes are supported: s (second), m (minute), h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 7d)
-search.maxConcurrentRequests int
The maximum number of concurrent search requests. It shouldn't be high, since a single request can saturate all the CPU cores, while many concurrently executed requests may require high amounts of memory. See also -search.maxQueueDuration (default 16)
@@ -296,57 +296,57 @@ Pass `-help` to VictoriaLogs in order to see the list of supported command-line
The minimum free disk space at -storageDataPath after which the storage stops accepting new data
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 10000000)
-storageDataPath string
Path to directory where to store VictoriaLogs data; see https://docs.victoriametrics.com/victorialogs/#storage (default "victoria-logs-data")
Path to directory where to store VictoriaLogs data; see {{% ref "./#storage" %}} (default "victoria-logs-data")
-syslog.compressMethod.tcp array
Compression method for syslog messages received at the corresponding -syslog.listenAddr.tcp. Supported values: none, gzip, deflate. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#compression
Compression method for syslog messages received at the corresponding -syslog.listenAddr.tcp. Supported values: none, gzip, deflate. See {{% ref "./data-ingestion/syslog.md#compression" %}}
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-syslog.compressMethod.udp array
Compression method for syslog messages received at the corresponding -syslog.listenAddr.udp. Supported values: none, gzip, deflate. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#compression
Compression method for syslog messages received at the corresponding -syslog.listenAddr.udp. Supported values: none, gzip, deflate. See {{% ref "./data-ingestion/syslog.md#compression" %}}
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-syslog.listenAddr.tcp array
Comma-separated list of TCP addresses to listen to for Syslog messages. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/
Comma-separated list of TCP addresses to listen to for Syslog messages. See {{% ref "./data-ingestion/syslog.md" %}}
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-syslog.listenAddr.udp array
Comma-separated list of UDP address to listen to for Syslog messages. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/
Comma-separated list of UDP address to listen to for Syslog messages. See {{% ref "./data-ingestion/syslog.md" %}}
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-syslog.tenantID.tcp array
TenantID for logs ingested via the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/
TenantID for logs ingested via the corresponding -syslog.listenAddr.tcp. See {{% ref "./data-ingestion/syslog.md" %}}
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-syslog.tenantID.udp array
TenantID for logs ingested via the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/
TenantID for logs ingested via the corresponding -syslog.listenAddr.udp. See {{% ref "./data-ingestion/syslog.md" %}}
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-syslog.timezone string
Timezone to use when parsing timestamps in RFC3164 syslog messages. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 . See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/ (default "Local")
Timezone to use when parsing timestamps in RFC3164 syslog messages. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 . See {{% ref "./data-ingestion/syslog.md" %}} (default "Local")
-syslog.tls array
Whether to enable TLS for receiving syslog messages at the corresponding -syslog.listenAddr.tcp. The corresponding -syslog.tlsCertFile and -syslog.tlsKeyFile must be set if -syslog.tls is set. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security
Whether to enable TLS for receiving syslog messages at the corresponding -syslog.listenAddr.tcp. The corresponding -syslog.tlsCertFile and -syslog.tlsKeyFile must be set if -syslog.tls is set. See {{% ref "./data-ingestion/syslog.md#security" %}}
Supports array of values separated by comma or specified via multiple flags.
Empty values are set to false.
-syslog.tlsCertFile array
Path to file with TLS certificate for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security
Path to file with TLS certificate for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated. See {{% ref "./data-ingestion/syslog.md#security" %}}
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-syslog.tlsCipherSuites array
Optional list of TLS cipher suites for -syslog.listenAddr.tcp if -syslog.tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants . See also https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security
Optional list of TLS cipher suites for -syslog.listenAddr.tcp if -syslog.tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants . See also {{% ref "./data-ingestion/syslog.md#security" %}}
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-syslog.tlsKeyFile array
Path to file with TLS key for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security
Path to file with TLS key for the corresponding -syslog.listenAddr.tcp if the corresponding -syslog.tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated. See {{% ref "./data-ingestion/syslog.md#security" %}}
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-syslog.tlsMinVersion string
The minimum TLS version to use for -syslog.listenAddr.tcp if -syslog.tls is set. Supported values: TLS10, TLS11, TLS12, TLS13. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#security (default "TLS13")
The minimum TLS version to use for -syslog.listenAddr.tcp if -syslog.tls is set. Supported values: TLS10, TLS11, TLS12, TLS13. See {{% ref "./data-ingestion/syslog.md#security" %}} (default "TLS13")
-syslog.useLocalTimestamp.tcp array
Whether to use local timestamp instead of the original timestamp for the ingested syslog messages at the corresponding -syslog.listenAddr.tcp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#log-timestamps
Whether to use local timestamp instead of the original timestamp for the ingested syslog messages at the corresponding -syslog.listenAddr.tcp. See {{% ref "./data-ingestion/syslog.md#log-timestamps" %}}
Supports array of values separated by comma or specified via multiple flags.
Empty values are set to false.
-syslog.useLocalTimestamp.udp array
Whether to use local timestamp instead of the original timestamp for the ingested syslog messages at the corresponding -syslog.listenAddr.udp. See https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#log-timestamps
Whether to use local timestamp instead of the original timestamp for the ingested syslog messages at the corresponding -syslog.listenAddr.udp. See {{% ref "./data-ingestion/syslog.md#log-timestamps" %}}
Supports array of values separated by comma or specified via multiple flags.
Empty values are set to false.
-tls array


@@ -1,35 +1,35 @@
---
sort: 8
weight: 8
title: VictoriaLogs roadmap
title: Roadmap
disableToc: true
menu:
docs:
identifier: victorialogs-roadmap
parent: "victorialogs"
weight: 8
title: Roadmap
aliases:
- /VictoriaLogs/Roadmap.html
---
The following functionality is available in [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
The following functionality is available in [VictoriaLogs](./README.md):
- [Data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
- [Querying](https://docs.victoriametrics.com/victorialogs/querying/).
- [Querying via command-line](https://docs.victoriametrics.com/victorialogs/querying/#command-line).
- [Data ingestion](./data-ingestion/README.md).
- [Querying](./querying/README.md).
- [Querying via command-line](./querying/#command-line).
See [these docs](https://docs.victoriametrics.com/victorialogs/) for details.
See [these docs](./README.md) for details.
The following functionality is planned for future versions of VictoriaLogs:
- Support for [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/) from popular log collectors and formats:
- Support for [data ingestion](./data-ingestion/README.md) from popular log collectors and formats:
- [ ] [OpenTelemetry for logs](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4839)
- [ ] Fluentd
- [ ] [Journald](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4618) (systemd)
- [ ] [Datadog protocol for logs](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6632)
- [ ] [Telegraf http output](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5310)
- [ ] Integration with Grafana. Partially done, check the [documentation](https://docs.victoriametrics.com/victorialogs/victorialogs-datasource/) and [datasource repository](https://github.com/VictoriaMetrics/victorialogs-datasource).
- [ ] Ability to make instant snapshots and backups in the way [similar to VictoriaMetrics](https://docs.victoriametrics.com/#how-to-work-with-snapshots).
- [ ] Integration with Grafana. Partially done, check the [documentation](./victorialogs-datasource.md) and [datasource repository](https://github.com/VictoriaMetrics/victorialogs-datasource).
- [ ] Ability to make instant snapshots and backups in the way [similar to VictoriaMetrics](../#how-to-work-with-snapshots).
- [ ] Cluster version of VictoriaLogs.
- [ ] Ability to store data to object storage (such as S3, GCS, Minio).
- [ ] Alerting on LogsQL queries.
- [ ] Data migration tool from Grafana Loki to VictoriaLogs (similar to [vmctl](https://docs.victoriametrics.com/vmctl/)).
- [ ] Data migration tool from Grafana Loki to VictoriaLogs (similar to [vmctl](../vmctl.md)).


@@ -12,7 +12,7 @@ aliases:
- /victorialogs/data-ingestion/filebeat.html
---
Specify the [`output.elasticsearch`](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) section in the `filebeat.yml`
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
for sending the collected logs to [VictoriaLogs](../README.md):
```yaml
output.elasticsearch:
@@ -25,11 +25,11 @@ output.elasticsearch:
Substitute the `localhost:9428` address inside `hosts` section with the real TCP address of VictoriaLogs.
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on the `parameters` section.
See [these docs](./#http-parameters) for details on the `parameters` section.
It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters)
It is recommended to verify whether the initial setup generates the needed [log fields](../keyConcepts.md#data-model)
and uses the correct [stream fields](../keyConcepts.md#stream-fields).
This can be done by specifying the `debug` [parameter](./#http-parameters)
and then inspecting the VictoriaLogs logs:
```yaml
@@ -42,8 +42,8 @@ output.elasticsearch:
debug: "1"
```
If some [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) must be skipped
during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters).
If some [log fields](../keyConcepts.md#data-model) must be skipped
during data ingestion, then they can be put into `ignore_fields` [parameter](./#http-parameters).
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
```yaml
@@ -83,7 +83,7 @@ output.elasticsearch:
compression_level: 1
```
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy).
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](../#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via `headers` in the `output.elasticsearch` section.
For example, the following `filebeat.yml` config instructs Filebeat to store the data to `(AccountID=12, ProjectID=34)` tenant:
@@ -117,7 +117,7 @@ command-line flag.
See also:
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
- [Data ingestion troubleshooting](./#troubleshooting).
- [How to query VictoriaLogs](../querying/README.md).
- [Filebeat `output.elasticsearch` docs](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html).
- [Docker-compose demo for Filebeat integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/filebeat-docker).


@@ -15,7 +15,7 @@ aliases:
# Fluentbit setup
Specify the [http output](https://docs.fluentbit.io/manual/pipeline/outputs/http) section in the `fluentbit.conf`
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
for sending the collected logs to [VictoriaLogs](../README.md):
```fluentbit
[Output]
@@ -30,11 +30,11 @@ for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.co
Substitute the host (`localhost`) and port (`9428`) with the real TCP address of VictoriaLogs.
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on the query args specified in the `uri`.
See [these docs](./#http-parameters) for details on the query args specified in the `uri`.
It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) in the `uri`
It is recommended to verify whether the initial setup generates the needed [log fields](../keyConcepts.md#data-model)
and uses the correct [stream fields](../keyConcepts.md#stream-fields).
This can be done by specifying the `debug` [parameter](./#http-parameters) in the `uri`
and then inspecting the VictoriaLogs logs:
```fluentbit
@@ -48,8 +48,8 @@ and inspecting VictoriaLogs logs then:
json_date_format iso8601
```
If some [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) must be skipped
during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters).
If some [log fields](../keyConcepts.md#data-model) must be skipped
during data ingestion, then they can be put into `ignore_fields` [parameter](./#http-parameters).
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
```fluentbit
@@ -78,7 +78,7 @@ This usually allows saving network bandwidth and costs by up to 5 times:
compress gzip
```
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/keyconcepts/#multitenancy).
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](../keyConcepts.md#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via the `header` options.
For example, the following `fluentbit.conf` config instructs Fluentbit to store the data to `(AccountID=12, ProjectID=34)` tenant:
@@ -97,7 +97,7 @@ For example, the following `fluentbit.conf` config instructs Fluentbit to store
See also:
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
- [Data ingestion troubleshooting](./#troubleshooting).
- [How to query VictoriaLogs](../querying/README.md).
- [Fluentbit HTTP output config docs](https://docs.fluentbit.io/manual/pipeline/outputs/http).
- [Docker-compose demo for Fluentbit integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentbit-docker).


@@ -12,7 +12,7 @@ aliases:
- /victorialogs/data-ingestion/Logstash.html
---
Specify the [`output.elasticsearch`](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) section in the `logstash.conf` file
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
for sending the collected logs to [VictoriaLogs](../README.md):
```logstash
output {
@@ -29,11 +29,11 @@ output {
Substitute `localhost:9428` address inside `hosts` with the real TCP address of VictoriaLogs.
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on the `parameters` section.
See [these docs](./#http-parameters) for details on the `parameters` section.
It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters)
It is recommended to verify whether the initial setup generates the needed [log fields](../keyConcepts.md#data-model)
and uses the correct [stream fields](../keyConcepts.md#stream-fields).
This can be done by specifying the `debug` [parameter](./#http-parameters)
and then inspecting the VictoriaLogs logs:
```logstash
@@ -50,8 +50,8 @@ output {
}
```
If some [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) must be skipped
during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters).
If some [log fields](../keyConcepts.md#data-model) must be skipped
during data ingestion, then they can be put into `ignore_fields` [parameter](./#http-parameters).
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
```logstash
@@ -85,7 +85,7 @@ output {
}
```
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy).
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](../#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via `custom_headers` in the `output.elasticsearch` section.
For example, the following `logstash.conf` config instructs Logstash to store the data to `(AccountID=12, ProjectID=34)` tenant:
@@ -108,7 +108,7 @@ output {
See also:
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
- [Data ingestion troubleshooting](./#troubleshooting).
- [How to query VictoriaLogs](../querying/README.md).
- [Logstash `output.elasticsearch` docs](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html).
- [Docker-compose demo for Logstash integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/logstash).


@@ -15,7 +15,7 @@ aliases:
Promtail can be configured to send the collected logs to VictoriaLogs according to the following docs.
Specify the [`clients`](https://grafana.com/docs/loki/latest/clients/promtail/configuration/#clients) section in the configuration file
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
for sending the collected logs to [VictoriaLogs](../README.md):
```yaml
clients:
@@ -24,18 +24,18 @@ clients:
Substitute `localhost:9428` address inside `clients` with the real TCP address of VictoriaLogs.
By default VictoriaLogs stores all the ingested logs into a single [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
By default VictoriaLogs stores all the ingested logs into a single [log stream](../keyConcepts.md#stream-fields).
Storing all the logs in a single log stream may be inefficient, so it is recommended to specify the `_stream_fields` query arg
with the list of labels, which uniquely identify log streams. There is no need to specify all the labels Promtail generates there -
it is usually enough to specify the `instance` and `job` labels. See [these docs](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)
it is usually enough to specify the `instance` and `job` labels. See [these docs](../keyConcepts.md#stream-fields)
for details.
See also [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on other supported query args.
See also [these docs](./#http-parameters) for details on other supported query args.
There is no need to specify `_msg_field` and `_time_field` query args, since VictoriaLogs automatically extracts the log message and timestamp from the ingested Loki data.
It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters)
It is recommended to verify whether the initial setup generates the needed [log fields](../keyConcepts.md#data-model)
and uses the correct [stream fields](../keyConcepts.md#stream-fields).
This can be done by specifying the `debug` [parameter](./#http-parameters)
and then inspecting the VictoriaLogs logs:
```yaml
@@ -43,8 +43,8 @@ clients:
- url: http://localhost:9428/insert/loki/api/v1/push?_stream_fields=instance,job,host,app&debug=1
```
If some [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) must be skipped
during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters).
If some [log fields](../keyConcepts.md#data-model) must be skipped
during data ingestion, then they can be put into `ignore_fields` [parameter](./#http-parameters).
For example, the following config instructs VictoriaLogs to ignore `filename` and `stream` fields in the ingested logs:
```yaml
@@ -52,11 +52,11 @@ clients:
- url: http://localhost:9428/insert/loki/api/v1/push?_stream_fields=instance,job,host,app&ignore_fields=filename,stream
```
By default the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy).
By default the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](../#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via the `tenant_id` field
in the [Loki client configuration](https://grafana.com/docs/loki/latest/clients/promtail/configuration/#clients).
The `tenant_id` must have `AccountID:ProjectID` format, where `AccountID` and `ProjectID` are arbitrary uint32 numbers.
For example, the following config instructs VictoriaLogs to store logs in the `(AccountID=12, ProjectID=34)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy):
For example, the following config instructs VictoriaLogs to store logs in the `(AccountID=12, ProjectID=34)` [tenant](../#multitenancy):
```yaml
clients:
@@ -64,6 +64,6 @@ clients:
tenant_id: "12:34"
```
The ingested log entries can be queried according to [these docs](https://docs.victoriametrics.com/victorialogs/querying/).
The ingested log entries can be queried according to [these docs](../querying/README.md).
See also [data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting) docs.
See also [data ingestion troubleshooting](./#troubleshooting) docs.


@@ -1,13 +1,13 @@
[VictoriaLogs](https://docs.victoriametrics.com/victorialogs/) can accept logs from the following log collectors:
[VictoriaLogs](../README.md) can accept logs from the following log collectors:
- Syslog, Rsyslog and Syslog-ng - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/).
- Filebeat - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/filebeat/).
- Fluentbit - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/fluentbit/).
- Logstash - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/logstash/).
- Vector - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/vector/).
- Promtail (aka Grafana Loki) - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/promtail/).
- Syslog, Rsyslog and Syslog-ng - see [these docs](./syslog.md).
- Filebeat - see [these docs](./Filebeat.md).
- Fluentbit - see [these docs](./Fluentbit.md).
- Logstash - see [these docs](./Logstash.md).
- Vector - see [these docs](./Vector.md).
- Promtail (aka Grafana Loki) - see [these docs](./Promtail.md).
The ingested logs can be queried according to [these docs](https://docs.victoriametrics.com/victorialogs/querying/).
The ingested logs can be queried according to [these docs](../querying/README.md).
See also:
@@ -41,18 +41,18 @@ echo '{"create":{}}
It is possible to push thousands of log lines in a single request to this API.
If the [timestamp field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) is set to `"0"`,
If the [timestamp field](../keyConcepts.md#time-field) is set to `"0"`,
then the current timestamp on the VictoriaLogs side is used for each ingested log line.
Otherwise the timestamp field must be in the [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) format. For example, `2023-06-20T15:32:10Z`.
Optional fractional part of seconds can be specified after the dot - `2023-06-20T15:32:10.123Z`.
Timezone can be specified instead of `Z` suffix - `2023-06-20T15:32:10+02:00`.
See [these docs](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) for details on fields,
See [these docs](../keyConcepts.md#data-model) for details on fields,
which must be present in the ingested log messages.
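For reference, a minimal bulk request might look like this (a sketch assuming a local VictoriaLogs instance on `localhost:9428`; the field names are illustrative):

```sh
# Push a single log entry via the Elasticsearch bulk API.
# The "0" timestamp tells VictoriaLogs to use the current time.
echo '{"create":{}}
{"_msg":"cannot open file","_time":"0","host.name":"host123"}
' | curl -X POST -H 'Content-Type: application/json' --data-binary @- http://localhost:9428/insert/elasticsearch/_bulk
```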
The API accepts various HTTP parameters, which can change the data ingestion behavior - see [these docs](#http-parameters) for details.
The following command verifies that the data has been successfully ingested into VictoriaLogs by [querying](https://docs.victoriametrics.com/victorialogs/querying/) it:
The following command verifies that the data has been successfully ingested into VictoriaLogs by [querying](../querying/README.md) it:
```sh
curl http://localhost:9428/select/logsql/query -d 'query=host.name:host123'
@@ -64,8 +64,8 @@ The command should return the following response:
{"_msg":"cannot open file","_stream":"{}","_time":"2023-06-21T04:24:24Z","host.name":"host123"}
```
The response by default contains all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
See [how to query specific fields](https://docs.victoriametrics.com/victorialogs/logsql/#querying-specific-fields).
The response by default contains all the [log fields](../keyConcepts.md#data-model).
See [how to query specific fields](../LogsQL.md#querying-specific-fields).
The duration of requests to `/insert/elasticsearch/_bulk` can be monitored with `vl_http_request_duration_seconds{path="/insert/elasticsearch/_bulk"}` metric.
@@ -73,7 +73,7 @@ See also:
- [How to debug data ingestion](#troubleshooting).
- [HTTP parameters, which can be passed to the API](#http-parameters).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
- [How to query VictoriaLogs](../querying/README.md).
### JSON stream API
@@ -91,18 +91,18 @@ echo '{ "log": { "level": "info", "message": "hello world" }, "date": "0", "stre
It is possible to push an unlimited number of log lines in a single request to this API.
If the [timestamp field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) is set to `"0"`,
If the [timestamp field](../keyConcepts.md#time-field) is set to `"0"`,
then the current timestamp on the VictoriaLogs side is used for each ingested log line.
Otherwise the timestamp field must be in the [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) format. For example, `2023-06-20T15:32:10Z`.
Optional fractional part of seconds can be specified after the dot - `2023-06-20T15:32:10.123Z`.
Timezone can be specified instead of `Z` suffix - `2023-06-20T15:32:10+02:00`.
See [these docs](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) for details on fields,
See [these docs](../keyConcepts.md#data-model) for details on fields,
which must be present in the ingested log messages.
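For example, a single log line with an explicit ISO8601 timestamp could be pushed like this (a sketch assuming a local instance; `message`, `date` and `stream` are illustrative field names mapped via the query args described in [HTTP parameters](#http-parameters)):

```sh
# Push one log line to the JSON stream API; every request line is a separate log entry.
echo '{"message":"oh no!","date":"2023-06-20T15:32:10.567Z","stream":"stream1"}' | \
  curl -X POST -H 'Content-Type: application/json' --data-binary @- \
  'http://localhost:9428/insert/jsonline?_msg_field=message&_time_field=date&_stream_fields=stream'
```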
The API accepts various HTTP parameters, which can change the data ingestion behavior - see [these docs](#http-parameters) for details.
The following command verifies that the data has been successfully ingested into VictoriaLogs by [querying](https://docs.victoriametrics.com/victorialogs/querying/) it:
The following command verifies that the data has been successfully ingested into VictoriaLogs by [querying](../querying/README.md) it:
```sh
curl http://localhost:9428/select/logsql/query -d 'query=log.level:*'
@@ -116,8 +116,8 @@ The command should return the following response:
{"_msg":"oh no!","_stream":"{stream=\"stream1\"}","_time":"2023-06-20T15:32:10.567Z","log.level":"error"}
```
The response by default contains all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
See [how to query specific fields](https://docs.victoriametrics.com/victorialogs/logsql/#querying-specific-fields).
The response by default contains all the [log fields](../keyConcepts.md#data-model).
See [how to query specific fields](../LogsQL.md#querying-specific-fields).
The duration of requests to `/insert/jsonline` can be monitored with `vl_http_request_duration_seconds{path="/insert/jsonline"}` metric.
@@ -125,7 +125,7 @@ See also:
- [How to debug data ingestion](#troubleshooting).
- [HTTP parameters, which can be passed to the API](#http-parameters).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
- [How to query VictoriaLogs](../querying/README.md).
### Loki JSON API
@@ -143,7 +143,7 @@ It is possible to push thousands of log streams and log lines in a single reques
The API accepts various HTTP parameters, which can change the data ingestion behavior - see [these docs](#http-parameters) for details.
There is no need to specify `_msg_field` and `_time_field` query args, since VictoriaLogs automatically extracts the log message and timestamp from the ingested Loki data.
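A minimal push request might look as follows (a sketch assuming a local instance; the stream labels and message are illustrative):

```sh
# Push a single log line in the Loki JSON format.
# The "0" timestamp makes VictoriaLogs use the current ingestion time.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:9428/insert/loki/api/v1/push?_stream_fields=instance,job' \
  --data-raw '{"streams":[{"stream":{"instance":"host123","job":"app42"},"values":[["0","foo fizzbuzz bar"]]}]}'
```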
The following command verifies that the data has been successfully ingested into VictoriaLogs by [querying](https://docs.victoriametrics.com/victorialogs/querying/) it:
The following command verifies that the data has been successfully ingested into VictoriaLogs by [querying](../querying/README.md) it:
```sh
curl http://localhost:9428/select/logsql/query -d 'query=fizzbuzz'
@@ -155,8 +155,8 @@ The command should return the following response:
{"_msg":"foo fizzbuzz bar","_stream":"{instance=\"host123\",job=\"app42\"}","_time":"2023-07-20T23:01:19.288676497Z"}
```
The response by default contains all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
See [how to query specific fields](https://docs.victoriametrics.com/victorialogs/logsql/#querying-specific-fields).
The response by default contains all the [log fields](../keyConcepts.md#data-model).
See [how to query specific fields](../LogsQL.md#querying-specific-fields).
The duration of requests to `/insert/loki/api/v1/push` can be monitored with `vl_http_request_duration_seconds{path="/insert/loki/api/v1/push"}` metric.
@@ -164,28 +164,28 @@ See also:
- [How to debug data ingestion](#troubleshooting).
- [HTTP parameters, which can be passed to the API](#http-parameters).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
- [How to query VictoriaLogs](../querying/README.md).
### HTTP parameters
VictoriaLogs accepts the following parameters at [data ingestion HTTP APIs](#http-apis):
- `_msg_field` - it must contain the name of the [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
with the [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field) generated by the log shipper.
- `_msg_field` - it must contain the name of the [log field](../keyConcepts.md#data-model)
with the [log message](../keyConcepts.md#message-field) generated by the log shipper.
This is usually the `message` field for Filebeat and Logstash.
If the `_msg_field` parameter isn't set, then VictoriaLogs reads the log message from the `_msg` field.
- `_time_field` - it must contain the name of the [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
with the [log timestamp](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) generated by the log shipper.
- `_time_field` - it must contain the name of the [log field](../keyConcepts.md#data-model)
with the [log timestamp](../keyConcepts.md#time-field) generated by the log shipper.
This is usually the `@timestamp` field for Filebeat and Logstash.
If the `_time_field` parameter isn't set, then VictoriaLogs reads the timestamp from the `_time` field.
If this field doesn't exist, then the current timestamp is used.
- `_stream_fields` - it should contain a comma-separated list of [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) names,
which uniquely identify every [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) collected by the log shipper.
- `_stream_fields` - it should contain a comma-separated list of [log field](../keyConcepts.md#data-model) names,
which uniquely identify every [log stream](../keyConcepts.md#stream-fields) collected by the log shipper.
If the `_stream_fields` parameter isn't set, then all the ingested logs are written to the default log stream - `{}`.
- `ignore_fields` - this parameter may contain the list of [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) names,
- `ignore_fields` - this parameter may contain the list of [log field](../keyConcepts.md#data-model) names,
which must be ignored during data ingestion.
- `debug` - if this parameter is set to `1`, then the ingested logs aren't stored in VictoriaLogs. Instead,
@@ -196,7 +196,7 @@ See also [HTTP headers](#http-headers).
### HTTP headers
VictoriaLogs accepts optional `AccountID` and `ProjectID` headers at [data ingestion HTTP APIs](#http-apis).
These headers may contain the tenant to ingest the data into. See [multitenancy docs](https://docs.victoriametrics.com/victorialogs/#multitenancy) for details.
These headers may contain the tenant to ingest the data into. See [multitenancy docs](../#multitenancy) for details.
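For example, a sketch of ingesting a log line into the `(AccountID=12, ProjectID=34)` tenant via these headers (assuming the JSON stream API on a local instance):

```sh
# The AccountID and ProjectID headers select the target tenant.
echo '{"_msg":"hello from tenant 12:34","_time":"0"}' | \
  curl -X POST -H 'AccountID: 12' -H 'ProjectID: 34' \
  --data-binary @- http://localhost:9428/insert/jsonline
```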
## Troubleshooting
@@ -206,35 +206,35 @@ The following command can be used for verifying whether the data is successfully
curl http://localhost:9428/select/logsql/query -d 'query=*' | head
```
This command selects all the data ingested into VictoriaLogs via [HTTP query API](https://docs.victoriametrics.com/victorialogs/querying/#http-api)
using [any value filter](https://docs.victoriametrics.com/victorialogs/logsql/#any-value-filter),
while `head` cancels query execution after reading the first 10 log lines. See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#command-line)
This command selects all the data ingested into VictoriaLogs via [HTTP query API](../querying/#http-api)
using [any value filter](../LogsQL.md#any-value-filter),
while `head` cancels query execution after reading the first 10 log lines. See [these docs](../querying/#command-line)
for more details on how `head` integrates with VictoriaLogs.
The response by default contains all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
See [how to query specific fields](https://docs.victoriametrics.com/victorialogs/logsql/#querying-specific-fields).
The response by default contains all the [log fields](../keyConcepts.md#data-model).
See [how to query specific fields](../LogsQL.md#querying-specific-fields).
VictoriaLogs provides the following command-line flags, which can help with debugging data ingestion issues:
- `-logNewStreams` - if this flag is passed to VictoriaLogs, then it logs all the newly
registered [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
This may help with debugging [high cardinality issues](https://docs.victoriametrics.com/victorialogs/keyconcepts/#high-cardinality).
registered [log streams](../keyConcepts.md#stream-fields).
This may help with debugging [high cardinality issues](../keyConcepts.md#high-cardinality).
- `-logIngestedRows` - if this flag is passed to VictoriaLogs, then it logs all the ingested
[log entries](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
[log entries](../keyConcepts.md#data-model).
See also `debug` [parameter](#http-parameters).
VictoriaLogs exposes various [metrics](https://docs.victoriametrics.com/victorialogs/#monitoring), which may help with debugging data ingestion issues:
VictoriaLogs exposes various [metrics](../#monitoring), which may help with debugging data ingestion issues:
- `vl_rows_ingested_total` - the number of ingested [log entries](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
- `vl_rows_ingested_total` - the number of ingested [log entries](../keyConcepts.md#data-model)
since the last VictoriaLogs restart. If this number increases over time, then logs are successfully ingested into VictoriaLogs.
The ingested logs can be inspected in the following ways:
- By passing the `debug=1` parameter to every request to [data ingestion APIs](#http-apis). The ingested rows aren't stored in VictoriaLogs
in this case. Instead, they are logged, so they can be investigated later.
The `vl_rows_dropped_total` [metric](https://docs.victoriametrics.com/victorialogs/#monitoring) is incremented for each logged row.
The `vl_rows_dropped_total` [metric](../#monitoring) is incremented for each logged row.
- By passing `-logIngestedRows` command-line flag to VictoriaLogs. In this case it logs all the ingested data, so it can be investigated later.
- `vl_streams_created_total` - the number of created [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)
- `vl_streams_created_total` - the number of created [log streams](../keyConcepts.md#stream-fields)
since the last VictoriaLogs restart. If this metric grows rapidly during extended periods of time, then this may lead
to [high cardinality issues](https://docs.victoriametrics.com/victorialogs/keyconcepts/#high-cardinality).
to [high cardinality issues](../keyConcepts.md#high-cardinality).
The newly created log streams can be inspected in logs by passing `-logNewStreams` command-line flag to VictoriaLogs.
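Putting these pieces together, a debugging session might look like this (a sketch assuming a local single-node VictoriaLogs and the standard `/metrics` endpoint described in the monitoring docs):

```sh
# Start VictoriaLogs with verbose ingestion logging.
./victoria-logs -logNewStreams -logIngestedRows

# In another terminal: check that the ingestion counters grow over time.
curl -s http://localhost:9428/metrics | grep -E 'vl_rows_ingested_total|vl_streams_created_total'
```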
## Log collectors and data ingestion formats
@@ -243,10 +243,10 @@ Here is the list of log collectors and their ingestion formats supported by Vict
| How to setup the collector | Format: Elasticsearch | Format: JSON Stream | Format: Loki | Format: syslog |
|----------------------------|-----------------------|---------------------|--------------|----------------|
| [Rsyslog](https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/) | [Yes](https://www.rsyslog.com/doc/configuration/modules/omelasticsearch.html) | No | No | [Yes](https://www.rsyslog.com/doc/configuration/modules/omfwd.html) |
| [Syslog-ng](https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/) | Yes, [v1](https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.16/administration-guide/28#TOPIC-956489), [v2](https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.16/administration-guide/29#TOPIC-956494) | No | No | [Yes](https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.16/administration-guide/44#TOPIC-956553) |
| [Filebeat](https://docs.victoriametrics.com/victorialogs/data-ingestion/filebeat/) | [Yes](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) | No | No | No |
| [Fluentbit](https://docs.victoriametrics.com/victorialogs/data-ingestion/fluentbit/) | No | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/http) | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/loki) | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/syslog) |
| [Logstash](https://docs.victoriametrics.com/victorialogs/data-ingestion/logstash/) | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) | No | No | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-syslog.html) |
| [Vector](https://docs.victoriametrics.com/victorialogs/data-ingestion/vector/) | [Yes](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) | [Yes](https://vector.dev/docs/reference/configuration/sinks/http/) | [Yes](https://vector.dev/docs/reference/configuration/sinks/loki/) | No |
| [Promtail](https://docs.victoriametrics.com/victorialogs/data-ingestion/promtail/) | No | No | [Yes](https://grafana.com/docs/loki/latest/clients/promtail/configuration/#clients) | No |
| [Rsyslog](./syslog.md) | [Yes](https://www.rsyslog.com/doc/configuration/modules/omelasticsearch.html) | No | No | [Yes](https://www.rsyslog.com/doc/configuration/modules/omfwd.html) |
| [Syslog-ng](./syslog.md) | Yes, [v1](https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.16/administration-guide/28#TOPIC-956489), [v2](https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.16/administration-guide/29#TOPIC-956494) | No | No | [Yes](https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.16/administration-guide/44#TOPIC-956553) |
| [Filebeat](./Filebeat.md) | [Yes](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) | No | No | No |
| [Fluentbit](./Fluentbit.md) | No | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/http) | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/loki) | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/syslog) |
| [Logstash](./Logstash.md) | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) | No | No | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-syslog.html) |
| [Vector](./Vector.md) | [Yes](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) | [Yes](https://vector.dev/docs/reference/configuration/sinks/http/) | [Yes](https://vector.dev/docs/reference/configuration/sinks/loki/) | No |
| [Promtail](./Promtail.md) | No | No | [Yes](https://grafana.com/docs/loki/latest/clients/promtail/configuration/#clients) | No |


@@ -14,7 +14,7 @@ aliases:
## Elasticsearch sink
Specify the [Elasticsearch sink type](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) in the `vector.toml`
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
for sending the collected logs to [VictoriaLogs](../README.md):
```toml
[sinks.vlogs]
@@ -35,12 +35,12 @@ Substitute the `localhost:9428` address inside `endpoints` section with the real
Replace `your_input` with the name of the `inputs` section, which collects logs. See [these docs](https://vector.dev/docs/reference/configuration/sources/) for details.
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on parameters specified
See [these docs](./#http-parameters) for details on parameters specified
in the `[sinks.vlogs.query]` section.
It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters)
It is recommended to verify whether the initial setup generates the needed [log fields](../keyConcepts.md#data-model)
and uses the correct [stream fields](../keyConcepts.md#stream-fields).
This can be done by specifying the `debug` [parameter](./#http-parameters)
in the `[sinks.vlogs.query]` section and then inspecting the VictoriaLogs logs:
```toml
@@ -59,8 +59,8 @@ in the `[sinks.vlogs.query]` section and inspecting VictoriaLogs logs then:
debug = "1"
```
If some [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) must be skipped
during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters).
If some [log fields](../keyConcepts.md#data-model) must be skipped
during data ingestion, then they can be put into `ignore_fields` [parameter](./#http-parameters).
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
```toml
@@ -119,7 +119,7 @@ This usually allows saving network bandwidth and costs by up to 5 times:
_stream_fields = "host,container_name"
```
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/keyconcepts/#multitenancy).
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](../keyConcepts.md#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via the `[sinks.vlogs.request.headers]` section.
For example, the following `vector.toml` config instructs Vector to store the data to `(AccountID=12, ProjectID=34)` tenant:
@@ -145,7 +145,7 @@ For example, the following `vector.toml` config instructs Vector to store the da
## HTTP sink
Vector can be configured with the [HTTP](https://vector.dev/docs/reference/configuration/sinks/http/) sink type
for sending data to [JSON stream API](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api):
for sending data to [JSON stream API](./#json-stream-api):
```toml
[sinks.vlogs]
@@ -163,7 +163,7 @@ for sending data to [JSON stream API](https://docs.victoriametrics.com/victorial
See also:
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
- [Data ingestion troubleshooting](./#troubleshooting).
- [How to query VictoriaLogs](../querying/README.md).
- [Elasticsearch output docs for Vector](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/).
- [Docker-compose demo for Filebeat integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/vector-docker).


@@ -7,7 +7,7 @@ menu:
parent: "victorialogs-data-ingestion"
weight: 10
---
[VictoriaLogs](https://docs.victoriametrics.com/victorialogs/) can accept logs in [Syslog formats](https://en.wikipedia.org/wiki/Syslog) at the specified TCP and UDP addresses
[VictoriaLogs](../README.md) can accept logs in [Syslog formats](https://en.wikipedia.org/wiki/Syslog) at the specified TCP and UDP addresses
via `-syslog.listenAddr.tcp` and `-syslog.listenAddr.udp` command-line flags. The following syslog formats are supported:
- [RFC3164](https://datatracker.ietf.org/doc/html/rfc3164) aka `<PRI>MMM DD hh:mm:ss HOSTNAME APP-NAME[PROCID]: MESSAGE`
@@ -36,12 +36,12 @@ VictoriaLogs can accept logs from the following syslog collectors:
Multiple logs in Syslog format can be ingested via a single TCP connection or via a single UDP packet - just put every log on a separate line
and delimit them with `\n` char.
VictoriaLogs automatically extracts the following [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
VictoriaLogs automatically extracts the following [log fields](../keyConcepts.md#data-model)
from the received Syslog lines:
- [`_time`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) - log timestamp. See also [log timestamps](#log-timestamps)
- [`_msg`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field) - the `MESSAGE` field from the supported syslog formats above
- `hostname`, `app_name` and `proc_id` - [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) for the unique identification
- [`_time`](../keyConcepts.md#time-field) - log timestamp. See also [log timestamps](#log-timestamps)
- [`_msg`](../keyConcepts.md#message-field) - the `MESSAGE` field from the supported syslog formats above
- `hostname`, `app_name` and `proc_id` - [stream fields](../keyConcepts.md#stream-fields) for the unique identification
of every log stream
- `priority`, `facility` and `severity` - these fields are extracted from `<PRI>` field
- `format` - this field is set to either `rfc3164` or `rfc5424` depending on the format of the parsed syslog line
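These extracted fields can be used directly in queries. For example, a sketch of selecting recent logs from a particular host via the extracted `hostname` field:

```sh
# Return syslog entries from host123 ingested during the last 5 minutes.
curl http://localhost:9428/select/logsql/query -d 'query=_time:5m hostname:host123'
```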
@@ -58,8 +58,8 @@ which parses syslog timestamps in `rfc3164` using `Europe/Berlin` timezone:
./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.timezone='Europe/Berlin'
```
The ingested logs can be queried via [logs querying API](https://docs.victoriametrics.com/victorialogs/querying/#http-api). For example, the following command
returns ingested logs for the last 5 minutes by using [time filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter):
The ingested logs can be queried via [logs querying API](../querying/#http-api). For example, the following command
returns ingested logs for the last 5 minutes by using [time filter](../LogsQL.md#time-filter):
```sh
curl http://localhost:9428/select/logsql/query -d 'query=_time:5m'
@@ -71,21 +71,21 @@ See also:
- [Security](#security)
- [Compression](#compression)
- [Multitenancy](#multitenancy)
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
- [Data ingestion troubleshooting](./#troubleshooting).
- [How to query VictoriaLogs](../querying/README.md).
## Log timestamps
By default VictoriaLogs uses the timestamp from the parsed Syslog message as [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field).
By default VictoriaLogs uses the timestamp from the parsed Syslog message as [`_time` field](../keyConcepts.md#time-field).
Sometimes the ingested Syslog messages may contain incorrect timestamps (for example, timestamps with an incorrect timezone). In this case VictoriaLogs can be configured
to use the log ingestion timestamp as the [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field). This can be done by specifying
to use the log ingestion timestamp as the [`_time` field](../keyConcepts.md#time-field). This can be done by specifying
`-syslog.useLocalTimestamp.tcp` command-line flag for the corresponding `-syslog.listenAddr.tcp` address:
```sh
./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.useLocalTimestamp.tcp
```
In this case the original timestamp from the Syslog message is stored in `timestamp` [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
In this case the original timestamp from the Syslog message is stored in `timestamp` [log field](../keyConcepts.md#data-model).
The `-syslog.useLocalTimestamp.udp` command-line flag can be used to instruct VictoriaLogs to use local timestamps for the ingested logs
via the corresponding `-syslog.listenAddr.udp` address:
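A sketch of such a command, by analogy with the TCP example above (the UDP port is illustrative):

```sh
# Listen for syslog messages on UDP port 514 and use the local
# ingestion time as the log timestamp.
./victoria-logs -syslog.listenAddr.udp=:514 -syslog.useLocalTimestamp.udp
```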
@@ -123,7 +123,7 @@ For example, the following command starts VictoriaLogs, which accepts gzip-compr
## Multitenancy
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy).
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](../#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via the `-syslog.tenantID.tcp` or `-syslog.tenantID.udp` command-line flags
depending on whether TCP or UDP ports are listened for syslog messages.
For example, the following command starts VictoriaLogs, which writes syslog messages received at TCP port 514, to `(AccountID=12, ProjectID=34)` tenant:
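A sketch of such a command, assuming the `-syslog.tenantID.tcp` flag described above:

```sh
# Accept syslog messages on TCP port 514 and store them in the 12:34 tenant.
./victoria-logs -syslog.listenAddr.tcp=:514 -syslog.tenantID.tcp=12:34
```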
@@ -136,8 +136,8 @@ For example, the following command starts VictoriaLogs, which writes syslog mess
VictoriaLogs can accept syslog messages via multiple TCP and UDP ports with individual configurations for [log timestamps](#log-timestamps), [compression](#compression), [security](#security)
and [multitenancy](#multitenancy). Specify multiple command-line flags for this. For example, the following command starts VictoriaLogs,
which accepts gzip-compressed syslog messages via TCP port 514 at localhost interface and stores them to [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy) `123:0`,
plus it accepts TLS-encrypted syslog messages via TCP port 6514 and stores them to [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy) `567:0`:
which accepts gzip-compressed syslog messages via TCP port 514 at localhost interface and stores them to [tenant](../#multitenancy) `123:0`,
plus it accepts TLS-encrypted syslog messages via TCP port 6514 and stores them to [tenant](../#multitenancy) `567:0`:
```sh
./victoria-logs \


@@ -1,9 +1,10 @@
---
sort: 2
weight: 2
title: VictoriaLogs key concepts
title: Key concepts
menu:
docs:
identifier: victorialogs-key-concept
parent: "victorialogs"
weight: 2
title: Key concepts
@@ -12,7 +13,7 @@ aliases:
---
## Data model
[VictoriaLogs](https://docs.victoriametrics.com/victorialogs/) works with both structured and unstructured logs.
[VictoriaLogs](./README.md) works with both structured and unstructured logs.
Every log entry must contain at least a [log message field](#message-field) plus an arbitrary number of additional `key=value` fields.
A single log entry can be expressed as a single-level [JSON](https://www.json.org/json-en.html) object with string keys and string values.
For example:
@@ -53,7 +54,7 @@ since they have only one identical non-empty field - [`_msg`](#message-field):
```
VictoriaLogs automatically transforms multi-level JSON (aka nested JSON) into single-level JSON
during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/) according to the following rules:
during [data ingestion](./data-ingestion/README.md) according to the following rules:
- Nested dictionaries are flattened by concatenating dictionary keys with `.` char. For example, the following multi-level JSON
is transformed into the following single-level JSON:
@@ -76,7 +77,7 @@ during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-inges
}
```
- Arrays, numbers and boolean values are converted into strings. This simplifies [full-text search](https://docs.victoriametrics.com/victorialogs/logsql/) over such values.
- Arrays, numbers and boolean values are converted into strings. This simplifies [full-text search](./LogsQL.md) over such values.
For example, the following JSON with an array, a number and a boolean value is converted into the following JSON with string values:
```json
@@ -96,7 +97,7 @@ during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-inges
```
Both field name and field value may contain arbitrary chars. Such chars must be encoded
during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
during [data ingestion](./data-ingestion/README.md)
according to [JSON string encoding](https://www.rfc-editor.org/rfc/rfc7159.html#section-7).
Unicode chars must be encoded with [UTF-8](https://en.wikipedia.org/wiki/UTF-8) encoding:
@@ -107,8 +108,8 @@ Unicode chars must be encoded with [UTF-8](https://en.wikipedia.org/wiki/UTF-8)
}
```
VictoriaLogs automatically indexes all the fields in all the [ingested](https://docs.victoriametrics.com/victorialogs/data-ingestion/) logs.
This enables [full-text search](https://docs.victoriametrics.com/victorialogs/logsql/) across all the fields.
VictoriaLogs automatically indexes all the fields in all the [ingested](./data-ingestion/README.md) logs.
This enables [full-text search](./LogsQL.md) across all the fields.
VictoriaLogs supports the following special fields additionally to arbitrary [other fields](#other-field):
@@ -128,9 +129,9 @@ log entry, which can be ingested into VictoriaLogs:
```
If the actual log message is stored in a field other than `_msg`, then the real log message field can be specified
via `_msg_field` query arg during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
via `_msg_field` query arg during [data ingestion](./data-ingestion/README.md).
For example, if the log message is located in the `event.original` field, then specify the `_msg_field=event.original` query arg
during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
during [data ingestion](./data-ingestion/README.md).
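A sketch of what this looks like with the JSON stream API (the endpoint and query arg are described in the [data ingestion docs](./data-ingestion/README.md); the payload is illustrative):

```sh
# Read the log message from the event.original field during ingestion.
echo '{"event.original":"cannot open file","_time":"0"}' | \
  curl -X POST --data-binary @- \
  'http://localhost:9428/insert/jsonline?_msg_field=event.original'
```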
### Time field
@@ -147,13 +148,13 @@ For example, the following [log entry](#data-model) contains valid timestamp wit
```
If the actual timestamp is stored in a field other than `_time`, then it is possible to specify the real timestamp
field via `_time_field` query arg during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
field via `_time_field` query arg during [data ingestion](./data-ingestion/README.md).
For example, if the timestamp is located in the `event.created` field, then specify the `_time_field=event.created` query arg
during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
during [data ingestion](./data-ingestion/README.md).
If the `_time` field is missing, then the data ingestion time is used as the log entry timestamp.
The `_time` field is used in [time filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter) for quickly narrowing down
The `_time` field is used in [time filter](./LogsQL.md#time-filter) for quickly narrowing down
the search to a particular time range.
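For example, the following sketch selects logs for the last 30 minutes with a time filter, assuming VictoriaLogs listens on `localhost:9428` and is queried via the HTTP endpoint from the [querying docs](./querying/README.md):
```sh
# _time:30m narrows the search to the last 30 minutes
curl http://localhost:9428/select/logsql/query -d 'query=_time:30m'
```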
### Stream fields
@ -164,32 +165,32 @@ This may be either a single field such as `instance="host123:456"` or a set of f
`{kubernetes.namespace="...", kubernetes.node.name="...", kubernetes.pod.name="...", kubernetes.container.name="..."}`.
Log entries received from a single application instance form a **log stream** in VictoriaLogs.
VictoriaLogs optimizes storing and [querying](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter) of individual log streams.
VictoriaLogs optimizes storing and [querying](./LogsQL.md#stream-filter) of individual log streams.
This provides the following benefits:
- Reduced disk space usage, since a log stream from a single application instance is usually compressed better
than a mixed log stream from multiple distinct applications.
- Increased query performance, since VictoriaLogs needs to scan smaller amounts of data
when [searching by stream fields](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter).
when [searching by stream fields](./LogsQL.md#stream-filter).
Every ingested log entry is associated with a log stream. Every log stream consists of two fields:
- `_stream_id` - this is a unique identifier for the log stream. All the logs for the particular stream can be selected
via [`_stream_id:...` filter](https://docs.victoriametrics.com/victorialogs/logsql/#_stream_id-filter).
via [`_stream_id:...` filter](./LogsQL.md#_stream_id-filter).
- `_stream` - this field contains stream labels in a format similar to [labels in Prometheus metrics](https://docs.victoriametrics.com/keyconcepts/#labels):
- `_stream` - this field contains stream labels in a format similar to [labels in Prometheus metrics](../keyConcepts.md#labels):
```
{field1="value1", ..., fieldN="valueN"}
```
For example, if `host` and `app` fields are associated with the stream, then the `_stream` field will have `{host="host-123",app="my-app"}` value
for the log entry with `host="host-123"` and `app="my-app"` fields. The `_stream` field can be searched
with [stream filters](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter).
with [stream filters](./LogsQL.md#stream-filter).
By default the value of the `_stream` field is `{}`, since VictoriaLogs cannot automatically determine
which fields uniquely identify every log stream. This may lead to suboptimal resource usage and query performance.
Therefore it is recommended to specify stream-level fields via the `_stream_fields` query arg
during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
during [data ingestion](./data-ingestion/README.md).
For example, if logs from Kubernetes containers have the following fields:
```json
@ -203,7 +204,7 @@ For example, if logs from Kubernetes containers have the following fields:
```
then specify `_stream_fields=kubernetes.namespace,kubernetes.node.name,kubernetes.pod.name,kubernetes.container.name`
query arg during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/) in order to properly store
query arg during [data ingestion](./data-ingestion/README.md) in order to properly store
per-container logs into distinct streams.
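Here is a minimal ingestion sketch for such per-container streams, assuming the JSON lines endpoint at `/insert/jsonline` from the [data ingestion](./data-ingestion/README.md) docs and a local instance at `localhost:9428`:
```sh
# _stream_fields lists the fields which uniquely identify the log stream
echo '{"_msg":"OOMKilled","kubernetes.namespace":"prod","kubernetes.node.name":"node-1","kubernetes.pod.name":"app-0","kubernetes.container.name":"app"}' | \
  curl -X POST --data-binary @- \
  'http://localhost:9428/insert/jsonline?_stream_fields=kubernetes.namespace,kubernetes.node.name,kubernetes.pod.name,kubernetes.container.name'
```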
#### How to determine which fields must be associated with log streams?
@ -213,7 +214,7 @@ For example, `container`, `instance` and `host` are good candidates for stream f
Additional fields may be added to log streams if they **remain constant during application instance lifetime**.
For example, `namespace`, `node`, `pod` and `job` are good candidates for additional stream fields. Adding such fields to log streams
makes sense if you are going to use these fields during search and want to speed it up with [stream filters](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter).
makes sense if you are going to use these fields during search and want to speed it up with [stream filters](./LogsQL.md#stream-filter).
There is **no need to add all the constant fields to log streams**, since this may increase resource usage during data ingestion and querying.
@ -228,14 +229,14 @@ VictoriaLogs works perfectly with such fields unless they are associated with [l
**Never** associate high-cardinality fields with [log streams](#stream-fields), since this may lead to the following issues:
- Performance degradation during [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
and [querying](https://docs.victoriametrics.com/victorialogs/querying/)
- Performance degradation during [data ingestion](./data-ingestion/README.md)
and [querying](./querying/README.md)
- Increased memory usage
- Increased CPU usage
- Increased disk space usage
- Increased disk read / write IO
VictoriaLogs exposes `vl_streams_created_total` [metric](https://docs.victoriametrics.com/victorialogs/#monitoring),
VictoriaLogs exposes `vl_streams_created_total` [metric](./#monitoring),
which shows the number of created streams since the last VictoriaLogs restart. If this metric grows at a rapid rate
over a long period of time, then there is a high chance of the high-cardinality issues mentioned above.
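One way to watch this metric is to scrape the `/metrics` endpoint directly - a sketch, assuming VictoriaLogs listens on `localhost:9428`:
```sh
# Inspect the current number of created streams;
# a rapidly growing value hints at high-cardinality stream fields
curl -s http://localhost:9428/metrics | grep vl_streams_created_total
```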
VictoriaLogs can log all the newly registered streams when the `-logNewStreams` command-line flag is passed to it.
@ -244,8 +245,8 @@ This can help narrowing down and eliminating high-cardinality fields from [log s
### Other fields
Every ingested log entry may contain an arbitrary number of [fields](#data-model) in addition to [`_msg`](#message-field) and [`_time`](#time-field).
For example, `level`, `ip`, `user_id`, `trace_id`, etc. Such fields can be used for simplifying and optimizing [search queries](https://docs.victoriametrics.com/victorialogs/logsql/).
For example, `level`, `ip`, `user_id`, `trace_id`, etc. Such fields can be used for simplifying and optimizing [search queries](./LogsQL.md).
It is usually faster to search over a dedicated `trace_id` field instead of searching for the `trace_id` inside a long [log message](#message-field).
E.g. the `trace_id:="XXXX-YYYY-ZZZZ"` query usually works faster than the `_msg:"trace_id=XXXX-YYYY-ZZZZ"` query.
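As a sketch, both variants can be issued against the HTTP query endpoint from the [querying docs](./querying/README.md), assuming a local instance at `localhost:9428` and a hypothetical trace id value:
```sh
# Fast: exact match on the dedicated trace_id field
curl http://localhost:9428/select/logsql/query -d 'query=trace_id:="XXXX-YYYY-ZZZZ"'
# Slower: phrase search inside the long _msg field
curl http://localhost:9428/select/logsql/query -d 'query=_msg:"trace_id=XXXX-YYYY-ZZZZ"'
```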
See [LogsQL docs](https://docs.victoriametrics.com/victorialogs/logsql/) for more details.
See [LogsQL docs](./LogsQL.md) for more details.

View file

@ -9,22 +9,22 @@ menu:
---
## How to select recently ingested logs?
[Run](https://docs.victoriametrics.com/victorialogs/querying/) the following query:
[Run](./querying/README.md) the following query:
```logsql
_time:5m
```
It returns logs over the last 5 minutes by using [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter).
It returns logs over the last 5 minutes by using [`_time` filter](./LogsQL.md#time-filter).
The logs are returned in arbitrary order for performance reasons.
Add [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe) to the query if you need to sort
the returned logs by some field (usually [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field)):
Add [`sort` pipe](./LogsQL.md#sort-pipe) to the query if you need to sort
the returned logs by some field (usually [`_time` field](./keyConcepts.md#time-field)):
```logsql
_time:5m | sort by (_time)
```
If the number of returned logs is too big, it may be limited with the [`limit` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#limit-pipe).
If the number of returned logs is too big, it may be limited with the [`limit` pipe](./LogsQL.md#limit-pipe).
For example, the following query returns 10 most recent logs, which were ingested during the last 5 minutes:
```logsql
@ -38,33 +38,33 @@ See also:
## How to select logs with the given word in log message?
Just put the needed [word](https://docs.victoriametrics.com/victorialogs/logsql/#word) in the query.
For example, the following query returns all the logs with the `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word)
in [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field):
Just put the needed [word](./LogsQL.md#word) in the query.
For example, the following query returns all the logs with the `error` [word](./LogsQL.md#word)
in [log message](./keyConcepts.md#message-field):
```logsql
error
```
If the number of returned logs is too big, then add [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter)
for limiting the time range for the selected logs. For example, the following query returns logs with `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word)
If the number of returned logs is too big, then add [`_time` filter](./LogsQL.md#time-filter)
for limiting the time range for the selected logs. For example, the following query returns logs with `error` [word](./LogsQL.md#word)
over the last hour:
```logsql
error _time:1h
```
If the number of returned logs is still too big, then consider adding more specific [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters)
to the query. For example, the following query selects logs with `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word),
which do not contain `kubernetes` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word), over the last hour:
If the number of returned logs is still too big, then consider adding more specific [filters](./LogsQL.md#filters)
to the query. For example, the following query selects logs with `error` [word](./LogsQL.md#word),
which do not contain `kubernetes` [word](./LogsQL.md#word), over the last hour:
```logsql
error !kubernetes _time:1h
```
The logs are returned in arbitrary order for performance reasons. Add [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe)
for sorting logs by the needed [fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model). For example, the following query
sorts the selected logs by [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field):
The logs are returned in arbitrary order for performance reasons. Add [`sort` pipe](./LogsQL.md#sort-pipe)
for sorting logs by the needed [fields](./keyConcepts.md#data-model). For example, the following query
sorts the selected logs by [`_time` field](./keyConcepts.md#time-field):
```logsql
error _time:1h | sort by (_time)
@ -75,39 +75,39 @@ See also:
- [How to select logs with all the given words in log message?](#how-to-select-logs-with-all-the-given-words-in-log-message)
- [How to select logs with some of the given words in log message?](#how-to-select-logs-with-some-of-the-given-words-in-log-message)
- [How to skip logs with the given word in log message?](#how-to-skip-logs-with-the-given-word-in-log-message)
- [Filtering by phrase](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter)
- [Filtering by prefix](https://docs.victoriametrics.com/victorialogs/logsql/#prefix-filter)
- [Filtering by regular expression](https://docs.victoriametrics.com/victorialogs/logsql/#regexp-filter)
- [Filtering by substring](https://docs.victoriametrics.com/victorialogs/logsql/#substring-filter)
- [Filtering by phrase](./LogsQL.md#phrase-filter)
- [Filtering by prefix](./LogsQL.md#prefix-filter)
- [Filtering by regular expression](./LogsQL.md#regexp-filter)
- [Filtering by substring](./LogsQL.md#substring-filter)
## How to skip logs with the given word in log message?
Use [`NOT` logical filter](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter). For example, the following query returns all the logs
without the `INFO` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word) in the [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field):
Use [`NOT` logical filter](./LogsQL.md#logical-filter). For example, the following query returns all the logs
without the `INFO` [word](./LogsQL.md#word) in the [log message](./keyConcepts.md#message-field):
```logsql
!INFO
```
If the number of returned logs is too big, then add [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter)
If the number of returned logs is too big, then add [`_time` filter](./LogsQL.md#time-filter)
for limiting the time range for the selected logs. For example, the following query returns matching logs over the last hour:
```logsql
!INFO _time:1h
```
If the number of returned logs is still too big, then consider adding more specific [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters)
to the query. For example, the following query selects logs without `INFO` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word),
which contain `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word), over the last hour:
If the number of returned logs is still too big, then consider adding more specific [filters](./LogsQL.md#filters)
to the query. For example, the following query selects logs without `INFO` [word](./LogsQL.md#word),
which contain `error` [word](./LogsQL.md#word), over the last hour:
```logsql
!INFO error _time:1h
```
The logs are returned in arbitrary order for performance reasons. Add [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe)
for sorting logs by the needed [fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model). For example, the following query
sorts the selected logs by [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field):
The logs are returned in arbitrary order for performance reasons. Add [`sort` pipe](./LogsQL.md#sort-pipe)
for sorting logs by the needed [fields](./keyConcepts.md#data-model). For example, the following query
sorts the selected logs by [`_time` field](./keyConcepts.md#time-field):
```logsql
!INFO _time:1h | sort by (_time)
@ -117,42 +117,42 @@ See also:
- [How to select logs with all the given words in log message?](#how-to-select-logs-with-all-the-given-words-in-log-message)
- [How to select logs with some of given words in log message?](#how-to-select-logs-with-some-of-the-given-words-in-log-message)
- [Filtering by phrase](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter)
- [Filtering by prefix](https://docs.victoriametrics.com/victorialogs/logsql/#prefix-filter)
- [Filtering by regular expression](https://docs.victoriametrics.com/victorialogs/logsql/#regexp-filter)
- [Filtering by substring](https://docs.victoriametrics.com/victorialogs/logsql/#substring-filter)
- [Filtering by phrase](./LogsQL.md#phrase-filter)
- [Filtering by prefix](./LogsQL.md#prefix-filter)
- [Filtering by regular expression](./LogsQL.md#regexp-filter)
- [Filtering by substring](./LogsQL.md#substring-filter)
## How to select logs with all the given words in log message?
Just enumerate the needed [words](https://docs.victoriametrics.com/victorialogs/logsql/#word) in the query, by delimiting them with whitespace.
For example, the following query selects logs containing both `error` and `kubernetes` [words](https://docs.victoriametrics.com/victorialogs/logsql/#word)
in the [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field):
Just enumerate the needed [words](./LogsQL.md#word) in the query, by delimiting them with whitespace.
For example, the following query selects logs containing both `error` and `kubernetes` [words](./LogsQL.md#word)
in the [log message](./keyConcepts.md#message-field):
```logsql
error kubernetes
```
This query uses [`AND` logical filter](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter).
This query uses [`AND` logical filter](./LogsQL.md#logical-filter).
If the number of returned logs is too big, then add [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter)
If the number of returned logs is too big, then add [`_time` filter](./LogsQL.md#time-filter)
for limiting the time range for the selected logs. For example, the following query returns matching logs over the last hour:
```logsql
error kubernetes _time:1h
```
If the number of returned logs is still too big, then consider adding more specific [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters)
to the query. For example, the following query selects logs with `error` and `kubernetes` [words](https://docs.victoriametrics.com/victorialogs/logsql/#word)
from [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) containing `container="my-app"` field, over the last hour:
If the number of returned logs is still too big, then consider adding more specific [filters](./LogsQL.md#filters)
to the query. For example, the following query selects logs with `error` and `kubernetes` [words](./LogsQL.md#word)
from [log streams](./keyConcepts.md#stream-fields) containing `container="my-app"` field, over the last hour:
```logsql
error kubernetes _stream:{container="my-app"} _time:1h
```
The logs are returned in arbitrary order for performance reasons. Add [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe)
for sorting logs by the needed [fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model). For example, the following query
sorts the selected logs by [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field):
The logs are returned in arbitrary order for performance reasons. Add [`sort` pipe](./LogsQL.md#sort-pipe)
for sorting logs by the needed [fields](./keyConcepts.md#data-model). For example, the following query
sorts the selected logs by [`_time` field](./keyConcepts.md#time-field):
```logsql
error kubernetes _time:1h | sort by (_time)
@ -162,42 +162,42 @@ See also:
- [How to select logs with some of given words in log message?](#how-to-select-logs-with-some-of-the-given-words-in-log-message)
- [How to skip logs with the given word in log message?](#how-to-skip-logs-with-the-given-word-in-log-message)
- [Filtering by phrase](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter)
- [Filtering by prefix](https://docs.victoriametrics.com/victorialogs/logsql/#prefix-filter)
- [Filtering by regular expression](https://docs.victoriametrics.com/victorialogs/logsql/#regexp-filter)
- [Filtering by substring](https://docs.victoriametrics.com/victorialogs/logsql/#substring-filter)
- [Filtering by phrase](./LogsQL.md#phrase-filter)
- [Filtering by prefix](./LogsQL.md#prefix-filter)
- [Filtering by regular expression](./LogsQL.md#regexp-filter)
- [Filtering by substring](./LogsQL.md#substring-filter)
## How to select logs with some of the given words in log message?
Put the needed [words](https://docs.victoriametrics.com/victorialogs/logsql/#word) into `(...)`, by delimiting them with ` or `.
For example, the following query selects logs with `error`, `ERROR` or `Error` [words](https://docs.victoriametrics.com/victorialogs/logsql/#word)
in the [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field):
Put the needed [words](./LogsQL.md#word) into `(...)`, by delimiting them with ` or `.
For example, the following query selects logs with `error`, `ERROR` or `Error` [words](./LogsQL.md#word)
in the [log message](./keyConcepts.md#message-field):
```logsql
(error or ERROR or Error)
```
This query uses [`OR` logical filter](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter).
This query uses [`OR` logical filter](./LogsQL.md#logical-filter).
If the number of returned logs is too big, then add [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter)
If the number of returned logs is too big, then add [`_time` filter](./LogsQL.md#time-filter)
for limiting the time range for the selected logs. For example, the following query returns matching logs over the last hour:
```logsql
(error or ERROR or Error) _time:1h
```
If the number of returned logs is still too big, then consider adding more specific [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters)
to the query. For example, the following query selects logs with `error`, `ERROR` or `Error` [words](https://docs.victoriametrics.com/victorialogs/logsql/#word),
which do not contain `kubernetes` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word), over the last hour:
If the number of returned logs is still too big, then consider adding more specific [filters](./LogsQL.md#filters)
to the query. For example, the following query selects logs with `error`, `ERROR` or `Error` [words](./LogsQL.md#word),
which do not contain `kubernetes` [word](./LogsQL.md#word), over the last hour:
```logsql
(error or ERROR or Error) !kubernetes _time:1h
```
The logs are returned in arbitrary order for performance reasons. Add [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe)
for sorting logs by the needed [fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model). For example, the following query
sorts the selected logs by [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field):
The logs are returned in arbitrary order for performance reasons. Add [`sort` pipe](./LogsQL.md#sort-pipe)
for sorting logs by the needed [fields](./keyConcepts.md#data-model). For example, the following query
sorts the selected logs by [`_time` field](./keyConcepts.md#time-field):
```logsql
(error or ERROR or Error) _time:1h | sort by (_time)
@ -207,42 +207,42 @@ See also:
- [How to select logs with all the given words in log message?](#how-to-select-logs-with-all-the-given-words-in-log-message)
- [How to skip logs with the given word in log message?](#how-to-skip-logs-with-the-given-word-in-log-message)
- [Filtering by phrase](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter)
- [Filtering by prefix](https://docs.victoriametrics.com/victorialogs/logsql/#prefix-filter)
- [Filtering by regular expression](https://docs.victoriametrics.com/victorialogs/logsql/#regexp-filter)
- [Filtering by substring](https://docs.victoriametrics.com/victorialogs/logsql/#substring-filter)
- [Filtering by phrase](./LogsQL.md#phrase-filter)
- [Filtering by prefix](./LogsQL.md#prefix-filter)
- [Filtering by regular expression](./LogsQL.md#regexp-filter)
- [Filtering by substring](./LogsQL.md#substring-filter)
## How to select logs from the given application instance?
Make sure the application is properly configured with [stream-level log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
Then just use [`_stream` filter](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter) for selecting logs for the given application instance.
For example, if the application contains `job="app-42"` and `instance="host-123:5678"` [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields),
Make sure the application is properly configured with [stream-level log fields](./keyConcepts.md#stream-fields).
Then just use [`_stream` filter](./LogsQL.md#stream-filter) for selecting logs for the given application instance.
For example, if the application contains `job="app-42"` and `instance="host-123:5678"` [stream fields](./keyConcepts.md#stream-fields),
then the following query selects all the logs from this application:
```logsql
_stream:{job="app-42",instance="host-123:5678"}
```
If the number of returned logs is too big, it is recommended to add [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter)
If the number of returned logs is too big, it is recommended to add [`_time` filter](./LogsQL.md#time-filter)
to the query in order to reduce the number of matching logs. For example, the following query returns logs for the given application for the last day:
```logsql
_stream:{job="app-42",instance="host-123:5678"} _time:1d
```
If the number of returned logs is still too big, then consider adding more specific [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters)
to the query. For example, the following query selects logs from the given [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields),
which contain `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word) in the [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field),
If the number of returned logs is still too big, then consider adding more specific [filters](./LogsQL.md#filters)
to the query. For example, the following query selects logs from the given [log stream](./keyConcepts.md#stream-fields),
which contain `error` [word](./LogsQL.md#word) in the [log message](./keyConcepts.md#message-field),
over the last day:
```logsql
_stream:{job="app-42",instance="host-123:5678"} error _time:1d
```
The logs are returned in arbitrary order for performance reasons. Use [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe)
The logs are returned in arbitrary order for performance reasons. Use [`sort` pipe](./LogsQL.md#sort-pipe)
for sorting the returned logs by the needed fields. For example, the following query sorts the selected logs
by [`_time`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field):
by [`_time`](./keyConcepts.md#time-field):
```logsql
_stream:{job="app-42",instance="host-123:5678"} _time:1d | sort by (_time)
@ -256,7 +256,7 @@ See also:
## How to count the number of matching logs?
Use [`count()` stats function](https://docs.victoriametrics.com/victorialogs/logsql/#count-stats). For example, the following query returns
Use [`count()` stats function](./LogsQL.md#count-stats). For example, the following query returns
the number of results returned by `your_query_here`:
```logsql
@ -265,25 +265,25 @@ your_query_here | count()
## How to determine applications with the most logs?
[Run](https://docs.victoriametrics.com/victorialogs/querying/) the following query:
[Run](./querying/README.md) the following query:
```logsql
_time:5m | stats by (_stream) count() as logs | sort by (logs desc) | limit 10
```
This query returns the top 10 application instances (aka [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields))
This query returns the top 10 application instances (aka [log streams](./keyConcepts.md#stream-fields))
with the most logs over the last 5 minutes.
This query uses the following [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) features:
This query uses the following [LogsQL](./LogsQL.md) features:
- [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter) for selecting logs on the given time range (5 minutes in the query above).
- [`stats` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#stats-pipe) for calculating the number of logs
per each [`_stream`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields). [`count` stats function](https://docs.victoriametrics.com/victorialogs/logsql/#count-stats)
- [`_time` filter](./LogsQL.md#time-filter) for selecting logs on the given time range (5 minutes in the query above).
- [`stats` pipe](./LogsQL.md#stats-pipe) for calculating the number of logs
per each [`_stream`](./keyConcepts.md#stream-fields). [`count` stats function](./LogsQL.md#count-stats)
is used for calculating the needed stats.
- [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe) for sorting the stats by `logs` field in descending order.
- [`limit` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#limit-pipe) for limiting the number of returned results to 10.
- [`sort` pipe](./LogsQL.md#sort-pipe) for sorting the stats by `logs` field in descending order.
- [`limit` pipe](./LogsQL.md#limit-pipe) for limiting the number of returned results to 10.
This query can be simplified into the following one, which uses [`top` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#top-pipe):
This query can be simplified into the following one, which uses [`top` pipe](./LogsQL.md#top-pipe):
```logsql
_time:5m | top 10 by (_stream)
@ -298,25 +298,25 @@ See also:
## How to parse JSON inside log message?
From a performance and resource usage standpoint, it is better to avoid storing JSON inside the [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field).
It is recommended to store individual JSON fields as log fields instead, according to the [VictoriaLogs data model](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
From a performance and resource usage standpoint, it is better to avoid storing JSON inside the [log message](./keyConcepts.md#message-field).
It is recommended to store individual JSON fields as log fields instead, according to the [VictoriaLogs data model](./keyConcepts.md#data-model).
If you have to store JSON inside log message or inside any other [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model),
then the stored JSON can be parsed during query time via [`unpack_json` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#unpack_json-pipe).
For example, the following query unpacks JSON from the [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field)
If you have to store JSON inside log message or inside any other [log fields](./keyConcepts.md#data-model),
then the stored JSON can be parsed during query time via [`unpack_json` pipe](./LogsQL.md#unpack_json-pipe).
For example, the following query unpacks JSON from the [`_msg` field](./keyConcepts.md#message-field)
across all the logs for the last 5 minutes:
```logsql
_time:5m | unpack_json
```
If you need to parse a JSON array, then take a look at [`unroll` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#unroll-pipe).
If you need to parse a JSON array, then take a look at [`unroll` pipe](./LogsQL.md#unroll-pipe).
## How to extract some data from text log message?
Use [`extract`](https://docs.victoriametrics.com/victorialogs/logsql/#extract-pipe) or [`extract_regexp`](https://docs.victoriametrics.com/victorialogs/logsql/#extract_regexp-pipe) pipe.
For example, the following query extracts `username` and `user_id` fields from text [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field):
Use [`extract`](./LogsQL.md#extract-pipe) or [`extract_regexp`](./LogsQL.md#extract_regexp-pipe) pipe.
For example, the following query extracts `username` and `user_id` fields from text [log message](./keyConcepts.md#message-field):
```logsql
_time:5m | extract "username=<username>, user_id=<user_id>,"
@ -324,13 +324,13 @@ _time:5m | extract "username=<username>, user_id=<user_id>,"
See also:
- [Replacing substrings in text fields](https://docs.victoriametrics.com/victorialogs/logsql/#replace-pipe)
- [Replacing substrings in text fields](./LogsQL.md#replace-pipe)
## How to filter out data after stats calculation?
Use [`filter` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#filter-pipe). For example, the following query
returns only [log streams](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) with more than 1000 logs
Use [`filter` pipe](./LogsQL.md#filter-pipe). For example, the following query
returns only [log streams](./keyConcepts.md#stream-fields) with more than 1000 logs
over the last 5 minutes:
```logsql
@ -339,33 +339,33 @@ _time:5m | stats by (_stream) count() rows | filter rows:>1000
## How to calculate the number of logs per the given interval?
Use [`stats` by time bucket](https://docs.victoriametrics.com/victorialogs/logsql/#stats-by-time-buckets). For example, the following query
returns per-hour number of logs with the `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word) for the last day:
Use [`stats` by time bucket](./LogsQL.md#stats-by-time-buckets). For example, the following query
returns per-hour number of logs with the `error` [word](./LogsQL.md#word) for the last day:
```logsql
_time:1d error | stats by (_time:1h) count() rows | sort by (_time)
```
This query uses [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe) in order to sort per-hour stats
by [`_time`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field).
This query uses [`sort` pipe](./LogsQL.md#sort-pipe) in order to sort per-hour stats
by [`_time`](./keyConcepts.md#time-field).
## How to calculate the number of logs per IPv4 subnetwork?
Use [`stats` by IPv4 bucket](https://docs.victoriametrics.com/victorialogs/logsql/#stats-by-ipv4-buckets). For example, the following
Use [`stats` by IPv4 bucket](./LogsQL.md#stats-by-ipv4-buckets). For example, the following
query returns top 10 `/24` subnetworks with the biggest number of logs for the last 5 minutes:
```logsql
_time:5m | stats by (ip:/24) count() rows | sort by (rows desc) limit 10
```
This query uses [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe) in order to sort per-subnetwork stats
This query uses [`sort` pipe](./LogsQL.md#sort-pipe) in order to sort per-subnetwork stats
by descending number of rows and limiting the result to the top 10 rows.
The query assumes the original logs have `ip` [field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) with the IPv4 address.
If the IPv4 address is located inside [log message](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field) or any other text field,
then it can be extracted with the [`extract`](https://docs.victoriametrics.com/victorialogs/logsql/#extract-pipe)
or [`extract_regexp`](https://docs.victoriametrics.com/victorialogs/logsql/#extract_regexp-pipe) pipes. For example, the following query
extracts IPv4 address from [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field) and then returns top 10
The query assumes the original logs have `ip` [field](./keyConcepts.md#data-model) with the IPv4 address.
If the IPv4 address is located inside [log message](./keyConcepts.md#message-field) or any other text field,
then it can be extracted with the [`extract`](./LogsQL.md#extract-pipe)
or [`extract_regexp`](./LogsQL.md#extract_regexp-pipe) pipes. For example, the following query
extracts IPv4 address from [`_msg` field](./keyConcepts.md#message-field) and then returns top 10
`/16` subnetworks with the biggest number of logs for the last 5 minutes:
```logsql
@ -374,14 +374,14 @@ _time:5m | extract_regexp "(?P<ip>([0-9]+[.]){3}[0-9]+)" | stats by (ip:/16) cou
## How to calculate the number of logs per every value of the given field?
Use [`stats` by field](https://docs.victoriametrics.com/victorialogs/logsql/#stats-by-fields). For example, the following query
calculates the number of logs per `level` [field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) for logs over the last 5 minutes:
Use [`stats` by field](./LogsQL.md#stats-by-fields). For example, the following query
calculates the number of logs per `level` [field](./keyConcepts.md#data-model) for logs over the last 5 minutes:
```logsql
_time:5m | stats by (level) count() rows
```
An alternative is to use [`field_values` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#field_values-pipe):
An alternative is to use [`field_values` pipe](./LogsQL.md#field_values-pipe):
```logsql
_time:5m | field_values level
@ -389,7 +389,7 @@ _time:5m | field_values level
## How to get unique values for the given field?
Use [`uniq` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#uniq-pipe). For example, the following query returns unique values for the `ip` field
Use [`uniq` pipe](./LogsQL.md#uniq-pipe). For example, the following query returns unique values for the `ip` field
over logs for the last 5 minutes:
```logsql
@ -398,7 +398,7 @@ _time:5m | uniq by (ip)
## How to get unique sets of values for the given fields?
Use [`uniq` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#uniq-pipe). For example, the following query returns unique sets for (`host`, `path`) fields
Use [`uniq` pipe](./LogsQL.md#uniq-pipe). For example, the following query returns unique sets for (`host`, `path`) fields
over logs for the last 5 minutes:
```logsql
@ -407,18 +407,18 @@ _time:5m | uniq by (host, path)
## How to return last N logs for the given query?
Use [`sort` pipe with limit](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe). For example, the following query returns the last 10 logs with the `error`
[word](https://docs.victoriametrics.com/victorialogs/logsql/#word) in the [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field)
Use [`sort` pipe with limit](./LogsQL.md#sort-pipe). For example, the following query returns the last 10 logs with the `error`
[word](./LogsQL.md#word) in the [`_msg` field](./keyConcepts.md#message-field)
over the logs for the last 5 minutes:
```logsql
_time:5m error | sort by (_time desc) limit 10
```
It sorts the matching logs by [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) in descending order and then selects
It sorts the matching logs by [`_time` field](./keyConcepts.md#time-field) in descending order and then selects
the first 10 logs with the highest values for the `_time` field.
If the query is sent to [`/select/logsql/query` HTTP API](https://docs.victoriametrics.com/victorialogs/querying/#querying-logs), then `limit=N` query arg
If the query is sent to [`/select/logsql/query` HTTP API](./querying/#querying-logs), then `limit=N` query arg
can be passed to it in order to return up to `N` latest log entries. For example, the following command returns up to 10 latest log entries with the `error` word:
```sh
@ -439,30 +439,30 @@ Use the following query:
_time:5m | stats count() logs, count() if (error) errors | math errors / logs
```
This query uses the following [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) features:
This query uses the following [LogsQL](./LogsQL.md) features:
- [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter) for selecting logs on the given time range (last 5 minutes in the query above).
- [`stats` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#stats-pipe) with [additional filtering](https://docs.victoriametrics.com/victorialogs/logsql/#stats-with-additional-filters)
for calculating the total number of logs and the number of logs with the `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word) on the selected time range.
- [`math` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#math-pipe) for calculating the share of logs with `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word)
- [`_time` filter](./LogsQL.md#time-filter) for selecting logs on the given time range (last 5 minutes in the query above).
- [`stats` pipe](./LogsQL.md#stats-pipe) with [additional filtering](./LogsQL.md#stats-with-additional-filters)
for calculating the total number of logs and the number of logs with the `error` [word](./LogsQL.md#word) on the selected time range.
- [`math` pipe](./LogsQL.md#math-pipe) for calculating the share of logs with `error` [word](./LogsQL.md#word)
compared to the total number of logs.
## How to select logs for working hours and weekdays?
Use [`day_range`](https://docs.victoriametrics.com/victorialogs/logsql/#day-range-filter) and [`week_range`](https://docs.victoriametrics.com/victorialogs/logsql/#week-range-filter) filters.
Use [`day_range`](./LogsQL.md#day-range-filter) and [`week_range`](./LogsQL.md#week-range-filter) filters.
For example, the following query selects logs from Monday to Friday in working hours `[08:00 - 18:00]` over the last 4 weeks:
```logsql
_time:4w _time:week_range[Mon, Fri] _time:day_range[08:00, 18:00)
```
It uses implicit [`AND` logical filter](https://docs.victoriametrics.com/victorialogs/logsql/#logical-filter) for joining multiple filters
on [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field).
It uses implicit [`AND` logical filter](./LogsQL.md#logical-filter) for joining multiple filters
on [`_time` field](./keyConcepts.md#time-field).
## How to find logs with the given phrase containing whitespace?
Use [`phrase filter`](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter). For example, the following [LogsQL query](https://docs.victoriametrics.com/victorialogs/logsql/)
Use [`phrase filter`](./LogsQL.md#phrase-filter). For example, the following [LogsQL query](./LogsQL.md)
returns logs with the `cannot open file` phrase over the last 5 minutes:
@ -472,8 +472,8 @@ _time:5m "cannot open file"
## How to select all the logs for a particular stacktrace or panic?
Use [`stream_context` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#stream_context-pipe) for selecting surrounding logs for the given log.
For example, the following query selects up to 10 logs in front of every log message containing the `stacktrace` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word),
Use [`stream_context` pipe](./LogsQL.md#stream_context-pipe) for selecting surrounding logs for the given log.
For example, the following query selects up to 10 logs in front of every log message containing the `stacktrace` [word](./LogsQL.md#word),
plus up to 100 logs after the given log message:
```logsql

View file

@ -1,4 +1,4 @@
[VictoriaLogs](https://docs.victoriametrics.com/victorialogs/) can be queried with [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/)
[VictoriaLogs](./README.md) can be queried with [LogsQL](./LogsQL.md)
in the following ways:
- [Web UI](#web-ui) - a web-based UI for querying logs
@ -13,52 +13,52 @@ VictoriaLogs provides the following HTTP endpoints:
- [`/select/logsql/query`](#querying-logs) for querying logs.
- [`/select/logsql/tail`](#live-tailing) for live tailing of query results.
- [`/select/logsql/hits`](#querying-hits-stats) for querying log hits stats over the given time range.
- [`/select/logsql/stream_ids`](#querying-stream_ids) for querying `_stream_id` values of [log streams](#https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
- [`/select/logsql/streams`](#querying-streams) for querying [log streams](#https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
- [`/select/logsql/stream_field_names`](#querying-stream-field-names) for querying [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) field names.
- [`/select/logsql/stream_field_values`](#querying-stream-field-values) for querying [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) field values.
- [`/select/logsql/field_names`](#querying-field-names) for querying [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) names.
- [`/select/logsql/field_values`](#querying-field-values) for querying [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) values.
- [`/select/logsql/stream_ids`](#querying-stream_ids) for querying `_stream_id` values of [log streams](./keyConcepts.md#stream-fields).
- [`/select/logsql/streams`](#querying-streams) for querying [log streams](./keyConcepts.md#stream-fields).
- [`/select/logsql/stream_field_names`](#querying-stream-field-names) for querying [log stream](./keyConcepts.md#stream-fields) field names.
- [`/select/logsql/stream_field_values`](#querying-stream-field-values) for querying [log stream](./keyConcepts.md#stream-fields) field values.
- [`/select/logsql/field_names`](#querying-field-names) for querying [log field](./keyConcepts.md#data-model) names.
- [`/select/logsql/field_values`](#querying-field-values) for querying [log field](./keyConcepts.md#data-model) values.
### Querying logs
Logs stored in VictoriaLogs can be queried at the `/select/logsql/query` HTTP endpoint.
The [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) query must be passed via the `query` argument.
The [LogsQL](./LogsQL.md) query must be passed via the `query` argument.
For example, the following query returns all the log entries with the `error` word:
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error'
```
The response by default contains all the [fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) for the selected logs.
Use [`fields` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#fields-pipe) for selecting only the needed fields.
The response by default contains all the [fields](./keyConcepts.md#data-model) for the selected logs.
Use [`fields` pipe](./LogsQL.md#fields-pipe) for selecting only the needed fields.
The `query` argument can be passed either in the request URL itself (aka HTTP GET request) or via the request body
with the `x-www-form-urlencoded` encoding (aka HTTP POST request). HTTP POST is useful for sending long queries
when they do not fit into the maximum URL length supported by the clients and proxies in use.
See [LogsQL docs](https://docs.victoriametrics.com/victorialogs/logsql/) for details on what can be passed to the `query` arg.
See [LogsQL docs](./LogsQL.md) for details on what can be passed to the `query` arg.
The `query` arg must be properly encoded with [percent encoding](https://en.wikipedia.org/wiki/URL_encoding) when passing it to `curl`
or similar tools.
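For example, here is a sketch of the same query passed both ways, assuming a local instance at `localhost:9428`; the GET variant percent-encodes the space and colon in `error _time:5m`:
```sh
# HTTP GET: the query is percent-encoded inside the URL itself
curl 'http://localhost:9428/select/logsql/query?query=error%20_time%3A5m'
# HTTP POST: --data-urlencode percent-encodes the query in the request body
curl http://localhost:9428/select/logsql/query --data-urlencode 'query=error _time:5m'
```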
By default the `/select/logsql/query` endpoint returns all the log entries matching the given `query`. The response size can be limited in the following ways:
- By closing the response stream at any time. VictoriaLogs stops query execution and frees all the resources occupied by the request as soon as it detects a closed client connection.
So it is safe to run the [`*` query](https://docs.victoriametrics.com/victorialogs/logsql/#any-value-filter), which selects all the logs, even if trillions of logs are stored in VictoriaLogs.
So it is safe to run the [`*` query](./LogsQL.md#any-value-filter), which selects all the logs, even if trillions of logs are stored in VictoriaLogs.
- By specifying the maximum number of log entries that can be returned in the response via the `limit` query arg. For example, the following command returns
up to 10 most recently added log entries with the `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word)
in the [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field):
up to 10 most recently added log entries with the `error` [word](./LogsQL.md#word)
in the [`_msg` field](./keyConcepts.md#message-field):
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error' -d 'limit=10'
```
- By adding [`limit` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#limit-pipe) to the query. For example, the following command returns up to 10 **random** log entries
with the `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word) in the [`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field):
- By adding [`limit` pipe](./LogsQL.md#limit-pipe) to the query. For example, the following command returns up to 10 **random** log entries
with the `error` [word](./LogsQL.md#word) in the [`_msg` field](./keyConcepts.md#message-field):
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error | limit 10'
```
- By adding [`_time` filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter). The time range for the query can be specified via optional
`start` and `end` query args formatted according to [these docs](https://docs.victoriametrics.com/single-server-victoriametrics/#timestamp-formats).
- By adding more specific [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters) to the query, which select a lower number of logs.
- By adding [`_time` filter](./LogsQL.md#time-filter). The time range for the query can be specified via optional
`start` and `end` query args formatted according to [these docs](../#timestamp-formats).
- By adding more specific [filters](./LogsQL.md#filters) to the query, which select a lower number of logs.
The `/select/logsql/query` endpoint returns [a stream of JSON lines](https://jsonlines.org/),
where each line contains a JSON-encoded log entry in the form `{"field1":"value1",...,"fieldN":"valueN"}`.
@ -79,7 +79,7 @@ The returned lines aren't sorted by default, since sorting disables the ability
Query results can be sorted in the following ways:
- By passing the `limit=N` query arg to `/select/logsql/query`. Up to `N` of the most recent matching log entries are returned in the response.
- By adding [`sort` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe) to the query.
- By adding [`sort` pipe](./LogsQL.md#sort-pipe) to the query.
- By using the Unix `sort` command at the client side according to [these docs](#command-line); see the sketch below.
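A client-side sorting sketch, assuming a local instance at `localhost:9428` and that each returned JSON line starts with its `_time` value when `_time` is the first selected field:
```sh
# Keep only _time and _msg, then sort lines at the client side
curl -s http://localhost:9428/select/logsql/query \
  -d 'query=error _time:1h | fields _time, _msg' | sort
```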
The maximum query execution time is limited by the `-search.maxQueryDuration` command-line flag value. This limit can be overridden to smaller values
@ -90,7 +90,7 @@ to 4.2 seconds:
curl http://localhost:9428/select/logsql/query -d 'query=error' -d 'timeout=4.2s'
```
By default the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy) is queried.
By default the `(AccountID=0, ProjectID=0)` [tenant](./#multitenancy) is queried.
If you need to query another tenant, then specify it via the `AccountID` and `ProjectID` HTTP request headers. For example, the following query searches
for log messages at `(AccountID=12, ProjectID=34)` tenant:
@ -98,7 +98,7 @@ for log messages at `(AccountID=12, ProjectID=34)` tenant:
curl http://localhost:9428/select/logsql/query -H 'AccountID: 12' -H 'ProjectID: 34' -d 'query=error'
```
The number of requests to `/select/logsql/query` can be [monitored](https://docs.victoriametrics.com/victorialogs/#monitoring)
The number of requests to `/select/logsql/query` can be [monitored](./#monitoring)
with the `vl_http_requests_total{path="/select/logsql/query"}` metric.
See also:
@ -114,7 +114,7 @@ See also:
### Live tailing
VictoriaLogs provides the `/select/logsql/tail?query=<query>` HTTP endpoint, which returns live tailing results for the given [`<query>`](https://docs.victoriametrics.com/victorialogs/logsql/),
VictoriaLogs provides the `/select/logsql/tail?query=<query>` HTTP endpoint, which returns live tailing results for the given [`<query>`](./LogsQL.md),
i.e. it works similarly to the `tail -f` Unix command. For example, the following command returns live tailing logs with the `error` word:
```sh
@ -126,23 +126,23 @@ because of internal response buffering.
The `<query>` must conform to the following rules:
- It cannot contain the following [pipes](https://docs.victoriametrics.com/victorialogs/logsql/#pipes):
- pipes, which calculate stats over the logs - [`stats`](https://docs.victoriametrics.com/victorialogs/logsql/#stats-pipe),
[`uniq`](https://docs.victoriametrics.com/victorialogs/logsql/#uniq-pipe), [`top`](https://docs.victoriametrics.com/victorialogs/logsql/#top-pipe)
- pipes, which change the order of logs - [`sort`](https://docs.victoriametrics.com/victorialogs/logsql/#sort-pipe)
- pipes, which limit or ignore some logs - [`limit`](https://docs.victoriametrics.com/victorialogs/logsql/#limit-pipe),
[`offset`](https://docs.victoriametrics.com/victorialogs/logsql/#offset-pipe).
- It cannot contain the following [pipes](./LogsQL.md#pipes):
- pipes, which calculate stats over the logs - [`stats`](./LogsQL.md#stats-pipe),
[`uniq`](./LogsQL.md#uniq-pipe), [`top`](./LogsQL.md#top-pipe)
- pipes, which change the order of logs - [`sort`](./LogsQL.md#sort-pipe)
- pipes, which limit or ignore some logs - [`limit`](./LogsQL.md#limit-pipe),
[`offset`](./LogsQL.md#offset-pipe).
- It must select [`_time`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field) field.
- It must select [`_time`](./keyConcepts.md#time-field) field.
- It is recommended to return [`_stream_id`](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) field for more accurate live tailing
- It is recommended to return [`_stream_id`](./keyConcepts.md#stream-fields) field for more accurate live tailing
across multiple streams.
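For example, here is a sketch of a live tailing query conforming to these rules, assuming a local instance at `localhost:9428`:
```sh
# Live tail logs with the `error` word; keep _time (required)
# and _stream_id (recommended) among the returned fields
curl -N http://localhost:9428/select/logsql/tail \
  -d 'query=error | fields _time, _stream_id, _msg'
```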
**Performance tip**: live tailing works best if it matches newly ingested logs at a relatively slow rate (e.g. up to 1K matching logs per second),
i.e. it is optimized for the case when real humans inspect the output of live tailing in real time. If live tailing returns logs at too high a rate,
then it is recommended to add more specific [filters](https://docs.victoriametrics.com/victorialogs/logsql/#filters) to the `<query>`, so it matches fewer logs.
then it is recommended to add more specific [filters](./LogsQL.md#filters) to the `<query>`, so it matches fewer logs.
By default the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy) is queried.
By default the `(AccountID=0, ProjectID=0)` [tenant](./#multitenancy) is queried.
If you need to query another tenant, then specify it via the `AccountID` and `ProjectID` HTTP request headers. For example, the following query performs live tailing
for `(AccountID=12, ProjectID=34)` tenant:
@ -150,7 +150,7 @@ for `(AccountID=12, ProjectID=34)` tenant:
curl -N http://localhost:9428/select/logsql/tail -H 'AccountID: 12' -H 'ProjectID: 34' -d 'query=error'
```
The number of currently executed live tailing requests to `/select/logsql/tail` can be [monitored](https://docs.victoriametrics.com/victorialogs/#monitoring)
The number of currently executed live tailing requests to `/select/logsql/tail` can be [monitored](./#monitoring)
with the `vl_live_tailing_requests` metric.
See also:
@ -161,18 +161,18 @@ See also:
### Querying hits stats
VictoriaLogs provides the `/select/logsql/hits?query=<query>&start=<start>&end=<end>&step=<step>` HTTP endpoint, which returns the number
of matching log entries for the given [`<query>`](https://docs.victoriametrics.com/victorialogs/logsql/) on the given `[<start> ... <end>]`
of matching log entries for the given [`<query>`](./LogsQL.md) on the given `[<start> ... <end>]`
time range grouped by `<step>` buckets. The returned results are sorted by time.
The `<start>` and `<end>` args can contain values in [any supported format](https://docs.victoriametrics.com/#timestamp-formats).
The `<start>` and `<end>` args can contain values in [any supported format](../#timestamp-formats).
If `<start>` is missing, then it defaults to the minimum timestamp across logs stored in VictoriaLogs.
If `<end>` is missing, then it defaults to the maximum timestamp across logs stored in VictoriaLogs.
The `<step>` arg can contain values in [the format specified here](https://docs.victoriametrics.com/victorialogs/logsql/#stats-by-time-buckets).
The `<step>` arg can contain values in [the format specified here](./LogsQL.md#stats-by-time-buckets).
If `<step>` is missing, then it defaults to `1d` (one day).
For example, the following command returns per-hour number of [log messages](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field)
with the `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word) over logs for the last 3 hours:
For example, the following command returns per-hour number of [log messages](./keyConcepts.md#message-field)
with the `error` [word](./LogsQL.md#word) over logs for the last 3 hours:
```sh
curl http://localhost:9428/select/logsql/hits -d 'query=error' -d 'start=3h' -d 'step=1h'
@ -202,8 +202,8 @@ Below is an example JSON output returned from this endpoint:
```
Additionally, the `offset=<offset>` arg can be passed to `/select/logsql/hits` in order to group buckets according to the given timezone offset.
The `<offset>` can contain values in [the format specified here](https://docs.victoriametrics.com/victorialogs/logsql/#duration-values).
For example, the following command returns per-day number of logs with `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word)
The `<offset>` can contain values in [the format specified here](./LogsQL.md#duration-values).
For example, the following command returns per-day number of logs with `error` [word](./LogsQL.md#word)
over the last week in New York time zone (`-4h`):
```sh
curl http://localhost:9428/select/logsql/hits -d 'query=error' -d 'start=1w' -d 'step=1d' -d 'offset=-4h'
```
Additionally, any number of `field=<field_name>` args can be passed to `/select/logsql/hits` for grouping hits buckets by the mentioned `<field_name>` fields.
For example, the following query groups hits by `level` [field](./keyConcepts.md#data-model) additionally to the provided `step`:
```sh
curl http://localhost:9428/select/logsql/hits -d 'query=*' -d 'start=3h' -d 'step=1h' -d 'field=level'
```

Optional `fields_limit=N` query arg can be passed to `/select/logsql/hits` for limiting the number of returned `"fields"` groups.
If more than `N` unique `"fields"` groups are found, then the top `N` `"fields"` groups with the maximum number of `"total"` hits are returned.
The remaining hits are returned in the `"fields": {}` group.
By default, the `(AccountID=0, ProjectID=0)` [tenant](./#multitenancy) is queried.
If you need to query another tenant, specify it via the `AccountID` and `ProjectID` HTTP request headers. For example, the following query returns hits stats
for the `(AccountID=12, ProjectID=34)` tenant:
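```sh
curl http://localhost:9428/select/logsql/hits -H 'AccountID: 12' -H 'ProjectID: 34' -d 'query=error' -d 'start=3h' -d 'step=1h'
```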
See also:
### Querying stream_ids
VictoriaLogs provides `/select/logsql/stream_ids?query=<query>&start=<start>&end=<end>` HTTP endpoint, which returns `_stream_id` values
for the [log streams](./keyConcepts.md#stream-fields) from results
of the given [`<query>`](./LogsQL.md) on the given `[<start> ... <end>]` time range.
The response also contains the number of matching log entries for every `_stream_id`.
The `<start>` and `<end>` args can contain values in [any supported format](../#timestamp-formats).
If `<start>` is missing, it defaults to the minimum timestamp across logs stored in VictoriaLogs.
If `<end>` is missing, it defaults to the maximum timestamp across logs stored in VictoriaLogs.
For example, the following command returns `_stream_id` values across logs with the `error` [word](./LogsQL.md#word)
for the last 5 minutes:
```sh
curl http://localhost:9428/select/logsql/stream_ids -d 'query=error' -d 'start=5m'
```

The `/select/logsql/stream_ids` endpoint supports an optional `limit=N` query arg, which limits the number of returned `_stream_id` values.
The endpoint returns an arbitrary subset of `_stream_id` values if their number exceeds `N`, so `limit=N` cannot be used for pagination over a big number of `_stream_id` values.
When the `limit` is reached, `hits` are zeroed, since they cannot be calculated reliably.
By default, the `(AccountID=0, ProjectID=0)` [tenant](./#multitenancy) is queried.
If you need to query another tenant, specify it via the `AccountID` and `ProjectID` HTTP request headers. For example, the following query returns `_stream_id` stats
for the `(AccountID=12, ProjectID=34)` tenant:
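```sh
curl http://localhost:9428/select/logsql/stream_ids -H 'AccountID: 12' -H 'ProjectID: 34' -d 'query=error' -d 'start=5m'
```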
See also:
### Querying streams
VictoriaLogs provides `/select/logsql/streams?query=<query>&start=<start>&end=<end>` HTTP endpoint, which returns [streams](./keyConcepts.md#stream-fields)
from results of the given [`<query>`](./LogsQL.md) on the given `[<start> ... <end>]` time range.
The response also contains the number of matching log entries for every stream.
The `<start>` and `<end>` args can contain values in [any supported format](../#timestamp-formats).
If `<start>` is missing, it defaults to the minimum timestamp across logs stored in VictoriaLogs.
If `<end>` is missing, it defaults to the maximum timestamp across logs stored in VictoriaLogs.
For example, the following command returns streams across logs with the `error` [word](./LogsQL.md#word)
for the last 5 minutes:
```sh
curl http://localhost:9428/select/logsql/streams -d 'query=error' -d 'start=5m'
```

The `/select/logsql/streams` endpoint supports an optional `limit=N` query arg, which limits the number of returned streams.
The endpoint returns an arbitrary subset of streams if their number exceeds `N`, so `limit=N` cannot be used for pagination over a big number of streams.
When the `limit` is reached, `hits` are zeroed, since they cannot be calculated reliably.
By default, the `(AccountID=0, ProjectID=0)` [tenant](./#multitenancy) is queried.
If you need to query another tenant, specify it via the `AccountID` and `ProjectID` HTTP request headers. For example, the following query returns stream stats
for the `(AccountID=12, ProjectID=34)` tenant:
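```sh
curl http://localhost:9428/select/logsql/streams -H 'AccountID: 12' -H 'ProjectID: 34' -d 'query=error' -d 'start=5m'
```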
See also:
### Querying stream field names
VictoriaLogs provides `/select/logsql/stream_field_names?query=<query>&start=<start>&end=<end>` HTTP endpoint, which returns
[log stream](./keyConcepts.md#stream-fields) field names from results
of the given [`<query>`](./LogsQL.md) on the given `[<start> ... <end>]` time range.
The response also contains the number of matching log entries for every field name.
The `<start>` and `<end>` args can contain values in [any supported format](../#timestamp-formats).
If `<start>` is missing, it defaults to the minimum timestamp across logs stored in VictoriaLogs.
If `<end>` is missing, it defaults to the maximum timestamp across logs stored in VictoriaLogs.
For example, the following command returns stream field names across logs with the `error` [word](./LogsQL.md#word)
for the last 5 minutes:
```sh
curl http://localhost:9428/select/logsql/stream_field_names -d 'query=error' -d 'start=5m'
```
By default, the `(AccountID=0, ProjectID=0)` [tenant](./#multitenancy) is queried.
If you need to query another tenant, specify it via the `AccountID` and `ProjectID` HTTP request headers. For example, the following query returns stream field names stats
for the `(AccountID=12, ProjectID=34)` tenant:
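```sh
curl http://localhost:9428/select/logsql/stream_field_names -H 'AccountID: 12' -H 'ProjectID: 34' -d 'query=error' -d 'start=5m'
```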
See also:
### Querying stream field values
VictoriaLogs provides `/select/logsql/stream_field_values?query=<query>&start=<start>&end=<end>&field=<fieldName>` HTTP endpoint,
which returns [log stream](./keyConcepts.md#stream-fields) field values for the field with the given `<fieldName>` name
from results of the given [`<query>`](./LogsQL.md) on the given `[<start> ... <end>]` time range.
The response also contains the number of matching log entries for every field value.
The `<start>` and `<end>` args can contain values in [any supported format](../#timestamp-formats).
If `<start>` is missing, it defaults to the minimum timestamp across logs stored in VictoriaLogs.
If `<end>` is missing, it defaults to the maximum timestamp across logs stored in VictoriaLogs.
For example, the following command returns values for the stream field `host` across logs with the `error` [word](./LogsQL.md#word)
for the last 5 minutes:
```sh
curl http://localhost:9428/select/logsql/stream_field_values -d 'query=error' -d 'field=host' -d 'start=5m'
```

The `/select/logsql/stream_field_values` endpoint supports an optional `limit=N` query arg, which limits the number of returned values.
The endpoint returns an arbitrary subset of values if their number exceeds `N`, so `limit=N` cannot be used for pagination over a big number of field values.
When the `limit` is reached, `hits` are zeroed, since they cannot be calculated reliably.
By default, the `(AccountID=0, ProjectID=0)` [tenant](./#multitenancy) is queried.
If you need to query another tenant, specify it via the `AccountID` and `ProjectID` HTTP request headers. For example, the following query returns stream field values stats
for the `(AccountID=12, ProjectID=34)` tenant:
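```sh
curl http://localhost:9428/select/logsql/stream_field_values -H 'AccountID: 12' -H 'ProjectID: 34' -d 'query=error' -d 'field=host' -d 'start=5m'
```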
See also:
### Querying field names
VictoriaLogs provides `/select/logsql/field_names?query=<query>&start=<start>&end=<end>` HTTP endpoint, which returns field names
from results of the given [`<query>`](./LogsQL.md) on the given `[<start> ... <end>]` time range.
The response also contains the number of matching log entries for every field name.
The `<start>` and `<end>` args can contain values in [any supported format](../#timestamp-formats).
If `<start>` is missing, it defaults to the minimum timestamp across logs stored in VictoriaLogs.
If `<end>` is missing, it defaults to the maximum timestamp across logs stored in VictoriaLogs.
For example, the following command returns field names across logs with the `error` [word](./LogsQL.md#word)
for the last 5 minutes:
```sh
curl http://localhost:9428/select/logsql/field_names -d 'query=error' -d 'start=5m'
```
By default, the `(AccountID=0, ProjectID=0)` [tenant](./#multitenancy) is queried.
If you need to query another tenant, specify it via the `AccountID` and `ProjectID` HTTP request headers. For example, the following query returns field names stats
for the `(AccountID=12, ProjectID=34)` tenant:
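```sh
curl http://localhost:9428/select/logsql/field_names -H 'AccountID: 12' -H 'ProjectID: 34' -d 'query=error' -d 'start=5m'
```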
See also:
### Querying field values
VictoriaLogs provides `/select/logsql/field_values?query=<query>&field=<fieldName>&start=<start>&end=<end>` HTTP endpoint, which returns
unique values for the given `<fieldName>` [field](./keyConcepts.md#data-model)
from results of the given [`<query>`](./LogsQL.md) on the given `[<start> ... <end>]` time range.
The response also contains the number of matching log entries for every field value.
The `<start>` and `<end>` args can contain values in [any supported format](../#timestamp-formats).
If `<start>` is missing, it defaults to the minimum timestamp across logs stored in VictoriaLogs.
If `<end>` is missing, it defaults to the maximum timestamp across logs stored in VictoriaLogs.
For example, the following command returns unique values for `host` [field](./keyConcepts.md#data-model)
across logs with the `error` [word](./LogsQL.md#word) for the last 5 minutes:
```sh
curl http://localhost:9428/select/logsql/field_values -d 'query=error' -d 'field=host' -d 'start=5m'
```

The `/select/logsql/field_values` endpoint supports an optional `limit=N` query arg, which limits the number of returned values.
The endpoint returns an arbitrary subset of values if their number exceeds `N`, so `limit=N` cannot be used for pagination over a big number of field values.
When the `limit` is reached, `hits` are zeroed, since they cannot be calculated reliably.
By default, the `(AccountID=0, ProjectID=0)` [tenant](./#multitenancy) is queried.
If you need to query another tenant, specify it via the `AccountID` and `ProjectID` HTTP request headers. For example, the following query returns field values stats
for the `(AccountID=12, ProjectID=34)` tenant:
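```sh
curl http://localhost:9428/select/logsql/field_values -H 'AccountID: 12' -H 'ProjectID: 34' -d 'query=error' -d 'field=host' -d 'start=5m'
```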
See also:
## Web UI
VictoriaLogs provides Web UI for logs [querying](./LogsQL.md) and exploration
at `http://localhost:9428/select/vmui`.
There are three modes of displaying query results:
- `Group` - results are displayed as a table with rows grouped by [stream fields](./keyConcepts.md#stream-fields).
- `Table` - displays query results as a table.
- `JSON` - displays raw JSON response from [`/select/logsql/query` HTTP API](#querying-logs).
See also [command line interface](#command-line).
## Visualization in Grafana
[VictoriaLogs Grafana Datasource](./victorialogs-datasource.md) allows you to query and visualize VictoriaLogs data in Grafana.
## Command-line
These features allow executing queries at the command-line interface, which may potentially return huge amounts of data,
without the risk of high resource usage (CPU, RAM, disk IO) at VictoriaLogs.
For example, the following query can return a very big number of matching log entries (e.g. billions) if VictoriaLogs contains
many log messages with the `error` [word](./LogsQL.md#word):
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error'
```

The response stream can be limited with Unix tools such as `head`:

```sh
curl http://localhost:9428/select/logsql/query -d 'query=error' | head -10
```
The `head -10` command reads only the first 10 log messages from the response and then closes the response stream.
This automatically cancels the query at VictoriaLogs side, so it stops consuming CPU, RAM and disk IO resources.
Alternatively, you can limit the number of returned logs at VictoriaLogs side via [`limit` pipe](./LogsQL.md#limit-pipe):
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error | limit 10'
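```

The response can also be examined interactively by piping it to `less`:

```sh
curl http://localhost:9428/select/logsql/query -d 'query=error' | less
```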
VictoriaLogs suspends query execution when the reader stops consuming the response stream (e.g. while `less` waits for input).
It doesn't consume CPU and disk IO resources during this time. It resumes query processing
after the `less` continues reading the response stream.
Suppose that the initial investigation of the returned query results helped determine that the needed log messages contain
the `cannot open file` [phrase](./LogsQL.md#phrase-filter).
Then the query can be narrowed down to `error AND "cannot open file"`
(see [these docs](./LogsQL.md#logical-filter) about the `AND` operator).
Then run the updated command in order to continue the investigation:
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error AND "cannot open file"'
```

The returned VictoriaLogs query response can be post-processed with any combination of Unix commands,
which are usually used for log analysis - `grep`, `jq`, `awk`, `sort`, `uniq`, `wc`, etc.
For example, the following command uses the `wc -l` Unix command for counting the number of log messages
with the `error` [word](./LogsQL.md#word)
received from [streams](./keyConcepts.md#stream-fields) with the `app="nginx"` field
during the last 5 minutes:
```sh
curl http://localhost:9428/select/logsql/query -d 'query=_stream:{app="nginx"} AND _time:5m AND error' | wc -l
```
See [these docs](./LogsQL.md#stream-filter) about `_stream` filter,
[these docs](./LogsQL.md#time-filter) about `_time` filter
and [these docs](./LogsQL.md#logical-filter) about `AND` operator.
Alternatively, you can count the number of matching logs at VictoriaLogs side with [`stats` pipe](./LogsQL.md#stats-pipe):
```sh
curl http://localhost:9428/select/logsql/query -d 'query=_stream:{app="nginx"} AND _time:5m AND error | stats count() logs_with_error'
```
The following example shows how to sort query results by the [`_time` field](./keyConcepts.md#time-field) with traditional Unix tools:
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error' | jq -r '._time + " " + ._msg' | sort | less
```
This command uses `jq` for extracting [`_time`](./keyConcepts.md#time-field)
and [`_msg`](./keyConcepts.md#message-field) fields from the returned results,
and piping them to the `sort` command.
Note that the `sort` command needs to read the whole response stream before returning the sorted results. So the command above
can take a non-trivial amount of time if the `query` returns too many results. The solution is to narrow down the `query`
before sorting the results. See [these tips](./LogsQL.md#performance-tips)
on how to narrow down query results.
Alternatively, sorting of matching logs can be performed at VictoriaLogs side via [`sort` pipe](./LogsQL.md#sort-pipe):
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error | sort by (_time)' | less
```
The following example calculates stats on the number of log messages received during the last 5 minutes
grouped by `log.level` [field](./keyConcepts.md#data-model) with traditional Unix tools:
```sh
curl http://localhost:9428/select/logsql/query -d 'query=_time:5m log.level:*' | jq -r '."log.level"' | sort | uniq -c
```
The query selects all the log messages with non-empty `log.level` field via ["any value" filter](./LogsQL.md#any-value-filter),
then pipes them to the `jq` command, which extracts the `log.level` field value from the returned JSON stream. The extracted `log.level` values
are sorted with the `sort` command and, finally, passed to the `uniq -c` command for calculating the needed stats.
Alternatively, all the stats calculations above can be performed at VictoriaLogs side via [`stats by(...)`](./LogsQL.md#stats-by-fields):
```sh
curl http://localhost:9428/select/logsql/query -d 'query=_time:5m log.level:* | stats by (log.level) count() matching_logs'
```
See also:
- [Key concepts](./keyConcepts.md).
- [LogsQL docs](./LogsQL.md).


Here's a minimalistic full config example, demonstrating a many-to-many configuration:
```yaml
# how and when to run the models is defined by schedulers
# {{% ref "./scheduler.md" %}}
schedulers:
periodic_1d: # alias
class: 'periodic' # scheduler class
fit_window: "7d"
# what model types and with what hyperparams to run on your data
# {{% ref "./models.md" %}}
models:
zscore: # alias
class: 'zscore' # model class
interval_width: 0.98
# where to read data from
# {{% ref "./reader.md" %}}
reader:
datasource_url: "http://victoriametrics:8428/"
tenant_id: "0:0"
host_network_receive_errors: 'rate(node_network_receive_errs_total[3m]) / rate(node_network_receive_packets_total[3m])'
# where to write data to
# {{% ref "./writer.md" %}}
writer:
datasource_url: "http://victoriametrics:8428/"
# enable self-monitoring in pull and/or push mode
# {{% ref "./monitoring.md" %}}
monitoring:
pull: # Enable /metrics endpoint.
addr: "0.0.0.0"

```

3. Set the parameters as follows:
- Name: VictoriaMetrics (can be changed to any string)
- Server: the hostname or IP of your VictoriaMetrics Instance
- Port: This will vary depending on how you are sending data to VictoriaMetrics, but the defaults for all components are listed in the [data ingestion documentation](./README.md)
- Protocol: use HTTPS if you have TLS/SSL configured; otherwise use HTTP
- Organization: leave empty since it doesn't get used
- Bucket: leave empty since it doesn't get used
You should see 1 time series per node in your PVE cluster.
- Name: VictoriaMetrics (can be set to any string)
- URL: http(s)://<ip_or_host>:<port>
- set the URL to https if you have TLS enabled and http if you do not
- Port: This will vary depending on how you are sending data to VictoriaMetrics, but the defaults for all components are listed in the [data ingestion documentation](./README.md)
- Organization: leave empty since it doesn't get used
- Bucket: leave empty since it doesn't get used
- Token: your token from vmauth or leave blank if you don't have authentication enabled


## VictoriaMetrics and VictoriaLogs
This combines the Bearer Authentication section with the [VictoriaLogs docs for Vector](../VictoriaLogs/data-ingestion/Vector.md),
so you can send metrics and logs with one agent to multiple destinations:


The use of VictoriaMetrics Enterprise components is permitted in the following cases:
- Production use if you have a valid enterprise contract or a valid permit from the VictoriaMetrics company.
  Please contact us via [this page](https://victoriametrics.com/products/enterprise/) if you are interested in such a contract.
- [Managed VictoriaMetrics](./managed-victoriametrics/README.md) is built on top of VictoriaMetrics Enterprise.
See [these docs](#running-victoriametrics-enterprise) for details on how to run VictoriaMetrics enterprise.
## VictoriaMetrics enterprise features
VictoriaMetrics Enterprise includes [all the features of the community edition](./#prominent-features),
plus the following additional features:
- Stable releases with long-term support, which contain important bugfixes and security fixes. See [these docs](./LTS-releases.md).
- First-class consulting and technical support provided by the core VictoriaMetrics dev team.
- [Monitoring of monitoring](https://victoriametrics.com/products/mom/) - this feature allows forecasting
and preventing possible issues in VictoriaMetrics setups.
On top of this, the Enterprise package of VictoriaMetrics includes the following features:
- [Downsampling](./#downsampling) - this feature allows reducing storage costs
and increasing performance for queries over historical data.
- [Multiple retentions](./#retention-filters) - this feature allows reducing storage costs
by specifying different retentions for different datasets.
- [Automatic discovery of vmstorage nodes](./Cluster-VictoriaMetrics.md#automatic-vmstorage-discovery) -
this feature allows updating the list of `vmstorage` nodes at `vminsert` and `vmselect` without the need to restart these services.
- [Anomaly Detection Service](./anomaly-detection/README.md) - this feature allows automation and simplification of your alerting rules, covering [complex anomalies](https://victoriametrics.com/blog/victoriametrics-anomaly-detection-handbook-chapter-2/) found in metrics data.
- [Backup automation](./vmbackupmanager.md).
- [Advanced per-tenant stats](./PerTenantStatistic.md).
- [Advanced auth and rate limiter](./vmgateway/).
- [Automatic issuing of TLS certificates](./#automatic-issuing-of-tls-certificates).
- [mTLS for all the VictoriaMetrics components](./#mtls-protection).
- [mTLS for communications between cluster components](./Cluster-VictoriaMetrics.md#mtls-protection).
- [mTLS-based request routing](./vmauth.md#mtls-based-request-routing).
- [Kafka integration](./vmagent.md#kafka-integration).
- [Google PubSub integration](./vmagent.md#google-pubsub-integration).
- [Multitenant support in vmalert](./vmalert.md#multitenancy).
- [Ability to read alerting and recording rules from Object Storage](./vmalert.md#reading-rules-from-object-storage).
- [Ability to filter incoming requests by IP at vmauth](./vmauth.md#ip-filters).
Contact us via [this page](https://victoriametrics.com/products/enterprise/) if you are interested in VictoriaMetrics Enterprise.
It is allowed to run VictoriaMetrics Enterprise components in [cases listed here](#valid-cases-for-victoriametrics-enterprise).
VictoriaMetrics Enterprise components can be deployed via [VictoriaMetrics operator](./operator/README.md).
In order to use Enterprise components, it is required to provide the license key via the `license` field and adjust the image tag to the enterprise one.
Enterprise license key can be obtained at [this page](https://victoriametrics.com/products/enterprise/trial/).
For example, the following custom resource for [VictoriaMetrics single-node](./Single-Server-VictoriaMetrics.md)
is used to provide the key in plain text:
```yaml
# A sketch of such a custom resource; exact fields may differ between operator versions.
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMSingle
metadata:
  name: example-vmsingle
spec:
  license:
    key: {LICENSE_KEY}
```

Or create the secret via `kubectl`:

```sh
kubectl create secret generic vm-license --from-literal=license={BASE64_ENCODED_LICENSE_KEY}
```
See full list of CRD specifications [here](./operator/api.md).
## Monitoring license expiration
All the VictoriaMetrics Enterprise components expose the following metrics at the `/metrics` page:
* `vm_license_expires_at` - license expiration date in unix timestamp format
* `vm_license_expires_in_seconds` - the number of seconds left until the license expires
Example alerts for [vmalert](./vmalert.md) based on these metrics:
```yaml
groups:
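  # A sketch of a possible alerting rule based on these metrics; the threshold and annotation are illustrative.
  - name: vm-license
    rules:
      - alert: EnterpriseLicenseExpiresSoon
        expr: vm_license_expires_in_seconds < 7 * 24 * 3600
        annotations:
          description: "The VictoriaMetrics Enterprise license expires in less than a week"
```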


---
weight: 0
title: Guides
disableToc: true
---
{{% content "README.md" %}}


```sh
curl http://localhost:8081/api/fast
curl http://localhost:8081/api/slow
```
Open [vmui](../#vmui) and query `http_requests_total` or `http_active_requests`
with [metricsql](../MetricsQL.md).
![OTEL VMUI](getting-started-with-opentelemetry-vmui.webp)


**The guide covers:**
* The setup of a [VM Operator](https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-operator) via Helm in [Kubernetes](https://kubernetes.io/) with Helm charts.
* The setup of a [VictoriaMetrics Cluster](../Cluster-VictoriaMetrics.md) via [VM Operator](https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-operator).
* How to add CRD for a [VictoriaMetrics Cluster](../Cluster-VictoriaMetrics.md) via [VM Operator](https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-operator).
* How to visualize stored data
* How to store metrics in [VictoriaMetrics](https://victoriametrics.com)
## 1. VictoriaMetrics Helm repository
See how to work with a [VictoriaMetrics Helm repository in previous guide](./k8s-monitoring-via-vm-cluster.md#1-victoriametrics-helm-repository).
## 2. Install the VM Operator from the Helm chart
```
victoria-metrics-operator has been installed. Check its status by running:
kubectl --namespace default get pods -l "app.kubernetes.io/instance=vmoperator"
Get more information on https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-operator.
See "Getting started guide for VM Operator" on {{% ref "./getting-started-with-vm-operator.md" %}}
```
Run the following command to check that VM Operator is up and running:
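```sh
kubectl --namespace default get pods -l "app.kubernetes.io/instance=vmoperator"
```

The expected output: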
```
vmoperator-victoria-metrics-operator-67cff44cd6-s47n6   1/1   Running   0
```
> For this example we will use the default value for `name: example-vmcluster-persistent`. Change its value to fit your needs.
Run the following command to install [VictoriaMetrics Cluster](../Cluster-VictoriaMetrics.md) via [VM Operator](https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-operator):
<p id="example-cluster-config"></p>
The expected output:

```
vmcluster.operator.victoriametrics.com/example-vmcluster-persistent created
```
* By applying this CRD we install the [VictoriaMetrics cluster](../Cluster-VictoriaMetrics.md) to the default [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) of your k8s cluster with the following params:
* `retentionPeriod: "12"` defines the [retention](../Single-Server-VictoriaMetrics.md#retention) to 12 months.
* `replicaCount: 2` creates two replicas of vmselect, vminsert and vmstorage.
Please note that it may take some time for the pods to start. To check that the pods are started, run the following command:
```
NAME INSERT COUNT STORAGE COUNT SELECT COUNT AGE
example-vmcluster-persistent 2 2 2 5m53s operational
```
Internet traffic goes through the Kubernetes load balancer, which uses the set of Pods targeted by a [Kubernetes Service](https://kubernetes.io/docs/concepts/services-networking/service/). The service in the [VictoriaMetrics Cluster architecture](../Cluster-VictoriaMetrics.md#architecture-overview) which accepts the ingested data is named `vminsert`, and in Kubernetes it is the `vminsert` service. So we need to use it for the remote_write URL.
To get the name of `vminsert` services, please run the following command:
The expected output:

```
vminsert-example-vmcluster-persistent ClusterIP 10.107.47.136 <none> 8480/TCP 5m58s
```
To scrape metrics from Kubernetes with a VictoriaMetrics Cluster we will need to install [VMAgent](../vmagent.md) with some additional configurations.
Copy the `vminsert-example-vmcluster-persistent` (or whatever the user put into the [metadata.name field](./getting-started-with-vm-operator.md#example-cluster-config)) service name and add it to the `remoteWrite` URL from the [quick-start example](https://github.com/VictoriaMetrics/operator/blob/master/docs/quick-start.md#vmagent).
Here is an example of the full configuration that we need to apply:
## 4. Verifying VictoriaMetrics cluster
See [how to install and connect Grafana to VictoriaMetrics](./k8s-monitoring-via-vm-cluster.md#4-install-and-connect-grafana-to-victoriametrics-with-helm) but with one addition - we should get the name of `vmselect` service from the freshly installed VictoriaMetrics Cluster because it will now be different.
To get the new service name, please run the following command:


Using [Grafana](https://grafana.com/) with [vmgateway](../../vmgateway.md) is a great way to provide [multi-tenant](../../Cluster-VictoriaMetrics.md#multitenancy) access to your metrics.
vmgateway provides a way to authenticate users using [JWT tokens](https://en.wikipedia.org/wiki/JSON_Web_Token) issued by an external identity provider.
Those tokens can include information about the user and the tenant they belong to, which can be used
to restrict access to metrics to only those that belong to the tenant.
* Identity service that can issue [JWT tokens](https://en.wikipedia.org/wiki/JSON_Web_Token)
* [Grafana](https://grafana.com/)
* VictoriaMetrics single-node or cluster version
* [vmgateway](../../vmgateway.md)
## Configure identity service
The identity service must be able to issue JWT tokens with the following `vm_access` claim:

```json
{
  "vm_access": {
    "tenant_id": {
      "account_id": 0,
      "project_id": 0
    }
  }
}
```
See details about all supported options in the [vmgateway documentation](../../vmgateway.md#access-control).
### Configuration example for Keycloak
In order to use multi-tenant access with single-node VictoriaMetrics, you can use token claims such as `extra_labels`
or `extra_filters` filled dynamically by using Identity Provider's user information.
vmgateway uses those claims and [enhanced Prometheus querying API](https://docs.victoriametrics.com/single-server-victoriametrics/#prometheus-querying-api-enhancements)
vmgateway uses those claims and [enhanced Prometheus querying API](../../Single-Server-VictoriaMetrics.md#prometheus-querying-api-enhancements)
to provide additional filtering capabilities.
For example, claims of the following shape can be used to restrict user access to specific metrics (the label and filter values below are illustrative):
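```json
{
  "vm_access": {
    "extra_labels": {
      "team": "dev"
    },
    "extra_filters": ["{env=~\"aws|gcp\",cluster!=\"production\"}"]
  }
}
```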
With such claims, when the user tries to query `vm_http_requests_total`, the query will be transformed to
`vm_http_requests_total{team="dev",env=~"aws|gcp",cluster!="production"}`.
### Token signature verification
It is also possible to enable [JWT token signature verification](../../vmgateway.md#jwt-signature-verification) at
vmgateway.
To do this using the OpenID Connect discovery endpoint, you need to specify the `-auth.oidcDiscoveryEndpoints` flag.
On startup vmgateway will print a message confirming that it has successfully fetched the public keys from the OpenID Connect discovery endpoint.
It is also possible to provide the public keys directly via the `-auth.publicKeys` flag. See the [vmgateway documentation](../../vmgateway.md#jwt-signature-verification) for details.
## Use Grafana to query metrics
In the "Type and version" section it is recommended to set the type to "Prometheus".
This allows Grafana to use a more efficient API to get label values.
You can also use VictoriaMetrics [Grafana datasource](https://github.com/VictoriaMetrics/victoriametrics-datasource) plugin.
See installation instructions [here](../../victoriametrics-datasource.md#installation).
Enable `Forward OAuth identity` flag.<br>
![Oauth identity](grafana-ds.webp)


---
aliases:
- /guides/guide-delete-or-replace-metrics.html
---
Data deletion is an operation people expect a database to have. [VictoriaMetrics](https://victoriametrics.com) supports
[delete operation](../Single-Server-VictoriaMetrics.md#how-to-delete-time-series) but to a limited extent. Due to implementation details, VictoriaMetrics remains an [append-only database](https://en.wikipedia.org/wiki/Append-only), which perfectly fits the case for storing time series data. But the drawback of such architecture is that it is extremely expensive to mutate the data. Hence, `delete` or `update` operations support is very limited. In this guide, we'll walk through the possible workarounds for deleting or changing already written data in VictoriaMetrics.
### Precondition
- [Single-node VictoriaMetrics](../Single-Server-VictoriaMetrics.md);
- [Cluster version of VictoriaMetrics](../Cluster-VictoriaMetrics.md);
- [curl](https://curl.se/docs/manual.html)
- [jq tool](https://stedolan.github.io/jq/)
_Warning: time series deletion is not recommended for regular use. Each call to the delete API could have a performance penalty. The API is provided for one-off operations to delete malformed data or to satisfy GDPR compliance._
[Delete API](../Single-Server-VictoriaMetrics.md#how-to-delete-time-series) expects the user to specify a [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). So the first thing to do before the deletion is to verify whether the selector matches the correct series.
To check that metrics are present in **VictoriaMetrics Cluster** run the following command:
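```sh
curl -s 'http://vmselect:8481/select/0/prometheus/api/v1/series?match[]=<time-series-selector>' | jq
```

Substitute `<time-series-selector>` with the selector for the series you plan to delete.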
When you're sure [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) is correct, send a POST request to [delete API](../url-examples.md#apiv1admintsdbdelete_series) with [`match[]=<time-series-selector>`](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) argument. For example:
```sh
curl -s 'http://vmselect:8481/delete/0/prometheus/api/v1/admin/tsdb/delete_series?match[]=<time-series-selector>'
```
If the operation was successful, the deleted series will stop being [queryable](../keyConcepts.md#query-data). Storage space for the deleted time series isn't freed instantly - it is freed during subsequent [background merges of data files](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282). The background merges may never occur for data from previous months, so storage space won't be freed for historical data. In this case a [forced merge](../Single-Server-VictoriaMetrics.md#forced-merge) may help free up storage space.
To trigger [forced merge](../Single-Server-VictoriaMetrics.md#forced-merge) on VictoriaMetrics Cluster run the following command:
```sh
curl -v 'http://vmstorage:8482/internal/force_merge'
```

After the merge is complete, the data will be permanently deleted from the disk.
By default, VictoriaMetrics doesn't provide a mechanism for replacing or updating data. As a workaround, take the following actions:
- [export time series to a file](../url-examples.md#apiv1export);
- change the values of time series in the file and save it;
- [delete time series from a database](../url-examples.md#apiv1admintsdbdelete_series);
- [import saved file to VictoriaMetrics](../url-examples.md#apiv1import).
### Export metrics
### Delete metrics
See [How to delete metrics](#how-to-delete-metrics) in the previous paragraph.
### Import metrics
VictoriaMetrics supports a lot of [ingestion protocols](../Single-Server-VictoriaMetrics.md#how-to-import-time-series-data) and we will use [import from JSON line format](../Single-Server-VictoriaMetrics.md#how-to-import-data-in-json-line-format).
The next command will import metrics from `data.jsonl` to VictoriaMetrics:
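A sketch of such an import against a single-node VictoriaMetrics (adjust the host and path for the cluster version):

```sh
curl -X POST 'http://localhost:8428/api/v1/import' -T data.jsonl
```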


Setup a VictoriaMetrics Cluster with support of multiple retention periods within one installation.
**Enterprise Solution**
[VictoriaMetrics enterprise](../enterprise.md) supports specifying multiple retentions
for distinct sets of time series and [tenants](../Cluster-VictoriaMetrics.md#multitenancy)
via [retention filters](../Cluster-VictoriaMetrics.md#retention-filters).
**Open Source Solution**
The community version of VictoriaMetrics supports only one retention period per `vmstorage` node, set via the [-retentionPeriod](../#retention) command-line flag.
A multi-retention setup can be implemented by dividing a [VictoriaMetrics cluster](../Cluster-VictoriaMetrics.md) into logical groups with different retentions.
Example:
The setup should handle 3 different retention groups: 3 months, 1 year and 3 years.
The solution contains 3 groups of vmstorages + vminserts and one group of vmselects. Routing is done by [vmagent](../vmagent.md)
by [splitting data streams](../vmagent.md#splitting-data-streams-among-multiple-systems).
The [-retentionPeriod](../#retention) flag sets how long to keep the metrics.
The diagram below shows the proposed solution.
**Implementation Details**
1. Groups of vminserts A know only about vmstorages A, and this is explicitly specified via the `-storageNode` [configuration](../Cluster-VictoriaMetrics.md#cluster-setup).
1. Groups of vminserts B know only about vmstorages B, and this is explicitly specified via the `-storageNode` [configuration](../Cluster-VictoriaMetrics.md#cluster-setup).
1. Groups of vminserts C know only about vmstorages C, and this is explicitly specified via the `-storageNode` [configuration](../Cluster-VictoriaMetrics.md#cluster-setup).
1. vmselect reads data from all vmstorage nodes via the `-storageNode` [configuration](../Cluster-VictoriaMetrics.md#cluster-setup)
with the [deduplication](../Cluster-VictoriaMetrics.md#deduplication) setting equal to vmagent's scrape interval or the minimum interval between collected samples.
1. vmagent routes incoming metrics to the given set of `vminsert` nodes using relabeling rules specified at the `-remoteWrite.urlRelabelConfig` [configuration](../vmagent.md#relabeling).
**Multi-Tenant Setup**
Every group of vmstorages can handle one tenant or multiple tenants.
**Additional Enhancements**
You can set up [vmauth](../vmauth.md) for routing data to the given vminsert group depending on the needed retention.

View file

@ -10,7 +10,7 @@ aliases:
---
**The guide covers:**
* High availability monitoring via [VictoriaMetrics cluster](../Cluster-VictoriaMetrics.md) in [Kubernetes](https://kubernetes.io/) with Helm charts
* How to store metrics
* How to scrape metrics from k8s components using a service discovery
* How to visualize stored data
@ -25,7 +25,7 @@ aliases:
## 1. VictoriaMetrics Helm repository
Please see the relevant [VictoriaMetrics Helm repository](./k8s-monitoring-via-vm-cluster.md#1-victoriametrics-helm-repository) section in previous guides.
## 2. Install VictoriaMetrics Cluster from the Helm chart
@ -60,8 +60,8 @@ vmstorage:
* The `Helm install vmcluster vm/victoria-metrics-cluster` command installs [VictoriaMetrics cluster](../Cluster-VictoriaMetrics.md) to the default [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
* `dedup.minScrapeInterval: 1ms` configures [de-duplication](../#deduplication) for the cluster that de-duplicates data points in the same time series if they fall within the same discrete 1ms bucket. The earliest data point will be kept. In the case of equal timestamps, an arbitrary data point will be kept.
* `replicationFactor: 2` Replication factor for the ingested data, i.e. how many copies should be made among distinct `-storageNode` instances. If the replication factor is greater than one, the deduplication must be enabled on the remote storage side.
* `podAnnotations: prometheus.io/scrape: "true"` enables the scraping of metrics from the vmselect, vminsert and vmstorage pods.
* `podAnnotations: prometheus.io/port: "some_port"` enables the scraping of metrics from the vmselect, vminsert and vmstorage pods on the corresponding ports.
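Taken together, the bullets above correspond to a values fragment along these lines (a sketch; the key layout follows the bullets rather than the full chart schema):

```yaml
vmselect:
  extraArgs:
    dedup.minScrapeInterval: 1ms
replicationFactor: 2
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "some_port"
```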
@ -145,10 +145,10 @@ vmcluster-victoria-metrics-cluster-vmstorage-2 1/1 Running
## 3. Install vmagent from the Helm chart
To scrape metrics from Kubernetes with a VictoriaMetrics Cluster we will need to install [vmagent](../vmagent.md) with some additional configurations. To do so, please run the following command:
```shell
helm install vmagent vm/victoria-metrics-agent -f {{% ref "./" %}}guide-vmcluster-vmagent-values.yaml
```
Here is the full content of the `guide-vmcluster-vmagent-values.yaml` file:
@ -348,7 +348,7 @@ The expected output is:
* Query `http://127.0.0.1:8481/select/0/prometheus/api/v1/query_range` uses [VictoriaMetrics querying API](../Cluster-VictoriaMetrics.md#url-format) to fetch previously stored data points;
* Argument `query=count(up{kubernetes_pod_name=~".*vmselect.*"})` specifies the query we want to execute. Specifically, we calculate the number of `vmselect` pods.
* Additional arguments `start=-10m&step=1m` set the requested time range from -10 minutes (10 minutes ago) to now (the default value when the `end` argument is omitted) and a step (the distance between returned data points) of 1 minute;
* By adding `| jq` we pipe the output to the jq utility, which pretty-prints the JSON response.
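Assembled into a single command, the query above looks roughly like this (assuming the vmselect port-forward from the earlier steps is still active):

```shell
curl -s 'http://127.0.0.1:8481/select/0/prometheus/api/v1/query_range' \
  --data-urlencode 'query=count(up{kubernetes_pod_name=~".*vmselect.*"})' \
  -d 'start=-10m' -d 'step=1m' | jq
```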
@ -356,7 +356,7 @@ The expected output is:
The expected result of the query `count(up{kubernetes_pod_name=~".*vmselect.*"})` should be equal to `3` - the number of replicas we set via the `replicaCount` parameter.
To test via Grafana, we need to install it first. [Install and connect Grafana to VictoriaMetrics](./k8s-monitoring-via-vm-cluster.md#4-install-and-connect-grafana-to-victoriametrics-with-helm), log in to Grafana and open the metrics [Explore](http://127.0.0.1:3000/explore) page.
![Explore](k8s-ha-monitoring-via-vm-cluster_explore.webp)

View file

@ -10,7 +10,7 @@ aliases:
---
**This guide covers:**
* The setup of a [VictoriaMetrics cluster](../Cluster-VictoriaMetrics.md) in [Kubernetes](https://kubernetes.io/) via Helm charts
* How to scrape metrics from k8s components using service discovery
* How to visualize stored data
* How to store metrics in [VictoriaMetrics](https://victoriametrics.com) tsdb
@ -29,7 +29,7 @@ We will use:
> For this guide we will use Helm 3 but if you already use Helm 2 please see this [https://github.com/VictoriaMetrics/helm-charts#for-helm-v2](https://github.com/VictoriaMetrics/helm-charts#for-helm-v2)
You need to add the VictoriaMetrics Helm repository to install VictoriaMetrics components. We're going to use [VictoriaMetrics Cluster](../Cluster-VictoriaMetrics.md). You can do this by running the following command:
```shell
helm repo add vm https://victoriametrics.github.io/helm-charts/
```
@ -83,7 +83,7 @@ vmstorage:
* By running `Helm install vmcluster vm/victoria-metrics-cluster` we install [VictoriaMetrics cluster](../Cluster-VictoriaMetrics.md) to default [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) inside your cluster.
* By adding `podAnnotations: prometheus.io/scrape: "true"` we enable the scraping of metrics from the vmselect, vminsert and vmstorage pods.
* By adding `podAnnotations: prometheus.io/port: "some_port"` we enable the scraping of metrics from the vmselect, vminsert and vmstorage pods on the corresponding ports as well.
@ -145,7 +145,7 @@ for example - inside the Kubernetes cluster:
It's important to remember the datasource URL (copy the lines from the output).
Verify that [VictoriaMetrics cluster](../Cluster-VictoriaMetrics.md) pods are up and running by executing the following command:
```sh
kubectl get pods
```
@ -166,11 +166,11 @@ vmcluster-victoria-metrics-cluster-vmstorage-1 1/1 Running
## 3. Install vmagent from the Helm chart
To scrape metrics from Kubernetes with a [VictoriaMetrics cluster](../Cluster-VictoriaMetrics.md) we need to install [vmagent](../vmagent.md) with additional configuration. To do so, please run these commands in your terminal:
```shell
helm install vmagent vm/victoria-metrics-agent -f {{% ref "./" %}}guide-vmcluster-vmagent-values.yaml
```
Here is the full content of the `guide-vmcluster-vmagent-values.yaml` file:
@ -400,7 +400,7 @@ config:
* By adding `remoteWriteUrls: - http://vmcluster-victoria-metrics-cluster-vminsert.default.svc.cluster.local:8480/insert/0/prometheus/` we configure [vmagent](../vmagent.md) to write scraped metrics to the `vminsert` service (see the fragment below).
* The second part of this yaml file is needed to add the `metric_relabel_configs` section that helps us to show Kubernetes metrics on the Grafana dashboard.
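The relevant values fragment is small (a sketch; the service name matches the vminsert service created by the chart above):

```yaml
remoteWriteUrls:
- http://vmcluster-victoria-metrics-cluster-vminsert.default.svc.cluster.local:8480/insert/0/prometheus/
```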
@ -481,8 +481,8 @@ EOF
By running this command we:
* Install Grafana from the Helm repository.
* Provision a VictoriaMetrics datasource with the URL we remembered from the output above.
* Add [this dashboard](https://grafana.com/grafana/dashboards/11176) for [VictoriaMetrics Cluster](../Cluster-VictoriaMetrics.md).
* Add [this dashboard](https://grafana.com/grafana/dashboards/12683) for [VictoriaMetrics Agent](../vmagent.md).
* Add [this dashboard](https://grafana.com/grafana/dashboards/14205) to see Kubernetes cluster metrics.

View file

@ -10,7 +10,7 @@ aliases:
---
**This guide covers:**
* The setup of a [VictoriaMetrics Single](../Single-Server-VictoriaMetrics.md) in [Kubernetes](https://kubernetes.io/) via Helm charts
* How to scrape metrics from k8s components using service discovery
* How to visualize stored data
* How to store metrics in [VictoriaMetrics](https://victoriametrics.com) tsdb
@ -29,7 +29,7 @@ We will use:
> For this guide we will use Helm 3 but if you already use Helm 2 please see this [https://github.com/VictoriaMetrics/helm-charts#for-helm-v2](https://github.com/VictoriaMetrics/helm-charts#for-helm-v2)
You need to add the VictoriaMetrics Helm repository to install VictoriaMetrics components. We're going to use [VictoriaMetrics Single](../Single-Server-VictoriaMetrics.md). You can do this by running the following command:
```shell
helm repo add vm https://victoriametrics.github.io/helm-charts/
```
@ -63,12 +63,12 @@ vm/victoria-metrics-single 0.7.5 1.62.0 Victoria Metrics Single
## 2. Install [VictoriaMetrics Single](../Single-Server-VictoriaMetrics.md) from Helm Chart
Run this command in your terminal:
```shell
helm install vmsingle vm/victoria-metrics-single -f {{% ref "./" %}}guide-vmsingle-values.yaml
```
Here is the full content of the `guide-vmsingle-values.yaml` file:
@ -159,9 +159,9 @@ server:
* By running `helm install vmsingle vm/victoria-metrics-single` we install [VictoriaMetrics Single](../Single-Server-VictoriaMetrics.md) to the default [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) inside your cluster
* By adding `scrape: enable: true` we enable autodiscovery scraping from the Kubernetes cluster for [VictoriaMetrics Single](../Single-Server-VictoriaMetrics.md) (see the fragment below)
* On line 166 of the [manifest](./guide-vmsingle-values.yaml) we added the `metric_relabel_configs` section that helps us show Kubernetes metrics on the Grafana dashboard.
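The corresponding values fragment looks roughly like this (a sketch; keys as referenced in the bullets above, the rest of the chart schema is assumed):

```yaml
server:
  scrape:
    enable: true
```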
As a result of the command you will see the following output:

View file

@ -39,20 +39,20 @@ Using this schema, you can achieve:
* If you scrape data from Prometheus-compatible targets, then please specify `-promscrape.config` parameter as well.
Here is a Quickstart guide for [vmagent](../vmagent.md#quick-start)
### How to read the data from Ground Control regions
You can use one of the following options:
1. Multi-level [vmselect setup](../Cluster-VictoriaMetrics.md#multi-level-cluster-setup) in cluster setup - top-level vmselect(s) read data from cluster-level vmselects
* Returns data even if one of the clusters is unavailable
* Merges data from both sources. You need to turn on [deduplication](../Cluster-VictoriaMetrics.md#deduplication) to remove duplicates
1. Regional endpoints - use one regional endpoint as default and switch to another if there is an issue.
1. Load balancer that sends queries to a particular region. Its simplicity is both the benefit and the drawback of this setup.
1. Promxy - proxy that reads data from multiple Prometheus-like sources. It allows reading data more intelligently to cover the region's unavailability out of the box. It doesn't support MetricsQL yet (please check this issue).
1. Global vmselect in cluster setup - you can set up an additional subset of vmselects that knows about all storages in all regions.
* [Deduplication](../Cluster-VictoriaMetrics.md#deduplication) with a 1ms interval must be turned on on the vmselect side. This setup allows you to query data using MetricsQL (see the sketch after this list).
* The downside is that vmselect waits for a response from all storages in all regions.
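For the global vmselect option, the additional vmselect layer lists the vmstorage nodes from all regions via `-storageNode` and enables 1ms deduplication (a sketch; hostnames and ports are placeholders):

```shell
/path/to/vmselect \
  -storageNode=vmstorage-region1-1:8401,vmstorage-region2-1:8401 \
  -dedup.minScrapeInterval=1ms
```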
@ -78,8 +78,8 @@ An additional VictoriaMetrics single can be set up in every region, scraping met
You may also evaluate the option to send these metrics to the neighbour region to achieve HA.
Additional context
* VictoriaMetrics Single - [monitoring docs](../Single-Server-VictoriaMetrics.md#monitoring)
* VictoriaMetrics Cluster - [monitoring docs](../Cluster-VictoriaMetrics.md#monitoring)
### What more can we do?

View file

@ -8,14 +8,14 @@ menu:
aliases:
- /guides/understand-your-setup-size.html
---
The docs provide a simple and high-level overview of Ingestion Rate, Active Time Series, and Query per Second. These terms are a part of capacity planning ([Single-Node](../Single-Server-VictoriaMetrics.md#capacity-planning), [Cluster](../Cluster-VictoriaMetrics.md#capacity-planning)) and [Managed VictoriaMetrics](../managed-victoriametrics/README.md) pricing.
## Terminology
- [Active Time Series](../FAQ.md#what-is-an-active-time-series) - the [time series](../keyConcepts.md#time-series) that receive at least one sample for the latest hour;
- Ingestion Rate - how many [data points](../keyConcepts.md#raw-samples) you ingest into the database per second;
- [Churn Rate](../FAQ.md#what-is-high-churn-rate) - how frequently a new time series is registered. For example, in the Kubernetes ecosystem, the pod name is a part of time series labels. And when the pod is re-created, its name changes and affects all the exposed metrics, which results in high cardinality and Churn Rate problems;
- Query per Second - the number of [read queries](../keyConcepts.md#query-data) per second;
- Retention Period - for how long data is stored in the database.
## Calculation
@ -36,7 +36,7 @@ _Note: if you have more than one Prometheus, you need to run this query across a
[CollectD](https://collectd.org/) exposes 346 series per host. The number of exposed series heavily depends on the installed plugins (`cgroups`, `conntrack`, `contextswitch`, `CPU`, `df`, `disk`, `ethstat`, `fhcount`, `interface`, `load`, `memory`, `processes`, `python`, `tcpconns`, `write_graphite`)
[Replication Factor](../Cluster-VictoriaMetrics.md#replication-and-data-safety) multiplies the number of Active Time Series since each series will be stored ReplicationFactor times.
### Churn Rate
@ -52,7 +52,7 @@ To track the Churn Rate in VictoriaMetrics, use the following query:
### Ingestion Rate
Ingestion rate is how many samples are pulled (scraped) or pushed per second into the database. For example, if you scrape a service that exposes 1000 time series with an interval of 15s, the Ingestion Rate would be 1000/15 = 66 [samples](../keyConcepts.md#raw-samples) per second. The more services you scrape, or the lower the scrape interval, the higher the Ingestion Rate.
For Ingestion Rate calculation, you need to know how many time series you pull or push and how often you save them into VictoriaMetrics. To be more specific, the formula is the Number Of Active Time Series / Metrics Collection Interval.
If you run Prometheus, you can get the Ingestion Rate by running the following query:
@ -101,7 +101,7 @@ You have a Kubernetes environment that produces 5k time series per second with 1
VictoriaMetrics requires additional disk space for the index. A lower Churn Rate means lower disk space usage for the index because of better compression.
Usually, the index takes about 20% of the disk space used for storing data. High-cardinality setups may use more than 50% of the datapoints storage size for the index.
You can significantly reduce the amount of disk usage by specifying [Downsampling](../#downsampling) and [Retention Filters](../#retention-filters) that are lower than the Retention Period. Both settings are available in Managed VictoriaMetrics and Enterprise.
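For illustration, flags along these lines implement that (a sketch; the periods and the filter are placeholders):

```shell
# Keep only 5m-downsampled data for samples older than 30 days,
# and keep series matching {env="dev"} for 7 days only.
-downsampling.period=30d:5m -retentionFilter='{env="dev"}:7d'
```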
## Align Terms with VictoriaMetrics setups
@ -122,4 +122,4 @@ You can collect metrics from
### On-Premise
Please follow these capacity planning documents ([Single-Node](../Single-Server-VictoriaMetrics.md#capacity-planning), [Cluster](../Cluster-VictoriaMetrics.md#capacity-planning)). They contain the number of CPUs and the amount of memory required to handle the Ingestion Rate, Active Time Series, Churn Rate, QPS and Retention Period.

View file

@ -15,11 +15,11 @@ This guide explains the different ways in which you can use vmalert in conjuncti
## Preconditions
* [vmalert](../vmalert.md) is installed. You can obtain it by building it from [source](../vmalert.md#quickstart), downloading it from the [GitHub releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest), or using the [docker image](https://hub.docker.com/r/victoriametrics/vmalert) for the container ecosystem (such as docker, k8s, etc.).
* [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/) is installed.
* You have a [single or cluster](./quickstart.md#creating-deployment) deployment in [Managed VictoriaMetrics](./overview.md).
* If you are using helm, add the [VictoriaMetrics helm chart](https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-alert#how-to-install) repository to your helm repositories. This step is optional.
* If you are using [vmoperator](../operator/quick-start.md#quick-start), make sure that it and its CRDs are installed. This step is also optional.
## Setup
@ -49,7 +49,7 @@ groups:
To use vmalert with Managed VictoriaMetrics, you must create a read/write token, or use an existing one. The token must have write access to ingest recording rules, ALERTS and ALERTS_FOR_STATE metrics, and read access for rules evaluation.
For instructions on how to create tokens, please refer to this section of the [documentation](./quickstart.md#deployment-access).
#### Single-Node

View file

@ -14,7 +14,7 @@ Monitoring kubernetes cluster is necessary to build SLO/SLI, to analyze performa
To enable kubernetes cluster monitoring, we will be collecting metrics about cluster performance and utilization from kubernetes components like `kube-api-server`, `kube-controller-manager`, `kube-scheduler`, `kube-state-metrics`, `etcd`, `core-dns`, `kubelet` and `kube-proxy`. We will also install some recording rules, alert rules and dashboards to provide visibility of cluster performance, as well as alerting for cluster metrics.
For node resource utilization we will be collecting metrics from `node-exporter`. We will also install a dashboard and alerts for node-related metrics.
For workload monitoring in the Kubernetes cluster we will use [VictoriaMetrics Operator](../operator/VictoriaMetrics-Operator.md). It enables us to define scrape jobs using the Kubernetes CRDs [VMServiceScrape](../operator/design.md#vmservicescrape) and [VMPodScrape](../operator/design.md#vmpodscrape). To add alerts or recording rules for workloads we can use the [VMRule](../operator/design.md#vmrule) CRD.
## Overview
@ -24,7 +24,7 @@ This chart will install `VMOperator`, `VMAgent`, `NodeExporter`, `kube-state-met
## Prerequisites
- Active Managed VictoriaMetrics instance. You can learn how to sign up for Managed VictoriaMetrics [here](./quickstart.md#how-to-register).
- Access to your kubernetes cluster
- Helm binary. You can find installation instructions [here](https://helm.sh/docs/intro/install/)

View file

@ -11,7 +11,7 @@ aliases:
---
VictoriaMetrics is a fast and easy-to-use monitoring solution and time series database.
It integrates well with existing monitoring systems such as Grafana, Prometheus, Graphite,
InfluxDB, OpenTSDB and DataDog - see [these docs](../#how-to-import-time-series-data) for details.
The most common use cases for VictoriaMetrics are:
* Long-term remote storage for Prometheus;
@ -30,8 +30,8 @@ maintenance.
Managed VictoriaMetrics comes with the following features:
* It can be used as a Managed Prometheus - just configure Prometheus or vmagent to write data to Managed VictoriaMetrics and then use the provided endpoint as a Prometheus datasource in Grafana;
* Built-in [Alerting & Recording](./alertmanager-setup-for-deployment.md#configure-alerting-rules) rules execution;
* Hosted [Alertmanager](./alertmanager-setup-for-deployment.md) for sending notifications;
* Every Managed VictoriaMetrics deployment runs in an isolated environment, so deployments cannot interfere with each other;
* Managed VictoriaMetrics deployment can be scaled up or scaled down in a few clicks;
* Automated backups;

View file

@ -199,7 +199,7 @@ On the opened screen, choose parameters of your new deployment:
* `Region` - the AWS region where the deployment will run;
* Desired `storage capacity` for storing metrics (you can always expand the disk size later);
* `Retention` period for stored metrics.
* `Size` of your deployment [based on your needs](../guides/understand-your-setup-size.md)
![Create deployment form](create_deployment_form.webp)
@ -263,11 +263,11 @@ To discover additional configuration options click on `Advanced Settings` button
In that section, additional params can be set:
* [`Deduplication`](../Cluster-VictoriaMetrics.md#deduplication) defines the interval within which the deployment keeps only a single raw sample with the biggest timestamp;
* `Maintenance Window` - when the deployment should start an upgrade process if needed;
* `Settings` allows defining different flags for the deployment:
1. [cluster components flags](../Cluster-VictoriaMetrics.md#list-of-command-line-flags).
2. [single version flags](../Single-Server-VictoriaMetrics.md#list-of-command-line-flags).
Please note, such an update requires a deployment restart and may result in a short downtime for single-node deployments.

View file

@ -28,7 +28,7 @@ scrape_configs:
```yaml
scrape_configs:
- job_name: node-exporter
  static_configs:
  - targets:
    - localhost:9100
```
After you have created the `scrape.yaml` file, download and unpack [single-node VictoriaMetrics](./README.md) to the same directory:
```
wget https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v1.102.0/victoria-metrics-linux-amd64-v1.102.0.tar.gz
@ -36,7 +36,7 @@ tar xzf victoria-metrics-linux-amd64-v1.102.0.tar.gz
```
Then start VictoriaMetrics and instruct it to scrape targets defined in `scrape.yaml` and save scraped metrics
to local storage according to [these docs](./#how-to-scrape-prometheus-exporters-such-as-node-exporter):
```
./victoria-metrics-prod -promscrape.config=scrape.yaml
```
@ -46,7 +46,7 @@ Now open the `http://localhost:8428/targets` page in web browser in order to see
The page must contain the information about the target at `http://localhost:9100/metrics` url.
It is likely the target has `state: down` if you didn't start [`node-exporter`](https://github.com/prometheus/node_exporter) on `localhost`.
Let's add a new scrape config to `scrape.yaml` for scraping [VictoriaMetrics metrics](./#monitoring):
```yaml
scrape_configs:
- job_name: node-exporter
  static_configs:
  - targets:
    - localhost:9100
- job_name: victoriametrics
  static_configs:
  - targets:
    - http://localhost:8428/metrics
```
Note that the last specified target contains the full url instead of host and port.
This is an extension supported by VictoriaMetrics and [vmagent](./vmagent.md) - you can use both `host:port`
and full urls in scrape target lists.
Send a `SIGHUP` signal to the `victoria-metrics-prod` process, so it [reloads the updated `scrape.yaml`](./vmagent.md#configuration-update):
```
kill -HUP `pidof victoria-metrics-prod`
```
@ -73,10 +73,10 @@ kill -HUP `pidof victoria-metrics-prod`
Now the `http://localhost:8428/targets` page must contain two targets - `http://localhost:9100/metrics` and `http://localhost:8428/metrics`.
The last one should have `state: up`, since this is VictoriaMetrics itself.
Let's query the scraped metrics. Open `http://localhost:8428/vmui/` aka [vmui](./#vmui), enter `up` in the query input field
and press `enter`. You'll see a graph for `up` metrics. It must contain two lines for the targets defined in `scrape.yaml` file above.
See [these docs](./vmagent.md#automatically-generated-metrics) about `up` metric. You can explore other scraped metrics
in `vmui` via [Prometheus metrics explorer](./#metrics-explorer).
Let's look closely to the contents of the `scrape.yaml` file created above:
@ -92,35 +92,35 @@ scrape_configs:
```yaml
scrape_configs:
- job_name: node-exporter
  static_configs:
  - targets:
    - localhost:9100
- job_name: victoriametrics
  static_configs:
  - targets:
    - http://localhost:8428/metrics
```
The [`scrape_configs`](./sd_configs.md#scrape_configs) section contains a list of scrape configs.
Our `scrape.yaml` file contains two scrape configs - for `job_name: node-exporter` and for `job_name: victoriametrics`.
[vmagent](./vmagent.md) and [single-node VictoriaMetrics](./README.md)
can efficiently process thousands of scrape configs in production.
Every scrape config in the list **must** contain the `job_name` field - its value is used as the [`job`](https://prometheus.io/docs/concepts/jobs_instances/) label
in all the metrics scraped from targets defined in this scrape config.
Every scrape config must contain at least a single section from [this list](./sd_configs.md#supported-service-discovery-configs).
Every scrape config may contain other options described [here](./sd_configs.md#scrape_configs).
In our case only [`static_configs`](./sd_configs.md#static_configs) sections are used.
These sections consist of a list of static configs according to [these docs](./sd_configs.md#static_configs).
Every static config contains a list of `targets`, which need to be scraped. The target address is used as [`instance`](https://prometheus.io/docs/concepts/jobs_instances/)
label in all the metrics scraped from the target.
[vmagent](./vmagent.md) and [single-node VictoriaMetrics](./README.md)
can efficiently process tens of thousands of targets in production. If you need to scrape more targets,
then see [these docs](./vmagent.md#scraping-big-number-of-targets).
Targets are scraped at `http` or `https` urls, which are formed according to [these rules](./relabeling.md#how-to-modify-scrape-urls-in-targets).
It is possible to modify scrape urls via [relabeling](./relabeling.md) if needed.
## File-based target discovery
It may be inconvenient to update the `scrape.yaml` file with [`static_configs`](./sd_configs.md#static_configs)
every time a new scrape target is added, changed or removed. In this case [`file_sd_configs`](./sd_configs.md#file_sd_configs)
can come to the rescue. It allows defining a list of scrape targets in `JSON` files, and automatically updating the list of scrape targets
at [vmagent](./vmagent.md) or [single-node VictoriaMetrics](./README.md) side
when the corresponding `JSON` files are updated.
Let's create `node_exporter_targets.json` file with the following contents:
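A minimal sketch of such a file in the standard `file_sd` JSON format (the target addresses and labels are illustrative):

```json
[
  {
    "targets": ["localhost:9100", "host123:9100"],
    "labels": {"env": "dev"}
  }
]
```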
@ -143,7 +143,7 @@ scrape_configs:
Point `scrape.yaml` at this file via [`file_sd_configs`](./sd_configs.md#file_sd_configs):

```yaml
scrape_configs:
- job_name: node-exporter
  file_sd_configs:
  - files:
    - node_exporter_targets.json
```
Then start [single-node VictoriaMetrics](./README.md) according to [these docs](./#how-to-scrape-prometheus-exporters-such-as-node-exporter):
```sh
# Download and unpack single-node VictoriaMetrics
wget https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v1.102.0/victoria-metrics-linux-amd64-v1.102.0.tar.gz
tar xzf victoria-metrics-linux-amd64-v1.102.0.tar.gz

# Start VictoriaMetrics with -promscrape.config pointing to scrape.yaml
./victoria-metrics-prod -promscrape.config=scrape.yaml
```
@ -167,13 +167,13 @@ Now let's add more targets to `node_exporter_targets.json`:
Note that the added targets contain full urls instead of host and port.
This is an extension supported by VictoriaMetrics and [vmagent](./vmagent.md) - you can use both `host:port`
and full urls in scrape target lists.
Save the updated `node_exporter_targets.json`, wait for 30 seconds and then refresh the `http://localhost:8428/targets` page.
Now this page must contain all the targets defined in the updated `node_exporter_targets.json`.
By default [vmagent](./vmagent.md) and [single-node VictoriaMetrics](./README.md)
check for updates in `files` specified at [`file_sd_configs`](./sd_configs.md#file_sd_configs)
every 30 seconds. This interval can be changed via `-promscrape.fileSDCheckInterval` command-line flag.
For example, the following command starts VictoriaMetrics, which checks for updates in `file_sd_configs` every 5 seconds:
@ -197,22 +197,22 @@ scrape_configs:
```sh
./victoria-metrics-prod -promscrape.config=scrape.yaml -promscrape.fileSDCheckInterval=5s
```
It is possible to specify directories with `*` wildcards for distinct sets of targets at `file_sd_configs`.
See [these docs](./sd_configs.md#file_sd_configs) for details.
[vmagent](./vmagent.md) and [single-node VictoriaMetrics](./README.md)
can efficiently scrape tens of thousands of scrape targets. If you need to scrape more targets,
then see [these docs](./vmagent.md#scraping-big-number-of-targets).
Targets are scraped at `http` or `https` urls, which are formed according to [these rules](./relabeling.md#how-to-modify-scrape-urls-in-targets).
It is possible to modify scrape urls via [relabeling](./relabeling.md) if needed.
## HTTP-based target discovery
It may be inconvenient to maintain a list of local files for [`file_sd_configs`](./sd_configs.md#file_sd_configs).
In this case [`http_sd_configs`](./sd_configs.md#http_sd_configs) can help.
They allow specifying a list of `http` or `https` urls that return the targets to be scraped.
For example, the following [`-promscrape.config`](./#how-to-scrape-prometheus-exporters-such-as-node-exporter)
periodically fetches the list of targets from the specified url:
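A minimal sketch of such a config (the url and job name are placeholders; the endpoint must return targets in the same JSON format as `file_sd` files):

```yaml
scrape_configs:
- job_name: node-exporter
  http_sd_configs:
  - url: http://config-server:8080/api/v1/targets
```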
@ -233,7 +233,7 @@ If you feel brave, let's look at a few typical cases for Kubernetes monitoring.
### Discovering and scraping `node-exporter` targets in Kubernetes
The following [`-promscrape.config`](./#how-to-scrape-prometheus-exporters-such-as-node-exporter)
instructs discovering and scraping all the [`node-exporter`](https://github.com/prometheus/node_exporter) targets inside the Kubernetes cluster:
```yaml
@ -258,18 +258,18 @@ scrape_configs:
target_label: node
```
See [`kubernetes_sd_configs` docs](./sd_configs.md#kubernetes_sd_configs) for more details.
See [relabeling docs](./vmagent.md#relabeling) for details on `relabel_configs`.
### Discovering and scraping `kube-state-metrics` in Kubernetes
[kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) is a special metrics exporter,
which exposes `state` metrics for all the Kubernetes objects such as `container`, `pod`, `node`, etc.
It already sets `namespace`, `container`, `pod` and `node` labels for every exposed metric,
so these labels shouldn't be overridden via [target relabeling](./vmagent.md#relabeling).
The following [`-promscrape.config`](./#how-to-scrape-prometheus-exporters-such-as-node-exporter)
instructs discovering and scraping the [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) target inside the Kubernetes cluster:
```yaml
@ -296,14 +296,14 @@ scrape_configs:
action: keep
```
See [`kubernetes_sd_configs` docs](./sd_configs.md#kubernetes_sd_configs) for more details.
See [relabeling docs](./vmagent.md#relabeling) for details on `relabel_configs`.
### Discovering and scraping metrics from `cadvisor`
[cadvisor](https://github.com/google/cadvisor) exposes resource usage metrics for every container in Kubernetes.
The following [`-promscrape.config`](./#how-to-scrape-prometheus-exporters-such-as-node-exporter)
can be used for collecting `cadvisor` metrics in Kubernetes:
```yaml
@ -334,15 +334,15 @@ scrape_configs:
target_label: instance
```
See [`kubernetes_sd_configs` docs](./sd_configs.md#kubernetes_sd_configs) for more details.
See [relabeling docs](./vmagent.md#relabeling) for details on `relabel_configs`.
See [these docs](./sd_configs.md#http-api-client-options) for details on `bearer_token_file` and `tls_config` options.
### Discovering and scraping metrics for a particular container in Kubernetes
The following [`-promscrape.config`](./#how-to-scrape-prometheus-exporters-such-as-node-exporter)
instructs discovering and scraping metrics for all the containers with the name `my-super-app`.
It is expected that these containers expose only a single TCP port, which serves its metrics at `/metrics` page
according to [Prometheus text exposition format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format):
@ -355,7 +355,7 @@ scrape_configs:
```yaml
relabel_configs:
# Leave only targets with the container name, which matches the `job_name` specified above
# See {{% ref "./relabeling.md#how-to-modify-instance-and-job" %}} for details on `job` label.
#
- source_labels: [__meta_kubernetes_pod_container_name]
target_label: job
@ -375,6 +375,6 @@ scrape_configs:
target_label: container
```
See [`kubernetes_sd_configs` docs](./sd_configs.md#kubernetes_sd_configs) for more details.
See [relabeling docs](./vmagent.md#relabeling) for details on `relabel_configs`.

View file

@ -17,7 +17,7 @@ You can use `vmalert-tool` to run unit tests for alerting and recording rules.
It will perform the following actions:
* sets up an isolated VictoriaMetrics instance;
* simulates the periodic ingestion of time series;
* queries the ingested data for recording and alerting rules evaluation like [vmalert](./vmalert.md);
* checks whether the firing alerts or resulting recording rules match the expected results.
See how to run vmalert-tool for unit test below:
@ -30,13 +30,13 @@ See how to run vmalert-tool for unit test below:
vmalert-tool unittest is compatible with [Prometheus config format for tests](https://prometheus.io/docs/prometheus/latest/configuration/unit_testing_rules/#test-file-format)
except `promql_expr_test` field. Use `metricsql_expr_test` field name instead. The name is different because vmalert-tool
validates and executes [MetricsQL](./MetricsQL.md) expressions,
which aren't always backward compatible with [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/).
### Limitations
* vmalert-tool evaluates all the groups defined in `rule_files` using `evaluation_interval`(default `1m`) instead of `interval` under each rule group.
* vmalert-tool shares the same limitation with [vmalert](./vmalert.md#limitations) on chaining rules under one group:
>by default, rules execution is sequential within one group, but persistence of execution results to remote storage is asynchronous. Hence, the user shouldn't rely on chaining of recording rules when the result of a previous recording rule is reused in the next one;
@ -63,7 +63,7 @@ groups:
The configuration format for files specified in `--files` cmd-line flag is the following:
```yaml
# Path to the files or http url containing [rule groups](./vmalert.md#groups) configuration.
# Enterprise version of vmalert-tool supports S3 and GCS paths to rules.
rule_files:
[ - <string> ]
```
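For illustration, a minimal test file may look like this (a sketch; it follows the Prometheus test-file format with the `metricsql_expr_test` field name, and the rule file, series and values are placeholders):

```yaml
rule_files:
  - rules.yaml
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      - series: 'up{job="node-exporter"}'
        values: '0 0 0'
    metricsql_expr_test:
      - expr: up == 0
        eval_time: 3m
        exp_samples:
          - labels: 'up{job="node-exporter"}'
            value: 0
```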

View file

@ -11,7 +11,7 @@ aliases:
---
# vmbackupmanager
***vmbackupmanager is a part of [enterprise package](./enterprise.md).
It is available for download and evaluation at [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest).
See how to request a free trial license [here](https://victoriametrics.com/products/enterprise/trial/).***
@ -23,10 +23,10 @@ which represent the backup intervals (hourly, daily, weekly and monthly)
The required flags for running the service are as follows:
* `-license` or `-licenseFile`. See [these docs](./enterprise.md#running-victoriametrics-enterprise).
* `-storageDataPath` - path to VictoriaMetrics or vmstorage data path to make backup from.
* `-snapshot.createURL` - URL for creating VictoriaMetrics snapshots; a snapshot is created automatically during each backup. Example: <http://victoriametrics:8428/snapshot/create>
* `-dst` - backup destination at [the supported storage types](./vmbackup.md#supported-storage-types).
* `-credsFilePath` - path to file with GCS or S3 credentials. Credentials are loaded from default locations if not set.
See [https://cloud.google.com/iam/docs/creating-managing-service-account-keys](https://cloud.google.com/iam/docs/creating-managing-service-account-keys)
and [https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html](https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html).
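Putting the required flags together, a run can be sketched as follows (the binary name, license path, bucket and credentials paths are placeholders):

```shell
./vmbackupmanager-prod \
  -licenseFile=/path/to/license \
  -storageDataPath=/victoria-metrics-data \
  -snapshot.createURL=http://victoriametrics:8428/snapshot/create \
  -dst=gs://example-bucket/vmbackups \
  -credsFilePath=/etc/gcs/creds.json
```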
@ -55,9 +55,9 @@ To get the full list of supported flags please run the following command:
The service creates a **full** backup each run. This means that the system can be restored fully
from any particular backup using [vmrestore](./vmrestore.md).
Backup manager uploads only the data that has been changed or created since the most recent backup
([incremental backup](./vmbackup.md#incremental-backups)).
This reduces the consumed network traffic and the time needed for performing the backup.
See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-series-databases-533c1a927883) for details.
@ -123,13 +123,13 @@ The result on the GCS bucket
![latest folder](vmbackupmanager_latest_folder.webp)
`vmbackupmanager` uses the [smart backups](./vmbackup.md#smart-backups) technique in order
to accelerate backups and save both data transfer costs and data copying costs. This includes server-side copy of already existing
objects. Typical object storage systems implement server-side copy by creating new names for already existing objects.
This is very fast and efficient. Unfortunately there are systems such as [S3 Glacier](https://aws.amazon.com/s3/storage-classes/glacier/),
which perform full object copy during server-side copying. This may be slow and expensive.
Please see [vmbackup docs](./vmbackup.md#advanced-usage) for more examples of authentication with different
storage types.
## Backup Retention Policy
@ -143,7 +143,7 @@ Backup retention policy is controlled by:
> *Note*: a 0 value in any keepLast flag results in the deletion of ALL backups of the particular type (hourly, daily, weekly and monthly)
> *Note*: the retention policy does not enforce removing previous versions of objects in object storages if versioning is enabled. See [these docs](./vmbackup.md#permanent-deletion-of-objects-in-s3-compatible-storages) for more details.
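For example, flags along these lines keep the last 24 hourly, 7 daily, 5 weekly and 12 monthly backups (a sketch; flag names follow the `keepLast*` family referenced in the note above):

```shell
-keepLastHourly=24 -keepLastDaily=7 -keepLastWeekly=5 -keepLastMonthly=12
```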
Let's assume we have a backup manager collecting daily backups for the past 10 days.
@ -342,14 +342,14 @@ If restore mark doesn't exist at `storageDataPath`(restore wasn't requested) `vm
### How to restore in Kubernetes
1. Ensure there is an init container with `vmbackupmanager restore` in `vmstorage` or `vmsingle` pod.
For [VictoriaMetrics operator](./operator/VictoriaMetrics-Operator.md) deployments it is required to add:
```yaml
vmbackup:
  restore:
    onStart:
      enabled: "true"
```
See operator `VMStorage` schema [here](./operator/api.md#vmstorage) and `VMSingle` [here](./operator/api.md#vmsinglespec).
1. Enter the container running `vmbackupmanager`.
1. Use `vmbackupmanager backup list` to get the list of available backups:
```sh
vmbackupmanager backup list
```
@ -369,11 +369,11 @@ If restore mark doesn't exist at `storageDataPath`(restore wasn't requested) `vm
#### Restore cluster into another cluster
These steps are assuming that [VictoriaMetrics operator](./operator/VictoriaMetrics-Operator.md) is used to manage `VMCluster`.
Clusters here are referred to as `source` and `destination`.
1. Create a new cluster with access to the *source* cluster's `vmbackupmanager` storage and the same number of storage nodes.
Add the following section in order to enable restore on start (the operator `VMStorage` schema can be found [here](./operator/api.md#vmstorage)):
```yaml
vmbackup:
  restore:
    onStart:
      enabled: "true"
```
@ -399,7 +399,7 @@ Clusters here are referred to as `source` and `destination`.
## Monitoring
`vmbackupmanager` exports various metrics in Prometheus exposition format at the `http://vmbackupmanager:8300/metrics` page. It is recommended to set up regular scraping of this page
either via [vmagent](./vmagent.md) or via Prometheus, so the exported metrics can be analyzed later.
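A quick sanity check that the endpoint is reachable (the host, port and metric-name filter below are assumptions for illustration):

```sh
# Sketch: fetch vmbackupmanager metrics; host/port follow the default
# -httpListenAddr, and the grep filter is illustrative.
curl -s http://vmbackupmanager:8300/metrics | grep vm_backup
```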
Use the official [Grafana dashboard](https://grafana.com/grafana/dashboards/17798) for `vmbackupmanager` overview.
Graphs on this dashboard contain useful hints: hover over the `i` icon in the top left corner of each graph to read them.
@@ -440,7 +440,7 @@ command-line flags:
-customS3Endpoint string
Custom S3 endpoint for use with S3-compatible storages (e.g. MinIO). S3 is used if not set
-deleteAllObjectVersions
Whether to prune previous object versions when deleting an object. By default, when object storage has versioning enabled deleting the file removes only current version. This option forces removal of all previous versions. See: ./vmbackup.md#permanent-deletion-of-objects-in-s3-compatible-storages
-disableDaily
Disable daily run. Default false
-disableHourly
@@ -454,11 +454,11 @@ command-line flags:
-enableTCP6
Whether to enable IPv6 for listening and dialing. By default, only IPv4 TCP and UDP are used
-envflag.enable
Whether to enable reading flags from environment variables in addition to the command line. Command line flag values have priority over values from environment vars. Flags are read only from the command line if this flag isn't set. See ./#environment-variables for more details
-envflag.prefix string
Prefix for environment variables if -envflag.enable is set
-eula
Deprecated, please use -license or -licenseFile flags instead. By specifying this flag, you confirm that you have an enterprise license and accept the ESA https://victoriametrics.com/legal/esa/ . This flag is available only in Enterprise binaries. See ./enterprise.md
-filestream.disableFadvise
Whether to disable fadvise() syscall when reading large data files. The fadvise() syscall prevents from eviction of recently accessed data from OS page cache during background merges and backups. In some rare cases it is better to disable the syscall if it uses too much CPU
-flagsAuthKey value
@@ -544,11 +544,11 @@ command-line flags:
Auth key for /metrics endpoint. It must be passed via authKey query arg. It overrides -httpAuth.*
Flag value can be read from the given file when using -metricsAuthKey=file:///abs/path/to/file or -metricsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -metricsAuthKey=http://host/path or -metricsAuthKey=https://host/path
-mtls array
Whether to require valid client certificate for https requests to the corresponding -httpListenAddr . This flag works only if -tls flag is set. See also -mtlsCAFile . This flag is available only in Enterprise binaries. See ./enterprise.md
Supports array of values separated by comma or specified via multiple flags.
Empty values are set to false.
-mtlsCAFile array
Optional path to TLS Root CA for verifying client certificates at the corresponding -httpListenAddr when -mtls is enabled. By default the host system TLS Root CA is used for client certificate verification. This flag is available only in Enterprise binaries. See ./enterprise.md
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-pprofAuthKey value
@@ -567,7 +567,7 @@ command-line flags:
-pushmetrics.interval duration
Interval for pushing metrics to every -pushmetrics.url (default 10s)
-pushmetrics.url array
Optional URL to push metrics exposed at /metrics page. See ./#push-metrics . By default, metrics exposed at /metrics page aren't pushed to any remote storage
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-runOnStart
@@ -590,11 +590,11 @@ command-line flags:
Supports array of values separated by comma or specified via multiple flags.
Empty values are set to false.
-tlsAutocertCacheDir string
Directory to store TLS certificates issued via Let's Encrypt. Certificates are lost on restarts if this flag isn't set. This flag is available only in Enterprise binaries. See ./enterprise.md
-tlsAutocertEmail string
Contact email for the issued Let's Encrypt TLS certificates. See also -tlsAutocertHosts and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See ./enterprise.md
-tlsAutocertHosts array
Optional hostnames for automatic issuing of Let's Encrypt TLS certificates. These hostnames must be reachable at -httpListenAddr . The -httpListenAddr must listen tcp port 443 . The -tlsAutocertHosts overrides -tlsCertFile and -tlsKeyFile . See also -tlsAutocertEmail and -tlsAutocertCacheDir . This flag is available only in Enterprise binaries. See ./enterprise.md
Supports an array of values separated by comma or specified via multiple flags.
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-tlsCertFile array