Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git, synced 2025-03-11 15:34:56 +00:00

Commit 778c092740: vendor: run make vendor-update

328 changed files with 9086 additions and 3815 deletions.
.github/workflows/check-licenses.yml (vendored, 2 changes)

@@ -25,7 +25,7 @@ jobs:
           cache: false

       - name: Cache Go artifacts
-        uses: actions/cache@v3
+        uses: actions/cache@v4
         with:
           path: |
             ~/.cache/go-build
.github/workflows/codeql-analysis.yml (vendored, 2 changes)

@@ -63,7 +63,7 @@ jobs:
         if: ${{ matrix.language == 'go' }}

       - name: Cache Go artifacts
-        uses: actions/cache@v3
+        uses: actions/cache@v4
         with:
           path: |
             ~/.cache/go-build
.github/workflows/main.yml (vendored, 6 changes)

@@ -41,7 +41,7 @@ jobs:
           cache: false

       - name: Cache Go artifacts
-        uses: actions/cache@v3
+        uses: actions/cache@v4
         with:
           path: |
             ~/.cache/go-build

@@ -71,7 +71,7 @@ jobs:
           cache: false

       - name: Cache Go artifacts
-        uses: actions/cache@v3
+        uses: actions/cache@v4
         with:
           path: |
             ~/.cache/go-build

@@ -102,7 +102,7 @@ jobs:
           cache: false

       - name: Cache Go artifacts
-        uses: actions/cache@v3
+        uses: actions/cache@v4
         with:
           path: |
             ~/.cache/go-build
Makefile (5 changes)

@@ -178,7 +178,8 @@ victoria-metrics-crossbuild: \
 	victoria-metrics-darwin-amd64 \
 	victoria-metrics-darwin-arm64 \
 	victoria-metrics-freebsd-amd64 \
-	victoria-metrics-openbsd-amd64
+	victoria-metrics-openbsd-amd64 \
+	victoria-metrics-windows-amd64

 vmutils-crossbuild: \
 	vmutils-linux-386 \

@@ -465,7 +466,7 @@ benchmark-pure:
 vendor-update:
 	go get -u -d ./lib/...
 	go get -u -d ./app/...
-	go mod tidy -compat=1.20
+	go mod tidy -compat=1.22
 	go mod vendor

 app-local:
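Side note, not part of the diff: `go mod tidy -compat=1.22` asks the go command to keep the checksums and requirements that Go 1.22 needs in order to load the module graph, which lines up with the minimum supported Go version being raised to 1.22 in the README changes below.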
README.md (176 changes)

@@ -12,6 +12,7 @@
 <img src="docs/logo.webp" width="300" alt="VictoriaMetrics logo">

 VictoriaMetrics is a fast, cost-effective and scalable monitoring solution and time series database.
+See [case studies for VictoriaMetrics](https://docs.victoriametrics.com/CaseStudies.html).

 VictoriaMetrics is available in [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest),
 [Docker images](https://hub.docker.com/r/victoriametrics/victoria-metrics/), [Snap packages](https://snapcraft.io/victoriametrics)
@@ -99,40 +100,7 @@ VictoriaMetrics has the following prominent features:
 * It can store data on [NFS-based storages](https://en.wikipedia.org/wiki/Network_File_System) such as [Amazon EFS](https://aws.amazon.com/efs/)
   and [Google Filestore](https://cloud.google.com/filestore).

-See also [various Articles about VictoriaMetrics](https://docs.victoriametrics.com/Articles.html).
-
-## Case studies and talks
-
-Case studies:
-
-* [AbiosGaming](https://docs.victoriametrics.com/CaseStudies.html#abiosgaming)
-* [adidas](https://docs.victoriametrics.com/CaseStudies.html#adidas)
-* [Adsterra](https://docs.victoriametrics.com/CaseStudies.html#adsterra)
-* [ARNES](https://docs.victoriametrics.com/CaseStudies.html#arnes)
-* [Brandwatch](https://docs.victoriametrics.com/CaseStudies.html#brandwatch)
-* [CERN](https://docs.victoriametrics.com/CaseStudies.html#cern)
-* [COLOPL](https://docs.victoriametrics.com/CaseStudies.html#colopl)
-* [Criteo](https://docs.victoriametrics.com/CaseStudies.html#criteo)
-* [Dig Security](https://docs.victoriametrics.com/CaseStudies.html#dig-security)
-* [Fly.io](https://docs.victoriametrics.com/CaseStudies.html#flyio)
-* [German Research Center for Artificial Intelligence](https://docs.victoriametrics.com/CaseStudies.html#german-research-center-for-artificial-intelligence)
-* [Grammarly](https://docs.victoriametrics.com/CaseStudies.html#grammarly)
-* [Groove X](https://docs.victoriametrics.com/CaseStudies.html#groove-x)
-* [Idealo.de](https://docs.victoriametrics.com/CaseStudies.html#idealode)
-* [MHI Vestas Offshore Wind](https://docs.victoriametrics.com/CaseStudies.html#mhi-vestas-offshore-wind)
-* [Naver](https://docs.victoriametrics.com/CaseStudies.html#naver)
-* [Razorpay](https://docs.victoriametrics.com/CaseStudies.html#razorpay)
-* [Percona](https://docs.victoriametrics.com/CaseStudies.html#percona)
-* [Roblox](https://docs.victoriametrics.com/CaseStudies.html#roblox)
-* [Sensedia](https://docs.victoriametrics.com/CaseStudies.html#sensedia)
-* [Smarkets](https://docs.victoriametrics.com/CaseStudies.html#smarkets)
-* [Synthesio](https://docs.victoriametrics.com/CaseStudies.html#synthesio)
-* [Wedos.com](https://docs.victoriametrics.com/CaseStudies.html#wedoscom)
-* [Wix.com](https://docs.victoriametrics.com/CaseStudies.html#wixcom)
-* [Zerodha](https://docs.victoriametrics.com/CaseStudies.html#zerodha)
-* [zhihu](https://docs.victoriametrics.com/CaseStudies.html#zhihu)
-
-See also [articles and slides about VictoriaMetrics from our users](https://docs.victoriametrics.com/Articles.html#third-party-articles-and-slides-about-victoriametrics)
+See [case studies for VictoriaMetrics](https://docs.victoriametrics.com/CaseStudies.html) and [various Articles about VictoriaMetrics](https://docs.victoriametrics.com/Articles.html).

 ## Operation

@@ -497,11 +465,18 @@ Prometheus doesn't drop data during VictoriaMetrics restart. See [this article](

 ## How to scrape Prometheus exporters such as [node-exporter](https://github.com/prometheus/node_exporter)

-VictoriaMetrics can be used as drop-in replacement for Prometheus for scraping targets configured in `prometheus.yml` config file according to [the specification](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file). Just set `-promscrape.config` command-line flag to the path to `prometheus.yml` config - and VictoriaMetrics should start scraping the configured targets. If the provided configuration file contains [unsupported options](https://docs.victoriametrics.com/vmagent.html#unsupported-prometheus-config-sections), then either delete them from the file or just pass `-promscrape.config.strictParse=false` command-line flag to VictoriaMetrics, so it will ignore unsupported options.
+VictoriaMetrics can be used as drop-in replacement for Prometheus for scraping targets configured in `prometheus.yml` config file
+according to [the specification](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file).
+Just set `-promscrape.config` command-line flag to the path to `prometheus.yml` config - and VictoriaMetrics should start scraping the configured targets.
+If the provided configuration file contains [unsupported options](https://docs.victoriametrics.com/vmagent.html#unsupported-prometheus-config-sections),
+then either delete them from the file or just pass `-promscrape.config.strictParse=false` command-line flag to VictoriaMetrics, so it will ignore unsupported options.

 The file pointed by `-promscrape.config` may contain `%{ENV_VAR}` placeholders, which are substituted by the corresponding `ENV_VAR` environment variable values.

-See [the list of supported service discovery types for Prometheus scrape targets](https://docs.victoriametrics.com/sd_configs.html).
+See also:
+
+- [scrape config examples](https://docs.victoriametrics.com/scrape_config_examples/)
+- [the list of supported service discovery types for Prometheus scrape targets](https://docs.victoriametrics.com/sd_configs.html)

 VictoriaMetrics also supports [importing data in Prometheus exposition format](#how-to-import-data-in-prometheus-exposition-format).

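Aside, not from the diff: in practice the section above boils down to starting single-node VictoriaMetrics with something like `./victoria-metrics -promscrape.config=/path/to/prometheus.yml`, adding `-promscrape.config.strictParse=false` when the file contains unsupported sections; the path here is a placeholder.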
@@ -509,8 +484,9 @@ See also [vmagent](https://docs.victoriametrics.com/vmagent.html), which can be

 ## How to send data from DataDog agent

-VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/) or [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/)
-via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics) at `/datadog/api/v2/series` path.
+VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/), [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/) and
+[DataDog Lambda Extension](https://docs.datadoghq.com/serverless/libraries_integrations/extension/)
+via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics) at `/datadog/api/v2/series` or via "sketches" API at `/datadog/api/beta/sketches`.

 ### Sending metrics to VictoriaMetrics

@@ -536,11 +512,11 @@ add the following line:
 dd_url: http://victoriametrics:8428/datadog
 ```

-[vmagent](https://docs.victoriametrics.com/vmagent.html) also can accept Datadog metrics format. Depending on where vmagent will forward data,
+[vmagent](https://docs.victoriametrics.com/vmagent.html) also can accept DataDog metrics format. Depending on where vmagent will forward data,
 pick [single-node or cluster URL](https://docs.victoriametrics.com/url-examples.html#datadog) formats.

-### Sending metrics to Datadog and VictoriaMetrics
+### Sending metrics to DataDog and VictoriaMetrics

 DataDog allows configuring [Dual Shipping](https://docs.datadoghq.com/agent/guide/dual-shipping/) for metrics
 sending via ENV variable `DD_ADDITIONAL_ENDPOINTS` or via configuration file `additional_endpoints`.

@@ -564,6 +540,19 @@ additional_endpoints:
   - apikey
 ```

+### Send metrics via Serverless DataDog plugin
+
+Disable logs (logs ingestion is not supported by VictoriaMetrics) and set a custom endpoint in `serverless.yaml`:
+
+```
+custom:
+  datadog:
+    enableDDLogs: false # Disabled not supported DD logs
+    apiKey: fakekey # Set any key, otherwise plugin fails
+provider:
+  environment:
+    DD_DD_URL: <<vm-url>>/datadog # VictoriaMetrics endpoint for DataDog
+```
+
 ### Send via cURL

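Illustrative aside, not part of the diff: a metric can be submitted to the `/datadog/api/v2/series` path mentioned above from a short Go program. The JSON field names below follow DataDog's v2 "submit metrics" API as an assumption, and the address is a placeholder.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Placeholder single-node VictoriaMetrics address; adjust as needed.
	const vmAddr = "http://localhost:8428"

	// Assumed DataDog v2 "submit metrics" payload layout (not taken from the diff).
	payload := []byte(`{
  "series": [
    {
      "metric": "system.load.1",
      "type": 0,
      "points": [{"timestamp": 1700000000, "value": 0.7}],
      "resources": [{"name": "test-host", "type": "host"}],
      "tags": ["environment:test"]
    }
  ]
}`)

	resp, err := http.Post(vmAddr+"/datadog/api/v2/series", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// The DataDog ingestion handlers in this commit reply with 202 on success.
	fmt.Println("status:", resp.Status)
}
```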
@@ -1046,7 +1035,7 @@ to your needs or when testing bugfixes.

 ### Development build

-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.20.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.22.
 1. Run `make victoria-metrics` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `victoria-metrics` binary and puts it into the `bin` folder.

@@ -1062,7 +1051,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b

 ### Development ARM build

-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.20.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.22.
 1. Run `make victoria-metrics-linux-arm` or `make victoria-metrics-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `victoria-metrics-linux-arm` or `victoria-metrics-linux-arm64` binary respectively and puts it into the `bin` folder.

@@ -1076,7 +1065,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b

 `Pure Go` mode builds only Go code without [cgo](https://golang.org/cmd/cgo/) dependencies.

-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.20.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.22.
 1. Run `make victoria-metrics-pure` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `victoria-metrics-pure` binary and puts it into the `bin` folder.

@@ -1524,7 +1513,7 @@ Set HTTP request header `Content-Encoding: gzip` when sending gzip-compressed da
 VictoriaMetrics accepts data in JSON line format at [/api/v1/import](#how-to-import-data-in-json-line-format)
 and exports data in this format at [/api/v1/export](#how-to-export-data-in-json-line-format).

-The format follows [JSON streaming concept](http://ndjson.org/), e.g. each line contains JSON object with metrics data in the following format:
+The format follows [JSON streaming concept](https://jsonlines.org/), e.g. each line contains JSON object with metrics data in the following format:

 ```json
 {
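Aside, not from the diff: the JSON line layout used by `/api/v1/import` is the same `metric`/`values`/`timestamps` object that the test added later in this commit marshals. A minimal sketch of pushing one line from Go, with the address as a placeholder:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// jsonLine mirrors the structure accepted by /api/v1/import.
type jsonLine struct {
	Metric     map[string]string `json:"metric"`
	Values     []float64         `json:"values"`
	Timestamps []int64           `json:"timestamps"`
}

func main() {
	line := jsonLine{
		Metric:     map[string]string{"__name__": "demo_metric", "job": "example"},
		Values:     []float64{1.34},
		Timestamps: []int64{time.Now().UnixMilli()},
	}
	data, err := json.Marshal(&line)
	if err != nil {
		panic(err)
	}
	data = append(data, '\n')

	// Placeholder single-node address; /api/v1/import responds with 204 on success.
	resp, err := http.Post("http://localhost:8428/api/v1/import", "application/x-ndjson", bytes.NewReader(data))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```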
@@ -1947,13 +1936,26 @@ Additionally, alerting can be set up with the following tools:
 * With Promxy - see [the corresponding docs](https://github.com/jacksontj/promxy/blob/master/README.md#how-do-i-use-alertingrecording-rules-in-promxy).
 * With Grafana - see [the corresponding docs](https://grafana.com/docs/alerting/rules/).

+## mTLS protection
+
+By default `VictoriaMetrics` accepts http requests at `8428` port (this port can be changed via `-httpListenAddr` command-line flags).
+[Enterprise version of VictoriaMetrics](https://docs.victoriametrics.com/enterprise.html) supports the ability to accept [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication)
+requests at this port, by specifying `-tls` and `-mtls` command-line flags. For example, the following command runs `VictoriaMetrics`, which accepts only mTLS requests at port `8428`:
+
+```
+./victoria-metrics -tls -mtls
+```
+
+By default system-wide [TLS Root CA](https://en.wikipedia.org/wiki/Root_certificate) is used for verifying client certificates if `-mtls` command-line flag is specified.
+It is possible to specify custom TLS Root CA via `-mtlsCAFile` command-line flag.
+
 ## Security

 General security recommendations:

 - All the VictoriaMetrics components must run in protected private networks without direct access from untrusted networks such as Internet.
   The exception is [vmauth](https://docs.victoriametrics.com/vmauth.html) and [vmgateway](https://docs.victoriametrics.com/vmgateway.html),
-  which are intended for serving public requests.
+  which are intended for serving public requests and performing authorization with [TLS termination](https://en.wikipedia.org/wiki/TLS_termination_proxy).
 - All the requests from untrusted networks to VictoriaMetrics components must go through auth proxy such as [vmauth](https://docs.victoriametrics.com/vmauth.html)
   or [vmgateway](https://docs.victoriametrics.com/vmgateway.html). The proxy must be set up with proper authentication and authorization.
 - Prefer using lists of allowed API endpoints, while disallowing access to other endpoints when configuring [vmauth](https://docs.victoriametrics.com/vmauth.html)
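Aside, not from the diff: to exercise the mTLS mode described in the new "mTLS protection" section above, a client has to present its own certificate. A minimal Go sketch, where the certificate paths and the address are placeholders:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Placeholder paths for the client certificate/key and the CA that signed the server certificate.
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert}, // presented to the server for mTLS
				RootCAs:      caPool,                  // used to verify the server certificate
			},
		},
	}
	resp, err := client.Get("https://localhost:8428/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status: %s, %d bytes of metrics\n", resp.Status, len(body))
}
```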
@@ -1964,7 +1966,8 @@ General security recommendations:

 VictoriaMetrics provides the following security-related command-line flags:

-* `-tls`, `-tlsCertFile` and `-tlsKeyFile` for switching from HTTP to HTTPS.
+* `-tls`, `-tlsCertFile` and `-tlsKeyFile` for switching from HTTP to HTTPS at `-httpListenAddr` (8428 by default).
+* `-mtls` and `-mtlsCAFile` for enabling [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) for requests to `-httpListenAddr`. See [these docs](#mtls-protection).
 * `-httpAuth.username` and `-httpAuth.password` for protecting all the HTTP endpoints
   with [HTTP Basic Authentication](https://en.wikipedia.org/wiki/Basic_access_authentication).
 * `-deleteAuthKey` for protecting `/api/v1/admin/tsdb/delete_series` endpoint. See [how to delete time series](#how-to-delete-time-series).
@@ -2009,12 +2012,17 @@ mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
 ## Monitoring

 VictoriaMetrics exports internal metrics in Prometheus exposition format at `/metrics` page.
-These metrics can be scraped via [vmagent](https://docs.victoriametrics.com/vmagent.html) or Prometheus.
+These metrics can be scraped via [vmagent](https://docs.victoriametrics.com/vmagent.html) or any other Prometheus-compatible scraper.
+
+If you use Google Cloud Managed Prometheus for scraping metrics from VictoriaMetrics components, then pass `-metrics.exposeMetadata`
+command-line flag to them, so they add `TYPE` and `HELP` comments per each exposed metric at `/metrics` page.
+See [these docs](https://cloud.google.com/stackdriver/docs/managed-prometheus/troubleshooting#missing-metric-type) for details.

 Alternatively, single-node VictoriaMetrics can self-scrape the metrics when `-selfScrapeInterval` command-line flag is
 set to duration greater than 0. For example, `-selfScrapeInterval=10s` would enable self-scraping of `/metrics` page
 with 10 seconds interval.

-_Please note, never use loadbalancer address for scraping metrics. All monitored components should be scraped directly by their address._
+_Please note, never use loadbalancer address for scraping metrics. All the monitored components should be scraped directly by their address._

 Official Grafana dashboards available for [single-node](https://grafana.com/grafana/dashboards/10229)
 and [clustered](https://grafana.com/grafana/dashboards/11176) VictoriaMetrics.
@@ -2277,6 +2285,21 @@ Sometimes it is needed to remove such caches on the next startup. This can be do
 In this case VictoriaMetrics will automatically remove all the caches on the next start.
 See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1447) for details.

+It is also possible to remove the [rollup result cache](#rollup-result-cache) on startup by passing `-search.resetRollupResultCacheOnStartup` command-line flag to VictoriaMetrics.
+
+## Rollup result cache
+
+VictoriaMetrics caches query responses by default. This allows increasing performance for repeated queries
+to [`/api/v1/query`](https://docs.victoriametrics.com/keyconcepts/#instant-query) and [`/api/v1/query_range`](https://docs.victoriametrics.com/keyconcepts/#range-query)
+with the increasing `time`, `start` and `end` query args.
+
+This cache may work incorrectly when ingesting historical data into VictoriaMetrics. See [these docs](#backfilling) for details.
+
+The rollup cache can be disabled either globally by running VictoriaMetrics with `-search.disableCache` command-line flag
+or on a per-query basis by passing `nocache=1` query arg to `/api/v1/query` and `/api/v1/query_range`.
+
+See also [cache removal docs](#cache-removal).
+
 ## Cache tuning

 VictoriaMetrics uses various in-memory caches for faster data ingestion and query performance.
@@ -2341,11 +2364,12 @@ VictoriaMetrics accepts historical data in arbitrary order of time via [any supp
 See [how to backfill data with recording rules in vmalert](https://docs.victoriametrics.com/vmalert.html#rules-backfilling).
 Make sure that configured `-retentionPeriod` covers timestamps for the backfilled data.

-It is recommended disabling query cache with `-search.disableCache` command-line flag when writing
+It is recommended disabling [query cache](#rollup-result-cache) with `-search.disableCache` command-line flag when writing
 historical data with timestamps from the past, since the cache assumes that the data is written with
 the current timestamps. Query cache can be enabled after the backfilling is complete.

-An alternative solution is to query [/internal/resetRollupResultCache](https://docs.victoriametrics.com/url-examples.html#internalresetrollupresultcache) handler after the backfilling is complete. This will reset the query cache, which could contain incomplete data cached during the backfilling.
+An alternative solution is to query [/internal/resetRollupResultCache](https://docs.victoriametrics.com/url-examples.html#internalresetrollupresultcache)
+after the backfilling is complete. This will reset the [query cache](#rollup-result-cache), which could contain incomplete data cached during the backfilling.

 Yet another solution is to increase `-search.cacheTimestampOffset` flag value in order to disable caching
 for data with timestamps close to the current time. Single-node VictoriaMetrics automatically resets response
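Aside, not from the diff: assuming a default single-node setup, the cache reset mentioned above is a plain HTTP call, e.g. from Go:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Placeholder address; call this once the backfilling job has finished.
	resp, err := http.Get("http://localhost:8428/internal/resetRollupResultCache")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("rollup result cache reset:", resp.Status)
}
```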
@@ -2584,7 +2608,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
   -graphiteTrimTimestamp duration
     	Trim timestamps for Graphite data to this duration. Minimum practical duration is 1s. Higher duration (i.e. 1m) may be used for reducing disk space usage for timestamp data (default 1s)
   -http.connTimeout duration
-    	Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s)
+    	Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem
   -http.disableResponseCompression
     	Disable compression of HTTP responses to save CPU resources. By default, compression is enabled to save network bandwidth
   -http.header.csp default-src 'self'

@@ -2606,10 +2630,14 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
     	Flag value can be read from the given file when using -httpAuth.password=file:///abs/path/to/file or -httpAuth.password=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -httpAuth.password=http://host/path or -httpAuth.password=https://host/path
   -httpAuth.username string
     	Username for HTTP server's Basic Auth. The authentication is disabled if empty. See also -httpAuth.password
-  -httpListenAddr string
-    	TCP address to listen for http connections. See also -tls and -httpListenAddr.useProxyProtocol (default ":8428")
-  -httpListenAddr.useProxyProtocol
-    	Whether to use proxy protocol for connections accepted at -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
+  -httpListenAddr array
+    	TCP addresses to listen for incoming http requests. See also -tls and -httpListenAddr.useProxyProtocol
+    	Supports an array of values separated by comma or specified via multiple flags.
+    	Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
+  -httpListenAddr.useProxyProtocol array
+    	Whether to use proxy protocol for connections accepted at the corresponding -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
+    	Supports array of values separated by comma or specified via multiple flags.
+    	Empty values are set to false.
   -import.maxLineLen size
     	The maximum length in bytes of a single line accepted by /api/v1/import; the line length can be limited with 'max_rows_per_line' query arg passed to /api/v1/export
     	Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 10485760)
@@ -2689,6 +2717,14 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
   -metricsAuthKey value
     	Auth key for /metrics endpoint. It must be passed via authKey query arg. It overrides httpAuth.* settings
     	Flag value can be read from the given file when using -metricsAuthKey=file:///abs/path/to/file or -metricsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -metricsAuthKey=http://host/path or -metricsAuthKey=https://host/path
+  -mtls array
+    	Whether to require valid client certificate for https requests to the corresponding -httpListenAddr . This flag works only if -tls flag is set. See also -mtlsCAFile . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
+    	Supports array of values separated by comma or specified via multiple flags.
+    	Empty values are set to false.
+  -mtlsCAFile array
+    	Optional path to TLS Root CA for verifying client certificates at the corresponding -httpListenAddr when -mtls is enabled. By default the host system TLS Root CA is used for client certificate verification. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
+    	Supports an array of values separated by comma or specified via multiple flags.
+    	Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
   -newrelic.maxInsertRequestSize size
     	The maximum size in bytes of a single NewRelic request to /newrelic/infra/v2/metrics/events/bulk
     	Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
@@ -2846,7 +2882,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
   -search.disableAutoCacheReset
     	Whether to disable automatic response cache reset if a sample with timestamp outside -search.cacheTimestampOffset is inserted into VictoriaMetrics
   -search.disableCache
-    	Whether to disable response caching. This may be useful during data backfilling
+    	Whether to disable response caching. This may be useful when ingesting historical data. See https://docs.victoriametrics.com/#backfilling . See also -search.resetRollupResultCacheOnStartup
   -search.graphiteMaxPointsPerSeries int
     	The maximum number of points per series Graphite render API can return (default 1000000)
   -search.graphiteStorageStep duration
@@ -2930,6 +2966,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
   -search.resetCacheAuthKey value
     	Optional authKey for resetting rollup cache via /internal/resetRollupResultCache call
     	Flag value can be read from the given file when using -search.resetCacheAuthKey=file:///abs/path/to/file or -search.resetCacheAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -search.resetCacheAuthKey=http://host/path or -search.resetCacheAuthKey=https://host/path
+  -search.resetRollupResultCacheOnStartup
+    	Whether to reset rollup result cache on startup. See https://docs.victoriametrics.com/#rollup-result-cache . See also -search.disableCache
   -search.setLookbackToStep
     	Whether to fix lookback interval to 'step' query arg value. If set to true, the query model becomes closer to InfluxDB data model. If set to true, then -search.maxLookback and -search.maxStalenessInterval are ignored
   -search.treatDotsAsIsInRegexps
@@ -2981,18 +3019,26 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
     	Whether to drop all the input samples after the aggregation with -streamAggr.config. By default, only aggregated samples are dropped, while the remaining samples are stored in the database. See also -streamAggr.keepInput and https://docs.victoriametrics.com/stream-aggregation.html
   -streamAggr.keepInput
     	Whether to keep all the input samples after the aggregation with -streamAggr.config. By default, only aggregated samples are dropped, while the remaining samples are stored in the database. See also -streamAggr.dropInput and https://docs.victoriametrics.com/stream-aggregation.html
-  -tls
-    	Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
-  -tlsCertFile string
-    	Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
+  -tls array
+    	Whether to enable TLS for incoming HTTP requests at the given -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set. See also -mtls
+    	Supports array of values separated by comma or specified via multiple flags.
+    	Empty values are set to false.
+  -tlsCertFile array
+    	Path to file with TLS certificate for the corresponding -httpListenAddr if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
+    	Supports an array of values separated by comma or specified via multiple flags.
+    	Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
   -tlsCipherSuites array
     	Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
     	Supports an array of values separated by comma or specified via multiple flags.
     	Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-  -tlsKeyFile string
-    	Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
-  -tlsMinVersion string
-    	Optional minimum TLS version to use for incoming requests over HTTPS if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
+  -tlsKeyFile array
+    	Path to file with TLS key for the corresponding -httpListenAddr if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
+    	Supports an array of values separated by comma or specified via multiple flags.
+    	Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
+  -tlsMinVersion array
+    	Optional minimum TLS version to use for the corresponding -httpListenAddr if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
+    	Supports an array of values separated by comma or specified via multiple flags.
+    	Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
   -usePromCompatibleNaming
     	Whether to replace characters unsupported by Prometheus with underscores in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
   -version
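Aside, not from the diff: since `-httpListenAddr` and the TLS flags are now arrays, one process can hypothetically expose several listeners with per-listener settings, for example `-httpListenAddr=:8428 -httpListenAddr=:8443 -tls=false,true -tlsCertFile=,/path/to/cert -tlsKeyFile=,/path/to/key`, where each value in an array flag applies to the `-httpListenAddr` at the same position. The paths are placeholders and the exact syntax should be checked against the flag descriptions above.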
@@ -11,7 +11,6 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlselect"
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"

@@ -22,11 +21,10 @@ import (
 )

 var (
-	httpListenAddr   = flag.String("httpListenAddr", ":9428", "TCP address to listen for http connections. See also -httpListenAddr.useProxyProtocol")
-	useProxyProtocol = flag.Bool("httpListenAddr.useProxyProtocol", false, "Whether to use proxy protocol for connections accepted at -httpListenAddr . "+
+	httpListenAddrs  = flagutil.NewArrayString("httpListenAddr", "TCP address to listen for incoming http requests. See also -httpListenAddr.useProxyProtocol")
+	useProxyProtocol = flagutil.NewArrayBool("httpListenAddr.useProxyProtocol", "Whether to use proxy protocol for connections accepted at the given -httpListenAddr . "+
 		"See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . "+
 		"With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing")
 	gogc = flag.Int("gogc", 100, "GOGC to use. See https://tip.golang.org/doc/gc-guide")
 )

 func main() {

@@ -34,18 +32,21 @@ func main() {
 	flag.CommandLine.SetOutput(os.Stdout)
 	flag.Usage = usage
 	envflag.Parse()
 	cgroup.SetGOGC(*gogc)
 	buildinfo.Init()
 	logger.Init()

-	logger.Infof("starting VictoriaLogs at %q...", *httpListenAddr)
+	listenAddrs := *httpListenAddrs
+	if len(listenAddrs) == 0 {
+		listenAddrs = []string{":9428"}
+	}
+	logger.Infof("starting VictoriaLogs at %q...", listenAddrs)
 	startTime := time.Now()

 	vlstorage.Init()
 	vlselect.Init()
 	vlinsert.Init()

-	go httpserver.Serve(*httpListenAddr, *useProxyProtocol, requestHandler)
+	go httpserver.Serve(listenAddrs, useProxyProtocol, requestHandler)
 	logger.Infof("started VictoriaLogs in %.3f seconds; see https://docs.victoriametrics.com/VictoriaLogs/", time.Since(startTime).Seconds())

 	pushmetrics.Init()

@@ -53,9 +54,9 @@ func main() {
 	logger.Infof("received signal %s", sig)
 	pushmetrics.Stop()

-	logger.Infof("gracefully shutting down webservice at %q", *httpListenAddr)
+	logger.Infof("gracefully shutting down webservice at %q", listenAddrs)
 	startTime = time.Now()
-	if err := httpserver.Stop(*httpListenAddr); err != nil {
+	if err := httpserver.Stop(listenAddrs); err != nil {
 		logger.Fatalf("cannot stop the webservice: %s", err)
 	}
 	logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())
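Aside, not from the diff: `flagutil.NewArrayString` is internal to the repository, but the pattern above (a repeatable flag plus an explicit default when nothing was supplied) can be sketched with the standard library alone:

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// arrayString collects every occurrence of a repeatable string flag.
type arrayString []string

func (a *arrayString) String() string { return strings.Join(*a, ",") }

func (a *arrayString) Set(v string) error {
	*a = append(*a, v)
	return nil
}

func main() {
	var listenAddrs arrayString
	flag.Var(&listenAddrs, "httpListenAddr", "TCP addresses to listen for incoming http requests (repeatable)")
	flag.Parse()

	// Fall back to a hard-coded default when the flag wasn't provided,
	// mirroring the listenAddrs handling in the diff above.
	if len(listenAddrs) == 0 {
		listenAddrs = arrayString{":9428"}
	}
	fmt.Printf("would listen on %q\n", listenAddrs)
}
```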
@@ -26,8 +26,8 @@ import (
 )

 var (
-	httpListenAddr    = flag.String("httpListenAddr", ":8428", "TCP address to listen for http connections. See also -tls and -httpListenAddr.useProxyProtocol")
-	useProxyProtocol  = flag.Bool("httpListenAddr.useProxyProtocol", false, "Whether to use proxy protocol for connections accepted at -httpListenAddr . "+
+	httpListenAddrs   = flagutil.NewArrayString("httpListenAddr", "TCP addresses to listen for incoming http requests. See also -tls and -httpListenAddr.useProxyProtocol")
+	useProxyProtocol  = flagutil.NewArrayBool("httpListenAddr.useProxyProtocol", "Whether to use proxy protocol for connections accepted at the corresponding -httpListenAddr . "+
 		"See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . "+
 		"With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing")
 	minScrapeInterval = flag.Duration("dedup.minScrapeInterval", 0, "Leave only the last sample in every time series per each discrete interval "+

@@ -74,7 +74,11 @@ func main() {
 		return
 	}

-	logger.Infof("starting VictoriaMetrics at %q...", *httpListenAddr)
+	listenAddrs := *httpListenAddrs
+	if len(listenAddrs) == 0 {
+		listenAddrs = []string{":8428"}
+	}
+	logger.Infof("starting VictoriaMetrics at %q...", listenAddrs)
 	startTime := time.Now()
 	err := storage.SetDownsamplingPeriods(*downsamplingPeriods, *minScrapeInterval)
 	if err != nil {

@@ -87,7 +91,7 @@ func main() {

 	startSelfScraper()

-	go httpserver.Serve(*httpListenAddr, *useProxyProtocol, requestHandler)
+	go httpserver.Serve(listenAddrs, useProxyProtocol, requestHandler)
 	logger.Infof("started VictoriaMetrics in %.3f seconds", time.Since(startTime).Seconds())

 	pushmetrics.Init()

@@ -97,9 +101,9 @@ func main() {

 	stopSelfScraper()

-	logger.Infof("gracefully shutting down webservice at %q", *httpListenAddr)
+	logger.Infof("gracefully shutting down webservice at %q", listenAddrs)
 	startTime = time.Now()
-	if err := httpserver.Stop(*httpListenAddr); err != nil {
+	if err := httpserver.Stop(listenAddrs); err != nil {
 		logger.Fatalf("cannot stop the webservice: %s", err)
 	}
 	logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())

@@ -135,6 +139,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
 		{"api/v1/status/tsdb", "tsdb status page"},
 		{"api/v1/status/top_queries", "top queries"},
 		{"api/v1/status/active_queries", "active queries"},
 		{"-/reload", "reload configuration"},
 	})
 	for _, p := range customAPIPathList {
 		p, doc := p[0], p[1]
@@ -7,6 +7,7 @@ import (
 	"fmt"
 	"io"
 	"log"
+	"math/rand"
 	"net"
 	"net/http"
 	"os"

@@ -180,7 +181,7 @@ func setUp() {
 	vmstorage.Init(promql.ResetRollupResultCacheIfNeeded)
 	vmselect.Init()
 	vminsert.Init()
-	go httpserver.Serve(*httpListenAddr, false, requestHandler)
+	go httpserver.Serve(*httpListenAddrs, useProxyProtocol, requestHandler)
 	readyStorageCheckFunc := func() bool {
 		resp, err := http.Get(testHealthHTTPPath)
 		if err != nil {

@@ -226,7 +227,7 @@ func waitFor(timeout time.Duration, f func() bool) error {
 }

 func tearDown() {
-	if err := httpserver.Stop(*httpListenAddr); err != nil {
+	if err := httpserver.Stop(*httpListenAddrs); err != nil {
 		log.Printf("cannot stop the webservice: %s", err)
 	}
 	vminsert.Stop()

@@ -413,6 +414,7 @@ func httpReadMetrics(t *testing.T, address, query string) []Metric {
 	}
 	return rows
 }

 func httpReadStruct(t *testing.T, address, query string, dst interface{}) {
 	t.Helper()
 	s := newSuite(t)

@@ -497,3 +499,73 @@ func (s *suite) greaterThan(a, b int) {
 		s.t.FailNow()
 	}
 }
+
+func TestImportJSONLines(t *testing.T) {
+	f := func(labelsCount, labelLen int) {
+		t.Helper()
+
+		reqURL := fmt.Sprintf("http://localhost%s/api/v1/import", testHTTPListenAddr)
+		line := generateJSONLine(labelsCount, labelLen)
+		req, err := http.NewRequest("POST", reqURL, bytes.NewBufferString(line))
+		if err != nil {
+			t.Fatalf("cannot create request: %s", err)
+		}
+		resp, err := http.DefaultClient.Do(req)
+		if err != nil {
+			t.Fatalf("cannot perform request for labelsCount=%d, labelLen=%d: %s", labelsCount, labelLen, err)
+		}
+		if resp.StatusCode != 204 {
+			t.Fatalf("unexpected statusCode for labelsCount=%d, labelLen=%d; got %d; want 204", labelsCount, labelLen, resp.StatusCode)
+		}
+	}
+
+	// labels with various lengths
+	for i := 0; i < 500; i++ {
+		f(10, i*5)
+	}
+
+	// Too many labels
+	f(1000, 100)
+
+	// Too long labels
+	f(1, 100_000)
+	f(10, 100_000)
+	f(10, 10_000)
+}
+
+func generateJSONLine(labelsCount, labelLen int) string {
+	m := make(map[string]string, labelsCount)
+	m["__name__"] = generateSizedRandomString(labelLen)
+	for j := 1; j < labelsCount; j++ {
+		labelName := generateSizedRandomString(labelLen)
+		labelValue := generateSizedRandomString(labelLen)
+		m[labelName] = labelValue
+	}
+
+	type jsonLine struct {
+		Metric     map[string]string `json:"metric"`
+		Values     []float64         `json:"values"`
+		Timestamps []int64           `json:"timestamps"`
+	}
+	line := &jsonLine{
+		Metric:     m,
+		Values:     []float64{1.34},
+		Timestamps: []int64{time.Now().UnixNano() / 1e6},
+	}
+	data, err := json.Marshal(&line)
+	if err != nil {
+		panic(fmt.Errorf("cannot marshal JSON: %w", err))
+	}
+	data = append(data, '\n')
+	return string(data)
+}
+
+const alphabetSample = `qwertyuiopasdfghjklzxcvbnm`
+
+func generateSizedRandomString(size int) string {
+	dst := make([]byte, size)
+	for i := range dst {
+		dst[i] = alphabetSample[rand.Intn(len(alphabetSample))]
+	}
+	return string(dst)
+}
@@ -8,6 +8,7 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/appmetrics"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"

@@ -49,16 +50,8 @@ func selfScraper(scrapeInterval time.Duration) {
 	var mrs []storage.MetricRow
 	var labels []prompb.Label
 	t := time.NewTicker(scrapeInterval)
-	var currentTimestamp int64
-	for {
-		select {
-		case <-selfScraperStopCh:
-			t.Stop()
-			logger.Infof("stopped self-scraping `/metrics` page")
-			return
-		case currentTime := <-t.C:
-			currentTimestamp = currentTime.UnixNano() / 1e6
-		}
+	f := func(currentTime time.Time, sendStaleMarkers bool) {
+		currentTimestamp := currentTime.UnixNano() / 1e6
 		bb.Reset()
 		appmetrics.WritePrometheusMetrics(&bb)
 		s := bytesutil.ToUnsafeString(bb.B)

@@ -83,12 +76,27 @@ func selfScraper(scrapeInterval time.Duration) {
 			mr := &mrs[len(mrs)-1]
 			mr.MetricNameRaw = storage.MarshalMetricNameRaw(mr.MetricNameRaw[:0], labels)
 			mr.Timestamp = currentTimestamp
-			mr.Value = r.Value
+			if sendStaleMarkers {
+				mr.Value = decimal.StaleNaN
+			} else {
+				mr.Value = r.Value
+			}
 		}
 		if err := vmstorage.AddRows(mrs); err != nil {
 			logger.Errorf("cannot store self-scraped metrics: %s", err)
 		}
 	}
+	for {
+		select {
+		case <-selfScraperStopCh:
+			f(time.Now(), true)
+			t.Stop()
+			logger.Infof("stopped self-scraping `/metrics` page")
+			return
+		case currentTime := <-t.C:
+			f(currentTime, false)
+		}
+	}
 }

 func addLabel(dst []prompb.Label, key, value string) []prompb.Label {
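Aside, not from the diff: `decimal.StaleNaN` in the change above plays the role of a Prometheus staleness marker sent when self-scraping stops. Assuming it follows the usual Prometheus convention, the marker is a NaN with a specific bit pattern, which can be produced and detected like this:

```go
package main

import (
	"fmt"
	"math"
)

// staleNaNBits is the bit pattern Prometheus uses for staleness markers.
// The assumption here is that decimal.StaleNaN follows the same convention.
const staleNaNBits uint64 = 0x7ff0000000000002

// staleNaN marks a series as stale as of a given timestamp.
var staleNaN = math.Float64frombits(staleNaNBits)

// isStaleNaN reports whether v is the staleness marker. A plain math.IsNaN check
// cannot distinguish it from ordinary NaN samples, so the bits are compared.
func isStaleNaN(v float64) bool {
	return math.Float64bits(v) == staleNaNBits
}

func main() {
	fmt.Println(math.IsNaN(staleNaN), isStaleNaN(staleNaN), isStaleNaN(math.NaN()))
	// Output: true true false
}
```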
@@ -1,13 +1,13 @@
 {
   "files": {
-    "main.css": "./static/css/main.d1313636.css",
-    "main.js": "./static/js/main.1919fefe.js",
+    "main.css": "./static/css/main.1e22ee10.css",
+    "main.js": "./static/js/main.92cf3903.js",
     "static/js/522.da77e7b3.chunk.js": "./static/js/522.da77e7b3.chunk.js",
-    "static/media/MetricsQL.md": "./static/media/MetricsQL.8644fd7c964802dd34a9.md",
+    "static/media/MetricsQL.md": "./static/media/MetricsQL.48b7b7105a48d7775f01.md",
     "index.html": "./index.html"
   },
   "entrypoints": [
-    "static/css/main.d1313636.css",
-    "static/js/main.1919fefe.js"
+    "static/css/main.1e22ee10.css",
+    "static/js/main.92cf3903.js"
   ]
 }

@@ -1 +1 @@
-<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=5"/><meta name="theme-color" content="#000000"/><meta name="description" content="UI for VictoriaMetrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><script src="./dashboards/index.js" type="module"></script><meta name="twitter:card" content="summary_large_image"><meta name="twitter:image" content="./preview.jpg"><meta name="twitter:title" content="UI for VictoriaMetrics"><meta name="twitter:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta name="twitter:site" content="@VictoriaMetrics"><meta property="og:title" content="Metric explorer for VictoriaMetrics"><meta property="og:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta property="og:image" content="./preview.jpg"><meta property="og:type" content="website"><script defer="defer" src="./static/js/main.1919fefe.js"></script><link href="./static/css/main.d1313636.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
+<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=5"/><meta name="theme-color" content="#000000"/><meta name="description" content="UI for VictoriaMetrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><script src="./dashboards/index.js" type="module"></script><meta name="twitter:card" content="summary_large_image"><meta name="twitter:image" content="./preview.jpg"><meta name="twitter:title" content="UI for VictoriaMetrics"><meta name="twitter:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta name="twitter:site" content="@VictoriaMetrics"><meta property="og:title" content="Metric explorer for VictoriaMetrics"><meta property="og:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta property="og:image" content="./preview.jpg"><meta property="og:type" content="website"><script defer="defer" src="./static/js/main.92cf3903.js"></script><link href="./static/css/main.1e22ee10.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
app/vlselect/vmui/static/css/main.1e22ee10.css (new file, 1 change)
File diff suppressed because one or more lines are too long.

app/vlselect/vmui/static/js/main.92cf3903.js (new file, 2 changes)
File diff suppressed because one or more lines are too long.
@@ -810,7 +810,7 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k

 #### timestamp

-`timestamp(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the last raw sample
+`timestamp(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the last raw sample
 on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

 Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -819,7 +819,7 @@ This function is supported by PromQL. See also [timestamp_with_name](#timestamp_

 #### timestamp_with_name

-`timestamp_with_name(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the last raw sample
+`timestamp_with_name(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the last raw sample
 on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

 Metric names are preserved in the resulting rollups.

@@ -828,7 +828,7 @@ See also [timestamp](#timestamp).

 #### tfirst_over_time

-`tfirst_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the first raw sample
+`tfirst_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the first raw sample
 on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

 Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -837,7 +837,7 @@ See also [first_over_time](#first_over_time).

 #### tlast_change_over_time

-`tlast_change_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the last change
+`tlast_change_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the last change
 per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) on the given lookbehind window `d`.

 Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

@@ -852,7 +852,7 @@ See also [tlast_change_over_time](#tlast_change_over_time).

 #### tmax_over_time

-`tmax_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the raw sample
+`tmax_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the raw sample
 with the maximum value on the given lookbehind window `d`. It is calculated independently per each time series returned
 from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).

@@ -862,7 +862,7 @@ See also [max_over_time](#max_over_time).

 #### tmin_over_time

-`tmin_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the raw sample
+`tmin_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the raw sample
 with the minimum value on the given lookbehind window `d`. It is calculated independently per each time series returned
 from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
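Aside, not from the diff: "seconds with millisecond precision" simply means the returned value keeps the fractional part of the raw sample timestamp instead of truncating it to whole seconds. In Go terms:

```go
package main

import "fmt"

func main() {
	// A raw sample timestamp in milliseconds, as stored by VictoriaMetrics.
	tsMillis := int64(1700000001234)
	// What the timestamp-family rollup functions now report: seconds with millisecond precision.
	fmt.Printf("%.3f\n", float64(tsMillis)/1e3) // 1700000001.234
}
```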
app/vmagent/datadogsketches/request_handler.go (new file, 95 lines)

@@ -0,0 +1,95 @@
package datadogsketches

import (
	"net/http"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
	parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogsketches"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogsketches/stream"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
	"github.com/VictoriaMetrics/metrics"
)

var (
	rowsInserted       = metrics.NewCounter(`vmagent_rows_inserted_total{type="datadogsketches"}`)
	rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="datadogsketches"}`)
	rowsPerInsert      = metrics.NewHistogram(`vmagent_rows_per_insert{type="datadogsketches"}`)
)

// InsertHandlerForHTTP processes remote write for DataDog POST /api/beta/sketches request.
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
	extraLabels, err := parserCommon.GetExtraLabels(req)
	if err != nil {
		return err
	}
	ce := req.Header.Get("Content-Encoding")
	return stream.Parse(req.Body, ce, func(sketches []*datadogsketches.Sketch) error {
		return insertRows(at, sketches, extraLabels)
	})
}

func insertRows(at *auth.Token, sketches []*datadogsketches.Sketch, extraLabels []prompbmarshal.Label) error {
	ctx := common.GetPushCtx()
	defer common.PutPushCtx(ctx)

	rowsTotal := 0
	tssDst := ctx.WriteRequest.Timeseries[:0]
	labels := ctx.Labels[:0]
	samples := ctx.Samples[:0]
	for _, sketch := range sketches {
		ms := sketch.ToSummary()
		for _, m := range ms {
			labelsLen := len(labels)
			labels = append(labels, prompbmarshal.Label{
				Name:  "__name__",
				Value: m.Name,
			})
			for _, label := range m.Labels {
				labels = append(labels, prompbmarshal.Label{
					Name:  label.Name,
					Value: label.Value,
				})
			}
			for _, tag := range sketch.Tags {
				name, value := datadogutils.SplitTag(tag)
				if name == "host" {
					name = "exported_host"
				}
				labels = append(labels, prompbmarshal.Label{
					Name:  name,
					Value: value,
				})
			}
			labels = append(labels, extraLabels...)
			samplesLen := len(samples)
			for _, p := range m.Points {
				samples = append(samples, prompbmarshal.Sample{
					Timestamp: p.Timestamp,
					Value:     p.Value,
				})
			}
			rowsTotal += len(m.Points)
			tssDst = append(tssDst, prompbmarshal.TimeSeries{
				Labels:  labels[labelsLen:],
				Samples: samples[samplesLen:],
			})
		}
	}
	ctx.WriteRequest.Timeseries = tssDst
	ctx.Labels = labels
	ctx.Samples = samples
	if !remotewrite.TryPush(at, &ctx.WriteRequest) {
		return remotewrite.ErrQueueFullHTTPRetry
	}
	rowsInserted.Add(rowsTotal)
	if at != nil {
		rowsTenantInserted.Get(at).Add(rowsTotal)
	}
	rowsPerInsert.Update(float64(rowsTotal))
	return nil
}
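Aside, not from the diff: the handler above builds all label and sample structs in two shared slices and hands each time series a sub-slice of them (`labels[labelsLen:]`, `samples[samplesLen:]`), which keeps per-request allocations low. The idea in isolation, with hypothetical types:

```go
package main

import "fmt"

// label is a hypothetical stand-in for prompbmarshal.Label.
type label struct{ Name, Value string }

// series references a window of the shared backing slice instead of owning its own allocation.
type series struct{ Labels []label }

func main() {
	input := [][]label{
		{{Name: "__name__", Value: "cpu"}, {Name: "host", Value: "a"}},
		{{Name: "__name__", Value: "mem"}, {Name: "host", Value: "b"}},
	}

	labels := make([]label, 0, 16) // one shared buffer for the whole batch
	var out []series
	for _, ls := range input {
		start := len(labels)
		labels = append(labels, ls...)
		out = append(out, series{Labels: labels[start:]}) // sub-slice, no extra copy
	}
	fmt.Println(out)
}
```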
@ -12,6 +12,7 @@ import (
    "time"

    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/csvimport"
    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadogsketches"
    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadogv1"
    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadogv2"
    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/graphite"

@ -45,10 +46,10 @@ import (
)

var (
    httpListenAddr = flag.String("httpListenAddr", ":8429", "TCP address to listen for http connections. "+
    httpListenAddrs = flagutil.NewArrayString("httpListenAddr", "TCP address to listen for incoming http requests. "+
        "Set this flag to empty value in order to disable listening on any port. This mode may be useful for running multiple vmagent instances on the same server. "+
        "Note that /targets and /metrics pages aren't available if -httpListenAddr=''. See also -tls and -httpListenAddr.useProxyProtocol")
    useProxyProtocol = flag.Bool("httpListenAddr.useProxyProtocol", false, "Whether to use proxy protocol for connections accepted at -httpListenAddr . "+
    useProxyProtocol = flagutil.NewArrayBool("httpListenAddr.useProxyProtocol", "Whether to use proxy protocol for connections accepted at the corresponding -httpListenAddr . "+
        "See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . "+
        "With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing")
    influxListenAddr = flag.String("influxListenAddr", "", "TCP and UDP address to listen for InfluxDB line protocol data. Usually :8089 must be set. Doesn't work if empty. "+

@ -119,7 +120,11 @@ func main() {
        return
    }

    logger.Infof("starting vmagent at %q...", *httpListenAddr)
    listenAddrs := *httpListenAddrs
    if len(listenAddrs) == 0 {
        listenAddrs = []string{":8429"}
    }
    logger.Infof("starting vmagent at %q...", listenAddrs)
    startTime := time.Now()
    remotewrite.Init()
    common.StartUnmarshalWorkers()

@ -142,9 +147,7 @@ func main() {

    promscrape.Init(remotewrite.PushDropSamplesOnFailure)

    if len(*httpListenAddr) > 0 {
        go httpserver.Serve(*httpListenAddr, *useProxyProtocol, requestHandler)
    }
    go httpserver.Serve(listenAddrs, useProxyProtocol, requestHandler)
    logger.Infof("started vmagent in %.3f seconds", time.Since(startTime).Seconds())

    pushmetrics.Init()

@ -153,13 +156,11 @@ func main() {
    pushmetrics.Stop()

    startTime = time.Now()
    if len(*httpListenAddr) > 0 {
        logger.Infof("gracefully shutting down webservice at %q", *httpListenAddr)
        if err := httpserver.Stop(*httpListenAddr); err != nil {
            logger.Fatalf("cannot stop the webservice: %s", err)
        }
        logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())
    logger.Infof("gracefully shutting down webservice at %q", listenAddrs)
    if err := httpserver.Stop(listenAddrs); err != nil {
        logger.Fatalf("cannot stop the webservice: %s", err)
    }
    logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())

    promscrape.Stop()

@ -368,6 +369,15 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
        w.WriteHeader(202)
        fmt.Fprintf(w, `{"status":"ok"}`)
        return true
    case "/datadog/api/beta/sketches":
        datadogsketchesWriteRequests.Inc()
        if err := datadogsketches.InsertHandlerForHTTP(nil, r); err != nil {
            datadogsketchesWriteErrors.Inc()
            httpserver.Errorf(w, r, "%s", err)
            return true
        }
        w.WriteHeader(202)
        return true
    case "/datadog/api/v1/validate":
        datadogValidateRequests.Inc()
        // See https://docs.datadoghq.com/api/latest/authentication/#validate-api-key

@ -603,6 +613,15 @@ func processMultitenantRequest(w http.ResponseWriter, r *http.Request, path stri
        w.WriteHeader(202)
        fmt.Fprintf(w, `{"status":"ok"}`)
        return true
    case "datadog/api/beta/sketches":
        datadogsketchesWriteRequests.Inc()
        if err := datadogsketches.InsertHandlerForHTTP(at, r); err != nil {
            datadogsketchesWriteErrors.Inc()
            httpserver.Errorf(w, r, "%s", err)
            return true
        }
        w.WriteHeader(202)
        return true
    case "datadog/api/v1/validate":
        datadogValidateRequests.Inc()
        // See https://docs.datadoghq.com/api/latest/authentication/#validate-api-key

@ -659,6 +678,9 @@ var (
    datadogv2WriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v2/series", protocol="datadog"}`)
    datadogv2WriteErrors   = metrics.NewCounter(`vmagent_http_request_errors_total{path="/datadog/api/v2/series", protocol="datadog"}`)

    datadogsketchesWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/beta/sketches", protocol="datadog"}`)
    datadogsketchesWriteErrors   = metrics.NewCounter(`vmagent_http_request_errors_total{path="/datadog/api/beta/sketches", protocol="datadog"}`)

    datadogValidateRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
    datadogCheckRunRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)
    datadogIntakeRequests   = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/intake", protocol="datadog"}`)
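The hunks above replace vmagent's single -httpListenAddr flag with an array flag (flagutil.NewArrayString) and fall back to :8429 when no address is given. A minimal standalone sketch of the same repeated-flag pattern using only the standard flag package; arrayString below is a simplified stand-in for flagutil's array flag, not the real implementation.

package main

import (
    "flag"
    "fmt"
    "strings"
)

// arrayString collects repeated -httpListenAddr values, similar in spirit to
// flagutil.NewArrayString used in the diff above.
type arrayString []string

func (a *arrayString) String() string     { return strings.Join(*a, ",") }
func (a *arrayString) Set(s string) error { *a = append(*a, s); return nil }

func main() {
    var listenAddrs arrayString
    flag.Var(&listenAddrs, "httpListenAddr", "TCP address to listen for incoming http requests; may be repeated")
    flag.Parse()

    // Fall back to the default address when the flag isn't set, mirroring the
    // listenAddrs defaulting logic added to main().
    addrs := []string(listenAddrs)
    if len(addrs) == 0 {
        addrs = []string{":8429"}
    }
    fmt.Printf("would start http servers at %q\n", addrs)
}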
@ -35,6 +35,7 @@ var (
    proxyURL = flagutil.NewArrayString("remoteWrite.proxyURL", "Optional proxy URL for writing data to the corresponding -remoteWrite.url. "+
        "Supported proxies: http, https, socks5. Example: -remoteWrite.proxyURL=socks5://proxy:1234")

    tlsHandshakeTimeout   = flagutil.NewArrayDuration("remoteWrite.tlsHandshakeTimeout", 20*time.Second, "The timeout for establishing tls connections to the corresponding -remoteWrite.url")
    tlsInsecureSkipVerify = flagutil.NewArrayBool("remoteWrite.tlsInsecureSkipVerify", "Whether to skip tls verification when connecting to the corresponding -remoteWrite.url")
    tlsCertFile = flagutil.NewArrayString("remoteWrite.tlsCertFile", "Optional path to client-side TLS certificate file to use when connecting "+
        "to the corresponding -remoteWrite.url")

@ -122,7 +123,7 @@ func newHTTPClient(argIdx int, remoteWriteURL, sanitizedURL string, fq *persiste
    tr := &http.Transport{
        DialContext:     statDial,
        TLSClientConfig: tlsCfg,
        TLSHandshakeTimeout: 10 * time.Second,
        TLSHandshakeTimeout: tlsHandshakeTimeout.GetOptionalArg(argIdx),
        MaxConnsPerHost:     2 * concurrency,
        MaxIdleConnsPerHost: 2 * concurrency,
        IdleConnTimeout:     time.Minute,
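The new -remoteWrite.tlsHandshakeTimeout flag above turns the previously hard-coded 10-second TLS handshake timeout into a per -remoteWrite.url setting. A hedged sketch of what such a transport setup looks like with plain net/http; the timeout value and concurrency number are illustrative, not taken from vmagent's actual wiring.

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// newBackendTransport builds an HTTP transport whose TLS handshake timeout is
// configurable per backend, instead of a fixed 10s value.
func newBackendTransport(tlsCfg *tls.Config, handshakeTimeout time.Duration, concurrency int) *http.Transport {
    return &http.Transport{
        TLSClientConfig:     tlsCfg,
        TLSHandshakeTimeout: handshakeTimeout,
        MaxConnsPerHost:     2 * concurrency,
        MaxIdleConnsPerHost: 2 * concurrency,
        IdleConnTimeout:     time.Minute,
    }
}

func main() {
    tr := newBackendTransport(&tls.Config{}, 20*time.Second, 4)
    client := &http.Client{Transport: tr, Timeout: time.Minute}
    fmt.Printf("client ready: %T with handshake timeout %s\n", client.Transport, tr.TLSHandshakeTimeout)
}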
@ -25,6 +25,7 @@ import (
    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/prometheus"
    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql"
    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"

@ -184,7 +185,8 @@ func processFlags() {

func setUp() {
    vmstorage.Init(promql.ResetRollupResultCacheIfNeeded)
    go httpserver.Serve(httpListenAddr, false, func(w http.ResponseWriter, r *http.Request) bool {
    var ab flagutil.ArrayBool
    go httpserver.Serve([]string{httpListenAddr}, &ab, func(w http.ResponseWriter, r *http.Request) bool {
        switch r.URL.Path {
        case "/prometheus/api/v1/query":
            if err := prometheus.QueryHandler(nil, time.Now(), w, r); err != nil {

@ -225,7 +227,7 @@ checkCheck:
}

func tearDown() {
    if err := httpserver.Stop(httpListenAddr); err != nil {
    if err := httpserver.Stop([]string{httpListenAddr}); err != nil {
        logger.Errorf("cannot stop the webservice: %s", err)
    }
    vmstorage.Stop()
@ -10,6 +10,7 @@ import (

    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/utils"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)

@ -93,7 +94,7 @@ func Init(extraParams url.Values) (QuerierBuilder, error) {
        logger.Warnf("flag `-datasource.lookback` will be deprecated soon. Please use `-rule.evalDelay` command-line flag instead. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5155 for details.")
    }

    tr, err := utils.Transport(*addr, *tlsCertFile, *tlsKeyFile, *tlsCAFile, *tlsServerName, *tlsInsecureSkipVerify)
    tr, err := httputils.Transport(*addr, *tlsCertFile, *tlsKeyFile, *tlsCAFile, *tlsServerName, *tlsInsecureSkipVerify)
    if err != nil {
        return nil, fmt.Errorf("failed to create transport: %w", err)
    }
@ -59,8 +59,8 @@ absolute path to all .tpl files in root.
    configCheckInterval = flag.Duration("configCheckInterval", 0, "Interval for checking for changes in '-rule' or '-notifier.config' files. "+
        "By default, the checking is disabled. Send SIGHUP signal in order to force config check for changes.")

    httpListenAddr   = flag.String("httpListenAddr", ":8880", "Address to listen for http connections. See also -tls and -httpListenAddr.useProxyProtocol")
    useProxyProtocol = flag.Bool("httpListenAddr.useProxyProtocol", false, "Whether to use proxy protocol for connections accepted at -httpListenAddr . "+
    httpListenAddrs  = flagutil.NewArrayString("httpListenAddr", "Address to listen for incoming http requests. See also -tls and -httpListenAddr.useProxyProtocol")
    useProxyProtocol = flagutil.NewArrayBool("httpListenAddr.useProxyProtocol", "Whether to use proxy protocol for connections accepted at the corresponding -httpListenAddr . "+
        "See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . "+
        "With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing")
    evaluationInterval = flag.Duration("evaluationInterval", time.Minute, "How often to evaluate the rules")

@ -178,15 +178,19 @@ func main() {

    go configReload(ctx, manager, groupsCfg, sighupCh)

    listenAddrs := *httpListenAddrs
    if len(listenAddrs) == 0 {
        listenAddrs = []string{":8880"}
    }
    rh := &requestHandler{m: manager}
    go httpserver.Serve(*httpListenAddr, *useProxyProtocol, rh.handler)
    go httpserver.Serve(listenAddrs, useProxyProtocol, rh.handler)

    pushmetrics.Init()
    sig := procutil.WaitForSigterm()
    logger.Infof("service received signal %s", sig)
    pushmetrics.Stop()

    if err := httpserver.Stop(*httpListenAddr); err != nil {
    if err := httpserver.Stop(listenAddrs); err != nil {
        logger.Fatalf("cannot stop the webservice: %s", err)
    }
    cancel()

@ -248,7 +252,13 @@ func newManager(ctx context.Context) (*manager, error) {
func getExternalURL(customURL string) (*url.URL, error) {
    if customURL == "" {
        // use local hostname as external URL
        return getHostnameAsExternalURL(*httpListenAddr, httpserver.IsTLS())
        listenAddr := ":8880"
        if len(*httpListenAddrs) > 0 {
            listenAddr = (*httpListenAddrs)[0]
        }
        isTLS := httpserver.IsTLS(0)

        return getHostnameAsExternalURL(listenAddr, isTLS)
    }
    u, err := url.Parse(customURL)
    if err != nil {

@ -260,13 +270,13 @@ func getExternalURL(customURL string) (*url.URL, error) {
    return u, nil
}

func getHostnameAsExternalURL(httpListenAddr string, isSecure bool) (*url.URL, error) {
func getHostnameAsExternalURL(addr string, isSecure bool) (*url.URL, error) {
    hname, err := os.Hostname()
    if err != nil {
        return nil, fmt.Errorf("failed to get hostname: %w", err)
    }
    port := ""
    if ipport := strings.Split(httpListenAddr, ":"); len(ipport) > 1 {
    if ipport := strings.Split(addr, ":"); len(ipport) > 1 {
        port = ":" + ipport[1]
    }
    schema := "http://"
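getExternalURL and getHostnameAsExternalURL above derive vmalert's external URL from the local hostname plus the port of the first -httpListenAddr. A standalone sketch of the same derivation; the helper below mirrors the logic shown in the diff and is not an exported vmalert API.

package main

import (
    "fmt"
    "net/url"
    "os"
    "strings"
)

// hostnameAsExternalURL combines the local hostname with the port part of the
// listen address, switching the scheme when TLS is enabled.
func hostnameAsExternalURL(addr string, isSecure bool) (*url.URL, error) {
    hname, err := os.Hostname()
    if err != nil {
        return nil, fmt.Errorf("failed to get hostname: %w", err)
    }
    port := ""
    if ipport := strings.Split(addr, ":"); len(ipport) > 1 {
        port = ":" + ipport[1]
    }
    schema := "http://"
    if isSecure {
        schema = "https://"
    }
    return url.Parse(schema + hname + port)
}

func main() {
    u, err := hostnameAsExternalURL(":8880", false)
    if err != nil {
        panic(err)
    }
    fmt.Println(u) // e.g. http://myhost:8880
}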
@ -11,6 +11,7 @@ import (
    "time"

    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/utils"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
)

@ -127,7 +128,7 @@ func NewAlertManager(alertManagerURL string, fn AlertURLGenerator, authCfg proma
    if authCfg.TLSConfig != nil {
        tls = authCfg.TLSConfig
    }
    tr, err := utils.Transport(alertManagerURL, tls.CertFile, tls.KeyFile, tls.CAFile, tls.ServerName, tls.InsecureSkipVerify)
    tr, err := httputils.Transport(alertManagerURL, tls.CertFile, tls.KeyFile, tls.CAFile, tls.ServerName, tls.InsecureSkipVerify)
    if err != nil {
        return nil, fmt.Errorf("failed to create transport: %w", err)
    }
@ -8,6 +8,7 @@ import (
    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/utils"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
)

var (

@ -60,7 +61,7 @@ func Init() (datasource.QuerierBuilder, error) {
    if *addr == "" {
        return nil, nil
    }
    tr, err := utils.Transport(*addr, *tlsCertFile, *tlsKeyFile, *tlsCAFile, *tlsServerName, *tlsInsecureSkipVerify)
    tr, err := httputils.Transport(*addr, *tlsCertFile, *tlsKeyFile, *tlsCAFile, *tlsServerName, *tlsInsecureSkipVerify)
    if err != nil {
        return nil, fmt.Errorf("failed to create transport: %w", err)
    }
@ -11,7 +11,7 @@ import (

    "github.com/golang/snappy"

    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/utils"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
)

@ -30,7 +30,7 @@ func NewDebugClient() (*DebugClient, error) {
        return nil, nil
    }

    t, err := utils.Transport(*addr, *tlsCertFile, *tlsKeyFile, *tlsCAFile, *tlsServerName, *tlsInsecureSkipVerify)
    t, err := httputils.Transport(*addr, *tlsCertFile, *tlsKeyFile, *tlsCAFile, *tlsServerName, *tlsInsecureSkipVerify)
    if err != nil {
        return nil, fmt.Errorf("failed to create transport: %w", err)
    }
@ -8,6 +8,7 @@ import (

    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/utils"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
)

var (

@ -64,7 +65,7 @@ func Init(ctx context.Context) (*Client, error) {
        return nil, nil
    }

    t, err := utils.Transport(*addr, *tlsCertFile, *tlsKeyFile, *tlsCAFile, *tlsServerName, *tlsInsecureSkipVerify)
    t, err := httputils.Transport(*addr, *tlsCertFile, *tlsKeyFile, *tlsCAFile, *tlsServerName, *tlsInsecureSkipVerify)
    if err != nil {
        return nil, fmt.Errorf("failed to create transport: %w", err)
    }
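The four hunks above swap vmalert's local utils.Transport helper for lib/httputils.Transport with the same argument list (addr, cert file, key file, CA file, server name, skip-verify). As a rough illustration of what such a helper typically assembles, here is a hedged sketch built directly on crypto/tls; it is not the actual httputils implementation.

package main

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "net/http"
    "os"
)

// newTLSTransport builds an http.Transport from optional client cert/key,
// a CA file, a server name override and an insecure-skip-verify toggle.
func newTLSTransport(certFile, keyFile, caFile, serverName string, insecureSkipVerify bool) (*http.Transport, error) {
    cfg := &tls.Config{
        ServerName:         serverName,
        InsecureSkipVerify: insecureSkipVerify,
    }
    if certFile != "" {
        cert, err := tls.LoadX509KeyPair(certFile, keyFile)
        if err != nil {
            return nil, fmt.Errorf("cannot load client certificate: %w", err)
        }
        cfg.Certificates = []tls.Certificate{cert}
    }
    if caFile != "" {
        pem, err := os.ReadFile(caFile)
        if err != nil {
            return nil, fmt.Errorf("cannot read CA file: %w", err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(pem) {
            return nil, fmt.Errorf("cannot parse CA certificates from %q", caFile)
        }
        cfg.RootCAs = pool
    }
    return &http.Transport{TLSClientConfig: cfg}, nil
}

func main() {
    tr, err := newTLSTransport("", "", "", "", true)
    if err != nil {
        panic(err)
    }
    fmt.Printf("transport ready: skip-verify=%v\n", tr.TLSClientConfig.InsecureSkipVerify)
}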
@ -14,6 +14,7 @@ import (
    "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/tpl"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
)

@ -87,7 +88,10 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
        WriteRuleDetails(w, r, rule)
        return true
    case "/vmalert/groups":
        WriteListGroups(w, r, rh.groups())
        var data []apiGroup
        rf := extractRulesFilter(r)
        data = rh.groups(rf)
        WriteListGroups(w, r, data)
        return true
    case "/vmalert/notifiers":
        WriteListTargets(w, r, notifier.GetTargets())

@ -98,12 +102,20 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
    case "/rules":
        // Grafana makes an extra request to `/rules`
        // handler in addition to `/api/v1/rules` calls in alerts UI,
        WriteListGroups(w, r, rh.groups())
        var data []apiGroup
        rf := extractRulesFilter(r)
        data = rh.groups(rf)
        WriteListGroups(w, r, data)
        return true

    case "/vmalert/api/v1/rules", "/api/v1/rules":
        // path used by Grafana for ng alerting
        data, err := rh.listGroups()
        var data []byte
        var err error

        rf := extractRulesFilter(r)
        data, err = rh.listGroups(rf)

        if err != nil {
            httpserver.Errorf(w, r, "%s", err)
            return true

@ -111,6 +123,7 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
        w.Header().Set("Content-Type", "application/json")
        w.Write(data)
        return true

    case "/vmalert/api/v1/alerts", "/api/v1/alerts":
        // path used by Grafana for ng alerting
        data, err := rh.listAlerts()

@ -207,26 +220,90 @@ type listGroupsResponse struct {
    } `json:"data"`
}

func (rh *requestHandler) groups() []apiGroup {
// see https://prometheus.io/docs/prometheus/latest/querying/api/#rules
type rulesFilter struct {
    files         []string
    groupNames    []string
    ruleNames     []string
    ruleType      string
    excludeAlerts bool
}

func extractRulesFilter(r *http.Request) rulesFilter {
    rf := rulesFilter{}

    var ruleType string
    ruleTypeParam := r.URL.Query().Get("type")
    // for some reason, `type` in filter doesn't match `type` in response,
    // so we use this matching here
    if ruleTypeParam == "alert" {
        ruleType = ruleTypeAlerting
    } else if ruleTypeParam == "record" {
        ruleType = ruleTypeRecording
    }
    rf.ruleType = ruleType

    rf.excludeAlerts = httputils.GetBool(r, "exclude_alerts")
    rf.ruleNames = append([]string{}, r.Form["rule_name[]"]...)
    rf.groupNames = append([]string{}, r.Form["rule_group[]"]...)
    rf.files = append([]string{}, r.Form["file[]"]...)
    return rf
}

func (rh *requestHandler) groups(rf rulesFilter) []apiGroup {
    rh.m.groupsMu.RLock()
    defer rh.m.groupsMu.RUnlock()

    groups := make([]apiGroup, 0)
    for _, g := range rh.m.groups {
        groups = append(groups, groupToAPI(g))
    isInList := func(list []string, needle string) bool {
        if len(list) < 1 {
            return true
        }
        for _, i := range list {
            if i == needle {
                return true
            }
        }
        return false
    }

    // sort list of alerts for deterministic output
    groups := make([]apiGroup, 0)
    for _, group := range rh.m.groups {
        if !isInList(rf.groupNames, group.Name) {
            continue
        }
        if !isInList(rf.files, group.File) {
            continue
        }

        g := groupToAPI(group)
        // the returned list should always be non-nil
        // https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4221
        filteredRules := make([]apiRule, 0)
        for _, r := range g.Rules {
            if rf.ruleType != "" && rf.ruleType != r.Type {
                continue
            }
            if !isInList(rf.ruleNames, r.Name) {
                continue
            }
            if rf.excludeAlerts {
                r.Alerts = nil
            }
            filteredRules = append(filteredRules, r)
        }
        g.Rules = filteredRules
        groups = append(groups, g)
    }
    // sort list of groups for deterministic output
    sort.Slice(groups, func(i, j int) bool {
        return groups[i].Name < groups[j].Name
    })

    return groups
}

func (rh *requestHandler) listGroups() ([]byte, error) {
func (rh *requestHandler) listGroups(rf rulesFilter) ([]byte, error) {
    lr := listGroupsResponse{Status: "success"}
    lr.Data.Groups = rh.groups()
    lr.Data.Groups = rh.groups(rf)
    b, err := json.Marshal(lr)
    if err != nil {
        return nil, &httpserver.ErrorWithStatusCode{
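The new rulesFilter above lets /api/v1/rules be narrowed with Prometheus-style query parameters: rule_group[], file[], rule_name[], type=alert|record and exclude_alerts. A hedged client-side sketch of querying that endpoint; the base URL is a hypothetical local vmalert address, and the response struct declares only the handful of fields this sketch needs.

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "net/url"
)

// listGroupsResponse declares only the fields this sketch needs; the real
// response carries more per-rule data.
type listGroupsResponse struct {
    Status string `json:"status"`
    Data   struct {
        Groups []struct {
            Name  string `json:"name"`
            File  string `json:"file"`
            Rules []struct {
                Name string `json:"name"`
                Type string `json:"type"`
            } `json:"rules"`
        } `json:"groups"`
    } `json:"data"`
}

func main() {
    // Hypothetical vmalert address; adjust to the actual -httpListenAddr.
    base := "http://localhost:8880/api/v1/rules"

    params := url.Values{}
    params.Add("rule_group[]", "group")
    params.Add("file[]", "rules.yaml")
    params.Add("type", "alert")
    params.Set("exclude_alerts", "true")

    resp, err := http.Get(base + "?" + params.Encode())
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var lr listGroupsResponse
    if err := json.NewDecoder(resp.Body).Decode(&lr); err != nil {
        panic(err)
    }
    for _, g := range lr.Data.Groups {
        fmt.Printf("group %s (%s): %d rules\n", g.Name, g.File, len(g.Rules))
    }
}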
@ -23,6 +23,7 @@ func TestHandler(t *testing.T) {
|
|||
})
|
||||
g := &rule.Group{
|
||||
Name: "group",
|
||||
File: "rules.yaml",
|
||||
Concurrency: 1,
|
||||
}
|
||||
ar := rule.NewAlertingRule(fq, g, config.Rule{ID: 0, Alert: "alert"})
|
||||
|
@ -165,6 +166,74 @@ func TestHandler(t *testing.T) {
|
|||
t.Fatalf("expected %+v to have state updates field not empty", gotRuleWithUpdates.StateUpdates)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("/api/v1/rules&filters", func(t *testing.T) {
|
||||
check := func(url string, expGroups, expRules int) {
|
||||
t.Helper()
|
||||
lr := listGroupsResponse{}
|
||||
getResp(ts.URL+url, &lr, 200)
|
||||
if length := len(lr.Data.Groups); length != expGroups {
|
||||
t.Errorf("expected %d groups got %d", expGroups, length)
|
||||
}
|
||||
if len(lr.Data.Groups) < 1 {
|
||||
return
|
||||
}
|
||||
var rulesN int
|
||||
for _, gr := range lr.Data.Groups {
|
||||
rulesN += len(gr.Rules)
|
||||
}
|
||||
if rulesN != expRules {
|
||||
t.Errorf("expected %d rules got %d", expRules, rulesN)
|
||||
}
|
||||
}
|
||||
|
||||
check("/api/v1/rules?type=alert", 1, 1)
|
||||
check("/api/v1/rules?type=record", 1, 1)
|
||||
|
||||
check("/vmalert/api/v1/rules?type=alert", 1, 1)
|
||||
check("/vmalert/api/v1/rules?type=record", 1, 1)
|
||||
|
||||
// no filtering expected due to bad params
|
||||
check("/api/v1/rules?type=badParam", 1, 2)
|
||||
check("/api/v1/rules?foo=bar", 1, 2)
|
||||
|
||||
check("/api/v1/rules?rule_group[]=foo&rule_group[]=bar", 0, 0)
|
||||
check("/api/v1/rules?rule_group[]=foo&rule_group[]=group&rule_group[]=bar", 1, 2)
|
||||
|
||||
check("/api/v1/rules?rule_group[]=group&file[]=foo", 0, 0)
|
||||
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml", 1, 2)
|
||||
|
||||
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=foo", 1, 0)
|
||||
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=alert", 1, 1)
|
||||
check("/api/v1/rules?rule_group[]=group&file[]=rules.yaml&rule_name[]=alert&rule_name[]=record", 1, 2)
|
||||
})
|
||||
t.Run("/api/v1/rules&exclude_alerts=true", func(t *testing.T) {
|
||||
// check if response returns active alerts by default
|
||||
lr := listGroupsResponse{}
|
||||
getResp(ts.URL+"/api/v1/rules?rule_group[]=group&file[]=rules.yaml", &lr, 200)
|
||||
activeAlerts := 0
|
||||
for _, gr := range lr.Data.Groups {
|
||||
for _, r := range gr.Rules {
|
||||
activeAlerts += len(r.Alerts)
|
||||
}
|
||||
}
|
||||
if activeAlerts == 0 {
|
||||
t.Fatalf("expected at least 1 active alert in response; got 0")
|
||||
}
|
||||
|
||||
// disable returning alerts via param
|
||||
lr = listGroupsResponse{}
|
||||
getResp(ts.URL+"/api/v1/rules?rule_group[]=group&file[]=rules.yaml&exclude_alerts=true", &lr, 200)
|
||||
activeAlerts = 0
|
||||
for _, gr := range lr.Data.Groups {
|
||||
for _, r := range gr.Rules {
|
||||
activeAlerts += len(r.Alerts)
|
||||
}
|
||||
}
|
||||
if activeAlerts != 0 {
|
||||
t.Fatalf("expected to get 0 active alert in response; got %d", activeAlerts)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func TestEmptyResponse(t *testing.T) {
|
||||
|
|
|
@ -193,10 +193,15 @@ func ruleToAPI(r interface{}) apiRule {
    return apiRule{}
}

const (
    ruleTypeRecording = "recording"
    ruleTypeAlerting  = "alerting"
)

func recordingToAPI(rr *rule.RecordingRule) apiRule {
    lastState := rule.GetLastEntry(rr)
    r := apiRule{
        Type: "recording",
        Type:           ruleTypeRecording,
        DatasourceType: rr.Type.String(),
        Name:           rr.Name,
        Query:          rr.Expr,

@ -224,7 +229,7 @@ func recordingToAPI(rr *rule.RecordingRule) apiRule {
func alertingToAPI(ar *rule.AlertingRule) apiRule {
    lastState := rule.GetLastEntry(ar)
    r := apiRule{
        Type: "alerting",
        Type:           ruleTypeAlerting,
        DatasourceType: ar.Type.String(),
        Name:           ar.Name,
        Query:          ar.Expr,
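The ruleTypeRecording/ruleTypeAlerting constants above are what the `type` filter in extractRulesFilter maps onto: the API accepts type=alert or type=record, while the response reports "alerting" or "recording". A small sketch of that mapping; the function name is illustrative, not part of the codebase.

package main

import "fmt"

const (
    ruleTypeRecording = "recording"
    ruleTypeAlerting  = "alerting"
)

// ruleTypeFromParam converts the `type` query parameter accepted by
// /api/v1/rules into the type string used in the response. Unknown values
// yield an empty string, which disables type filtering.
func ruleTypeFromParam(param string) string {
    switch param {
    case "alert":
        return ruleTypeAlerting
    case "record":
        return ruleTypeRecording
    default:
        return ""
    }
}

func main() {
    for _, p := range []string{"alert", "record", "badParam"} {
        fmt.Printf("type=%q -> filter %q\n", p, ruleTypeFromParam(p))
    }
}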
@ -43,6 +43,9 @@ var (
|
|||
type AuthConfig struct {
|
||||
Users []UserInfo `yaml:"users,omitempty"`
|
||||
UnauthorizedUser *UserInfo `yaml:"unauthorized_user,omitempty"`
|
||||
|
||||
// ms holds all the metrics for the given AuthConfig
|
||||
ms *metrics.Set
|
||||
}
|
||||
|
||||
// UserInfo is user information read from authConfigPath
|
||||
|
@ -503,6 +506,11 @@ func loadAuthConfig() (bool, error) {
|
|||
}
|
||||
logger.Infof("loaded information about %d users from -auth.config=%q", len(m), *authConfigPath)
|
||||
|
||||
prevAc := authConfig.Load()
|
||||
if prevAc != nil {
|
||||
metrics.UnregisterSet(prevAc.ms)
|
||||
}
|
||||
metrics.RegisterSet(ac.ms)
|
||||
authConfig.Store(ac)
|
||||
authConfigData.Store(&data)
|
||||
authUsers.Store(&m)
|
||||
|
@ -515,10 +523,13 @@ func parseAuthConfig(data []byte) (*AuthConfig, error) {
|
|||
if err != nil {
|
||||
return nil, fmt.Errorf("cannot expand environment vars: %w", err)
|
||||
}
|
||||
var ac AuthConfig
|
||||
if err = yaml.UnmarshalStrict(data, &ac); err != nil {
|
||||
ac := &AuthConfig{
|
||||
ms: metrics.NewSet(),
|
||||
}
|
||||
if err = yaml.UnmarshalStrict(data, ac); err != nil {
|
||||
return nil, fmt.Errorf("cannot unmarshal AuthConfig data: %w", err)
|
||||
}
|
||||
|
||||
ui := ac.UnauthorizedUser
|
||||
if ui != nil {
|
||||
if ui.Username != "" {
|
||||
|
@ -541,15 +552,15 @@ func parseAuthConfig(data []byte) (*AuthConfig, error) {
|
|||
if err != nil {
|
||||
return nil, fmt.Errorf("cannot parse metric_labels for unauthorized_user: %w", err)
|
||||
}
|
||||
ui.requests = metrics.GetOrCreateCounter(`vmauth_unauthorized_user_requests_total` + metricLabels)
|
||||
ui.backendErrors = metrics.GetOrCreateCounter(`vmauth_unauthorized_user_request_backend_errors_total` + metricLabels)
|
||||
ui.requestsDuration = metrics.GetOrCreateSummary(`vmauth_unauthorized_user_request_duration_seconds` + metricLabels)
|
||||
ui.requests = ac.ms.NewCounter(`vmauth_unauthorized_user_requests_total` + metricLabels)
|
||||
ui.backendErrors = ac.ms.NewCounter(`vmauth_unauthorized_user_request_backend_errors_total` + metricLabels)
|
||||
ui.requestsDuration = ac.ms.NewSummary(`vmauth_unauthorized_user_request_duration_seconds` + metricLabels)
|
||||
ui.concurrencyLimitCh = make(chan struct{}, ui.getMaxConcurrentRequests())
|
||||
ui.concurrencyLimitReached = metrics.GetOrCreateCounter(`vmauth_unauthorized_user_concurrent_requests_limit_reached_total` + metricLabels)
|
||||
_ = metrics.GetOrCreateGauge(`vmauth_unauthorized_user_concurrent_requests_capacity`+metricLabels, func() float64 {
|
||||
ui.concurrencyLimitReached = ac.ms.NewCounter(`vmauth_unauthorized_user_concurrent_requests_limit_reached_total` + metricLabels)
|
||||
_ = ac.ms.NewGauge(`vmauth_unauthorized_user_concurrent_requests_capacity`+metricLabels, func() float64 {
|
||||
return float64(cap(ui.concurrencyLimitCh))
|
||||
})
|
||||
_ = metrics.GetOrCreateGauge(`vmauth_unauthorized_user_concurrent_requests_current`+metricLabels, func() float64 {
|
||||
_ = ac.ms.NewGauge(`vmauth_unauthorized_user_concurrent_requests_current`+metricLabels, func() float64 {
|
||||
return float64(len(ui.concurrencyLimitCh))
|
||||
})
|
||||
|
||||
|
@ -559,7 +570,7 @@ func parseAuthConfig(data []byte) (*AuthConfig, error) {
|
|||
}
|
||||
ui.httpTransport = tr
|
||||
}
|
||||
return &ac, nil
|
||||
return ac, nil
|
||||
}
|
||||
|
||||
func parseAuthConfigUsers(ac *AuthConfig) (map[string]*UserInfo, error) {
|
||||
|
@ -570,18 +581,23 @@ func parseAuthConfigUsers(ac *AuthConfig) (map[string]*UserInfo, error) {
|
|||
byAuthToken := make(map[string]*UserInfo, len(uis))
|
||||
for i := range uis {
|
||||
ui := &uis[i]
|
||||
if ui.BearerToken == "" && ui.Username == "" {
|
||||
return nil, fmt.Errorf("either bearer_token or username must be set")
|
||||
if ui.Username != "" && ui.Password == "" {
|
||||
// Do not allow setting username without password if there are other auth configs exist.
|
||||
// This should prevent from typical mis-configuration when access by username without password
|
||||
// remains open if other authorization schemes are defined.
|
||||
if ui.BearerToken != "" {
|
||||
return nil, fmt.Errorf("bearer_token=%q and username=%q cannot be set simultaneously", ui.BearerToken, ui.Username)
|
||||
}
|
||||
}
|
||||
if ui.BearerToken != "" && ui.Username != "" {
|
||||
return nil, fmt.Errorf("bearer_token=%q and username=%q cannot be set simultaneously", ui.BearerToken, ui.Username)
|
||||
ats := getAuthTokens(ui.BearerToken, ui.Username, ui.Password)
|
||||
if len(ats) == 0 {
|
||||
return nil, fmt.Errorf("one of bearer_token, username or mtls must be set")
|
||||
}
|
||||
at1, at2 := getAuthTokens(ui.BearerToken, ui.Username, ui.Password)
|
||||
if byAuthToken[at1] != nil {
|
||||
return nil, fmt.Errorf("duplicate auth token found for bearer_token=%q, username=%q: %q", ui.BearerToken, ui.Username, at1)
|
||||
}
|
||||
if byAuthToken[at2] != nil {
|
||||
return nil, fmt.Errorf("duplicate auth token found for bearer_token=%q, username=%q: %q", ui.BearerToken, ui.Username, at2)
|
||||
for _, at := range ats {
|
||||
if uiOld := byAuthToken[at]; uiOld != nil {
|
||||
return nil, fmt.Errorf("duplicate auth token=%q found for username=%q, name=%q; the previous one is set for username=%q, name=%q",
|
||||
at, ui.Username, ui.Name, uiOld.Username, uiOld.Name)
|
||||
}
|
||||
}
|
||||
|
||||
if err := ui.initURLs(); err != nil {
|
||||
|
@ -596,16 +612,16 @@ func parseAuthConfigUsers(ac *AuthConfig) (map[string]*UserInfo, error) {
|
|||
if err != nil {
|
||||
return nil, fmt.Errorf("cannot parse metric_labels: %w", err)
|
||||
}
|
||||
ui.requests = metrics.GetOrCreateCounter(`vmauth_user_requests_total` + metricLabels)
|
||||
ui.backendErrors = metrics.GetOrCreateCounter(`vmauth_user_request_backend_errors_total` + metricLabels)
|
||||
ui.requestsDuration = metrics.GetOrCreateSummary(`vmauth_user_request_duration_seconds` + metricLabels)
|
||||
ui.requests = ac.ms.GetOrCreateCounter(`vmauth_user_requests_total` + metricLabels)
|
||||
ui.backendErrors = ac.ms.GetOrCreateCounter(`vmauth_user_request_backend_errors_total` + metricLabels)
|
||||
ui.requestsDuration = ac.ms.GetOrCreateSummary(`vmauth_user_request_duration_seconds` + metricLabels)
|
||||
mcr := ui.getMaxConcurrentRequests()
|
||||
ui.concurrencyLimitCh = make(chan struct{}, mcr)
|
||||
ui.concurrencyLimitReached = metrics.GetOrCreateCounter(`vmauth_user_concurrent_requests_limit_reached_total` + metricLabels)
|
||||
_ = metrics.GetOrCreateGauge(`vmauth_user_concurrent_requests_capacity`+metricLabels, func() float64 {
|
||||
ui.concurrencyLimitReached = ac.ms.GetOrCreateCounter(`vmauth_user_concurrent_requests_limit_reached_total` + metricLabels)
|
||||
_ = ac.ms.GetOrCreateGauge(`vmauth_user_concurrent_requests_capacity`+metricLabels, func() float64 {
|
||||
return float64(cap(ui.concurrencyLimitCh))
|
||||
})
|
||||
_ = metrics.GetOrCreateGauge(`vmauth_user_concurrent_requests_current`+metricLabels, func() float64 {
|
||||
_ = ac.ms.GetOrCreateGauge(`vmauth_user_concurrent_requests_current`+metricLabels, func() float64 {
|
||||
return float64(len(ui.concurrencyLimitCh))
|
||||
})
|
||||
|
||||
|
@ -615,8 +631,9 @@ func parseAuthConfigUsers(ac *AuthConfig) (map[string]*UserInfo, error) {
|
|||
}
|
||||
ui.httpTransport = tr
|
||||
|
||||
byAuthToken[at1] = ui
|
||||
byAuthToken[at2] = ui
|
||||
for _, at := range ats {
|
||||
byAuthToken[at] = ui
|
||||
}
|
||||
}
|
||||
return byAuthToken, nil
|
||||
}
|
||||
|
@ -720,24 +737,45 @@ func (ui *UserInfo) name() string {
|
|||
return ""
|
||||
}
|
||||
|
||||
func getAuthTokens(bearerToken, username, password string) (string, string) {
|
||||
func getAuthTokens(bearerToken, username, password string) []string {
|
||||
var ats []string
|
||||
if bearerToken != "" {
|
||||
// Accept the bearerToken as Basic Auth username with empty password
|
||||
at1 := getAuthToken(bearerToken, "", "")
|
||||
at2 := getAuthToken("", bearerToken, "")
|
||||
return at1, at2
|
||||
at1 := getHTTPAuthBearerToken(bearerToken)
|
||||
at2 := getHTTPAuthBasicToken(bearerToken, "")
|
||||
ats = append(ats, at1, at2)
|
||||
} else if username != "" {
|
||||
at := getHTTPAuthBasicToken(username, password)
|
||||
ats = append(ats, at)
|
||||
}
|
||||
at := getAuthToken("", username, password)
|
||||
return at, at
|
||||
return ats
|
||||
}
|
||||
|
||||
func getAuthToken(bearerToken, username, password string) string {
|
||||
if bearerToken != "" {
|
||||
return "Bearer " + bearerToken
|
||||
}
|
||||
func getHTTPAuthBearerToken(bearerToken string) string {
|
||||
return "http_auth:Bearer " + bearerToken
|
||||
}
|
||||
|
||||
func getHTTPAuthBasicToken(username, password string) string {
|
||||
token := username + ":" + password
|
||||
token64 := base64.StdEncoding.EncodeToString([]byte(token))
|
||||
return "Basic " + token64
|
||||
return "http_auth:Basic " + token64
|
||||
}
|
||||
|
||||
func getAuthTokensFromRequest(r *http.Request) []string {
|
||||
var ats []string
|
||||
|
||||
ah := r.Header.Get("Authorization")
|
||||
if ah == "" {
|
||||
return ats
|
||||
}
|
||||
if strings.HasPrefix(ah, "Token ") {
|
||||
// Handle InfluxDB's proprietary token authentication scheme as a bearer token authentication
|
||||
// See https://docs.influxdata.com/influxdb/v2.0/api/
|
||||
ah = strings.Replace(ah, "Token", "Bearer", 1)
|
||||
}
|
||||
at := "http_auth:" + ah
|
||||
ats = append(ats, at)
|
||||
return ats
|
||||
}
|
||||
|
||||
func (up *URLPrefix) sanitize() error {
|
||||
|
|
|
@ -267,7 +267,7 @@ users:
|
|||
max_concurrent_requests: 5
|
||||
tls_insecure_skip_verify: true
|
||||
`, map[string]*UserInfo{
|
||||
getAuthToken("", "foo", "bar"): {
|
||||
getHTTPAuthBasicToken("foo", "bar"): {
|
||||
Username: "foo",
|
||||
Password: "bar",
|
||||
URLPrefix: mustParseURL("http://aaa:343/bbb"),
|
||||
|
@ -290,7 +290,7 @@ users:
|
|||
load_balancing_policy: first_available
|
||||
drop_src_path_prefix_parts: 1
|
||||
`, map[string]*UserInfo{
|
||||
getAuthToken("", "foo", "bar"): {
|
||||
getHTTPAuthBasicToken("foo", "bar"): {
|
||||
Username: "foo",
|
||||
Password: "bar",
|
||||
URLPrefix: mustParseURLs([]string{
|
||||
|
@ -312,11 +312,11 @@ users:
|
|||
- username: bar
|
||||
url_prefix: https://bar/x///
|
||||
`, map[string]*UserInfo{
|
||||
getAuthToken("", "foo", ""): {
|
||||
getHTTPAuthBasicToken("foo", ""): {
|
||||
Username: "foo",
|
||||
URLPrefix: mustParseURL("http://foo"),
|
||||
},
|
||||
getAuthToken("", "bar", ""): {
|
||||
getHTTPAuthBasicToken("bar", ""): {
|
||||
Username: "bar",
|
||||
URLPrefix: mustParseURL("https://bar/x"),
|
||||
},
|
||||
|
@ -336,7 +336,7 @@ users:
|
|||
- "foo: bar"
|
||||
- "xxx: y"
|
||||
`, map[string]*UserInfo{
|
||||
getAuthToken("foo", "", ""): {
|
||||
getHTTPAuthBearerToken("foo"): {
|
||||
BearerToken: "foo",
|
||||
URLMaps: []URLMap{
|
||||
{
|
||||
|
@ -365,7 +365,7 @@ users:
|
|||
},
|
||||
},
|
||||
},
|
||||
getAuthToken("", "foo", ""): {
|
||||
getHTTPAuthBasicToken("foo", ""): {
|
||||
BearerToken: "foo",
|
||||
URLMaps: []URLMap{
|
||||
{
|
||||
|
@ -395,7 +395,7 @@ users:
|
|||
},
|
||||
},
|
||||
})
|
||||
// Multiple users with the same name
|
||||
// Multiple users with the same name - this should work, since these users have different passwords
|
||||
f(`
|
||||
users:
|
||||
- username: foo-same
|
||||
|
@ -405,17 +405,18 @@ users:
|
|||
password: bar
|
||||
url_prefix: https://bar/x///
|
||||
`, map[string]*UserInfo{
|
||||
getAuthToken("", "foo-same", "baz"): {
|
||||
getHTTPAuthBasicToken("foo-same", "baz"): {
|
||||
Username: "foo-same",
|
||||
Password: "baz",
|
||||
URLPrefix: mustParseURL("http://foo"),
|
||||
},
|
||||
getAuthToken("", "foo-same", "bar"): {
|
||||
getHTTPAuthBasicToken("foo-same", "bar"): {
|
||||
Username: "foo-same",
|
||||
Password: "bar",
|
||||
URLPrefix: mustParseURL("https://bar/x"),
|
||||
},
|
||||
})
|
||||
|
||||
// with default url
|
||||
f(`
|
||||
users:
|
||||
|
@ -432,7 +433,7 @@ users:
|
|||
- http://default1/select/0/prometheus
|
||||
- http://default2/select/0/prometheus
|
||||
`, map[string]*UserInfo{
|
||||
getAuthToken("foo", "", ""): {
|
||||
getHTTPAuthBearerToken("foo"): {
|
||||
BearerToken: "foo",
|
||||
URLMaps: []URLMap{
|
||||
{
|
||||
|
@ -464,7 +465,7 @@ users:
|
|||
"http://default2/select/0/prometheus",
|
||||
}),
|
||||
},
|
||||
getAuthToken("", "foo", ""): {
|
||||
getHTTPAuthBasicToken("foo", ""): {
|
||||
BearerToken: "foo",
|
||||
URLMaps: []URLMap{
|
||||
{
|
||||
|
@ -513,7 +514,7 @@ users:
|
|||
backend_env: test
|
||||
team: accounting
|
||||
`, map[string]*UserInfo{
|
||||
getAuthToken("", "foo-same", "baz"): {
|
||||
getHTTPAuthBasicToken("foo-same", "baz"): {
|
||||
Username: "foo-same",
|
||||
Password: "baz",
|
||||
URLPrefix: mustParseURL("http://foo"),
|
||||
|
@ -522,7 +523,7 @@ users:
|
|||
"team": "dev",
|
||||
},
|
||||
},
|
||||
getAuthToken("", "foo-same", "bar"): {
|
||||
getHTTPAuthBasicToken("foo-same", "bar"): {
|
||||
Username: "foo-same",
|
||||
Password: "bar",
|
||||
URLPrefix: mustParseURL("https://bar/x"),
|
||||
|
@ -558,7 +559,7 @@ unauthorized_user:
|
|||
t.Fatalf("unexpected error: %s", err)
|
||||
}
|
||||
|
||||
ui := m[getAuthToken("", "foo", "bar")]
|
||||
ui := m[getHTTPAuthBasicToken("foo", "bar")]
|
||||
if !isSetBool(ui.TLSInsecureSkipVerify, true) || !ui.httpTransport.TLSClientConfig.InsecureSkipVerify {
|
||||
t.Fatalf("unexpected TLSInsecureSkipVerify value for user foo")
|
||||
}
|
||||
|
|
|
@ -33,8 +33,8 @@ import (
|
|||
)
|
||||
|
||||
var (
|
||||
httpListenAddr = flag.String("httpListenAddr", ":8427", "TCP address to listen for http connections. See also -tls and -httpListenAddr.useProxyProtocol")
|
||||
useProxyProtocol = flag.Bool("httpListenAddr.useProxyProtocol", false, "Whether to use proxy protocol for connections accepted at -httpListenAddr . "+
|
||||
httpListenAddrs = flagutil.NewArrayString("httpListenAddr", "TCP address to listen for incoming http requests. See also -tls and -httpListenAddr.useProxyProtocol")
|
||||
useProxyProtocol = flagutil.NewArrayBool("httpListenAddr.useProxyProtocol", "Whether to use proxy protocol for connections accepted at the corresponding -httpListenAddr . "+
|
||||
"See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . "+
|
||||
"With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing")
|
||||
maxIdleConnsPerBackend = flag.Int("maxIdleConnsPerBackend", 100, "The maximum number of idle connections vmauth can open per each backend host. "+
|
||||
|
@ -65,10 +65,14 @@ func main() {
|
|||
buildinfo.Init()
|
||||
logger.Init()
|
||||
|
||||
logger.Infof("starting vmauth at %q...", *httpListenAddr)
|
||||
listenAddrs := *httpListenAddrs
|
||||
if len(listenAddrs) == 0 {
|
||||
listenAddrs = []string{":8427"}
|
||||
}
|
||||
logger.Infof("starting vmauth at %q...", listenAddrs)
|
||||
startTime := time.Now()
|
||||
initAuthConfig()
|
||||
go httpserver.Serve(*httpListenAddr, *useProxyProtocol, requestHandler)
|
||||
go httpserver.Serve(listenAddrs, useProxyProtocol, requestHandler)
|
||||
logger.Infof("started vmauth in %.3f seconds", time.Since(startTime).Seconds())
|
||||
|
||||
pushmetrics.Init()
|
||||
|
@ -77,8 +81,8 @@ func main() {
|
|||
pushmetrics.Stop()
|
||||
|
||||
startTime = time.Now()
|
||||
logger.Infof("gracefully shutting down webservice at %q", *httpListenAddr)
|
||||
if err := httpserver.Stop(*httpListenAddr); err != nil {
|
||||
logger.Infof("gracefully shutting down webservice at %q", listenAddrs)
|
||||
if err := httpserver.Stop(listenAddrs); err != nil {
|
||||
logger.Fatalf("cannot stop the webservice: %s", err)
|
||||
}
|
||||
logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())
|
||||
|
@ -97,8 +101,9 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
|
|||
w.WriteHeader(http.StatusOK)
|
||||
return true
|
||||
}
|
||||
authToken := r.Header.Get("Authorization")
|
||||
if authToken == "" {
|
||||
|
||||
ats := getAuthTokensFromRequest(r)
|
||||
if len(ats) == 0 {
|
||||
// Process requests for unauthorized users
|
||||
ui := authConfig.Load().UnauthorizedUser
|
||||
if ui != nil {
|
||||
|
@ -110,18 +115,12 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
|
|||
http.Error(w, "missing `Authorization` request header", http.StatusUnauthorized)
|
||||
return true
|
||||
}
|
||||
if strings.HasPrefix(authToken, "Token ") {
|
||||
// Handle InfluxDB's proprietary token authentication scheme as a bearer token authentication
|
||||
// See https://docs.influxdata.com/influxdb/v2.0/api/
|
||||
authToken = strings.Replace(authToken, "Token", "Bearer", 1)
|
||||
}
|
||||
|
||||
ac := *authUsers.Load()
|
||||
ui := ac[authToken]
|
||||
ui := getUserInfoByAuthTokens(ats)
|
||||
if ui == nil {
|
||||
invalidAuthTokenRequests.Inc()
|
||||
if *logInvalidAuthTokens {
|
||||
err := fmt.Errorf("cannot find the provided auth token %q in config", authToken)
|
||||
err := fmt.Errorf("cannot authorize request with auth tokens %q", ats)
|
||||
err = &httpserver.ErrorWithStatusCode{
|
||||
Err: err,
|
||||
StatusCode: http.StatusUnauthorized,
|
||||
|
@ -137,6 +136,17 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
|
|||
return true
|
||||
}
|
||||
|
||||
func getUserInfoByAuthTokens(ats []string) *UserInfo {
|
||||
ac := *authUsers.Load()
|
||||
for _, at := range ats {
|
||||
ui := ac[at]
|
||||
if ui != nil {
|
||||
return ui
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func processUserRequest(w http.ResponseWriter, r *http.Request, ui *UserInfo) {
|
||||
startTime := time.Now()
|
||||
defer ui.requestsDuration.UpdateDuration(startTime)
|
||||
|
|
|
@ -20,6 +20,7 @@ import (
|
|||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/pushmetrics"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/snapshot"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/snapshot/snapshotutil"
|
||||
)
|
||||
|
||||
var (
|
||||
|
@ -93,7 +94,8 @@ func main() {
|
|||
}
|
||||
}
|
||||
|
||||
go httpserver.Serve(*httpListenAddr, false, nil)
|
||||
listenAddrs := []string{*httpListenAddr}
|
||||
go httpserver.Serve(listenAddrs, nil, nil)
|
||||
|
||||
pushmetrics.Init()
|
||||
err := makeBackup()
|
||||
|
@ -104,8 +106,8 @@ func main() {
|
|||
pushmetrics.Stop()
|
||||
|
||||
startTime := time.Now()
|
||||
logger.Infof("gracefully shutting down http server for metrics at %q", *httpListenAddr)
|
||||
if err := httpserver.Stop(*httpListenAddr); err != nil {
|
||||
logger.Infof("gracefully shutting down http server for metrics at %q", listenAddrs)
|
||||
if err := httpserver.Stop(listenAddrs); err != nil {
|
||||
logger.Fatalf("cannot stop http server for metrics: %s", err)
|
||||
}
|
||||
logger.Infof("successfully shut down http server for metrics in %.3f seconds", time.Since(startTime).Seconds())
|
||||
|
@ -168,7 +170,7 @@ See the docs at https://docs.victoriametrics.com/vmbackup.html .
|
|||
}
|
||||
|
||||
func newSrcFS() (*fslocal.FS, error) {
|
||||
if err := snapshot.Validate(*snapshotName); err != nil {
|
||||
if err := snapshotutil.Validate(*snapshotName); err != nil {
|
||||
return nil, fmt.Errorf("invalid -snapshotName=%q: %w", *snapshotName, err)
|
||||
}
|
||||
snapshotPath := filepath.Join(*storageDataPath, "snapshots", *snapshotName)
|
||||
|
|
85
app/vminsert/datadogsketches/request_handler.go
Normal file
@ -0,0 +1,85 @@
|
|||
package datadogsketches
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
|
||||
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogsketches"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogsketches/stream"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
|
||||
"github.com/VictoriaMetrics/metrics"
|
||||
)
|
||||
|
||||
var (
|
||||
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="datadogsketches"}`)
|
||||
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="datadogsketches"}`)
|
||||
)
|
||||
|
||||
// InsertHandlerForHTTP processes remote write for DataDog POST /api/beta/sketches request.
|
||||
func InsertHandlerForHTTP(req *http.Request) error {
|
||||
extraLabels, err := parserCommon.GetExtraLabels(req)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
ce := req.Header.Get("Content-Encoding")
|
||||
return stream.Parse(req.Body, ce, func(sketches []*datadogsketches.Sketch) error {
|
||||
return insertRows(sketches, extraLabels)
|
||||
})
|
||||
}
|
||||
|
||||
func insertRows(sketches []*datadogsketches.Sketch, extraLabels []prompbmarshal.Label) error {
|
||||
ctx := common.GetInsertCtx()
|
||||
defer common.PutInsertCtx(ctx)
|
||||
|
||||
rowsLen := 0
|
||||
for _, sketch := range sketches {
|
||||
rowsLen += sketch.RowsCount()
|
||||
}
|
||||
ctx.Reset(rowsLen)
|
||||
rowsTotal := 0
|
||||
hasRelabeling := relabel.HasRelabeling()
|
||||
for _, sketch := range sketches {
|
||||
ms := sketch.ToSummary()
|
||||
for _, m := range ms {
|
||||
ctx.Labels = ctx.Labels[:0]
|
||||
ctx.AddLabel("", m.Name)
|
||||
for _, label := range m.Labels {
|
||||
ctx.AddLabel(label.Name, label.Value)
|
||||
}
|
||||
for _, tag := range sketch.Tags {
|
||||
name, value := datadogutils.SplitTag(tag)
|
||||
if name == "host" {
|
||||
name = "exported_host"
|
||||
}
|
||||
ctx.AddLabel(name, value)
|
||||
}
|
||||
for j := range extraLabels {
|
||||
label := &extraLabels[j]
|
||||
ctx.AddLabel(label.Name, label.Value)
|
||||
}
|
||||
if hasRelabeling {
|
||||
ctx.ApplyRelabeling()
|
||||
}
|
||||
if len(ctx.Labels) == 0 {
|
||||
// Skip metric without labels.
|
||||
continue
|
||||
}
|
||||
ctx.SortLabelsIfNeeded()
|
||||
var metricNameRaw []byte
|
||||
var err error
|
||||
for _, p := range m.Points {
|
||||
metricNameRaw, err = ctx.WriteDataPointExt(metricNameRaw, ctx.Labels, p.Timestamp, p.Value)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
rowsTotal += len(m.Points)
|
||||
}
|
||||
}
|
||||
rowsInserted.Add(rowsTotal)
|
||||
rowsPerInsert.Update(float64(rowsTotal))
|
||||
return ctx.FlushBufs()
|
||||
}
|
|
@ -13,6 +13,7 @@ import (
|
|||
|
||||
vminsertCommon "github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/csvimport"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadogsketches"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadogv1"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadogv2"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/graphite"
|
||||
|
@ -271,6 +272,15 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
|
|||
w.WriteHeader(202)
|
||||
fmt.Fprintf(w, `{"status":"ok"}`)
|
||||
return true
|
||||
case "/datadog/api/beta/sketches":
|
||||
datadogsketchesWriteRequests.Inc()
|
||||
if err := datadogsketches.InsertHandlerForHTTP(r); err != nil {
|
||||
datadogsketchesWriteErrors.Inc()
|
||||
httpserver.Errorf(w, r, "%s", err)
|
||||
return true
|
||||
}
|
||||
w.WriteHeader(202)
|
||||
return true
|
||||
case "/datadog/api/v1/validate":
|
||||
datadogValidateRequests.Inc()
|
||||
// See https://docs.datadoghq.com/api/latest/authentication/#validate-api-key
|
||||
|
@ -341,7 +351,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
|
|||
}
|
||||
promscrapeConfigReloadRequests.Inc()
|
||||
procutil.SelfSIGHUP()
|
||||
w.WriteHeader(http.StatusNoContent)
|
||||
w.WriteHeader(http.StatusOK)
|
||||
return true
|
||||
case "/ready":
|
||||
if rdy := atomic.LoadInt32(&promscrape.PendingScrapeConfigs); rdy > 0 {
|
||||
|
@ -394,6 +404,9 @@ var (
|
|||
datadogv2WriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v2/series", protocol="datadog"}`)
|
||||
datadogv2WriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/datadog/api/v2/series", protocol="datadog"}`)
|
||||
|
||||
datadogsketchesWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/beta/sketches", protocol="datadog"}`)
|
||||
datadogsketchesWriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/datadog/api/beta/sketches", protocol="datadog"}`)
|
||||
|
||||
datadogValidateRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
|
||||
datadogCheckRunRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)
|
||||
datadogIntakeRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/intake", protocol="datadog"}`)
|
||||
|
|
|
@ -37,7 +37,8 @@ func main() {
|
|||
buildinfo.Init()
|
||||
logger.Init()
|
||||
|
||||
go httpserver.Serve(*httpListenAddr, false, nil)
|
||||
listenAddrs := []string{*httpListenAddr}
|
||||
go httpserver.Serve(listenAddrs, nil, nil)
|
||||
|
||||
srcFS, err := newSrcFS()
|
||||
if err != nil {
|
||||
|
@ -62,8 +63,8 @@ func main() {
|
|||
dstFS.MustStop()
|
||||
|
||||
startTime := time.Now()
|
||||
logger.Infof("gracefully shutting down http server for metrics at %q", *httpListenAddr)
|
||||
if err := httpserver.Stop(*httpListenAddr); err != nil {
|
||||
logger.Infof("gracefully shutting down http server for metrics at %q", listenAddrs)
|
||||
if err := httpserver.Stop(listenAddrs); err != nil {
|
||||
logger.Fatalf("cannot stop http server for metrics: %s", err)
|
||||
}
|
||||
logger.Infof("successfully shut down http server for metrics in %.3f seconds", time.Since(startTime).Seconds())
|
||||
|
|
|
@ -136,21 +136,25 @@ func (tbf *tmpBlocksFile) Finalize() error {
|
|||
return fmt.Errorf("cannot write the remaining %d bytes to %q: %w", len(tbf.buf), fname, err)
|
||||
}
|
||||
tbf.buf = tbf.buf[:0]
|
||||
r := fs.MustOpenReaderAt(fname)
|
||||
r := fs.NewReaderAt(tbf.f)
|
||||
|
||||
// Hint the OS that the file is read almost sequentiallly.
|
||||
// This should reduce the number of disk seeks, which is important
|
||||
// for HDDs.
|
||||
r.MustFadviseSequentialRead(true)
|
||||
|
||||
// Collect local stats in order to improve performance on systems with big number of CPU cores.
|
||||
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3966
|
||||
r.SetUseLocalStats()
|
||||
|
||||
tbf.r = r
|
||||
tbf.f = nil
|
||||
return nil
|
||||
}
|
||||
|
||||
func (tbf *tmpBlocksFile) MustReadBlockRefAt(partRef storage.PartRef, addr tmpBlockAddr) storage.BlockRef {
|
||||
var buf []byte
|
||||
if tbf.f == nil {
|
||||
if tbf.r == nil {
|
||||
buf = tbf.buf[addr.offset : addr.offset+uint64(addr.size)]
|
||||
} else {
|
||||
bb := tmpBufPool.Get()
|
||||
|
@ -169,21 +173,45 @@ func (tbf *tmpBlocksFile) MustReadBlockRefAt(partRef storage.PartRef, addr tmpBl
|
|||
var tmpBufPool bytesutil.ByteBufferPool
|
||||
|
||||
func (tbf *tmpBlocksFile) MustClose() {
|
||||
if tbf.f == nil {
|
||||
if tbf.f != nil {
|
||||
// tbf.f could be non-nil if Finalize wasn't called.
|
||||
// In this case tbf.r must be nil.
|
||||
if tbf.r != nil {
|
||||
logger.Panicf("BUG: tbf.r must be nil when tbf.f!=nil")
|
||||
}
|
||||
|
||||
// Try removing the file before closing it in order to prevent from flushing the in-memory data
|
||||
// from page cache to the disk and save disk write IO. This may fail on non-posix systems such as Windows.
|
||||
// Gracefully handle this case by attempting to remove the file after closing it.
|
||||
fname := tbf.f.Name()
|
||||
errRemove := os.Remove(fname)
|
||||
if err := tbf.f.Close(); err != nil {
|
||||
logger.Panicf("FATAL: cannot close %q: %s", fname, err)
|
||||
}
|
||||
if errRemove != nil {
|
||||
if err := os.Remove(fname); err != nil {
|
||||
logger.Panicf("FATAL: cannot remove %q: %s", fname, err)
|
||||
}
|
||||
}
|
||||
tbf.f = nil
|
||||
return
|
||||
}
|
||||
if tbf.r != nil {
|
||||
// tbf.r could be nil if Finalize wasn't called.
|
||||
tbf.r.MustClose()
|
||||
}
|
||||
fname := tbf.f.Name()
|
||||
|
||||
if err := tbf.f.Close(); err != nil {
|
||||
logger.Panicf("FATAL: cannot close %q: %s", fname, err)
|
||||
if tbf.r == nil {
|
||||
// Nothing to do
|
||||
return
|
||||
}
|
||||
// We cannot remove unclosed at non-posix filesystems, like windows
|
||||
if err := os.Remove(fname); err != nil {
|
||||
logger.Panicf("FATAL: cannot remove %q: %s", fname, err)
|
||||
|
||||
// Try removing the file before closing it in order to prevent from flushing the in-memory data
|
||||
// from page cache to the disk and save disk write IO. This may fail on non-posix systems such as Windows.
|
||||
// Gracefully handle this case by attempting to remove the file after closing it.
|
||||
fname := tbf.r.Path()
|
||||
errRemove := os.Remove(fname)
|
||||
tbf.r.MustClose()
|
||||
if errRemove != nil {
|
||||
if err := os.Remove(fname); err != nil {
|
||||
logger.Panicf("FATAL: cannot remove %q: %s", fname, err)
|
||||
}
|
||||
}
|
||||
tbf.f = nil
|
||||
tbf.r = nil
|
||||
}
|
||||
|
|
|
@ -29,7 +29,8 @@ import (
|
|||
)
|
||||
|
||||
var (
|
||||
disableCache = flag.Bool("search.disableCache", false, "Whether to disable response caching. This may be useful during data backfilling")
|
||||
disableCache = flag.Bool("search.disableCache", false, "Whether to disable response caching. This may be useful when ingesting historical data. "+
|
||||
"See https://docs.victoriametrics.com/#backfilling . See also -search.resetRollupResultCacheOnStartup")
|
||||
maxPointsSubqueryPerTimeseries = flag.Int("search.maxPointsSubqueryPerTimeseries", 100e3, "The maximum number of points per series, which can be generated by subquery. "+
|
||||
"See https://valyala.medium.com/prometheus-subqueries-in-victoriametrics-9b1492b720b3")
|
||||
maxMemoryPerQuery = flagutil.NewBytes("search.maxMemoryPerQuery", 0, "The maximum amounts of memory a single query may consume. "+
|
||||
|
|
|
@ -6108,6 +6108,39 @@ func TestExecSuccess(t *testing.T) {
|
|||
resultExpected := []netstorage.Result{r}
|
||||
f(q, resultExpected)
|
||||
})
|
||||
t.Run(`sum_gt_over_time`, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
q := `round(sum_gt_over_time(rand(0)[200s:10s], 0.7), 0.1)`
|
||||
r := netstorage.Result{
|
||||
MetricName: metricNameExpected,
|
||||
Values: []float64{5.9, 5.2, 8.5, 5.1, 4.9, 4.5},
|
||||
Timestamps: timestampsExpected,
|
||||
}
|
||||
resultExpected := []netstorage.Result{r}
|
||||
f(q, resultExpected)
|
||||
})
|
||||
t.Run(`sum_le_over_time`, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
q := `round(sum_le_over_time(rand(0)[200s:10s], 0.7), 0.1)`
|
||||
r := netstorage.Result{
|
||||
MetricName: metricNameExpected,
|
||||
Values: []float64{4.2, 4.9, 3.2, 5.8, 4.1, 5.3},
|
||||
Timestamps: timestampsExpected,
|
||||
}
|
||||
resultExpected := []netstorage.Result{r}
|
||||
f(q, resultExpected)
|
||||
})
|
||||
t.Run(`sum_eq_over_time`, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
q := `round(sum_eq_over_time(rand(0)[200s:10s], 0.7), 0.1)`
|
||||
r := netstorage.Result{
|
||||
MetricName: metricNameExpected,
|
||||
Values: []float64{0, 0, 0, 0, 0, 0},
|
||||
Timestamps: timestampsExpected,
|
||||
}
|
||||
resultExpected := []netstorage.Result{r}
|
||||
f(q, resultExpected)
|
||||
})
|
||||
t.Run(`increases_over_time`, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
q := `increases_over_time(rand(0)[200s:10s])`
|
||||
|
|
|
@ -79,12 +79,15 @@ var rollupFuncs = map[string]newRollupFunc{
|
|||
"rollup_rate": newRollupFuncOneOrTwoArgs(rollupFake), // + rollupFuncsRemoveCounterResets
|
||||
"rollup_scrape_interval": newRollupFuncOneOrTwoArgs(rollupFake),
|
||||
"scrape_interval": newRollupFuncOneArg(rollupScrapeInterval),
|
||||
"share_eq_over_time": newRollupShareEQ,
|
||||
"share_gt_over_time": newRollupShareGT,
|
||||
"share_le_over_time": newRollupShareLE,
|
||||
"share_eq_over_time": newRollupShareEQ,
|
||||
"stale_samples_over_time": newRollupFuncOneArg(rollupStaleSamples),
|
||||
"stddev_over_time": newRollupFuncOneArg(rollupStddev),
|
||||
"stdvar_over_time": newRollupFuncOneArg(rollupStdvar),
|
||||
"sum_eq_over_time": newRollupSumEQ,
|
||||
"sum_gt_over_time": newRollupSumGT,
|
||||
"sum_le_over_time": newRollupSumLE,
|
||||
"sum_over_time": newRollupFuncOneArg(rollupSum),
|
||||
"sum2_over_time": newRollupFuncOneArg(rollupSum2),
|
||||
"tfirst_over_time": newRollupFuncOneArg(rollupTfirst),
|
||||
|
@ -1089,59 +1092,89 @@ func newRollupDurationOverTime(args []interface{}) (rollupFunc, error) {
|
|||
}
|
||||
|
||||
func newRollupShareLE(args []interface{}) (rollupFunc, error) {
|
||||
return newRollupShareFilter(args, countFilterLE)
|
||||
return newRollupAvgFilter(args, countFilterLE)
|
||||
}
|
||||
|
||||
func countFilterLE(values []float64, le float64) int {
|
||||
func countFilterLE(values []float64, le float64) float64 {
|
||||
n := 0
|
||||
for _, v := range values {
|
||||
if v <= le {
|
||||
n++
|
||||
}
|
||||
}
|
||||
return n
|
||||
return float64(n)
|
||||
}
|
||||
|
||||
func newRollupShareGT(args []interface{}) (rollupFunc, error) {
|
||||
return newRollupShareFilter(args, countFilterGT)
|
||||
return newRollupAvgFilter(args, countFilterGT)
|
||||
}
|
||||
|
||||
func countFilterGT(values []float64, gt float64) int {
|
||||
func countFilterGT(values []float64, gt float64) float64 {
|
||||
n := 0
|
||||
for _, v := range values {
|
||||
if v > gt {
|
||||
n++
|
||||
}
|
||||
}
|
||||
return n
|
||||
return float64(n)
|
||||
}
|
||||
|
||||
func newRollupShareEQ(args []interface{}) (rollupFunc, error) {
|
||||
return newRollupShareFilter(args, countFilterEQ)
|
||||
return newRollupAvgFilter(args, countFilterEQ)
|
||||
}
|
||||
|
||||
func countFilterEQ(values []float64, eq float64) int {
|
||||
func sumFilterEQ(values []float64, eq float64) float64 {
|
||||
var sum float64
|
||||
for _, v := range values {
|
||||
if v == eq {
|
||||
sum += v
|
||||
}
|
||||
}
|
||||
return sum
|
||||
}
|
||||
|
||||
func sumFilterLE(values []float64, le float64) float64 {
|
||||
var sum float64
|
||||
for _, v := range values {
|
||||
if v <= le {
|
||||
sum += v
|
||||
}
|
||||
}
|
||||
return sum
|
||||
}
|
||||
|
||||
func sumFilterGT(values []float64, gt float64) float64 {
|
||||
var sum float64
|
||||
for _, v := range values {
|
||||
if v > gt {
|
||||
sum += v
|
||||
}
|
||||
}
|
||||
return sum
|
||||
}
|
||||
|
||||
func countFilterEQ(values []float64, eq float64) float64 {
|
||||
n := 0
|
||||
for _, v := range values {
|
||||
if v == eq {
|
||||
n++
|
||||
}
|
||||
}
|
||||
return n
|
||||
return float64(n)
|
||||
}
|
||||
|
||||
func countFilterNE(values []float64, ne float64) int {
|
||||
func countFilterNE(values []float64, ne float64) float64 {
|
||||
n := 0
|
||||
for _, v := range values {
|
||||
if v != ne {
|
||||
n++
|
||||
}
|
||||
}
|
||||
return n
|
||||
return float64(n)
|
||||
}
|
||||
|
||||
func newRollupShareFilter(args []interface{}, countFilter func(values []float64, limit float64) int) (rollupFunc, error) {
|
||||
rf, err := newRollupCountFilter(args, countFilter)
|
||||
func newRollupAvgFilter(args []interface{}, f func(values []float64, limit float64) float64) (rollupFunc, error) {
|
||||
rf, err := newRollupFilter(args, f)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -1151,23 +1184,35 @@ func newRollupShareFilter(args []interface{}, countFilter func(values []float64,
|
|||
}, nil
|
||||
}
|
||||
|
||||
func newRollupCountEQ(args []interface{}) (rollupFunc, error) {
|
||||
return newRollupFilter(args, countFilterEQ)
|
||||
}
|
||||
|
||||
func newRollupCountLE(args []interface{}) (rollupFunc, error) {
|
||||
return newRollupCountFilter(args, countFilterLE)
|
||||
return newRollupFilter(args, countFilterLE)
|
||||
}
|
||||
|
||||
func newRollupCountGT(args []interface{}) (rollupFunc, error) {
|
||||
return newRollupCountFilter(args, countFilterGT)
|
||||
}
|
||||
|
||||
func newRollupCountEQ(args []interface{}) (rollupFunc, error) {
|
||||
return newRollupCountFilter(args, countFilterEQ)
|
||||
return newRollupFilter(args, countFilterGT)
|
||||
}
|
||||
|
||||
func newRollupCountNE(args []interface{}) (rollupFunc, error) {
|
||||
return newRollupCountFilter(args, countFilterNE)
|
||||
return newRollupFilter(args, countFilterNE)
|
||||
}
|
||||
|
||||
func newRollupCountFilter(args []interface{}, countFilter func(values []float64, limit float64) int) (rollupFunc, error) {
|
||||
func newRollupSumEQ(args []interface{}) (rollupFunc, error) {
|
||||
return newRollupFilter(args, sumFilterEQ)
|
||||
}
|
||||
|
||||
func newRollupSumLE(args []interface{}) (rollupFunc, error) {
|
||||
return newRollupFilter(args, sumFilterLE)
|
||||
}
|
||||
|
||||
func newRollupSumGT(args []interface{}) (rollupFunc, error) {
|
||||
return newRollupFilter(args, sumFilterGT)
|
||||
}
|
||||
|
||||
func newRollupFilter(args []interface{}, f func(values []float64, limit float64) float64) (rollupFunc, error) {
|
||||
if err := expectRollupArgsNum(args, 2); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -1183,7 +1228,7 @@ func newRollupCountFilter(args []interface{}, countFilter func(values []float64,
|
|||
return nan
|
||||
}
|
||||
limit := limits[rfa.idx]
|
||||
return float64(countFilter(values, limit))
|
||||
return f(values, limit)
|
||||
}
|
||||
return rf, nil
|
||||
}
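The hunk above collapses the old count/share plumbing into a single `newRollupFilter` wrapper that applies a `func(values []float64, limit float64) float64` helper to each window. A minimal standalone sketch of that helper pattern (outside the real rollup machinery; the names in `main` are illustrative only):

```go
package main

import "fmt"

// sumGT mirrors the sumFilterGT helper above: it sums raw sample values
// strictly greater than the limit.
func sumGT(values []float64, gt float64) float64 {
	var sum float64
	for _, v := range values {
		if v > gt {
			sum += v
		}
	}
	return sum
}

// countGT mirrors countFilterGT: it counts values above the limit and
// returns the count as float64, so counts and sums share the same wrapper.
func countGT(values []float64, gt float64) float64 {
	n := 0
	for _, v := range values {
		if v > gt {
			n++
		}
	}
	return float64(n)
}

func main() {
	window := []float64{12, 21, 33, 123, 100, 153, 123}
	fmt.Println(sumGT(window, 50))   // 123 + 100 + 153 + 123 = 499
	fmt.Println(countGT(window, 50)) // 4
}
```

Returning `float64` from the count helpers is what lets the count, share and sum variants all reuse `newRollupFilter`.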
|
||||
|
@ -1916,6 +1961,10 @@ func rollupChangesPrometheus(rfa *rollupFuncArg) float64 {
|
|||
n := 0
|
||||
for _, v := range values[1:] {
|
||||
if v != prevValue {
|
||||
if math.Abs(v-prevValue) < 1e-12*math.Abs(v) {
|
||||
// This may be a precision error. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/767#issuecomment-1650932203
|
||||
continue
|
||||
}
|
||||
n++
|
||||
prevValue = v
|
||||
}
|
||||
|
@ -1939,6 +1988,10 @@ func rollupChanges(rfa *rollupFuncArg) float64 {
|
|||
}
|
||||
for _, v := range values {
|
||||
if v != prevValue {
|
||||
if math.Abs(v-prevValue) < 1e-12*math.Abs(v) {
|
||||
// This may be a precision error. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/767#issuecomment-1650932203
|
||||
continue
|
||||
}
|
||||
n++
|
||||
prevValue = v
|
||||
}
|
||||
|
@ -1967,6 +2020,10 @@ func rollupIncreases(rfa *rollupFuncArg) float64 {
|
|||
n := 0
|
||||
for _, v := range values {
|
||||
if v > prevValue {
|
||||
if math.Abs(v-prevValue) < 1e-12*math.Abs(v) {
|
||||
// This may be a precision error. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/767#issuecomment-1650932203
|
||||
continue
|
||||
}
|
||||
n++
|
||||
}
|
||||
prevValue = v
|
||||
|
@ -1998,6 +2055,10 @@ func rollupResets(rfa *rollupFuncArg) float64 {
|
|||
n := 0
|
||||
for _, v := range values {
|
||||
if v < prevValue {
|
||||
if math.Abs(v-prevValue) < 1e-12*math.Abs(v) {
|
||||
// This may be a precision error. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/767#issuecomment-1650932203
|
||||
continue
|
||||
}
|
||||
n++
|
||||
}
|
||||
prevValue = v
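The guard added in `rollupChangesPrometheus`, `rollupChanges`, `rollupIncreases` and `rollupResets` above skips differences smaller than a relative tolerance of `1e-12*|v|`, treating them as floating-point noise rather than real changes. A tiny self-contained illustration (the helper name is ours, not from the diff):

```go
package main

import (
	"fmt"
	"math"
)

// almostEqual reports whether two consecutive samples differ by less than
// a relative tolerance of 1e-12 of v, i.e. the difference is likely a
// floating-point precision error rather than a real change.
func almostEqual(prev, v float64) bool {
	return math.Abs(v-prev) < 1e-12*math.Abs(v)
}

func main() {
	fmt.Println(almostEqual(100, 100.00000000000001)) // true: precision noise, not counted as a change
	fmt.Println(almostEqual(100, 100.1))              // false: a real change
}
```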
|
||||
|
|
|
@ -30,6 +30,8 @@ var (
|
|||
"due to time synchronization issues between VictoriaMetrics and data sources. See also -search.disableAutoCacheReset")
|
||||
disableAutoCacheReset = flag.Bool("search.disableAutoCacheReset", false, "Whether to disable automatic response cache reset if a sample with timestamp "+
|
||||
"outside -search.cacheTimestampOffset is inserted into VictoriaMetrics")
|
||||
resetRollupResultCacheOnStartup = flag.Bool("search.resetRollupResultCacheOnStartup", false, "Whether to reset rollup result cache on startup. "+
|
||||
"See https://docs.victoriametrics.com/#rollup-result-cache . See also -search.disableCache")
|
||||
)
|
||||
|
||||
// ResetRollupResultCacheIfNeeded resets rollup result cache if mrs contains timestamps outside `now - search.cacheTimestampOffset`.
|
||||
|
@ -117,7 +119,12 @@ func InitRollupResultCache(cachePath string) {
|
|||
cacheSize := getRollupResultCacheSize()
|
||||
var c *workingsetcache.Cache
|
||||
if len(rollupResultCachePath) > 0 {
|
||||
logger.Infof("loading rollupResult cache from %q...", rollupResultCachePath)
|
||||
if *resetRollupResultCacheOnStartup {
|
||||
logger.Infof("removing rollupResult cache at %q becasue -search.resetRollupResultCacheOnStartup command-line flag is set", rollupResultCachePath)
|
||||
fs.MustRemoveAll(rollupResultCachePath)
|
||||
} else {
|
||||
logger.Infof("loading rollupResult cache from %q...", rollupResultCachePath)
|
||||
}
|
||||
c = workingsetcache.Load(rollupResultCachePath, cacheSize)
|
||||
mustLoadRollupResultCacheKeyPrefix(rollupResultCachePath)
|
||||
} else {
|
||||
|
|
|
@ -407,6 +407,75 @@ func TestRollupCountNEOverTime(t *testing.T) {
|
|||
f(12, 11)
|
||||
}
|
||||
|
||||
func TestRollupSumLEOverTime(t *testing.T) {
|
||||
f := func(le, vExpected float64) {
|
||||
t.Helper()
|
||||
les := []*timeseries{{
|
||||
Values: []float64{le},
|
||||
Timestamps: []int64{123},
|
||||
}}
|
||||
var me metricsql.MetricExpr
|
||||
args := []interface{}{&metricsql.RollupExpr{Expr: &me}, les}
|
||||
testRollupFunc(t, "sum_le_over_time", args, vExpected)
|
||||
}
|
||||
|
||||
f(-123, 0)
|
||||
f(0, 0)
|
||||
f(10, 0)
|
||||
f(12, 12)
|
||||
f(30, 33)
|
||||
f(50, 289)
|
||||
f(100, 442)
|
||||
f(123, 565)
|
||||
f(1000, 565)
|
||||
}
|
||||
|
||||
func TestRollupSumGTOverTime(t *testing.T) {
|
||||
f := func(le, vExpected float64) {
|
||||
t.Helper()
|
||||
les := []*timeseries{{
|
||||
Values: []float64{le},
|
||||
Timestamps: []int64{123},
|
||||
}}
|
||||
var me metricsql.MetricExpr
|
||||
args := []interface{}{&metricsql.RollupExpr{Expr: &me}, les}
|
||||
testRollupFunc(t, "sum_gt_over_time", args, vExpected)
|
||||
}
|
||||
|
||||
f(-123, 565)
|
||||
f(0, 565)
|
||||
f(10, 565)
|
||||
f(12, 553)
|
||||
f(30, 532)
|
||||
f(50, 276)
|
||||
f(100, 123)
|
||||
f(123, 0)
|
||||
f(1000, 0)
|
||||
}
|
||||
|
||||
func TestRollupSumEQOverTime(t *testing.T) {
|
||||
f := func(le, vExpected float64) {
|
||||
t.Helper()
|
||||
les := []*timeseries{{
|
||||
Values: []float64{le},
|
||||
Timestamps: []int64{123},
|
||||
}}
|
||||
var me metricsql.MetricExpr
|
||||
args := []interface{}{&metricsql.RollupExpr{Expr: &me}, les}
|
||||
testRollupFunc(t, "sum_eq_over_time", args, vExpected)
|
||||
}
|
||||
|
||||
f(-123, 0)
|
||||
f(0, 0)
|
||||
f(10, 0)
|
||||
f(12, 12)
|
||||
f(30, 0)
|
||||
f(50, 0)
|
||||
f(100, 0)
|
||||
f(123, 123)
|
||||
f(1000, 0)
|
||||
}
|
||||
|
||||
func TestRollupQuantileOverTime(t *testing.T) {
|
||||
f := func(phi, vExpected float64) {
|
||||
t.Helper()
|
||||
|
|
|
@ -1,13 +1,13 @@
|
|||
{
|
||||
"files": {
|
||||
"main.css": "./static/css/main.2f6793ba.css",
|
||||
"main.js": "./static/js/main.bcd188b5.js",
|
||||
"main.css": "./static/css/main.be4fee7a.css",
|
||||
"main.js": "./static/js/main.fd9d9e16.js",
|
||||
"static/js/522.da77e7b3.chunk.js": "./static/js/522.da77e7b3.chunk.js",
|
||||
"static/media/MetricsQL.md": "./static/media/MetricsQL.b64c4dbf91f4fa581621.md",
|
||||
"static/media/MetricsQL.md": "./static/media/MetricsQL.8a01ddf56e4e6bc1ccf1.md",
|
||||
"index.html": "./index.html"
|
||||
},
|
||||
"entrypoints": [
|
||||
"static/css/main.2f6793ba.css",
|
||||
"static/js/main.bcd188b5.js"
|
||||
"static/css/main.be4fee7a.css",
|
||||
"static/js/main.fd9d9e16.js"
|
||||
]
|
||||
}
|
|
@ -1 +1 @@
|
|||
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=5"/><meta name="theme-color" content="#000000"/><meta name="description" content="UI for VictoriaMetrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><script src="./dashboards/index.js" type="module"></script><meta name="twitter:card" content="summary_large_image"><meta name="twitter:image" content="./preview.jpg"><meta name="twitter:title" content="UI for VictoriaMetrics"><meta name="twitter:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta name="twitter:site" content="@VictoriaMetrics"><meta property="og:title" content="Metric explorer for VictoriaMetrics"><meta property="og:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta property="og:image" content="./preview.jpg"><meta property="og:type" content="website"><script defer="defer" src="./static/js/main.bcd188b5.js"></script><link href="./static/css/main.2f6793ba.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
|
||||
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=5"/><meta name="theme-color" content="#000000"/><meta name="description" content="UI for VictoriaMetrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><script src="./dashboards/index.js" type="module"></script><meta name="twitter:card" content="summary_large_image"><meta name="twitter:image" content="./preview.jpg"><meta name="twitter:title" content="UI for VictoriaMetrics"><meta name="twitter:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta name="twitter:site" content="@VictoriaMetrics"><meta property="og:title" content="Metric explorer for VictoriaMetrics"><meta property="og:description" content="Explore and troubleshoot your VictoriaMetrics data"><meta property="og:image" content="./preview.jpg"><meta property="og:type" content="website"><script defer="defer" src="./static/js/main.fd9d9e16.js"></script><link href="./static/css/main.be4fee7a.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
|
File diff suppressed because one or more lines are too long
1
app/vmselect/vmui/static/css/main.be4fee7a.css
Normal file
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
2
app/vmselect/vmui/static/js/main.fd9d9e16.js
Normal file
File diff suppressed because one or more lines are too long
|
@ -70,7 +70,7 @@ The list of MetricsQL features on top of PromQL:
|
|||
It is equivalent to `rate(node_network_receive_bytes_total[$__interval])` when used in Grafana.
|
||||
* Numeric values can contain `_` delimiters for better readability. For example, `1_234_567_890` can be used in queries instead of `1234567890`.
|
||||
* [Series selectors](https://docs.victoriametrics.com/keyConcepts.html#filtering) accept multiple `or` filters. For example, `{env="prod",job="a" or env="dev",job="b"}`
|
||||
selects series with either `{env="prod",job="a"}` or `{env="dev",job="b"}` labels.
|
||||
selects series with `{env="prod",job="a"}` or `{env="dev",job="b"}` labels.
|
||||
See [these docs](https://docs.victoriametrics.com/keyConcepts.html#filtering-by-multiple-or-filters) for details.
|
||||
* Support for `group_left(*)` and `group_right(*)` for copying all the labels from time series on the `one` side
|
||||
of [many-to-one operations](https://prometheus.io/docs/prometheus/latest/querying/operators/#many-to-one-and-one-to-many-vector-matches).
|
||||
|
@ -243,7 +243,7 @@ from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.ht
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [count_over_time](#count_over_time).
|
||||
See also [count_over_time](#count_over_time) and [share_eq_over_time](#share_eq_over_time).
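The cross-reference added above reflects a simple relationship: `share_eq_over_time` equals `count_eq_over_time` divided by the total number of raw samples in the window. A small Go sketch of that relationship (not the actual implementation):

```go
package main

import "fmt"

// shareEQ illustrates the relationship: the share of samples equal to eq
// is the count of such samples divided by the total number of raw samples
// in the lookbehind window.
func shareEQ(values []float64, eq float64) float64 {
	n := 0
	for _, v := range values {
		if v == eq {
			n++
		}
	}
	return float64(n) / float64(len(values))
}

func main() {
	window := []float64{1, 0, 1, 1, 0, 1}
	fmt.Println(shareEQ(window, 1)) // 4 of 6 samples equal 1 => 0.6666666666666666
}
```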
|
||||
|
||||
#### count_gt_over_time
|
||||
|
||||
|
@ -253,7 +253,7 @@ from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.ht
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [count_over_time](#count_over_time).
|
||||
See also [count_over_time](#count_over_time) and [share_gt_over_time](#share_gt_over_time).
|
||||
|
||||
#### count_le_over_time
|
||||
|
||||
|
@ -263,7 +263,7 @@ from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.ht
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [count_over_time](#count_over_time).
|
||||
See also [count_over_time](#count_over_time) and [share_le_over_time](#share_le_over_time).
|
||||
|
||||
#### count_ne_over_time
|
||||
|
||||
|
@ -743,7 +743,7 @@ This function is useful for calculating SLI and SLO. Example: `share_gt_over_tim
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [share_le_over_time](#share_le_over_time).
|
||||
See also [share_le_over_time](#share_le_over_time) and [count_gt_over_time](#count_gt_over_time).
|
||||
|
||||
#### share_le_over_time
|
||||
|
||||
|
@ -756,7 +756,7 @@ the share of time series values for the last 24 hours when memory usage was belo
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [share_gt_over_time](#share_gt_over_time).
|
||||
See also [share_gt_over_time](#share_gt_over_time) and [count_le_over_time](#count_le_over_time).
|
||||
|
||||
#### share_eq_over_time
|
||||
|
||||
|
@ -766,6 +766,8 @@ from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.ht
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [count_eq_over_time](#count_eq_over_time).
|
||||
|
||||
#### stale_samples_over_time
|
||||
|
||||
`stale_samples_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number
|
||||
|
@ -792,6 +794,33 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k
|
|||
|
||||
This function is supported by PromQL. See also [stddev_over_time](#stddev_over_time).
|
||||
|
||||
#### sum_eq_over_time
|
||||
|
||||
`sum_eq_over_time(series_selector[d], eq)` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values equal to `eq`
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [sum_over_time](#sum_over_time) and [count_eq_over_time](#count_eq_over_time).
|
||||
|
||||
#### sum_gt_over_time
|
||||
|
||||
`sum_gt_over_time(series_selector[d], gt)` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values bigger than `gt`
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [sum_over_time](#sum_over_time) and [count_gt_over_time](#count_gt_over_time).
|
||||
|
||||
#### sum_le_over_time
|
||||
|
||||
`sum_le_over_time(series_selector[d], le)` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values smaller than or equal to `le`
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [sum_over_time](#sum_over_time) and [count_le_over_time](#count_le_over_time).
|
||||
|
||||
#### sum_over_time
|
||||
|
||||
`sum_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values
|
||||
|
@ -810,7 +839,7 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k
|
|||
|
||||
#### timestamp
|
||||
|
||||
`timestamp(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the last raw sample
|
||||
`timestamp(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the last raw sample
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
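Since raw sample timestamps are stored internally in milliseconds (as in the test fixtures above), "seconds with millisecond precision" is simply the millisecond timestamp divided by 1e3 into a float. A one-line Go illustration (not code from this diff):

```go
package main

import "fmt"

func main() {
	var tsMillis int64 = 1709212345678   // a raw sample timestamp in milliseconds
	seconds := float64(tsMillis) / 1e3   // seconds with millisecond precision
	fmt.Println(seconds)                 // 1.709212345678e+09
}
```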
|
||||
|
@ -819,7 +848,7 @@ This function is supported by PromQL. See also [timestamp_with_name](#timestamp_
|
|||
|
||||
#### timestamp_with_name
|
||||
|
||||
`timestamp_with_name(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the last raw sample
|
||||
`timestamp_with_name(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the last raw sample
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are preserved in the resulting rollups.
|
||||
|
@ -828,7 +857,7 @@ See also [timestamp](#timestamp).
|
|||
|
||||
#### tfirst_over_time
|
||||
|
||||
`tfirst_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the first raw sample
|
||||
`tfirst_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the first raw sample
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
@ -837,7 +866,7 @@ See also [first_over_time](#first_over_time).
|
|||
|
||||
#### tlast_change_over_time
|
||||
|
||||
`tlast_change_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the last change
|
||||
`tlast_change_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the last change
|
||||
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) on the given lookbehind window `d`.
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
@ -852,7 +881,7 @@ See also [tlast_change_over_time](#tlast_change_over_time).
|
|||
|
||||
#### tmax_over_time
|
||||
|
||||
`tmax_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the raw sample
|
||||
`tmax_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the raw sample
|
||||
with the maximum value on the given lookbehind window `d`. It is calculated independently per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
|
@ -862,7 +891,7 @@ See also [max_over_time](#max_over_time).
|
|||
|
||||
#### tmin_over_time
|
||||
|
||||
`tmin_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the raw sample
|
||||
`tmin_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the raw sample
|
||||
with the minimum value on the given lookbehind window `d`. It is calculated independently per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
|
@ -1029,7 +1058,7 @@ for every point of every time series returned by `q`.
|
|||
|
||||
Metric names are stripped from the resulting series. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
This function is supported by PromQL. This function is supported by PromQL. See also [acosh](#acosh).
|
||||
This function is supported by PromQL. See also [acosh](#acosh).
|
||||
|
||||
#### day_of_month
|
||||
|
||||
|
@ -1049,6 +1078,15 @@ Metric names are stripped from the resulting series. Add [keep_metric_names](#ke
|
|||
|
||||
This function is supported by PromQL.
|
||||
|
||||
#### day_of_year
|
||||
|
||||
`day_of_year(q)` is a [transform function](#transform-functions), which returns the day of year for every point of every time series returned by `q`.
|
||||
It is expected that `q` returns unix timestamps. The returned values are in the range `[1...365]` for non-leap years, and `[1...366]` for leap years.
|
||||
|
||||
Metric names are stripped from the resulting series. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
This function is supported by PromQL.
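For intuition about the values `day_of_year` produces, the equivalent computation on a unix timestamp in Go (purely illustrative, not the MetricsQL implementation):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// day_of_year(q) expects q to return unix timestamps (in seconds).
	ts := int64(1709212345) // 2024-02-29 UTC, a leap-year date
	day := time.Unix(ts, 0).UTC().YearDay()
	fmt.Println(day) // 60 — Feb 29 is the 60th day of a leap year
}
```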
|
||||
|
||||
#### days_in_month
|
||||
|
||||
`days_in_month(q)` is a [transform function](#transform-functions), which returns the number of days in the month identified
|
|
@ -1,4 +1,4 @@
|
|||
FROM golang:1.21.6 as build-web-stage
|
||||
FROM golang:1.22.0 as build-web-stage
|
||||
COPY build /build
|
||||
|
||||
WORKDIR /build
|
||||
|
|
|
@ -70,7 +70,7 @@ The list of MetricsQL features on top of PromQL:
|
|||
It is equivalent to `rate(node_network_receive_bytes_total[$__interval])` when used in Grafana.
|
||||
* Numeric values can contain `_` delimiters for better readability. For example, `1_234_567_890` can be used in queries instead of `1234567890`.
|
||||
* [Series selectors](https://docs.victoriametrics.com/keyConcepts.html#filtering) accept multiple `or` filters. For example, `{env="prod",job="a" or env="dev",job="b"}`
|
||||
selects series with either `{env="prod",job="a"}` or `{env="dev",job="b"}` labels.
|
||||
selects series with `{env="prod",job="a"}` or `{env="dev",job="b"}` labels.
|
||||
See [these docs](https://docs.victoriametrics.com/keyConcepts.html#filtering-by-multiple-or-filters) for details.
|
||||
* Support for `group_left(*)` and `group_right(*)` for copying all the labels from time series on the `one` side
|
||||
of [many-to-one operations](https://prometheus.io/docs/prometheus/latest/querying/operators/#many-to-one-and-one-to-many-vector-matches).
|
||||
|
@ -243,7 +243,7 @@ from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.ht
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [count_over_time](#count_over_time).
|
||||
See also [count_over_time](#count_over_time) and [share_eq_over_time](#share_eq_over_time).
|
||||
|
||||
#### count_gt_over_time
|
||||
|
||||
|
@ -253,7 +253,7 @@ from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.ht
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [count_over_time](#count_over_time).
|
||||
See also [count_over_time](#count_over_time) and [share_gt_over_time](#share_gt_over_time).
|
||||
|
||||
#### count_le_over_time
|
||||
|
||||
|
@ -263,7 +263,7 @@ from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.ht
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [count_over_time](#count_over_time).
|
||||
See also [count_over_time](#count_over_time) and [share_le_over_time](#share_le_over_time).
|
||||
|
||||
#### count_ne_over_time
|
||||
|
||||
|
@ -743,7 +743,7 @@ This function is useful for calculating SLI and SLO. Example: `share_gt_over_tim
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [share_le_over_time](#share_le_over_time).
|
||||
See also [share_le_over_time](#share_le_over_time) and [count_gt_over_time](#count_gt_over_time).
|
||||
|
||||
#### share_le_over_time
|
||||
|
||||
|
@ -756,7 +756,7 @@ the share of time series values for the last 24 hours when memory usage was belo
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [share_gt_over_time](#share_gt_over_time).
|
||||
See also [share_gt_over_time](#share_gt_over_time) and [count_le_over_time](#count_le_over_time).
|
||||
|
||||
#### share_eq_over_time
|
||||
|
||||
|
@ -766,6 +766,8 @@ from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.ht
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [count_eq_over_time](#count_eq_over_time).
|
||||
|
||||
#### stale_samples_over_time
|
||||
|
||||
`stale_samples_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number
|
||||
|
@ -792,6 +794,33 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k
|
|||
|
||||
This function is supported by PromQL. See also [stddev_over_time](#stddev_over_time).
|
||||
|
||||
#### sum_eq_over_time
|
||||
|
||||
`sum_eq_over_time(series_selector[d], eq)` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values equal to `eq`
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [sum_over_time](#sum_over_time) and [count_eq_over_time](#count_eq_over_time).
|
||||
|
||||
#### sum_gt_over_time
|
||||
|
||||
`sum_gt_over_time(series_selector[d], gt)` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values bigger than `gt`
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [sum_over_time](#sum_over_time) and [count_gt_over_time](#count_gt_over_time).
|
||||
|
||||
#### sum_le_over_time
|
||||
|
||||
`sum_le_over_time(series_selector[d], le)` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values smaller than or equal to `le`
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [sum_over_time](#sum_over_time) and [count_le_over_time](#count_le_over_time).
|
||||
|
||||
#### sum_over_time
|
||||
|
||||
`sum_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values
|
||||
|
@ -810,7 +839,7 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k
|
|||
|
||||
#### timestamp
|
||||
|
||||
`timestamp(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the last raw sample
|
||||
`timestamp(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the last raw sample
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
@ -819,7 +848,7 @@ This function is supported by PromQL. See also [timestamp_with_name](#timestamp_
|
|||
|
||||
#### timestamp_with_name
|
||||
|
||||
`timestamp_with_name(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the last raw sample
|
||||
`timestamp_with_name(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the last raw sample
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are preserved in the resulting rollups.
|
||||
|
@ -828,7 +857,7 @@ See also [timestamp](#timestamp).
|
|||
|
||||
#### tfirst_over_time
|
||||
|
||||
`tfirst_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the first raw sample
|
||||
`tfirst_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the first raw sample
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
@ -837,7 +866,7 @@ See also [first_over_time](#first_over_time).
|
|||
|
||||
#### tlast_change_over_time
|
||||
|
||||
`tlast_change_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the last change
|
||||
`tlast_change_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the last change
|
||||
per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) on the given lookbehind window `d`.
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
@ -852,7 +881,7 @@ See also [tlast_change_over_time](#tlast_change_over_time).
|
|||
|
||||
#### tmax_over_time
|
||||
|
||||
`tmax_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the raw sample
|
||||
`tmax_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the raw sample
|
||||
with the maximum value on the given lookbehind window `d`. It is calculated independently per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
|
@ -862,7 +891,7 @@ See also [max_over_time](#max_over_time).
|
|||
|
||||
#### tmin_over_time
|
||||
|
||||
`tmin_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds for the raw sample
|
||||
`tmin_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which returns the timestamp in seconds with millisecond precision for the raw sample
|
||||
with the minimum value on the given lookbehind window `d`. It is calculated independently per each time series returned
|
||||
from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
|
|
|
@ -1,15 +1,15 @@
|
|||
import React, { FC, useMemo, useRef, useState } from "preact/compat";
|
||||
import { getTimezoneList, getUTCByTimezone } from "../../../../utils/time";
|
||||
import { getBrowserTimezone, getTimezoneList, getUTCByTimezone } from "../../../../utils/time";
|
||||
import { ArrowDropDownIcon } from "../../../Main/Icons";
|
||||
import classNames from "classnames";
|
||||
import Popper from "../../../Main/Popper/Popper";
|
||||
import Accordion from "../../../Main/Accordion/Accordion";
|
||||
import dayjs from "dayjs";
|
||||
import TextField from "../../../Main/TextField/TextField";
|
||||
import { Timezone } from "../../../../types";
|
||||
import "./style.scss";
|
||||
import useDeviceDetect from "../../../../hooks/useDeviceDetect";
|
||||
import useBoolean from "../../../../hooks/useBoolean";
|
||||
import WarningTimezone from "./WarningTimezone";
|
||||
|
||||
interface TimezonesProps {
|
||||
timezoneState: string;
|
||||
|
@ -18,9 +18,12 @@ interface TimezonesProps {
|
|||
}
|
||||
|
||||
interface PinnedTimezone extends Timezone {
|
||||
title: string
|
||||
title: string;
|
||||
isInvalid?: boolean;
|
||||
}
|
||||
|
||||
const browserTimezone = getBrowserTimezone();
|
||||
|
||||
const Timezones: FC<TimezonesProps> = ({ timezoneState, defaultTimezone, onChange }) => {
|
||||
const { isMobile } = useDeviceDetect();
|
||||
const timezones = getTimezoneList();
|
||||
|
@ -41,9 +44,10 @@ const Timezones: FC<TimezonesProps> = ({ timezoneState, defaultTimezone, onChang
|
|||
utc: defaultTimezone ? getUTCByTimezone(defaultTimezone) : "UTC"
|
||||
},
|
||||
{
|
||||
title: `Browser Time (${dayjs.tz.guess()})`,
|
||||
region: dayjs.tz.guess(),
|
||||
utc: getUTCByTimezone(dayjs.tz.guess())
|
||||
title: browserTimezone.title,
|
||||
region: browserTimezone.region,
|
||||
utc: getUTCByTimezone(browserTimezone.region),
|
||||
isInvalid: !browserTimezone.isValid
|
||||
},
|
||||
{
|
||||
title: "UTC (Coordinated Universal Time)",
|
||||
|
@ -132,7 +136,7 @@ const Timezones: FC<TimezonesProps> = ({ timezoneState, defaultTimezone, onChang
|
|||
className="vm-timezones-item vm-timezones-list-group-options__item"
|
||||
onClick={createHandlerSetTimezone(t)}
|
||||
>
|
||||
<div className="vm-timezones-item__title">{t.title}</div>
|
||||
<div className="vm-timezones-item__title">{t.title}{t.isInvalid && <WarningTimezone/>}</div>
|
||||
<div className="vm-timezones-item__utc">{t.utc}</div>
|
||||
</div>
|
||||
))}
|
||||
|
|
|
@ -0,0 +1,16 @@
|
|||
import React, { FC } from "preact/compat";
|
||||
import Tooltip from "../../../Main/Tooltip/Tooltip";
|
||||
import { WarningIcon } from "../../../Main/Icons";
|
||||
|
||||
const waringText = "Browser timezone is not recognized, supported, or could not be determined.";
|
||||
|
||||
const WarningTimezone: FC = () => {
|
||||
|
||||
return (
|
||||
<Tooltip title={warningText}>
|
||||
<WarningIcon/>
|
||||
</Tooltip>
|
||||
);
|
||||
};
|
||||
|
||||
export default WarningTimezone;
|
|
@ -16,7 +16,15 @@
|
|||
}
|
||||
|
||||
&__title {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: $padding-small;
|
||||
text-transform: capitalize;
|
||||
|
||||
svg {
|
||||
width: 14px;
|
||||
color: $color-warning;
|
||||
}
|
||||
}
|
||||
|
||||
&__utc {
|
||||
|
|
|
@ -4,9 +4,8 @@ import { useFetchQueryOptions } from "../../../hooks/useFetchQueryOptions";
|
|||
import { getTextWidth } from "../../../utils/uplot";
|
||||
import { escapeRegexp } from "../../../utils/regexp";
|
||||
import useGetMetricsQL from "../../../hooks/useGetMetricsQL";
|
||||
import { RefreshIcon } from "../../Main/Icons";
|
||||
import { QueryContextType } from "../../../types";
|
||||
import { AUTOCOMPLETE_LIMITS, AUTOCOMPLETE_MIN_SYMBOLS } from "../../../constants/queryAutocomplete";
|
||||
import { AUTOCOMPLETE_LIMITS } from "../../../constants/queryAutocomplete";
|
||||
|
||||
interface QueryEditorAutocompleteProps {
|
||||
value: string;
|
||||
|
@ -26,22 +25,27 @@ const QueryEditorAutocomplete: FC<QueryEditorAutocompleteProps> = ({
|
|||
const [leftOffset, setLeftOffset] = useState(0);
|
||||
const metricsqlFunctions = useGetMetricsQL();
|
||||
|
||||
const exprLastPart = useMemo(() => {
|
||||
const parts = value.split("}");
|
||||
return parts[parts.length - 1];
|
||||
}, [value]);
|
||||
|
||||
const metric = useMemo(() => {
|
||||
const regexp = /\b[^{}(),\s]+(?={|$)/g;
|
||||
const match = value.match(regexp);
|
||||
const match = exprLastPart.match(regexp);
|
||||
return match ? match[0] : "";
|
||||
}, [value]);
|
||||
}, [exprLastPart]);
|
||||
|
||||
const label = useMemo(() => {
|
||||
const regexp = /[a-z_:-][\w\-.:/]*\b(?=\s*(=|!=|=~|!~))/g;
|
||||
const match = value.match(regexp);
|
||||
const match = exprLastPart.match(regexp);
|
||||
return match ? match[match.length - 1] : "";
|
||||
}, [value]);
|
||||
}, [exprLastPart]);
|
||||
|
||||
const context = useMemo(() => {
|
||||
if (!value) return QueryContextType.empty;
|
||||
if (!value || value.endsWith("}")) return QueryContextType.empty;
|
||||
|
||||
const labelRegexp = /\{[^}]*?(\w+)$/gm;
|
||||
const labelRegexp = /\{[^}]*?(\w+)*$/gm;
|
||||
const labelValueRegexp = new RegExp(`(${escapeRegexp(metric)})?{?.+${escapeRegexp(label)}(=|!=|=~|!~)"?([^"]*)$`, "g");
|
||||
|
||||
switch (true) {
|
||||
|
@ -88,6 +92,13 @@ const QueryEditorAutocomplete: FC<QueryEditorAutocompleteProps> = ({
|
|||
const beforeValueByContext = value.substring(0, startIndexOfValueByContext);
|
||||
const afterValueByContext = value.substring(endIndexOfValueByContext);
|
||||
|
||||
// Add quotes around the value if the context is labelValue
|
||||
if (context === QueryContextType.labelValue) {
|
||||
const quote = "\"";
|
||||
const needsQuote = /(?:=|!=|=~|!~)$/.test(beforeValueByContext);
|
||||
insert = `${needsQuote ? quote : ""}${insert}`;
|
||||
}
|
||||
|
||||
// Assemble the new value with the inserted text
|
||||
const newVal = `${beforeValueByContext}${insert}${afterValueByContext}`;
|
||||
onSelect(newVal);
|
||||
|
@ -109,11 +120,12 @@ const QueryEditorAutocomplete: FC<QueryEditorAutocompleteProps> = ({
|
|||
return (
|
||||
<>
|
||||
<Autocomplete
|
||||
loading={loading}
|
||||
disabledFullScreen
|
||||
value={valueByContext}
|
||||
options={options?.length < AUTOCOMPLETE_LIMITS.queryLimit ? options : []}
|
||||
options={options}
|
||||
anchor={anchorEl}
|
||||
minLength={AUTOCOMPLETE_MIN_SYMBOLS[context]}
|
||||
minLength={0}
|
||||
offset={{ top: 0, left: leftOffset }}
|
||||
onSelect={handleSelect}
|
||||
onFoundOptions={onFoundOptions}
|
||||
|
@ -122,7 +134,6 @@ const QueryEditorAutocomplete: FC<QueryEditorAutocompleteProps> = ({
|
|||
message: "Please, specify the query more precisely."
|
||||
}}
|
||||
/>
|
||||
{loading && <div className="vm-query-editor-autocomplete"><RefreshIcon/></div>}
|
||||
</>
|
||||
);
|
||||
};
|
||||
|
|
|
@ -2,20 +2,4 @@
|
|||
|
||||
.vm-query-editor {
|
||||
position: relative;
|
||||
|
||||
&-autocomplete {
|
||||
position: absolute;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-items: center;
|
||||
top: 0;
|
||||
bottom: 0;
|
||||
right: $padding-global;
|
||||
width: 12px;
|
||||
height: 100%;
|
||||
color: $color-text-secondary;
|
||||
z-index: 2;
|
||||
animation: half-circle-spinner-animation 1s infinite linear, vm-fade 0.5s ease-in;
|
||||
pointer-events: none;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -2,7 +2,7 @@ import React, { FC, Ref, useCallback, useEffect, useMemo, useRef, useState, JSX
|
|||
import classNames from "classnames";
|
||||
import Popper from "../Popper/Popper";
|
||||
import "./style.scss";
|
||||
import { DoneIcon } from "../Icons";
|
||||
import { DoneIcon, RefreshIcon } from "../Icons";
|
||||
import useDeviceDetect from "../../../hooks/useDeviceDetect";
|
||||
import useBoolean from "../../../hooks/useBoolean";
|
||||
import useEventListener from "../../../hooks/useEventListener";
|
||||
|
@ -27,9 +27,11 @@ interface AutocompleteProps {
|
|||
disabledFullScreen?: boolean
|
||||
offset?: {top: number, left: number}
|
||||
maxDisplayResults?: {limit: number, message?: string}
|
||||
loading?: boolean;
|
||||
onSelect: (val: string) => void
|
||||
onOpenAutocomplete?: (val: boolean) => void
|
||||
onFoundOptions?: (val: AutocompleteOptions[]) => void
|
||||
onChangeWrapperRef?: (elementRef: Ref<HTMLDivElement>) => void
|
||||
}
|
||||
|
||||
enum FocusType {
|
||||
|
@ -50,9 +52,11 @@ const Autocomplete: FC<AutocompleteProps> = ({
|
|||
disabledFullScreen,
|
||||
offset,
|
||||
maxDisplayResults,
|
||||
loading,
|
||||
onSelect,
|
||||
onOpenAutocomplete,
|
||||
onFoundOptions
|
||||
onFoundOptions,
|
||||
onChangeWrapperRef
|
||||
}) => {
|
||||
const { isMobile } = useDeviceDetect();
|
||||
const wrapperEl = useRef<HTMLDivElement>(null);
|
||||
|
@ -166,6 +170,10 @@ const Autocomplete: FC<AutocompleteProps> = ({
|
|||
onFoundOptions && onFoundOptions(hideFoundedOptions ? [] : foundOptions);
|
||||
}, [foundOptions, hideFoundedOptions]);
|
||||
|
||||
useEffect(() => {
|
||||
onChangeWrapperRef && onChangeWrapperRef(wrapperEl);
|
||||
}, [wrapperEl]);
|
||||
|
||||
return (
|
||||
<Popper
|
||||
open={openAutocomplete}
|
||||
|
@ -184,6 +192,7 @@ const Autocomplete: FC<AutocompleteProps> = ({
|
|||
})}
|
||||
ref={wrapperEl}
|
||||
>
|
||||
{loading && <div className="vm-autocomplete__loader"><RefreshIcon/><span>Loading...</span></div>}
|
||||
{displayNoOptionsText && <div className="vm-autocomplete__no-options">{noOptionsText}</div>}
|
||||
{!hideFoundedOptions && foundOptions.map((option, i) =>
|
||||
<div
|
||||
|
|
|
@ -16,6 +16,22 @@
|
|||
color: $color-text-disabled;
|
||||
}
|
||||
|
||||
&__loader {
|
||||
display: grid;
|
||||
grid-template-columns: 14px auto;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
gap: $padding-small;
|
||||
padding: $padding-global;
|
||||
color: $color-text-secondary;
|
||||
z-index: 2;
|
||||
pointer-events: none;
|
||||
|
||||
svg {
|
||||
animation: half-circle-spinner-animation 1s infinite linear, vm-fade 0.5s ease-in;
|
||||
}
|
||||
}
|
||||
|
||||
&-info,
|
||||
&-message {
|
||||
padding: $padding-global;
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
import React, { FC, useEffect, useMemo, useRef, useState, } from "preact/compat";
|
||||
import React, { FC, Ref, useEffect, useMemo, useRef, useState, } from "preact/compat";
|
||||
import classNames from "classnames";
|
||||
import { ArrowDropDownIcon, CloseIcon } from "../Icons";
|
||||
import { FormEvent, MouseEvent } from "react";
|
||||
|
@ -8,6 +8,7 @@ import "./style.scss";
|
|||
import useDeviceDetect from "../../../hooks/useDeviceDetect";
|
||||
import MultipleSelectedValue from "./MultipleSelectedValue/MultipleSelectedValue";
|
||||
import useEventListener from "../../../hooks/useEventListener";
|
||||
import useClickOutside from "../../../hooks/useClickOutside";
|
||||
|
||||
interface SelectProps {
|
||||
value: string | string[]
|
||||
|
@ -39,6 +40,7 @@ const Select: FC<SelectProps> = ({
|
|||
|
||||
const [search, setSearch] = useState("");
|
||||
const autocompleteAnchorEl = useRef<HTMLDivElement>(null);
|
||||
const [wrapperRef, setWrapperRef] = useState<Ref<HTMLDivElement> | null>(null);
|
||||
const [openList, setOpenList] = useState(false);
|
||||
|
||||
const inputRef = useRef<HTMLInputElement>(null);
|
||||
|
@ -76,6 +78,7 @@ const Select: FC<SelectProps> = ({
|
|||
};
|
||||
|
||||
const handleSelected = (val: string) => {
|
||||
setSearch("");
|
||||
onChange(val);
|
||||
if (!isMultiple) handleCloseList();
|
||||
if (isMultiple && inputRef.current) inputRef.current.focus();
|
||||
|
@ -110,6 +113,7 @@ const Select: FC<SelectProps> = ({
|
|||
}, [autofocus, inputRef]);
|
||||
|
||||
useEventListener("keyup", handleKeyUp);
|
||||
useClickOutside(autocompleteAnchorEl, handleCloseList, wrapperRef);
|
||||
|
||||
return (
|
||||
<div
|
||||
|
@ -172,6 +176,7 @@ const Select: FC<SelectProps> = ({
|
|||
noOptionsText={noOptionsText}
|
||||
onSelect={handleSelected}
|
||||
onOpenAutocomplete={setOpenList}
|
||||
onChangeWrapperRef={setWrapperRef}
|
||||
/>
|
||||
</div>
|
||||
);
|
||||
|
|
|
@ -1,14 +1,5 @@
|
|||
import { QueryContextType } from "../types";
|
||||
|
||||
export const AUTOCOMPLETE_LIMITS = {
|
||||
displayResults: 50,
|
||||
queryLimit: 1000,
|
||||
cacheLimit: 1000,
|
||||
};
|
||||
|
||||
export const AUTOCOMPLETE_MIN_SYMBOLS = {
|
||||
[QueryContextType.metricsql]: 2,
|
||||
[QueryContextType.empty]: 2,
|
||||
[QueryContextType.label]: 0,
|
||||
[QueryContextType.labelValue]: 0,
|
||||
};
|
||||
|
|
|
@ -20,7 +20,7 @@ const useReadyChart = (setPlotScale: SetMinMax) => {
|
|||
const onReadyChart = (u: uPlot): void => {
|
||||
const handleInteractionStart = (e: MouseEvent | TouchEvent) => {
|
||||
const dragByMouse = e instanceof MouseEvent && isLiftClickWithMeta(e);
|
||||
const dragByTouch = e instanceof TouchEvent && e.touches.length > 1;
|
||||
const dragByTouch = window.TouchEvent && e instanceof TouchEvent && e.touches.length > 1;
|
||||
if (dragByMouse || dragByTouch) {
|
||||
dragChart({ u, e });
|
||||
}
|
||||
|
|
|
@ -7,7 +7,7 @@ type Event = MouseEvent | TouchEvent;
|
|||
const useClickOutside = <T extends HTMLElement = HTMLElement>(
|
||||
ref: RefObject<T>,
|
||||
handler: (event: Event) => void,
|
||||
preventRef?: RefObject<T>
|
||||
preventRef?: RefObject<T> | null
|
||||
) => {
|
||||
const listener = useCallback((event: Event) => {
|
||||
const el = ref?.current;
|
||||
|
|
|
@ -4,6 +4,7 @@ import { useAppState } from "../state/common/StateContext";
|
|||
import { useTimeDispatch } from "../state/time/TimeStateContext";
|
||||
import { getFromStorage } from "../utils/storage";
|
||||
import dayjs from "dayjs";
|
||||
import { getBrowserTimezone } from "../utils/time";
|
||||
|
||||
const disabledDefaultTimezone = Boolean(getFromStorage("DISABLED_DEFAULT_TIMEZONE"));
|
||||
|
||||
|
@ -15,7 +16,7 @@ const useFetchDefaultTimezone = () => {
|
|||
const [error, setError] = useState<ErrorTypes | string>("");
|
||||
|
||||
const setTimezone = (timezoneStr: string) => {
|
||||
const timezone = timezoneStr.toLowerCase() === "local" ? dayjs.tz.guess() : timezoneStr;
|
||||
const timezone = timezoneStr.toLowerCase() === "local" ? getBrowserTimezone().region : timezoneStr;
|
||||
try {
|
||||
dayjs().tz(timezone).isValid();
|
||||
timeDispatch({ type: "SET_DEFAULT_TIMEZONE", payload: timezone });
|
||||
|
|
|
@ -47,7 +47,7 @@ export const useFetchQueryOptions = ({ valueByContext, metric, label, context }:
|
|||
|
||||
const [loading, setLoading] = useState(false);
|
||||
const [value, setValue] = useState(valueByContext);
|
||||
const debouncedSetValue = debounce(setValue, 800);
|
||||
const debouncedSetValue = debounce(setValue, 500);
|
||||
useEffect(() => {
|
||||
debouncedSetValue(valueByContext);
|
||||
return debouncedSetValue.cancel;
|
||||
|
@ -80,7 +80,7 @@ export const useFetchQueryOptions = ({ valueByContext, metric, label, context }:
|
|||
};
|
||||
|
||||
const fetchData = async ({ value, urlSuffix, setter, type, params }: FetchDataArgs) => {
|
||||
if (!value) return;
|
||||
if (!value && type === TypeData.metric) return;
|
||||
abortControllerRef.current.abort();
|
||||
abortControllerRef.current = new AbortController();
|
||||
const { signal } = abortControllerRef.current;
|
||||
|
|
|
@ -6,10 +6,10 @@ import {
|
|||
getDurationFromPeriod,
|
||||
getTimeperiodForDuration,
|
||||
getRelativeTime,
|
||||
setTimezone
|
||||
setTimezone,
|
||||
getBrowserTimezone
|
||||
} from "../../utils/time";
|
||||
import { getQueryStringValue } from "../../utils/query-string";
|
||||
import dayjs from "dayjs";
|
||||
import { getFromStorage, saveToStorage } from "../../utils/storage";
|
||||
|
||||
export interface TimeState {
|
||||
|
@ -29,7 +29,7 @@ export type TimeAction =
|
|||
| { type: "SET_TIMEZONE", payload: string }
|
||||
| { type: "SET_DEFAULT_TIMEZONE", payload: string }
|
||||
|
||||
const timezone = getFromStorage("TIMEZONE") as string || dayjs.tz.guess();
|
||||
const timezone = getFromStorage("TIMEZONE") as string || getBrowserTimezone().region;
|
||||
setTimezone(timezone);
|
||||
|
||||
const defaultDuration = getQueryStringValue("g0.range_input") as string;
|
||||
|
|
|
@ -227,3 +227,22 @@ export const getTimezoneList = (search = "") => {
|
|||
export const setTimezone = (timezone: string) => {
|
||||
dayjs.tz.setDefault(timezone);
|
||||
};
|
||||
|
||||
const isValidTimezone = (timezone: string) => {
|
||||
try {
|
||||
dayjs().tz(timezone);
|
||||
return true;
|
||||
} catch (e) {
|
||||
return false;
|
||||
}
|
||||
};
|
||||
|
||||
export const getBrowserTimezone = () => {
|
||||
const timezone = dayjs.tz.guess();
|
||||
const isValid = isValidTimezone(timezone);
|
||||
return {
|
||||
isValid,
|
||||
title: isValid ? `Browser Time (${timezone})` : "Browser timezone (UTC)",
|
||||
region: isValid ? timezone : "UTC",
|
||||
};
|
||||
};
|
||||
|
|
|
@ -1,4 +1,32 @@
|
|||
{
|
||||
"__inputs": [],
|
||||
"__elements": {},
|
||||
"__requires": [
|
||||
{
|
||||
"type": "grafana",
|
||||
"id": "grafana",
|
||||
"name": "Grafana",
|
||||
"version": "10.3.1"
|
||||
},
|
||||
{
|
||||
"type": "datasource",
      "id": "prometheus",
      "name": "Prometheus",
      "version": "1.0.0"
    },
    {
      "type": "panel",
      "id": "stat",
      "name": "Stat",
      "version": ""
    },
    {
      "type": "panel",
      "id": "timeseries",
      "name": "Time series",
      "version": ""
    }
  ],
  "annotations": {
    "list": [
      {
@@ -22,7 +50,7 @@
       {
         "datasource": {
           "type": "prometheus",
-          "uid": "${ds}"
+          "uid": "$ds"
         },
         "enable": true,
         "expr": "sum(vm_app_version{job=~\"$job\", instance=~\"$instance\"}) by(short_version) unless (sum(vm_app_version{job=~\"$job\", instance=~\"$instance\"} offset 20m) by(short_version))",
@@ -35,7 +63,7 @@
       {
         "datasource": {
           "type": "prometheus",
-          "uid": "${ds}"
+          "uid": "$ds"
         },
         "enable": true,
         "expr": "sum(changes(vm_app_start_timestamp{job=~\"$job\", instance=~\"$instance\"})) by(job)",
@@ -49,6 +77,7 @@
   "editable": true,
   "fiscalYearStartMonth": 0,
   "graphTooltip": 0,
   "id": null,
   "links": [],
   "liveNow": false,
   "panels": [

[The remaining per-panel hunks of the VictoriaLogs dashboard are collapsed here. They apply the Grafana 9.2.7 -> 10.3.1 dashboard-format upgrade to every panel without touching panel titles, descriptions or queries (vl_rows_ingested_total, vl_http_requests_total, vl_compressed_data_size_bytes, vl_data_size_bytes, vl_streams_created_total, go_* and process_* metrics):
 - every datasource reference changes "uid": "${ds}" to "uid": "$ds";
 - every "pluginVersion": "9.2.7" becomes "pluginVersion": "10.3.1";
 - stat panels gain "showPercentChange": false and "wideLayout": true;
 - timeseries panels gain "axisBorderShow": false and "insertNulls": false;
 - every "unit": "..." gains a trailing "unitScale": true;
 - threshold steps change {"color": "green"} to {"color": "green", "value": null}.]

@@ -2505,16 +2610,16 @@
       "type": "timeseries"
     }
   ],
-  "schemaVersion": 37,
-  "style": "dark",
+  "refresh": "",
+  "schemaVersion": 39,
   "tags": [],
   "templating": {
     "list": [
       {
         "current": {
           "selected": true,
-          "text": "default",
-          "value": "default"
+          "text": "VictoriaMetrics",
+          "value": "P4169E866C3094E38"
         },
         "hide": 0,
         "includeAll": false,
@@ -2529,14 +2634,10 @@
         "type": "datasource"
       },
       {
-        "current": {
-          "selected": false,
-          "text": "victorialogs",
-          "value": "victorialogs"
-        },
+        "current": {},
         "datasource": {
           "type": "prometheus",
-          "uid": "${ds}"
+          "uid": "$ds"
         },
         "definition": "label_values(vm_app_version{version=~\"victoria-logs-.*\"}, job)",
         "hide": 0,
@@ -2555,14 +2656,10 @@
         "type": "query"
       },
       {
-        "current": {
-          "selected": false,
-          "text": "victorialogs:9428",
-          "value": "victorialogs:9428"
-        },
+        "current": {},
         "datasource": {
           "type": "prometheus",
-          "uid": "${ds}"
+          "uid": "$ds"
         },
         "definition": "label_values(vm_app_version{job=~\"$job\"}, instance)",
         "hide": 0,
@@ -2601,6 +2698,6 @@
   "timezone": "",
   "title": "VictoriaLogs",
   "uid": "OqPIZTX4z",
-  "version": 3,
+  "version": 1,
   "weekStart": ""
 }
File diff suppressed because it is too large (five files)
@@ -7,7 +7,7 @@
     "type": "grafana",
     "id": "grafana",
     "name": "Grafana",
-    "version": "9.2.7"
+    "version": "10.3.1"
   },
   {
     "type": "datasource",

[The remaining per-panel hunks of this vmalert dashboard (tags "victoriametrics", "vmalert") are collapsed here. They carry the same Grafana 9.2.7 -> 10.3.1 format upgrade: "pluginVersion" bumped to "10.3.1", "unitScale": true added next to every "unit", stat panels gain "showPercentChange": false and "wideLayout": true, timeseries panels gain "axisBorderShow": false and "insertNulls": false, table panels replace "displayMode": "auto" with "cellOptions": {"type": "auto"} and gain "cellHeight": "sm", threshold steps are normalized, every "uid": "${ds}" becomes "uid": "$ds", and panel "gridPos" y-coordinates are shifted to make room for the panel added below. Queries and descriptions are unchanged.]

@@ -2018,6 +2077,115 @@
       ],
       "title": "Goroutines ($instance)",
       "type": "timeseries"
     },
+    {
+      "datasource": {
+        "type": "victoriametrics-datasource",
+        "uid": "$ds"
+      },
+      "description": "Shows the percent of CPU spent on garbage collection.\n\nIf % is high, then CPU usage can be decreased by changing GOGC to higher values. Increasing GOGC value will increase memory usage, and decrease CPU usage.\n\nTry searching for keyword `GOGC` at https://docs.victoriametrics.com/troubleshooting/ ",
+      "fieldConfig": { ... (standard timeseries field config with "unit": "percentunit", "unitScale": true) ... },
+      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 27 },
+      "id": 59,
+      "links": [],
+      "options": {
+        "legend": { "calcs": ["mean", "lastNotNull", "max"], "displayMode": "table", "placement": "bottom", "showLegend": true },
+        "tooltip": { "mode": "multi", "sort": "desc" }
+      },
+      "pluginVersion": "9.2.6",
+      "targets": [
+        {
+          "datasource": { "type": "victoriametrics-datasource", "uid": "$ds" },
+          "editorMode": "code",
+          "expr": "max(\n rate(go_gc_cpu_seconds_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval]) \n / rate(process_cpu_seconds_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])\n ) by(job)",
+          "format": "time_series",
+          "intervalFactor": 2,
+          "legendFormat": "__auto",
+          "range": true,
+          "refId": "A"
+        }
+      ],
+      "title": "CPU spent on GC ($instance)",
+      "type": "timeseries"
+    }
   ],
   "targets": [
@@ -3093,9 +3261,8 @@
     "type": "row"
   }
 ],
-"refresh": false,
-"schemaVersion": 37,
-"style": "dark",
+"refresh": "",
+"schemaVersion": 39,
 "tags": [
   "victoriametrics",
   "vmalert"
@@ -3104,9 +3271,9 @@
   "list": [
     {
       "current": {
-        "selected": false,
-        "text": "VictoriaMetrics - cluster",
-        "value": "VictoriaMetrics - cluster"
+        "selected": true,
+        "text": "VictoriaMetrics",
+        "value": "P4169E866C3094E38"
       },
       "hide": 0,
       "includeAll": false,
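The new "CPU spent on GC" panel's description points at raising `GOGC` when this percentage is high. As a minimal sketch of what that looks like in practice (not part of this diff; the service name, image tag and `GOGC` value are assumptions), the variable can simply be passed to the container in a compose file:

```yaml
# Hypothetical compose snippet -- not from this repository.
services:
  victoriametrics:
    image: victoriametrics/victoria-metrics:v1.97.1  # assumed tag
    environment:
      # Higher GOGC trades more memory for less CPU spent on garbage collection,
      # which is exactly what the new "CPU spent on GC" panel measures.
      - GOGC=200
```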
File diff suppressed because it is too large
@@ -6,7 +6,7 @@
     "type": "grafana",
     "id": "grafana",
     "name": "Grafana",
-    "version": "9.2.7"
+    "version": "10.3.1"
   },
   {
     "type": "datasource",

[This file receives the same set of changes as the previous vmalert dashboard: the Grafana 9.2.7 -> 10.3.1 per-panel format upgrade ("pluginVersion": "10.3.1", "unitScale": true, "showPercentChange": false and "wideLayout": true on stat panels, "axisBorderShow": false and "insertNulls": false on timeseries panels, "cellOptions" on table panels), adjusted "gridPos" y-coordinates, "refresh": "" and "schemaVersion": 39 instead of "schemaVersion": 37 with "style": "dark", and the datasource template variable's "current" switched to {"selected": true, "text": "VictoriaMetrics", "value": "P4169E866C3094E38"}. The notable difference is the added panel, which here uses the plain Prometheus datasource type:]

@@ -2017,6 +2076,115 @@
       ],
       "title": "Goroutines ($instance)",
       "type": "timeseries"
     },
+    {
+      "datasource": {
+        "type": "prometheus",
+        "uid": "$ds"
+      },
+      "description": "Shows the percent of CPU spent on garbage collection.\n\nIf % is high, then CPU usage can be decreased by changing GOGC to higher values. Increasing GOGC value will increase memory usage, and decrease CPU usage.\n\nTry searching for keyword `GOGC` at https://docs.victoriametrics.com/troubleshooting/ ",
+      "fieldConfig": { ... (standard timeseries field config with "unit": "percentunit", "unitScale": true) ... },
+      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 27 },
+      "id": 59,
+      "links": [],
+      "options": {
+        "legend": { "calcs": ["mean", "lastNotNull", "max"], "displayMode": "table", "placement": "bottom", "showLegend": true },
+        "tooltip": { "mode": "multi", "sort": "desc" }
+      },
+      "pluginVersion": "9.2.6",
+      "targets": [
+        {
+          "datasource": { "type": "prometheus", "uid": "$ds" },
+          "editorMode": "code",
+          "expr": "max(\n rate(go_gc_cpu_seconds_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval]) \n / rate(process_cpu_seconds_total{job=~\"$job\", instance=~\"$instance\"}[$__rate_interval])\n ) by(job)",
+          "format": "time_series",
+          "intervalFactor": 2,
+          "legendFormat": "__auto",
+          "range": true,
+          "refId": "A"
+        }
+      ],
+      "title": "CPU spent on GC ($instance)",
+      "type": "timeseries"
+    }
   ],
   "targets": [
@@ -5,7 +5,7 @@ DOCKER_NAMESPACE ?= victoriametrics
 ROOT_IMAGE ?= alpine:3.19.1
 CERTS_IMAGE := alpine:3.19.1
 
-GO_BUILDER_IMAGE := golang:1.21.6-alpine
+GO_BUILDER_IMAGE := golang:1.22.0-alpine
 BUILDER_IMAGE := local/builder:2.0.0-$(shell echo $(GO_BUILDER_IMAGE) | tr :/ __)-1
 BASE_IMAGE := local/base:1.1.4-$(shell echo $(ROOT_IMAGE) | tr :/ __)-$(shell echo $(CERTS_IMAGE) | tr :/ __)
 DOCKER ?= docker
@@ -7,32 +7,24 @@ and [Grafana](https://grafana.com/).
 For starting the docker-compose environment ensure you have docker installed and running and access to the Internet.
 **All commands should be executed from the root directory of [the repo](https://github.com/VictoriaMetrics/VictoriaMetrics).**
 
-To spin-up environment with VictoriaMetrics components run one of the following commands:
-```
-make docker-single-up    # start single server VictoriaMetrics
-or
-make docker-cluster-up   # start cluster VictoriaMetrics
-```
-
-To shut down the docker-compose environment run one the following commands:
-```
-make docker-single-down  # shutdown single server VictoriaMetrics
-or
-make docker-cluster-down # shutdown cluster VictoriaMetrics
-```
-
-Optionally, environment with [VictoriaMetrics Grafana datasource](https://github.com/VictoriaMetrics/grafana-datasource)
-can be started with the following commands:
-```
-make docker-single-vm-datasource-up    # start single server
-make docker-single-vm-datasource-down  # shut down single server
-
-make docker-cluster-vm-datasource-up   # start cluster
-make docker-cluster-vm-datasource-down # shutdown cluster
-```
+* [VictoriaMetrics single server](#victoriametrics-single-server)
+* [VictoriaMetrics cluster](#victoriametrics-cluster)
+* [vmagent](#vmagent)
+* [vmauth](#vmauth)
+* [vmalert](#vmalert)
+* [alertmanager](#alertmanager)
+* [Alerts](#alerts)
+* [Grafana](#grafana)
+* [VictoriaLogs](#victoriaLogs-server)
+
 
 ## VictoriaMetrics single server
 
+To spin-up environment with VictoriaMetrics single server run the following command:
+```
+make docker-single-up
+```
+
 VictoriaMetrics will be accessible on the following ports:
 
 * `--graphiteListenAddr=:2003`
@@ -53,9 +45,19 @@ use link [http://localhost:8428/vmui](http://localhost:8428/vmui).
 
 To access `vmalert` use link [http://localhost:8428/vmalert](http://localhost:8428/vmalert/).
 
+To shutdown environment execute the following command:
+```
+make docker-single-down
+```
+
 
 ## VictoriaMetrics cluster
 
+To spin-up environment with VictoriaMetrics cluster run the following command:
+```
+make docker-cluster-up
+```
+
 VictoriaMetrics cluster environment consists of `vminsert`, `vmstorage` and `vmselect` components.
 `vminsert` has exposed port `:8480`, access to `vmselect` components goes through `vmauth` on port `:8427`,
 and the rest of components are available only inside the environment.
@@ -77,6 +79,11 @@ use link [http://localhost:8427/select/0/prometheus/vmui/](http://localhost:8427
 
 To access `vmalert` use link [http://localhost:8427/select/0/prometheus/vmalert/](http://localhost:8427/select/0/prometheus/vmalert/).
 
+To shutdown environment execute the following command:
+```
+make docker-cluster-down
+```
+
 ## vmagent
 
 vmagent is used for scraping and pushing time series to VictoriaMetrics instance.
@@ -127,9 +134,15 @@ Grafana is provisioned by default with following entities:
 
 Remember to pick `VictoriaMetrics - cluster` datasource when viewing `VictoriaMetrics - cluster` dashboard.
 
-If environment was started via `docker-single-vm-datasource-up` or `docker-cluster-vm-datasource-up`, then
-Grafana will have [VictoriaMetrics Grafana datasource](https://github.com/VictoriaMetrics/grafana-datasource)
-installed by default.
+Optionally, environment with [VictoriaMetrics Grafana datasource](https://github.com/VictoriaMetrics/grafana-datasource)
+can be started with the following commands:
+```
+make docker-single-vm-datasource-up    # start single server
+make docker-single-vm-datasource-down  # shut down single server
+
+make docker-cluster-vm-datasource-up   # start cluster
+make docker-cluster-vm-datasource-down # shutdown cluster
+```
 
 ## Alerts
 
@ -13,10 +13,7 @@ groups:
expr: |
vm_free_disk_space_bytes / ignoring(path)
(
(
rate(vm_rows_added_to_storage_total[1d]) -
ignoring(type) rate(vm_deduplicated_samples_total{type="merge"}[1d])
)
rate(vm_rows_added_to_storage_total[1d])
* scalar(
sum(vm_data_size_bytes{type!~"indexdb.*"}) /
sum(vm_rows{type!~"indexdb.*"})

@ -13,10 +13,7 @@ groups:
expr: |
vm_free_disk_space_bytes / ignoring(path)
(
(
rate(vm_rows_added_to_storage_total[1d]) -
ignoring(type) rate(vm_deduplicated_samples_total{type="merge"}[1d])
)
rate(vm_rows_added_to_storage_total[1d])
* scalar(
sum(vm_data_size_bytes{type!~"indexdb.*"}) /
sum(vm_rows{type!~"indexdb.*"})
@ -2,7 +2,7 @@ version: '3.5'
|
|||
services:
|
||||
vmagent:
|
||||
container_name: vmagent
|
||||
image: victoriametrics/vmagent:v1.97.0
|
||||
image: victoriametrics/vmagent:v1.97.1
|
||||
depends_on:
|
||||
- "vminsert"
|
||||
ports:
|
||||
|
@ -17,7 +17,7 @@ services:
|
|||
|
||||
grafana:
|
||||
container_name: grafana
|
||||
image: grafana/grafana:9.5.15
|
||||
image: grafana/grafana:10.3.1
|
||||
depends_on:
|
||||
- "vmauth"
|
||||
ports:
|
||||
|
@ -33,7 +33,7 @@ services:
|
|||
|
||||
vmstorage-1:
|
||||
container_name: vmstorage-1
|
||||
image: victoriametrics/vmstorage:v1.97.0-cluster
|
||||
image: victoriametrics/vmstorage:v1.97.1-cluster
|
||||
ports:
|
||||
- 8482
|
||||
- 8400
|
||||
|
@ -45,7 +45,7 @@ services:
|
|||
restart: always
|
||||
vmstorage-2:
|
||||
container_name: vmstorage-2
|
||||
image: victoriametrics/vmstorage:v1.97.0-cluster
|
||||
image: victoriametrics/vmstorage:v1.97.1-cluster
|
||||
ports:
|
||||
- 8482
|
||||
- 8400
|
||||
|
@ -58,7 +58,7 @@ services:
|
|||
|
||||
vminsert:
|
||||
container_name: vminsert
|
||||
image: victoriametrics/vminsert:v1.97.0-cluster
|
||||
image: victoriametrics/vminsert:v1.97.1-cluster
|
||||
depends_on:
|
||||
- "vmstorage-1"
|
||||
- "vmstorage-2"
|
||||
|
@ -71,7 +71,7 @@ services:
|
|||
|
||||
vmselect-1:
|
||||
container_name: vmselect-1
|
||||
image: victoriametrics/vmselect:v1.97.0-cluster
|
||||
image: victoriametrics/vmselect:v1.97.1-cluster
|
||||
depends_on:
|
||||
- "vmstorage-1"
|
||||
- "vmstorage-2"
|
||||
|
@ -85,7 +85,7 @@ services:
|
|||
|
||||
vmselect-2:
|
||||
container_name: vmselect-2
|
||||
image: victoriametrics/vmselect:v1.97.0-cluster
|
||||
image: victoriametrics/vmselect:v1.97.1-cluster
|
||||
depends_on:
|
||||
- "vmstorage-1"
|
||||
- "vmstorage-2"
|
||||
|
@ -99,7 +99,7 @@ services:
|
|||
|
||||
vmauth:
|
||||
container_name: vmauth
|
||||
image: victoriametrics/vmauth:v1.97.0
|
||||
image: victoriametrics/vmauth:v1.97.1
|
||||
depends_on:
|
||||
- "vmselect-1"
|
||||
- "vmselect-2"
|
||||
|
@ -114,7 +114,7 @@ services:
|
|||
|
||||
vmalert:
|
||||
container_name: vmalert
|
||||
image: victoriametrics/vmalert:v1.97.0
|
||||
image: victoriametrics/vmalert:v1.97.1
|
||||
depends_on:
|
||||
- "vmauth"
|
||||
ports:
|
||||
|
|
|
@ -2,7 +2,7 @@ version: "3.5"
|
|||
services:
|
||||
vmagent:
|
||||
container_name: vmagent
|
||||
image: victoriametrics/vmagent:v1.97.0
|
||||
image: victoriametrics/vmagent:v1.97.1
|
||||
depends_on:
|
||||
- "victoriametrics"
|
||||
ports:
|
||||
|
@ -18,7 +18,7 @@ services:
|
|||
restart: always
|
||||
victoriametrics:
|
||||
container_name: victoriametrics
|
||||
image: victoriametrics/victoria-metrics:v1.97.0
|
||||
image: victoriametrics/victoria-metrics:v1.97.1
|
||||
ports:
|
||||
- 8428:8428
|
||||
- 8089:8089
|
||||
|
@ -40,7 +40,7 @@ services:
|
|||
restart: always
|
||||
grafana:
|
||||
container_name: grafana
|
||||
image: grafana/grafana:9.5.15
|
||||
image: grafana/grafana:10.3.1
|
||||
depends_on:
|
||||
- "victoriametrics"
|
||||
ports:
|
||||
|
@ -58,7 +58,7 @@ services:
|
|||
restart: always
|
||||
vmalert:
|
||||
container_name: vmalert
|
||||
image: victoriametrics/vmalert:v1.97.0
|
||||
image: victoriametrics/vmalert:v1.97.1
|
||||
depends_on:
|
||||
- "victoriametrics"
|
||||
- "alertmanager"
|
||||
|
|
|
@ -2,7 +2,7 @@ version: "3.5"
|
|||
services:
|
||||
grafana:
|
||||
container_name: grafana
|
||||
image: grafana/grafana:9.5.15
|
||||
image: grafana/grafana:10.3.1
|
||||
depends_on:
|
||||
- "vmauth"
|
||||
ports:
|
||||
|
|
|
@ -2,7 +2,7 @@ version: "3.5"
|
|||
services:
|
||||
grafana:
|
||||
container_name: grafana
|
||||
image: grafana/grafana:9.5.15
|
||||
image: grafana/grafana:10.3.1
|
||||
depends_on:
|
||||
- "victoriametrics"
|
||||
ports:
|
||||
|
|
|
@ -2,14 +2,14 @@

Please read the "vmanomaly integration" guide first - [https://docs.victoriametrics.com/anomaly-detection/guides/guide-vmanomaly-vmalert.html](https://docs.victoriametrics.com/anomaly-detection/guides/guide-vmanomaly-vmalert.html)

To make this Docker compose file work, you MUST replace the content of [vmanomaly_license.txt](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/vmanomaly/vmanomaly-vmalert-guide/vmanomaly_license.txt) with a valid license.
To make this Docker compose file work, you MUST replace the content of [vmanomaly_license](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/vmanomaly/vmanomaly-integration/vmanomaly_license) with a valid license.

You can issue the [trial license here](https://victoriametrics.com/products/enterprise/trial/).

## How to run

1. Replace the content of `vmanomaly_license.txt` with your license
1. Replace the content of `vmanomaly_license` with your license
1. Run

```sh
@ -1,7 +1,7 @@
|
|||
services:
|
||||
vmagent:
|
||||
container_name: vmagent
|
||||
image: victoriametrics/vmagent:v1.97.0
|
||||
image: victoriametrics/vmagent:v1.97.1
|
||||
depends_on:
|
||||
- "victoriametrics"
|
||||
ports:
|
||||
|
@ -18,7 +18,7 @@ services:
|
|||
|
||||
victoriametrics:
|
||||
container_name: victoriametrics
|
||||
image: victoriametrics/victoria-metrics:v1.97.0
|
||||
image: victoriametrics/victoria-metrics:v1.97.1
|
||||
ports:
|
||||
- 8428:8428
|
||||
volumes:
|
||||
|
@ -51,7 +51,7 @@ services:
|
|||
|
||||
vmalert:
|
||||
container_name: vmalert
|
||||
image: victoriametrics/vmalert:v1.97.0
|
||||
image: victoriametrics/vmalert:v1.97.1
|
||||
depends_on:
|
||||
- "victoriametrics"
|
||||
ports:
|
||||
|
|
|
@ -12,7 +12,7 @@ reader:
  datasource_url: "http://victoriametrics:8428/"
  sampling_period: "60s"
  queries:
    node_cpu_rate: "quantile by (mode) (0.5, rate(node_cpu_seconds_total[5m]))"
    node_cpu_rate: "sum(rate(node_cpu_seconds_total[5m])) by (mode, instance, job)"

writer:
  datasource_url: "http://victoriametrics:8428/"
@ -1,5 +1,5 @@
|
|||
{
|
||||
"annotations": {
|
||||
"annotations": {
|
||||
"list": [
|
||||
{
|
||||
"builtIn": 1,
|
||||
|
@ -18,7 +18,7 @@
|
|||
"editable": true,
|
||||
"fiscalYearStartMonth": 0,
|
||||
"graphTooltip": 1,
|
||||
"links": [],
|
||||
"links": [],
|
||||
"liveNow": false,
|
||||
"panels": [
|
||||
{
|
||||
|
@ -40,7 +40,7 @@
|
|||
"showLineNumbers": false,
|
||||
"showMiniMap": false
|
||||
},
|
||||
"content": "If you don't see any data, please wait a few minutes. \n\nYou will see a lot of false positive anomalies when you run the guide for the first time. \nThe prediction must be more accurate if you provide vmanomaly 2w of data.\n\nEvery row represents information for one specific mode. \nThe query for anomaly detection is `quantile by (mode) (0.5, rate(node_cpu_seconds_total[5m]))`\nThis is a median (or 50% quantileof `rate` function over `node_cpu_seconds_total`)",
|
||||
"content": "If you don't observe any data initially, please wait a few minutes for it to appear. \n\nUpon the first running the guide (if there is not enough node_exporter monitoring data collected in your system), you may notice a significant number of false positive anomalies found. The predictions will become more accurate with at least two weeks' (full `fit_window`) worth of data provided to vmanomaly.\n\nEach row displays information for a distinct mode. The query used for anomaly detection is `sum(rate(node_cpu_seconds_total[5m])) by (mode, instance, job)`.\n",
|
||||
"mode": "markdown"
|
||||
},
|
||||
"pluginVersion": "10.2.1",
|
||||
|
@ -67,7 +67,7 @@
|
|||
"type": "prometheus",
|
||||
"uid": "VictoriaMetrics"
|
||||
},
|
||||
"description": "quantile by (mode) (0.5, rate(node_cpu_seconds_total[5m]))",
|
||||
"description": "sum(rate(node_cpu_seconds_total{mode=~\"$mode\"}[5m])) by (mode, instance,job)",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"color": {
|
||||
|
@ -90,12 +90,15 @@
|
|||
},
|
||||
"insertNulls": false,
|
||||
"lineInterpolation": "linear",
|
||||
"lineStyle": {
|
||||
"fill": "solid"
|
||||
},
|
||||
"lineWidth": 1,
|
||||
"pointSize": 5,
|
||||
"pointSize": 1,
|
||||
"scaleDistribution": {
|
||||
"type": "linear"
|
||||
},
|
||||
"showPoints": "auto",
|
||||
"showPoints": "never",
|
||||
"spanNulls": false,
|
||||
"stacking": {
|
||||
"group": "A",
|
||||
|
@ -106,6 +109,7 @@
|
|||
}
|
||||
},
|
||||
"mappings": [],
|
||||
"min": 0,
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
|
@ -118,7 +122,8 @@
|
|||
"value": 80
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
"unit": "none"
|
||||
},
|
||||
"overrides": [
|
||||
{
|
||||
|
@ -166,7 +171,7 @@
|
|||
"showLegend": true
|
||||
},
|
||||
"tooltip": {
|
||||
"mode": "single",
|
||||
"mode": "multi",
|
||||
"sort": "none"
|
||||
}
|
||||
},
|
||||
|
@ -177,14 +182,14 @@
|
|||
"uid": "VictoriaMetrics"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "quantile by (mode, instance,job) (0.5, rate(node_cpu_seconds_total{mode=~\"$mode\"}[5m]))",
|
||||
"expr": "sum(rate(node_cpu_seconds_total{mode=~\"$mode\"}[5m])) by (mode, instance,job)",
|
||||
"instant": false,
|
||||
"legendFormat": "Instance: {{instance}}, Job {{job}}",
|
||||
"range": true,
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"title": "CPU median for $mode mode",
|
||||
"title": "CPU rate sum for $mode mode",
|
||||
"type": "timeseries"
|
||||
},
|
||||
{
|
||||
|
@ -219,7 +224,7 @@
|
|||
"scaleDistribution": {
|
||||
"type": "linear"
|
||||
},
|
||||
"showPoints": "auto",
|
||||
"showPoints": "never",
|
||||
"spanNulls": false,
|
||||
"stacking": {
|
||||
"group": "A",
|
||||
|
@ -236,15 +241,27 @@
|
|||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
"value": 1
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
"overrides": []
|
||||
"overrides": [
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "threshold"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "color",
|
||||
"value": {
|
||||
"fixedColor": "red",
|
||||
"mode": "fixed"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
"gridPos": {
|
||||
"h": 8,
|
||||
|
@ -281,6 +298,19 @@
|
|||
"legendFormat": "Instance: {{instance}}, Job: {{job}}",
|
||||
"range": true,
|
||||
"refId": "A"
|
||||
},
|
||||
{
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "VictoriaMetrics"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "vector(1)",
|
||||
"hide": false,
|
||||
"instant": false,
|
||||
"legendFormat": "threshold",
|
||||
"range": true,
|
||||
"refId": "B"
|
||||
}
|
||||
],
|
||||
"title": "Anomaly Scores for $mode mode",
|
||||
|
@ -318,7 +348,7 @@
|
|||
"scaleDistribution": {
|
||||
"type": "linear"
|
||||
},
|
||||
"showPoints": "auto",
|
||||
"showPoints": "never",
|
||||
"spanNulls": false,
|
||||
"stacking": {
|
||||
"group": "A",
|
||||
|
@ -344,18 +374,6 @@
|
|||
}
|
||||
},
|
||||
"overrides": [
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "Predicted Value: yhat"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "custom.fillBelowTo",
|
||||
"value": "yhat_lower"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
|
@ -364,7 +382,7 @@
|
|||
"properties": [
|
||||
{
|
||||
"id": "custom.fillBelowTo",
|
||||
"value": "yhat"
|
||||
"value": "Predicted Lower Boundary"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
@ -381,11 +399,11 @@
|
|||
"legend": {
|
||||
"calcs": [],
|
||||
"displayMode": "table",
|
||||
"placement": "right",
|
||||
"placement": "bottom",
|
||||
"showLegend": true
|
||||
},
|
||||
"tooltip": {
|
||||
"mode": "single",
|
||||
"mode": "multi",
|
||||
"sort": "none"
|
||||
}
|
||||
},
|
||||
|
@ -427,6 +445,19 @@
|
|||
"legendFormat": "Predicted Upper Boundary",
|
||||
"range": true,
|
||||
"refId": "C"
|
||||
},
|
||||
{
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "VictoriaMetrics"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "sum(rate(node_cpu_seconds_total{mode=~\"$mode\"}[5m])) by (mode, instance,job)",
|
||||
"hide": false,
|
||||
"instant": false,
|
||||
"legendFormat": "Value",
|
||||
"range": true,
|
||||
"refId": "D"
|
||||
}
|
||||
],
|
||||
"title": "Predicted Value and Boundaries for $mode mode",
|
||||
|
@ -440,11 +471,7 @@
|
|||
"list": [
|
||||
{
|
||||
"allValue": ".*",
|
||||
"current": {
|
||||
"selected": false,
|
||||
"text": "All",
|
||||
"value": "$__all"
|
||||
},
|
||||
"current": {},
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "VictoriaMetrics"
|
||||
|
@ -464,19 +491,19 @@
|
|||
"refresh": 2,
|
||||
"regex": "",
|
||||
"skipUrlSync": false,
|
||||
"sort": 1,
|
||||
"sort": 2,
|
||||
"type": "query"
|
||||
}
|
||||
]
|
||||
},
|
||||
"time": {
|
||||
"from": "now-30m",
|
||||
"from": "now-3h",
|
||||
"to": "now"
|
||||
},
|
||||
"timepicker": {},
|
||||
"timezone": "",
|
||||
"title": "Vmanomaly Guide",
|
||||
"uid": "cfa61e6a-6074-4626-8e54-ea33e08746b9",
|
||||
"version": 2,
|
||||
"version": 3,
|
||||
"weekStart": ""
|
||||
}
|
|
@ -18,7 +18,7 @@ services:
|
|||
- vlogs
|
||||
|
||||
generator:
|
||||
image: golang:1.21.6-alpine
|
||||
image: golang:1.22.0-alpine
|
||||
restart: always
|
||||
working_dir: /go/src/app
|
||||
volumes:
|
||||
|
|
|
@ -2,7 +2,7 @@ version: '3'
|
|||
|
||||
services:
|
||||
generator:
|
||||
image: golang:1.21.6-alpine
|
||||
image: golang:1.22.0-alpine
|
||||
restart: always
|
||||
working_dir: /go/src/app
|
||||
volumes:
|
||||
|
|
|
@ -46,7 +46,7 @@ services:
|
|||
- '--config=/config.yml'
|
||||
|
||||
vmsingle:
|
||||
image: victoriametrics/victoria-metrics:v1.97.0
|
||||
image: victoriametrics/victoria-metrics:v1.97.1
|
||||
ports:
|
||||
- '8428:8428'
|
||||
command:
|
||||
|
|
|
@ -19,8 +19,8 @@ On the server:
* VictoriaMetrics is running on ports: 8428, 8089, 4242, 2003 and they are bound to the local interface.

********************************************************************************
# This image includes 1.97.0 version of VictoriaMetrics.
# See Release notes https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.97.0
# This image includes 1.97.1 version of VictoriaMetrics.
# See Release notes https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.97.1

# Welcome to VictoriaMetrics droplet!
@ -28,6 +28,39 @@ The sandbox cluster installation is running under the constant load generated by
|
|||
|
||||
## tip
|
||||
|
||||
* SECURITY: upgrade Go builder from Go1.21.6 to Go1.22.0. See [the list of issues addressed in Go1.21.7](https://github.com/golang/go/issues?q=milestone%3AGo1.21.7+label%3ACherryPickApproved),
|
||||
plus [the changelog for Go1.22.0](https://go.dev/doc/go1.22).
|
||||
|
||||
* FEATURE: all VictoriaMetrics components: add support for TLS client certificate verification at `-httpListenAddr` (aka [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication)). See [these docs](https://docs.victoriametrics.com/#mtls-protection) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5458).
|
||||
* FEATURE: all VictoriaMetrics components: add support for accepting http requests over multiple distinct TCP addresses. This can be done by starting VictoriaMetrics component with multiple `-httpListenAddr` command-line flags. For example, `./victoria-metrics -httpListenAddr=some-host:12345 -httpListenAddr=localhost:8428` starts VictoriaMetrics, which accepts incoming http requests at both `some-host:12345` and `localhost:8428`. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1470).
|
||||
* FEATURE: all VictoriaMetrics components: add support for empty command-line flag values in short array notation. For example, `-remoteWrite.sendTimeout=',20s,'` specifies three `-remoteWrite.sendTimeout` values - the first one and the last one have default values (`30s` in this case), while the second one is set to `20s`.
|
||||
* FEATURE: all VictoriaMetrics components: do not close connections to `-httpListenAddr` every 2 minutes. This behavior didn't help spreading load among multiple backend servers behind load-balancing TCP proxy. Instead, it could lead to hard-to-debug issues like [this one](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1304#issuecomment-1636997037). If you still need periodically closing client connections because of some reason, then pass the desired timeout to `-http.connTimeout` command-line flag.
|
||||
* FEATURE: [vmauth](https://docs.victoriametrics.com/vmauth.html): add support for [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication)-based request routing to different backends depending on the subject of the TLS certificate provided by the client. See [these docs](https://docs.victoriametrics.com/vmauth.html#mtls-based-request-routing) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1547).
|
||||
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html) and [single-node VictoriaMetrics](https://docs.victoriametrics.com): add support for data ingestion via [DataDog lambda extension](https://docs.datadoghq.com/serverless/libraries_integrations/extension/) aka `/api/beta/sketches` endpoint. See [these docs](https://docs.victoriametrics.com/#how-to-send-data-from-datadog-agent) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3091). Thanks to @AndrewChubatiuk for [the pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5584).
|
||||
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add `-remoteWrite.tlsHandshakeTimeout` command-line flag for tuning the timeout needed for establishing TLS connections to `-remoteWrite.url`. Bigger tls handshake timeouts should reduce the probability of `http: TLS handshake error from ...: EOF` errors at the remote storage side under high load. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1699).
|
||||
* FEATURE: [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html): add `-disableReroutingOnUnavailable` command-line flag to `vminsert`, which can be used for reducing resource usage spikes at `vmstorage` nodes during rolling restart. See [these docs](https://docs.victoriametrics.com/cluster-victoriametrics/#improving-re-routing-performance-during-restart). Thanks to @Muxa1L for [the pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5713).
|
||||
* FEATURE: add `-search.resetRollupResultCacheOnStartup` command-line flag for resetting [query cache](https://docs.victoriametrics.com/#rollup-result-cache) on startup. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/834).
|
||||
* FEATURE: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): propagate label filters across [label_set](https://docs.victoriametrics.com/MetricsQL.html#label_set) and [alias](https://docs.victoriametrics.com/MetricsQL.html#alias) functions. For example, `label_set(q1, "a", "b") + q2{c="d"}` is automatically transformed to `label_set(q1{c="d"}, "a", "b") + q2{a="b",c="d"}` now. This should improve performance for such queries. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1827#issuecomment-1654095358).
|
||||
* FEATURE: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): add [sum_eq_over_time](https://docs.victoriametrics.com/MetricsQL.html#sum_eq_over_time), [sum_gt_over_time](https://docs.victoriametrics.com/MetricsQL.html#sum_gt_over_time) and [sum_le_over_time](https://docs.victoriametrics.com/MetricsQL.html#sum_le_over_time) functions. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4641).
|
||||
* FEATURE: [dashboards/vmagent](https://grafana.com/grafana/dashboards/12683): add `Targets scraped/s` stat panel showing the number of targets scraped by the vmagent per-second.
|
||||
* FEATURE: [dashboards/all](https://grafana.com/orgs/victoriametrics): add new panel `CPU spent on GC`. It should help identifying cases when too much CPU is spent on garbage collection, and advice users on how this can be addressed.
|
||||
* FEATURE: [vmalert](https://docs.victoriametrics.com/#vmalert): support [filtering](https://prometheus.io/docs/prometheus/2.49/querying/api/#rules) for `/api/v1/rules` API. See [the pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5749) by @victoramsantos.
|
||||
* FEATURE: [vmbackup](https://docs.victoriametrics.com/vmbackup.html): support client-side TLS configuration for creating and deleting snapshots via `-snapshot.tls*` cmd-line flags. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5724). Thanks to @khushijain21 for the [pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5738).
|
||||
|
||||
* BUGFIX: [vmauth](https://docs.victoriametrics.com/vmauth.html): properly release memory during config reload. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4690).
|
||||
* BUGFIX: [vmauth](https://docs.victoriametrics.com/vmauth.html): properly expose `vmauth_unauthorized_user_concurrent_requests_capacity`, `vmauth_unauthorized_user_concurrent_requests_current`, `vmauth_user_concurrent_requests_capacity` and `vmauth_user_concurrent_requests_current` [metrics](https://docs.victoriametrics.com/vmauth.html#monitoring) after [config reload](https://docs.victoriametrics.com/vmauth.html#config-reload). Previously these metrics didn't work after config reload.
|
||||
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): properly propagate [label filters](https://docs.victoriametrics.com/keyconcepts/#filtering) from multiple arguments passed to [aggregate functions](https://docs.victoriametrics.com/metricsql/#aggregate-functions). For example, `sum({job="foo"}, {job="bar"}) by (job) + a` was improperly optimized to `sum({job="foo"}, {job="bar"}) by (job) + a{job="foo"}` before being executed. This could lead to unexpected results. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5604).
|
||||
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): properly handle precision errors when calculating [changes](https://docs.victoriametrics.com/metricsql/#changes), [changes_prometheus](https://docs.victoriametrics.com/metricsql/#changes_prometheus), [increases_over_time](https://docs.victoriametrics.com/metricsql/#increases_over_time) and [resets](https://docs.victoriametrics.com/metricsql/#resets) functions. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/767).
|
||||
* BUGFIX: all VictoriaMetrics components: consistently return 200 http status code from [`/-/reload` endpoint](https://docs.victoriametrics.com/vmagent/#configuration-update). Previously [single-node VictoriaMetrics](https://docs.victoriametrics.com/) was returning 204 http status code. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5774).
|
||||
* BUGFIX: properly store [staleness markers](https://docs.victoriametrics.com/vmagent/#prometheus-staleness-markers) for [self-scraped metrics](https://docs.victoriametrics.com/#monitoring) on single-node VictoriaMetrics shutdown. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/943).
|
||||
* BUGFIX: prevent from possible `too big indexBlockSize` panic when samples with too long label values (~64Kb) are ingested into VictoriaMetrics.
|
||||
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): fix the graph dragging for Firefox and Safari. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5764).
|
||||
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): fix handling invalid timezone. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5732).
|
||||
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): fix the bug where the select does not open. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5728).
|
||||
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): clear entered text in select after selecting a value. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5727).
|
||||
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): improve the operation of the context for autocomplete. See [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5736), [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5737) and [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5739) issues.
|
||||
* BUGFIX: [dashboards](https://grafana.com/orgs/victoriametrics): update `Storage full ETA` panels for Single-node and Cluster dashboards to prevent them from showing negative or blank results caused by increase of deduplicated samples. Deduplicated samples were part of the expression to provide a better estimate for disk usage, but due to sporadic nature of [deduplication](https://docs.victoriametrics.com/#deduplication) in VictoriaMetrics it rather produced skewed results. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5747).
|
||||
|
||||
## [v1.97.1](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.97.1)
|
||||
|
||||
Released at 2024-02-01
|
||||
|
@ -41,7 +74,7 @@ The v1.97.x line will be supported for at least 12 months since [v1.97.0](https:
|
|||
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent.html): fix the increased CPU usage when sending the data to remote storage. The issue has been introduced in [v1.97.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.97.0).
|
||||
* BUGFIX: fix `runtime error: slice bounds out of range` panic, which can occur during query execution. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5733). The bug has been introduced in `v1.97.0`.
|
||||
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): properly handle `avg_over_time({some_filter}[d]) keep_metric_names` queries, where [`some_filter`](https://docs.victoriametrics.com/keyconcepts/#filtering) matches multiple time series with multiple names, while `d` is bigger or equal to `3h`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5556).
|
||||
* BUGFIX: dashboards/single: fix typo in query for `version` annotation which falsely produced many version change events.
|
||||
* BUGFIX: [dashboards/single](https://grafana.com/grafana/dashboards/10229): fix typo in query for `version` annotation which falsely produced many version change events.
|
||||
|
||||
## [v1.97.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.97.0)
|
||||
|
||||
|
@ -76,9 +109,9 @@ The v1.97.x line will be supported for at least 12 months since [v1.97.0](https:
|
|||
* FEATURE: all VictoriaMetrics components: add ability to specify arbitrary HTTP headers to send with every request to `-pushmetrics.url`. See [`push metrics` docs](https://docs.victoriametrics.com/#push-metrics).
|
||||
* FEATURE: all VictoriaMetrics components: add `-metrics.exposeMetadata` command-line flag, which allows displaying `TYPE` and `HELP` metadata at `/metrics` page exposed at `-httpListenAddr`. This may be needed when the `/metrics` page is scraped by collector, which requires the `TYPE` and `HELP` metadata such as [Google Cloud Managed Prometheus](https://cloud.google.com/stackdriver/docs/managed-prometheus/troubleshooting#missing-metric-type).
|
||||
* FEATURE: all VictoriaMetrics components: add ability to dynamically re-read auth keys and passwords from files and urls when using `file:///path/to/file` or `http://host/path` syntax for the following command-line flags: `-configAuthKey`, `-deleteAuthKey`, `-flagsAuthKey`, `-forceMergeAuthKey`, `-forceFlushAuthKey`, `-httpAuth.password`, `-metricsAuthKey`, `-pprofAuthKey`, `-reloadAuthKey`, `-search.resetCacheAuthKey`, `-snapshotAuthKey`. For example, `-httpAuth.password=file:///path/to/password`. See [these docs](https://docs.victoriametrics.com/#security) for details.
|
||||
* FEATURE: dashboards/cluster: add panels for detailed visualization of traffic usage between vmstorage, vminsert, vmselect components and their clients. New panels are available in the rows dedicated to specific components.
|
||||
* FEATURE: dashboards/cluster: update "Slow Queries" panel to show percentage of the slow queries to the total number of read queries served by vmselect. The percentage value should make it more clear for users whether there is a service degradation.
|
||||
* FEATURE: dashboards/single: change dashboard title from `VictoriaMetrics` to `VictoriaMetrics - single-node`. The new title should provide better understanding of this dashboard purpose.
|
||||
* FEATURE: [dashboards/cluster](https://grafana.com/grafana/dashboards/11176): add panels for detailed visualization of traffic usage between vmstorage, vminsert, vmselect components and their clients. New panels are available in the rows dedicated to specific components.
|
||||
* FEATURE: [dashboards/cluster](https://grafana.com/grafana/dashboards/11176): update "Slow Queries" panel to show percentage of the slow queries to the total number of read queries served by vmselect. The percentage value should make it more clear for users whether there is a service degradation.
|
||||
* FEATURE: [dashboards/single](https://grafana.com/grafana/dashboards/10229): change dashboard title from `VictoriaMetrics` to `VictoriaMetrics - single-node`. The new title should provide better understanding of this dashboard purpose.
|
||||
* FEATURE: [vmctl](https://docs.victoriametrics.com/vmctl.html): rename cmd-line flag `vm-native-disable-retries` to `vm-native-disable-per-metric-migration` to better reflect its meaning.
|
||||
* FEATURE: [vmctl](https://docs.victoriametrics.com/vmctl.html): add `-vm-native-src-insecure-skip-verify` and `-vm-native-dst-insecure-skip-verify` command-line flags for native protocol. It can be used for skipping TLS certificate verification when connecting to the source or destination addresses.
|
||||
* FEATURE: [Alerting rules for VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker#alerts): add `job` label to `DiskRunsOutOfSpace` alerting rule, so it is easier to understand to which installation the triggered instance belongs.
|
||||
|
@ -137,6 +170,21 @@ See changes [here](https://docs.victoriametrics.com/CHANGELOG_2023.html#v1950)
|
|||
|
||||
See changes [here](https://docs.victoriametrics.com/CHANGELOG_2023.html#v1940)
|
||||
|
||||
## [v1.93.11](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.93.11)
|
||||
|
||||
Released at 2024-02-01
|
||||
|
||||
**v1.93.x is a line of LTS releases (i.e. long-term support). It contains important up-to-date bugfixes.
|
||||
The v1.93.x line will be supported for at least 12 months since [v1.93.0](https://docs.victoriametrics.com/CHANGELOG.html#v1930) release**
|
||||
|
||||
* SECURITY: upgrade base docker image (Alpine) from 3.19.0 to 3.19.1. See [alpine 3.19.1 release notes](https://www.alpinelinux.org/posts/Alpine-3.19.1-released.html).
|
||||
|
||||
* BUGFIX: properly return errors from [export APIs](https://docs.victoriametrics.com/#how-to-export-time-series). Previously these errors were silently suppressed. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5649).
|
||||
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent.html): properly discover targets for `role: endpoints` and `role: endpointslice` in [kubernetes_sd_configs](https://docs.victoriametrics.com/sd_configs.html#kubernetes_sd_configs). Previously some `endpoints` and `endpointslice` targets could be left undiscovered or some targets could have missing `__meta_*` labels when performing service discovery in busy Kubernetes clusters with large number of pods. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5557).
|
||||
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent.html): respect explicitly set `series_limit: 0` in [scrape_config](https://docs.victoriametrics.com/sd_configs.html#scrape_configs). This allows removing [`series_limit` restriction](https://docs.victoriametrics.com/vmagent.html#cardinality-limiter) on a per-`scrape_config` basis when global limit is set via `-promscrape.seriesLimitPerTarget`. Previously, `0` value was ignored in favor of `-promscrape.seriesLimitPerTarget`.
|
||||
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): properly process queries with too big lookbehind window such as `foo[100y]`. Previously, such queries could return empty responses even if `foo` is present in database. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5553).
|
||||
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): properly handle possible negative results caused by float operations precision error in rollup functions like rate() or increase(). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5571).
|
||||
|
||||
## [v1.93.10](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.93.10)
|
||||
|
||||
Released at 2024-01-17
|
||||
|
@ -241,6 +289,20 @@ See changes [here](https://docs.victoriametrics.com/CHANGELOG_2023.html#v1881)
|
|||
|
||||
See changes [here](https://docs.victoriametrics.com/CHANGELOG_2023.html#v1880)
|
||||
|
||||
## [v1.87.14](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.87.14)
|
||||
|
||||
Released at 2024-02-01
|
||||
|
||||
**v1.87.x is a line of LTS releases (i.e. long-term support). It contains important up-to-date bugfixes.
|
||||
The v1.87.x line will be supported for at least 12 months since [v1.87.0](https://docs.victoriametrics.com/CHANGELOG.html#v1870) release**
|
||||
|
||||
* SECURITY: upgrade base docker image (Alpine) from 3.19.0 to 3.19.1. See [alpine 3.19.1 release notes](https://www.alpinelinux.org/posts/Alpine-3.19.1-released.html).
|
||||
|
||||
* BUGFIX: properly return errors from [export APIs](https://docs.victoriametrics.com/#how-to-export-time-series). Previously these errors were silently suppressed. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5649).
|
||||
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent.html): respect explicitly set `series_limit: 0` in [scrape_config](https://docs.victoriametrics.com/sd_configs.html#scrape_configs). This allows removing [`series_limit` restriction](https://docs.victoriametrics.com/vmagent.html#cardinality-limiter) on a per-`scrape_config` basis when global limit is set via `-promscrape.seriesLimitPerTarget`. Previously, `0` value was ignored in favor of `-promscrape.seriesLimitPerTarget`.
|
||||
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): properly process queries with too big lookbehind window such as `foo[100y]`. Previously, such queries could return empty responses even if `foo` is present in database. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5553).
|
||||
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): properly handle possible negative results caused by float operations precision error in rollup functions like rate() or increase(). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5571).
|
||||
|
||||
## [v1.87.13](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.87.13)
|
||||
|
||||
Released at 2024-01-17
|
||||
|
|
|
@ -155,7 +155,7 @@ vmstorage-prod

### Development Builds

1. [Install go](https://golang.org/doc/install). The minimum supported version is Go 1.20.
1. [Install go](https://golang.org/doc/install). The minimum supported version is Go 1.22.
1. Run `make` from [the repository root](https://github.com/VictoriaMetrics/VictoriaMetrics). It should build `vmstorage`, `vmselect`
   and `vminsert` binaries and put them into the `bin` folder.

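For reference, the whole flow from the list above looks roughly like this (a sketch; it assumes a Go 1.22+ toolchain is already on `PATH`):

```
git clone https://github.com/VictoriaMetrics/VictoriaMetrics
cd VictoriaMetrics
make        # builds vmstorage, vmselect and vminsert
ls bin/     # the freshly built binaries end up here
```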
@ -281,6 +281,20 @@ and [the general security page at VictoriaMetrics website](https://victoriametri

## mTLS protection

By default `vminsert` and `vmselect` nodes accept http requests at ports `8480` and `8481` respectively (these ports can be changed via `-httpListenAddr` command-line flags),
since it is expected that [vmauth](https://docs.victoriametrics.com/vmauth.html) is used for authorization and [TLS termination](https://en.wikipedia.org/wiki/TLS_termination_proxy)
in front of `vminsert` and `vmselect`.
[Enterprise version of VictoriaMetrics](https://docs.victoriametrics.com/enterprise.html) supports accepting [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication)
requests at ports `8480` and `8481` for `vminsert` and `vmselect` nodes when the `-tls` and `-mtls` command-line flags are specified.
For example, the following command runs `vmselect`, which accepts only mTLS requests at port `8481`:

```
./vmselect -tls -mtls
```

By default the system-wide [TLS Root CA](https://en.wikipedia.org/wiki/Root_certificate) is used for verifying client certificates if the `-mtls` command-line flag is specified.
It is possible to specify a custom TLS Root CA via the `-mtlsCAFile` command-line flag.

By default `vminsert` and `vmselect` nodes use unencrypted connections to `vmstorage` nodes, since it is assumed that all the cluster components [run in a protected environment](#security). [Enterprise version of VictoriaMetrics](https://docs.victoriametrics.com/enterprise.html) provides optional support for [mTLS connections](https://en.wikipedia.org/wiki/Mutual_authentication#mTLS) between cluster components. Pass the `-cluster.tls=true` command-line flag to `vminsert`, `vmselect` and `vmstorage` nodes in order to enable mTLS protection. Additionally, `vminsert`, `vmselect` and `vmstorage` must be configured with mTLS certificates via the `-cluster.tlsCertFile` and `-cluster.tlsKeyFile` command-line options. These certificates are mutually verified when `vminsert` and `vmselect` dial `vmstorage`.

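A minimal sketch of wiring this up with the flags named above (Enterprise binaries; certificate paths and host names are placeholders, each component would normally get its own certificate, and the default vmstorage ports for inserts/selects are assumed):

```
# vmstorage accepts mTLS connections from vminsert and vmselect
./vmstorage -cluster.tls=true \
  -cluster.tlsCertFile=/path/to/vmstorage.crt -cluster.tlsKeyFile=/path/to/vmstorage.key

# vminsert and vmselect present their own certificates when dialing vmstorage
./vminsert -cluster.tls=true \
  -cluster.tlsCertFile=/path/to/vminsert.crt -cluster.tlsKeyFile=/path/to/vminsert.key \
  -storageNode=vmstorage-host:8400

./vmselect -cluster.tls=true \
  -cluster.tlsCertFile=/path/to/vmselect.crt -cluster.tlsKeyFile=/path/to/vmselect.key \
  -storageNode=vmstorage-host:8401
```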
The following optional command-line flags related to mTLS are supported:

@ -308,7 +322,12 @@ By default, the following TCP ports are used:

It is recommended to set up [vmagent](https://docs.victoriametrics.com/vmagent.html)
or Prometheus to scrape `/metrics` pages from all the cluster components, so they can be monitored and analyzed
with [the official Grafana dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176)
or [an alternative dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11831). Graphs on these dashboards contain useful hints - hover the `i` icon at the top left corner of each graph in order to read it.
or [an alternative dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11831).
Graphs on these dashboards contain useful hints - hover the `i` icon at the top left corner of each graph in order to read it.

If you use Google Cloud Managed Prometheus for scraping metrics from VictoriaMetrics components, then pass the `-metrics.exposeMetadata`
command-line flag to them, so they add `TYPE` and `HELP` comments per each exposed metric at the `/metrics` page.
See [these docs](https://cloud.google.com/stackdriver/docs/managed-prometheus/troubleshooting#missing-metric-type) for details.

It is recommended to set up alerts in [vmalert](https://docs.victoriametrics.com/vmalert.html) or in Prometheus from [this list](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker#alerts).
See more details in the article [VictoriaMetrics Monitoring](https://victoriametrics.com/blog/victoriametrics-monitoring/).

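As a quick check that the `/metrics` pages mentioned above are reachable before wiring up vmagent or Prometheus, they can be fetched directly (a sketch; host names are placeholders and the default component ports are assumed):

```
curl -s http://vminsert-host:8480/metrics  | head
curl -s http://vmselect-host:8481/metrics  | head
curl -s http://vmstorage-host:8482/metrics | head
```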
@ -363,6 +382,7 @@ Check practical examples of VictoriaMetrics API [here](https://docs.victoriametr
- `opentelemetry/api/v1/push` - for ingesting data via [OpenTelemetry protocol for metrics](https://github.com/open-telemetry/opentelemetry-specification/blob/ffddc289462dfe0c2041e3ca42a7b1df805706de/specification/metrics/data-model.md). See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#sending-data-via-opentelemetry).
- `datadog/api/v1/series` - for ingesting data with DataDog submit metrics API v1. See [these docs](https://docs.victoriametrics.com/url-examples.html#datadogapiv1series) for details.
- `datadog/api/v2/series` - for ingesting data with [DataDog submit metrics API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics). See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-datadog-agent) for details.
- `datadog/api/beta/sketches` - for ingesting data with [DataDog lambda extension](https://docs.datadoghq.com/serverless/libraries_integrations/extension/).
- `influx/write` and `influx/api/v2/write` - for ingesting data with [InfluxDB line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/). See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) for details.
- `newrelic/infra/v2/metrics/events/bulk` - for accepting data from [NewRelic infrastructure agent](https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent). See [these docs](https://docs.victoriametrics.com/#how-to-send-data-from-newrelic-agent) for details.
- `opentsdb/api/put` - for accepting [OpenTSDB HTTP /api/put requests](http://opentsdb.net/docs/build/html/api_http/put.html). This handler is disabled by default. It is exposed on a distinct TCP address set via `-opentsdbHTTPListenAddr` command-line flag. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#sending-opentsdb-data-via-http-apiput-requests) for details.

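For instance, the `influx/write` handler listed above can be fed a single line of InfluxDB line protocol through `vminsert` (a sketch; `0` is the tenant ID and the host name is a placeholder):

```
curl -X POST 'http://vminsert-host:8480/insert/0/influx/write' \
  -d 'measurement,tag1=value1 field1=123'
```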
@ -481,20 +501,6 @@ This strategy allows upgrading the cluster without downtime if the following con
If at least a single condition isn't met, then the rolling restart may result in cluster unavailability
during the config update / version upgrade. In this case the following strategy is recommended.

#### Improving re-routing performance during restart

`vmstorage` nodes may experience increased usage for CPU, RAM and disk IO during
[rolling restarts](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#no-downtime-strategy),
since they need to process higher load when some of `vmstorage` nodes are temporarily unavailable in the cluster.
It is possible to reduce resource usage spikes by running more `vminsert` nodes and by passing bigger values
to `-storage.vminsertConnsShutdownDuration` (available from [v1.95.0](https://docs.victoriametrics.com/CHANGELOG.html#v1950))
command-line flag at `vmstorage` nodes.
In this case `vmstorage` increases the interval between gradual closing of `vminsert` connections during graceful shutdown.
This reduces data ingestion slowdown during rollout restarts.

Make sure that the `-storage.vminsertConnsShutdownDuration` is smaller than the graceful shutdown timeout configured at the system which manages `vmstorage`
(e.g. Docker, Kubernetes, systemd, etc.). Otherwise the system may kill `vmstorage` node before it finishes gradual closing of `vminsert` connections.

### Minimum downtime strategy

1. Gracefully stop all the `vminsert` and `vmselect` nodes in parallel.

@ -513,6 +519,29 @@ The `minimum downtime` strategy has the following benefits comparing to `no down
- It allows minimizing the duration of config update / version upgrade for clusters with a big number of nodes
  or for clusters with big `vmstorage` nodes, which may take a long time for graceful restart.

## Improving re-routing performance during restart

`vmstorage` nodes may experience increased usage for CPU, RAM and disk IO during
[rolling restarts](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#no-downtime-strategy),
since they need to process higher load when some of `vmstorage` nodes are temporarily unavailable in the cluster.

The following approaches can be used for reducing resource usage at `vmstorage` nodes during rolling restart:

- To pass the `-disableReroutingOnUnavailable` command-line flag to `vminsert` nodes, so they pause data ingestion when `vmstorage` nodes are restarted
  instead of re-routing the ingested data to other available `vmstorage` nodes.
  Note that the `-disableReroutingOnUnavailable` flag may pause data ingestion for a long time when some `vmstorage` nodes are unavailable
  for a long time.

- To pass bigger values to the `-storage.vminsertConnsShutdownDuration` (available from [v1.95.0](https://docs.victoriametrics.com/CHANGELOG.html#v1950))
  command-line flag at `vmstorage` nodes. In this case `vmstorage` increases the interval between gradual closing of `vminsert` connections during graceful shutdown.
  This reduces data ingestion slowdown during rollout restarts.

Make sure that the `-storage.vminsertConnsShutdownDuration` is smaller than the graceful shutdown timeout configured at the system which manages `vmstorage`
(e.g. Docker, Kubernetes, systemd, etc.). Otherwise the system may kill `vmstorage` node before it finishes gradual closing of `vminsert` connections.
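To make the two options above concrete, a rolling-restart-friendly setup could look roughly like this (a sketch; the flag values and host names are illustrative only):

```
# vminsert: pause ingestion instead of re-routing while a vmstorage node restarts
./vminsert -disableReroutingOnUnavailable \
  -storageNode=vmstorage-1:8400 -storageNode=vmstorage-2:8400

# vmstorage: spread the closing of vminsert connections over 60s during graceful shutdown
./vmstorage -storage.vminsertConnsShutdownDuration=60s
```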

See also [minimum downtime strategy](#minimum-downtime-strategy).

## Cluster availability

VictoriaMetrics cluster architecture prioritizes availability over data consistency.

@ -568,7 +597,15 @@ The cluster works in the following way when some of `vmstorage` nodes are unavai

It is also possible to configure independent replication factor per distinct `vmstorage` groups - see [these docs](#vmstorage-groups-at-vmselect).

`vmselect` doesn't serve partial responses for API handlers returning raw datapoints - [`/api/v1/export*` endpoints](https://docs.victoriametrics.com/#how-to-export-time-series), since users usually expect this data to always be complete.
`vmselect` doesn't serve partial responses for API handlers returning [raw datapoints](https://docs.victoriametrics.com/keyconcepts/#raw-samples),
since users usually expect this data to always be complete. The following handlers return raw samples:

- [`/api/v1/export*` endpoints](https://docs.victoriametrics.com/#how-to-export-time-series)
- [`/api/v1/query`](https://docs.victoriametrics.com/url-examples/#apiv1query) when the `query` contains a [series selector](https://docs.victoriametrics.com/keyconcepts/#filtering)
  ending with some duration in square brackets. For example, `/api/v1/query?query=up[1h]&time=2024-01-02T03:00:00Z`.
  This query returns [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) for [time series](https://docs.victoriametrics.com/keyconcepts/#time-series)
  with the `up` name on the time range `(2024-01-02T02:00:00 .. 2024-01-02T03:00:00]`. See [this article](https://valyala.medium.com/analyzing-prometheus-data-with-external-tools-5f3e5e147639)
  for details.

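For example, the raw-samples form of `/api/v1/query` described above can be issued against `vmselect` like this (a sketch; the host name is a placeholder and tenant `0` is assumed):

```
curl 'http://vmselect-host:8481/select/0/prometheus/api/v1/query?query=up[1h]&time=2024-01-02T03:00:00Z'
```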
Data replication can be used for increasing storage durability. See [these docs](#replication-and-data-safety) for details.

@ -962,7 +999,9 @@ Below is the output for `/path/to/vminsert -help`:
|
|||
-denyQueryTracing
|
||||
Whether to disable the ability to trace queries. See https://docs.victoriametrics.com/#query-tracing
|
||||
-disableRerouting
|
||||
Whether to disable re-routing when some of vmstorage nodes accept incoming data at slower speed compared to other storage nodes. Disabled re-routing limits the ingestion rate by the slowest vmstorage node. On the other side, disabled re-routing minimizes the number of active time series in the cluster during rolling restarts and during spikes in series churn rate. See also -dropSamplesOnOverload (default true)
|
||||
Whether to disable re-routing when some of vmstorage nodes accept incoming data at slower speed compared to other storage nodes. Disabled re-routing limits the ingestion rate by the slowest vmstorage node. On the other side, disabled re-routing minimizes the number of active time series in the cluster during rolling restarts and during spikes in series churn rate. See also -disableReroutingOnUnavailable and -dropSamplesOnOverload (default true)
|
||||
-disableReroutingOnUnavailable
|
||||
Whether to disable re-routing when some of vmstorage nodes are unavailable. Disabled re-routing stops ingestion when some storage nodes are unavailable. On the other side, disabled re-routing minimizes the number of active time series in the cluster during rolling restarts and during spikes in series churn rate. See also -disableRerouting
|
||||
-dropSamplesOnOverload
|
||||
Whether to drop incoming samples if the destination vmstorage node is overloaded and/or unavailable. This prioritizes cluster availability over consistency, e.g. the cluster continues accepting all the ingested samples, but some of them may be dropped if vmstorage nodes are temporarily unavailable and/or overloaded. The drop of samples happens before the replication, so it's not recommended to use this flag with -replicationFactor enabled.
|
||||
-enableTCP6
|
||||
|
@ -987,7 +1026,7 @@ Below is the output for `/path/to/vminsert -help`:
|
|||
-graphiteTrimTimestamp duration
|
||||
Trim timestamps for Graphite data to this duration. Minimum practical duration is 1s. Higher duration (i.e. 1m) may be used for reducing disk space usage for timestamp data (default 1s)
|
||||
-http.connTimeout duration
|
||||
Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s)
|
||||
Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem
|
||||
-http.disableResponseCompression
|
||||
Disable compression of HTTP responses to save CPU resources. By default, compression is enabled to save network bandwidth
|
||||
-http.header.csp default-src 'self'
|
||||
|
@ -1009,10 +1048,14 @@ Below is the output for `/path/to/vminsert -help`:
|
|||
Flag value can be read from the given file when using -httpAuth.password=file:///abs/path/to/file or -httpAuth.password=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -httpAuth.password=http://host/path or -httpAuth.password=https://host/path
|
||||
-httpAuth.username string
|
||||
Username for HTTP server's Basic Auth. The authentication is disabled if empty. See also -httpAuth.password
|
||||
-httpListenAddr string
|
||||
Address to listen for http connections. See also -httpListenAddr.useProxyProtocol (default ":8480")
|
||||
-httpListenAddr.useProxyProtocol
|
||||
Whether to use proxy protocol for connections accepted at -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
|
||||
-httpListenAddr array
|
||||
Address to listen for incoming http requests. See also -httpListenAddr.useProxyProtocol
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-httpListenAddr.useProxyProtocol array
|
||||
Whether to use proxy protocol for connections accepted at the given -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-import.maxLineLen size
|
||||
The maximum length in bytes of a single line accepted by /api/v1/import; the line length can be limited with 'max_rows_per_line' query arg passed to /api/v1/export
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 10485760)
|
||||
|
@ -1088,6 +1131,14 @@ Below is the output for `/path/to/vminsert -help`:
|
|||
-metricsAuthKey value
|
||||
Auth key for /metrics endpoint. It must be passed via authKey query arg. It overrides httpAuth.* settings
|
||||
Flag value can be read from the given file when using -metricsAuthKey=file:///abs/path/to/file or -metricsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -metricsAuthKey=http://host/path or -metricsAuthKey=https://host/path
|
||||
-mtls array
|
||||
Whether to require valid client certificate for https requests to the corresponding -httpListenAddr . This flag works only if -tls flag is set. See also -mtlsCAFile . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-mtlsCAFile array
|
||||
Optional path to TLS Root CA for verifying client certificates at the corresponding -httpListenAddr when -mtls is enabled. By default the host system TLS Root CA is used for client certificate verification. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-newrelic.maxInsertRequestSize size
|
||||
The maximum size in bytes of a single NewRelic request to /newrelic/infra/v2/metrics/events/bulk
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
|
||||
|
@ -1143,18 +1194,26 @@ Below is the output for `/path/to/vminsert -help`:
|
|||
Interval for refreshing -storageNode list behind dns+srv records. The minimum supported interval is 1s. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery . This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 2s)
|
||||
-storageNode.filter string
|
||||
An optional regexp filter for discovered -storageNode addresses according to https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery. Discovered addresses matching the filter are retained, while other addresses are ignored. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html
|
||||
-tls
|
||||
Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tls array
|
||||
Whether to enable TLS for incoming HTTP requests at the given -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set. See also -mtls
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-tlsCertFile array
|
||||
Path to file with TLS certificate for the corresponding -httpListenAddr if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsCipherSuites array
|
||||
Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsMinVersion string
|
||||
Optional minimum TLS version to use for incoming requests over HTTPS if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
|
||||
-tlsKeyFile array
|
||||
Path to file with TLS key for the corresponding -httpListenAddr if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsMinVersion array
|
||||
Optional minimum TLS version to use for the corresponding -httpListenAddr if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-usePromCompatibleNaming
|
||||
Whether to replace characters unsupported by Prometheus with underscores in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
|
||||
-version
|
||||
|
@ -1241,7 +1300,7 @@ Below is the output for `/path/to/vmselect -help`:
|
|||
-fs.disableMmap
|
||||
Whether to use pread() instead of mmap() for reading data files. By default, mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread()
|
||||
-http.connTimeout duration
|
||||
Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s)
|
||||
Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem
|
||||
-http.disableResponseCompression
|
||||
Disable compression of HTTP responses to save CPU resources. By default, compression is enabled to save network bandwidth
|
||||
-http.header.csp default-src 'self'
|
||||
|
@ -1263,10 +1322,14 @@ Below is the output for `/path/to/vmselect -help`:
|
|||
Flag value can be read from the given file when using -httpAuth.password=file:///abs/path/to/file or -httpAuth.password=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -httpAuth.password=http://host/path or -httpAuth.password=https://host/path
|
||||
-httpAuth.username string
|
||||
Username for HTTP server's Basic Auth. The authentication is disabled if empty. See also -httpAuth.password
|
||||
-httpListenAddr string
|
||||
Address to listen for http connections. See also -httpListenAddr.useProxyProtocol (default ":8481")
|
||||
-httpListenAddr.useProxyProtocol
|
||||
Whether to use proxy protocol for connections accepted at -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
|
||||
-httpListenAddr array
|
||||
Address to listen for incoming http requests. See also -httpListenAddr.useProxyProtocol
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-httpListenAddr.useProxyProtocol array
|
||||
Whether to use proxy protocol for connections accepted at the given -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-internStringCacheExpireDuration duration
|
||||
The expiry duration for caches for interned strings. See https://en.wikipedia.org/wiki/String_interning . See also -internStringMaxLen and -internStringDisableCache (default 6m0s)
|
||||
-internStringDisableCache
|
||||
|
@ -1307,6 +1370,14 @@ Below is the output for `/path/to/vmselect -help`:
|
|||
-metricsAuthKey value
|
||||
Auth key for /metrics endpoint. It must be passed via authKey query arg. It overrides httpAuth.* settings
|
||||
Flag value can be read from the given file when using -metricsAuthKey=file:///abs/path/to/file or -metricsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -metricsAuthKey=http://host/path or -metricsAuthKey=https://host/path
|
||||
-mtls array
|
||||
Whether to require valid client certificate for https requests to the corresponding -httpListenAddr . This flag works only if -tls flag is set. See also -mtlsCAFile . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-mtlsCAFile array
|
||||
Optional path to TLS Root CA for verifying client certificates at the corresponding -httpListenAddr when -mtls is enabled. By default the host system TLS Root CA is used for client certificate verification. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-pprofAuthKey value
|
||||
Auth key for /debug/pprof/* endpoints. It must be passed via authKey query arg. It overrides httpAuth.* settings
|
||||
Flag value can be read from the given file when using -pprofAuthKey=file:///abs/path/to/file or -pprofAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -pprofAuthKey=http://host/path or -pprofAuthKey=https://host/path
|
||||
|
@ -1336,7 +1407,7 @@ Below is the output for `/path/to/vmselect -help`:
|
|||
-search.denyPartialResponse
|
||||
Whether to deny partial responses if a part of -storageNode instances fail to perform queries; this trades availability over consistency; see also -search.maxQueryDuration
|
||||
-search.disableCache
|
||||
Whether to disable response caching. This may be useful during data backfilling
|
||||
Whether to disable response caching. This may be useful when ingesting historical data. See https://docs.victoriametrics.com/#backfilling . See also -search.resetRollupResultCacheOnStartup
|
||||
-search.graphiteMaxPointsPerSeries int
|
||||
The maximum number of points per series Graphite render API can return (default 1000000)
|
||||
-search.graphiteStorageStep duration
|
||||
|
@ -1416,6 +1487,8 @@ Below is the output for `/path/to/vmselect -help`:
|
|||
-search.resetCacheAuthKey value
|
||||
Optional authKey for resetting rollup cache via /internal/resetRollupResultCache call
|
||||
Flag value can be read from the given file when using -search.resetCacheAuthKey=file:///abs/path/to/file or -search.resetCacheAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -search.resetCacheAuthKey=http://host/path or -search.resetCacheAuthKey=https://host/path
|
||||
-search.resetRollupResultCacheOnStartup
|
||||
Whether to reset rollup result cache on startup. See https://docs.victoriametrics.com/#rollup-result-cache . See also -search.disableCache
|
||||
-search.setLookbackToStep
|
||||
Whether to fix lookback interval to 'step' query arg value. If set to true, the query model becomes closer to InfluxDB data model. If set to true, then -search.maxLookback and -search.maxStalenessInterval are ignored
|
||||
-search.skipSlowReplicas
|
||||
|
@ -1434,18 +1507,26 @@ Below is the output for `/path/to/vmselect -help`:
|
|||
Interval for refreshing -storageNode list behind dns+srv records. The minimum supported interval is 1s. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery . This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 2s)
|
||||
-storageNode.filter string
|
||||
An optional regexp filter for discovered -storageNode addresses according to https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery. Discovered addresses matching the filter are retained, while other addresses are ignored. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html
|
||||
-tls
|
||||
Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tls array
|
||||
Whether to enable TLS for incoming HTTP requests at the given -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set. See also -mtls
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-tlsCertFile array
|
||||
Path to file with TLS certificate for the corresponding -httpListenAddr if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsCipherSuites array
|
||||
Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsMinVersion string
|
||||
Optional minimum TLS version to use for incoming requests over HTTPS if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
|
||||
-tlsKeyFile array
|
||||
Path to file with TLS key for the corresponding -httpListenAddr if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsMinVersion array
|
||||
Optional minimum TLS version to use for the corresponding -httpListenAddr if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
-vmalert.proxyURL string
|
||||
|
@ -1519,7 +1600,7 @@ Below is the output for `/path/to/vmstorage -help`:
|
|||
-fs.disableMmap
|
||||
Whether to use pread() instead of mmap() for reading data files. By default, mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread()
|
||||
-http.connTimeout duration
|
||||
Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s)
|
||||
Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem
|
||||
-http.disableResponseCompression
|
||||
Disable compression of HTTP responses to save CPU resources. By default, compression is enabled to save network bandwidth
|
||||
-http.header.csp default-src 'self'
|
||||
|
@ -1541,10 +1622,14 @@ Below is the output for `/path/to/vmstorage -help`:
|
|||
Flag value can be read from the given file when using -httpAuth.password=file:///abs/path/to/file or -httpAuth.password=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -httpAuth.password=http://host/path or -httpAuth.password=https://host/path
|
||||
-httpAuth.username string
|
||||
Username for HTTP server's Basic Auth. The authentication is disabled if empty. See also -httpAuth.password
|
||||
-httpListenAddr string
|
||||
Address to listen for http connections. See also -httpListenAddr.useProxyProtocol (default ":8482")
|
||||
-httpListenAddr.useProxyProtocol
|
||||
Whether to use proxy protocol for connections accepted at -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
|
||||
-httpListenAddr array
|
||||
Address to listen for incoming http requests. See also -httpListenAddr.useProxyProtocol
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-httpListenAddr.useProxyProtocol array
|
||||
Whether to use proxy protocol for connections accepted at the given -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-inmemoryDataFlushInterval duration
|
||||
The interval for guaranteed saving of in-memory data to disk. The saved data survives unclean shutdowns such as OOM crash, hardware reset, SIGKILL, etc. Bigger intervals may help increase the lifetime of flash storage with limited write cycles (e.g. Raspberry PI). Smaller intervals increase disk IO load. Minimum supported value is 1s (default 5s)
|
||||
-insert.maxQueueDuration duration
|
||||
|
@ -1593,6 +1678,14 @@ Below is the output for `/path/to/vmstorage -help`:
|
|||
-metricsAuthKey value
|
||||
Auth key for /metrics endpoint. It must be passed via authKey query arg. It overrides httpAuth.* settings
|
||||
Flag value can be read from the given file when using -metricsAuthKey=file:///abs/path/to/file or -metricsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -metricsAuthKey=http://host/path or -metricsAuthKey=https://host/path
|
||||
-mtls array
|
||||
Whether to require valid client certificate for https requests to the corresponding -httpListenAddr . This flag works only if -tls flag is set. See also -mtlsCAFile . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-mtlsCAFile array
|
||||
Optional path to TLS Root CA for verifying client certificates at the corresponding -httpListenAddr when -mtls is enabled. By default the host system TLS Root CA is used for client certificate verification. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-pprofAuthKey value
|
||||
Auth key for /debug/pprof/* endpoints. It must be passed via authKey query arg. It overrides httpAuth.* settings
|
||||
Flag value can be read from the given file when using -pprofAuthKey=file:///abs/path/to/file or -pprofAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -pprofAuthKey=http://host/path or -pprofAuthKey=https://host/path
|
||||
|
@ -1672,18 +1765,26 @@ Below is the output for `/path/to/vmstorage -help`:
|
|||
The time needed for gradual closing of vminsert connections during graceful shutdown. Bigger duration reduces spikes in CPU, RAM and disk IO load on the remaining vmstorage nodes during rolling restart. Smaller duration reduces the time needed to close all the vminsert connections, thus reducing the time for graceful shutdown. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#improving-re-routing-performance-during-restart (default 25s)
|
||||
-storageDataPath string
|
||||
Path to storage data (default "vmstorage-data")
|
||||
-tls
|
||||
Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tls array
|
||||
Whether to enable TLS for incoming HTTP requests at the given -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set. See also -mtls
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-tlsCertFile array
|
||||
Path to file with TLS certificate for the corresponding -httpListenAddr if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsCipherSuites array
|
||||
Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsMinVersion string
|
||||
Optional minimum TLS version to use for incoming requests over HTTPS if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
|
||||
-tlsKeyFile array
|
||||
Path to file with TLS key for the corresponding -httpListenAddr if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-tlsMinVersion array
|
||||
Optional minimum TLS version to use for the corresponding -httpListenAddr if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
-vminsertAddr string
|
||||
|
|
|
@ -70,7 +70,7 @@ The list of MetricsQL features on top of PromQL:
|
|||
It is equivalent to `rate(node_network_receive_bytes_total[$__interval])` when used in Grafana.
|
||||
* Numeric values can contain `_` delimiters for better readability. For example, `1_234_567_890` can be used in queries instead of `1234567890`.
|
||||
* [Series selectors](https://docs.victoriametrics.com/keyConcepts.html#filtering) accept multiple `or` filters. For example, `{env="prod",job="a" or env="dev",job="b"}`
|
||||
selects series with either `{env="prod",job="a"}` or `{env="dev",job="b"}` labels.
|
||||
selects series with `{env="prod",job="a"}` or `{env="dev",job="b"}` labels.
|
||||
See [these docs](https://docs.victoriametrics.com/keyConcepts.html#filtering-by-multiple-or-filters) for details.
|
||||
* Support for `group_left(*)` and `group_right(*)` for copying all the labels from time series on the `one` side
|
||||
of [many-to-one operations](https://prometheus.io/docs/prometheus/latest/querying/operators/#many-to-one-and-one-to-many-vector-matches).
|
||||
|
@ -243,7 +243,7 @@ from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.ht
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [count_over_time](#count_over_time).
|
||||
See also [count_over_time](#count_over_time) and [share_eq_over_time](#share_eq_over_time).
|
||||
|
||||
#### count_gt_over_time
|
||||
|
||||
|
@ -253,7 +253,7 @@ from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.ht
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [count_over_time](#count_over_time).
|
||||
See also [count_over_time](#count_over_time) and [share_gt_over_time](#share_gt_over_time).
|
||||
|
||||
#### count_le_over_time
|
||||
|
||||
|
@ -263,7 +263,7 @@ from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.ht
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [count_over_time](#count_over_time).
|
||||
See also [count_over_time](#count_over_time) and [share_le_over_time](#share_le_over_time).
|
||||
|
||||
#### count_ne_over_time
|
||||
|
||||
|
@ -743,7 +743,7 @@ This function is useful for calculating SLI and SLO. Example: `share_gt_over_tim
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [share_le_over_time](#share_le_over_time).
|
||||
See also [share_le_over_time](#share_le_over_time) and [count_gt_over_time](#count_gt_over_time).
|
||||
|
||||
#### share_le_over_time
|
||||
|
||||
|
@ -756,7 +756,7 @@ the share of time series values for the last 24 hours when memory usage was belo
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [share_gt_over_time](#share_gt_over_time).
|
||||
See also [share_gt_over_time](#share_gt_over_time) and [count_le_over_time](#count_le_over_time).
|
||||
|
||||
#### share_eq_over_time
|
||||
|
||||
|
@ -766,6 +766,8 @@ from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.ht
|
|||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [count_eq_over_time](#count_eq_over_time).
|
||||
|
||||
#### stale_samples_over_time
|
||||
|
||||
`stale_samples_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number
|
||||
|
@ -792,6 +794,33 @@ Metric names are stripped from the resulting rollups. Add [keep_metric_names](#k
|
|||
|
||||
This function is supported by PromQL. See also [stddev_over_time](#stddev_over_time).
|
||||
|
||||
#### sum_eq_over_time
|
||||
|
||||
`sum_eq_over_time(series_selector[d], eq)` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values equal to `eq`
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [sum_over_time](#sum_over_time) and [count_eq_over_time](#count_eq_over_time).
|
||||
|
||||
#### sum_gt_over_time
|
||||
|
||||
`sum_gt_over_time(series_selector[d], gt)` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values bigger than `gt`
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [sum_over_time](#sum_over_time) and [count_gt_over_time](#count_gt_over_time).
|
||||
|
||||
#### sum_le_over_time
|
||||
|
||||
`sum_le_over_time(series_selector[d], le)` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values smaller or equal to `le`
|
||||
on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
|
||||
|
||||
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
|
||||
|
||||
See also [sum_over_time](#sum_over_time) and [count_le_over_time](#count_le_over_time).
|
||||
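The new `sum_*_over_time` functions above can be exercised directly through the HTTP query API. A minimal sketch, assuming a single-node VictoriaMetrics on `localhost:8428` and using `node_network_receive_bytes_total` purely as an illustrative metric:

```
# Sum all raw samples bigger than 0 over the last hour, per time series
# (sum_gt_over_time is one of the functions documented above).
curl -G 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=sum_gt_over_time(node_network_receive_bytes_total[1h], 0)'
```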
|
||||
#### sum_over_time
|
||||
|
||||
`sum_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the sum of raw sample values
|
||||
|
|
176
docs/README.md
176
docs/README.md
|
@ -15,6 +15,7 @@ title: VictoriaMetrics
|
|||
<img src="logo.webp" width="300" alt="VictoriaMetrics logo">
|
||||
|
||||
VictoriaMetrics is a fast, cost-effective and scalable monitoring solution and time series database.
|
||||
See [case studies for VictoriaMetrics](https://docs.victoriametrics.com/CaseStudies.html).
|
||||
|
||||
VictoriaMetrics is available in [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest),
|
||||
[Docker images](https://hub.docker.com/r/victoriametrics/victoria-metrics/), [Snap packages](https://snapcraft.io/victoriametrics)
|
||||
|
@ -102,40 +103,7 @@ VictoriaMetrics has the following prominent features:
|
|||
* It can store data on [NFS-based storages](https://en.wikipedia.org/wiki/Network_File_System) such as [Amazon EFS](https://aws.amazon.com/efs/)
|
||||
and [Google Filestore](https://cloud.google.com/filestore).
|
||||
|
||||
See also [various Articles about VictoriaMetrics](https://docs.victoriametrics.com/Articles.html).
|
||||
|
||||
## Case studies and talks
|
||||
|
||||
Case studies:
|
||||
|
||||
* [AbiosGaming](https://docs.victoriametrics.com/CaseStudies.html#abiosgaming)
|
||||
* [adidas](https://docs.victoriametrics.com/CaseStudies.html#adidas)
|
||||
* [Adsterra](https://docs.victoriametrics.com/CaseStudies.html#adsterra)
|
||||
* [ARNES](https://docs.victoriametrics.com/CaseStudies.html#arnes)
|
||||
* [Brandwatch](https://docs.victoriametrics.com/CaseStudies.html#brandwatch)
|
||||
* [CERN](https://docs.victoriametrics.com/CaseStudies.html#cern)
|
||||
* [COLOPL](https://docs.victoriametrics.com/CaseStudies.html#colopl)
|
||||
* [Criteo](https://docs.victoriametrics.com/CaseStudies.html#criteo)
|
||||
* [Dig Security](https://docs.victoriametrics.com/CaseStudies.html#dig-security)
|
||||
* [Fly.io](https://docs.victoriametrics.com/CaseStudies.html#flyio)
|
||||
* [German Research Center for Artificial Intelligence](https://docs.victoriametrics.com/CaseStudies.html#german-research-center-for-artificial-intelligence)
|
||||
* [Grammarly](https://docs.victoriametrics.com/CaseStudies.html#grammarly)
|
||||
* [Groove X](https://docs.victoriametrics.com/CaseStudies.html#groove-x)
|
||||
* [Idealo.de](https://docs.victoriametrics.com/CaseStudies.html#idealode)
|
||||
* [MHI Vestas Offshore Wind](https://docs.victoriametrics.com/CaseStudies.html#mhi-vestas-offshore-wind)
|
||||
* [Naver](https://docs.victoriametrics.com/CaseStudies.html#naver)
|
||||
* [Razorpay](https://docs.victoriametrics.com/CaseStudies.html#razorpay)
|
||||
* [Percona](https://docs.victoriametrics.com/CaseStudies.html#percona)
|
||||
* [Roblox](https://docs.victoriametrics.com/CaseStudies.html#roblox)
|
||||
* [Sensedia](https://docs.victoriametrics.com/CaseStudies.html#sensedia)
|
||||
* [Smarkets](https://docs.victoriametrics.com/CaseStudies.html#smarkets)
|
||||
* [Synthesio](https://docs.victoriametrics.com/CaseStudies.html#synthesio)
|
||||
* [Wedos.com](https://docs.victoriametrics.com/CaseStudies.html#wedoscom)
|
||||
* [Wix.com](https://docs.victoriametrics.com/CaseStudies.html#wixcom)
|
||||
* [Zerodha](https://docs.victoriametrics.com/CaseStudies.html#zerodha)
|
||||
* [zhihu](https://docs.victoriametrics.com/CaseStudies.html#zhihu)
|
||||
|
||||
See also [articles and slides about VictoriaMetrics from our users](https://docs.victoriametrics.com/Articles.html#third-party-articles-and-slides-about-victoriametrics)
|
||||
See [case studies for VictoriaMetrics](https://docs.victoriametrics.com/CaseStudies.html) and [various Articles about VictoriaMetrics](https://docs.victoriametrics.com/Articles.html).
|
||||
|
||||
## Operation
|
||||
|
||||
|
@ -500,11 +468,18 @@ Prometheus doesn't drop data during VictoriaMetrics restart. See [this article](
|
|||
|
||||
## How to scrape Prometheus exporters such as [node-exporter](https://github.com/prometheus/node_exporter)
|
||||
|
||||
VictoriaMetrics can be used as drop-in replacement for Prometheus for scraping targets configured in `prometheus.yml` config file according to [the specification](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file). Just set `-promscrape.config` command-line flag to the path to `prometheus.yml` config - and VictoriaMetrics should start scraping the configured targets. If the provided configuration file contains [unsupported options](https://docs.victoriametrics.com/vmagent.html#unsupported-prometheus-config-sections), then either delete them from the file or just pass `-promscrape.config.strictParse=false` command-line flag to VictoriaMetrics, so it will ignore unsupported options.
|
||||
VictoriaMetrics can be used as drop-in replacement for Prometheus for scraping targets configured in `prometheus.yml` config file
|
||||
according to [the specification](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file).
|
||||
Just set `-promscrape.config` command-line flag to the path to `prometheus.yml` config - and VictoriaMetrics should start scraping the configured targets.
|
||||
If the provided configuration file contains [unsupported options](https://docs.victoriametrics.com/vmagent.html#unsupported-prometheus-config-sections),
|
||||
then either delete them from the file or just pass `-promscrape.config.strictParse=false` command-line flag to VictoriaMetrics, so it will ignore unsupported options.
|
||||
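A minimal sketch of the setup described above, assuming an existing Prometheus config at `/etc/prometheus/prometheus.yml` (the path is illustrative):

```
# Start single-node VictoriaMetrics and scrape the targets from prometheus.yml,
# ignoring any unsupported config sections.
./victoria-metrics -promscrape.config=/etc/prometheus/prometheus.yml \
  -promscrape.config.strictParse=false
```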
|
||||
The file pointed by `-promscrape.config` may contain `%{ENV_VAR}` placeholders, which are substituted by the corresponding `ENV_VAR` environment variable values.
|
||||
|
||||
See [the list of supported service discovery types for Prometheus scrape targets](https://docs.victoriametrics.com/sd_configs.html).
|
||||
See also:
|
||||
|
||||
- [scrape config examples](https://docs.victoriametrics.com/scrape_config_examples/)
|
||||
- [the list of supported service discovery types for Prometheus scrape targets](https://docs.victoriametrics.com/sd_configs.html)
|
||||
|
||||
VictoriaMetrics also supports [importing data in Prometheus exposition format](#how-to-import-data-in-prometheus-exposition-format).
|
||||
|
||||
|
@ -512,8 +487,9 @@ See also [vmagent](https://docs.victoriametrics.com/vmagent.html), which can be
|
|||
|
||||
## How to send data from DataDog agent
|
||||
|
||||
VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/) or [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/)
|
||||
via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics) at `/datadog/api/v2/series` path.
|
||||
VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/), [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/) and
|
||||
[DataDog Lambda Extension](https://docs.datadoghq.com/serverless/libraries_integrations/extension/)
|
||||
via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics) at `/datadog/api/v2/series` or via "sketches" API at `/datadog/api/beta/sketches`.
|
||||
|
||||
### Sending metrics to VictoriaMetrics
|
||||
|
||||
|
@ -539,11 +515,11 @@ add the following line:
|
|||
dd_url: http://victoriametrics:8428/datadog
|
||||
```
|
||||
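The agent has to be restarted before the new `dd_url` takes effect. A hedged sketch for systemd-based Linux hosts (adjust to your installation method):

```
# Restart the DataDog agent so it picks up the updated datadog.yaml.
sudo systemctl restart datadog-agent
```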
|
||||
[vmagent](https://docs.victoriametrics.com/vmagent.html) also can accept Datadog metrics format. Depending on where vmagent will forward data,
|
||||
[vmagent](https://docs.victoriametrics.com/vmagent.html) can also accept data in DataDog metrics format. Depending on where vmagent will forward data,
|
||||
pick [single-node or cluster URL](https://docs.victoriametrics.com/url-examples.html#datadog) formats.
|
||||
|
||||
### Sending metrics to Datadog and VictoriaMetrics
|
||||
|
||||
### Sending metrics to DataDog and VictoriaMetrics
|
||||
|
||||
DataDog allows configuring [Dual Shipping](https://docs.datadoghq.com/agent/guide/dual-shipping/) for metrics
|
||||
sending via ENV variable `DD_ADDITIONAL_ENDPOINTS` or via configuration file `additional_endpoints`.
|
||||
|
||||
|
@ -567,6 +543,19 @@ additional_endpoints:
|
|||
- apikey
|
||||
```
|
||||
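For the environment-variable variant of Dual Shipping mentioned above, a sketch of the setting is shown below; the JSON map shape is an assumption based on DataDog's dual shipping documentation and should be verified against your agent version:

```
# Route an additional copy of the metrics to VictoriaMetrics via dual shipping.
export DD_ADDITIONAL_ENDPOINTS='{"http://victoriametrics:8428/datadog": ["apikey"]}'
```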
|
||||
### Send metrics via Serverless DataDog plugin
|
||||
|
||||
Disable logs (log ingestion is not supported by VictoriaMetrics) and set a custom endpoint in `serverless.yaml`:
|
||||
|
||||
```
|
||||
custom:
|
||||
datadog:
|
||||
enableDDLogs: false # Disable DD logs, which are not supported by VictoriaMetrics
|
||||
apiKey: fakekey # Set any key, otherwise plugin fails
|
||||
provider:
|
||||
environment:
|
||||
DD_DD_URL: <<vm-url>>/datadog # VictoriaMetrics endpoint for DataDog
|
||||
```
|
||||
|
||||
### Send via cURL
|
||||
|
||||
|
@ -1049,7 +1038,7 @@ to your needs or when testing bugfixes.
|
|||
|
||||
### Development build
|
||||
|
||||
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.20.
|
||||
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.22.
|
||||
1. Run `make victoria-metrics` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
|
||||
It builds `victoria-metrics` binary and puts it into the `bin` folder.
|
||||
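A hedged sketch of the build-and-run flow from the steps above (the `-version` flag is listed among the supported command-line flags):

```
# Build the binary and verify it runs.
make victoria-metrics
./bin/victoria-metrics -version
```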
|
||||
|
@ -1065,7 +1054,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
|
|||
|
||||
### Development ARM build
|
||||
|
||||
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.20.
|
||||
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.22.
|
||||
1. Run `make victoria-metrics-linux-arm` or `make victoria-metrics-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
|
||||
It builds `victoria-metrics-linux-arm` or `victoria-metrics-linux-arm64` binary respectively and puts it into the `bin` folder.
|
||||
|
||||
|
@ -1079,7 +1068,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
|
|||
|
||||
`Pure Go` mode builds only Go code without [cgo](https://golang.org/cmd/cgo/) dependencies.
|
||||
|
||||
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.20.
|
||||
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.22.
|
||||
1. Run `make victoria-metrics-pure` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
|
||||
It builds `victoria-metrics-pure` binary and puts it into the `bin` folder.
|
||||
|
||||
|
@ -1527,7 +1516,7 @@ Set HTTP request header `Content-Encoding: gzip` when sending gzip-compressed da
|
|||
VictoriaMetrics accepts data in JSON line format at [/api/v1/import](#how-to-import-data-in-json-line-format)
|
||||
and exports data in this format at [/api/v1/export](#how-to-export-data-in-json-line-format).
|
||||
|
||||
The format follows [JSON streaming concept](http://ndjson.org/), e.g. each line contains JSON object with metrics data in the following format:
|
||||
The format follows the [JSON streaming concept](https://jsonlines.org/), i.e. each line contains a JSON object with metrics data in the following format:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -1950,13 +1939,26 @@ Additionally, alerting can be set up with the following tools:
|
|||
* With Promxy - see [the corresponding docs](https://github.com/jacksontj/promxy/blob/master/README.md#how-do-i-use-alertingrecording-rules-in-promxy).
|
||||
* With Grafana - see [the corresponding docs](https://grafana.com/docs/alerting/rules/).
|
||||
|
||||
## mTLS protection
|
||||
|
||||
By default `VictoriaMetrics` accepts http requests at port `8428` (this port can be changed via the `-httpListenAddr` command-line flag).
|
||||
[Enterprise version of VictoriaMetrics](https://docs.victoriametrics.com/enterprise.html) supports the ability to accept [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication)
|
||||
requests at this port, by specifying `-tls` and `-mtls` command-line flags. For example, the following command runs `VictoriaMetrics`, which accepts only mTLS requests at port `8428`:
|
||||
|
||||
```
|
||||
./victoria-metrics -tls -mtls
|
||||
```
|
||||
|
||||
By default, the system-wide [TLS Root CA](https://en.wikipedia.org/wiki/Root_certificate) is used for verifying client certificates if the `-mtls` command-line flag is specified.
|
||||
It is possible to specify a custom TLS Root CA via the `-mtlsCAFile` command-line flag.
|
||||
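A sketch combining the flags above; `-mtls` and `-mtlsCAFile` are Enterprise-only per the docs, and all file paths are placeholders:

```
# Serve HTTPS on -httpListenAddr and require client certificates signed by a custom CA.
./victoria-metrics -tls \
  -tlsCertFile=/path/to/server.crt \
  -tlsKeyFile=/path/to/server.key \
  -mtls \
  -mtlsCAFile=/path/to/clients-root-ca.crt
```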
|
||||
## Security
|
||||
|
||||
General security recommendations:
|
||||
|
||||
- All the VictoriaMetrics components must run in protected private networks without direct access from untrusted networks such as the Internet.
|
||||
The exception is [vmauth](https://docs.victoriametrics.com/vmauth.html) and [vmgateway](https://docs.victoriametrics.com/vmgateway.html),
|
||||
which are indended for serving public requests.
|
||||
which are intended for serving public requests and performing authorization with [TLS termination](https://en.wikipedia.org/wiki/TLS_termination_proxy).
|
||||
- All the requests from untrusted networks to VictoriaMetrics components must go through auth proxy such as [vmauth](https://docs.victoriametrics.com/vmauth.html)
|
||||
or [vmgateway](https://docs.victoriametrics.com/vmgateway.html). The proxy must be set up with proper authentication and authorization.
|
||||
- Prefer using lists of allowed API endpoints, while disallowing access to other endpoints when configuring [vmauth](https://docs.victoriametrics.com/vmauth.html)
|
||||
|
@ -1967,7 +1969,8 @@ General security recommendations:
|
|||
|
||||
VictoriaMetrics provides the following security-related command-line flags:
|
||||
|
||||
* `-tls`, `-tlsCertFile` and `-tlsKeyFile` for switching from HTTP to HTTPS.
|
||||
* `-tls`, `-tlsCertFile` and `-tlsKeyFile` for switching from HTTP to HTTPS at `-httpListenAddr` (8428 by default).
|
||||
* `-mtls` and `-mtlsCAFile` for enabling [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) for requests to `-httpListenAddr`. See [these docs](#mtls-protection).
|
||||
* `-httpAuth.username` and `-httpAuth.password` for protecting all the HTTP endpoints
|
||||
with [HTTP Basic Authentication](https://en.wikipedia.org/wiki/Basic_access_authentication).
|
||||
* `-deleteAuthKey` for protecting `/api/v1/admin/tsdb/delete_series` endpoint. See [how to delete time series](#how-to-delete-time-series).
|
||||
|
@ -2012,12 +2015,17 @@ mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
|
|||
## Monitoring
|
||||
|
||||
VictoriaMetrics exports internal metrics in Prometheus exposition format at `/metrics` page.
|
||||
These metrics can be scraped via [vmagent](https://docs.victoriametrics.com/vmagent.html) or Prometheus.
|
||||
These metrics can be scraped via [vmagent](https://docs.victoriametrics.com/vmagent.html) or any other Prometheus-compatible scraper.
|
||||
|
||||
If you use Google Cloud Managed Prometheus for scraping metrics from VictoriaMetrics components, then pass `-metrics.exposeMetadata`
|
||||
command-line flag to them, so they add `TYPE` and `HELP` comments for each exposed metric at the `/metrics` page.
|
||||
See [these docs](https://cloud.google.com/stackdriver/docs/managed-prometheus/troubleshooting#missing-metric-type) for details.
|
||||
|
||||
Alternatively, single-node VictoriaMetrics can self-scrape the metrics when `-selfScrapeInterval` command-line flag is
|
||||
set to a duration greater than 0. For example, `-selfScrapeInterval=10s` would enable self-scraping of the `/metrics` page
|
||||
with a 10-second interval.
|
||||
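A minimal sketch of self-scraping plus a manual spot check, assuming the default `:8428` listen address:

```
# Enable self-scraping of the /metrics page every 10 seconds,
# then inspect the exported metrics manually.
./victoria-metrics -selfScrapeInterval=10s
curl -s http://localhost:8428/metrics | head
```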
|
||||
_Please note, never use loadbalancer address for scraping metrics. All monitored components should be scraped directly by their address._
|
||||
_Please note: never use a load balancer address for scraping metrics. All the monitored components should be scraped directly by their address._
|
||||
|
||||
Official Grafana dashboards available for [single-node](https://grafana.com/grafana/dashboards/10229)
|
||||
and [clustered](https://grafana.com/grafana/dashboards/11176) VictoriaMetrics.
|
||||
|
@ -2280,6 +2288,21 @@ Sometimes it is needed to remove such caches on the next startup. This can be do
|
|||
In this case VictoriaMetrics will automatically remove all the caches on the next start.
|
||||
See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1447) for details.
|
||||
|
||||
It is also possible to remove the [rollup result cache](#rollup-result-cache) on startup by passing the `-search.resetRollupResultCacheOnStartup` command-line flag to VictoriaMetrics.
|
||||
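A one-line sketch of the startup-time reset described above:

```
# Drop the rollup result cache once, on the next start.
./victoria-metrics -search.resetRollupResultCacheOnStartup
```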
|
||||
## Rollup result cache
|
||||
|
||||
VictoriaMetrics caches query responses by default. This allows increasing performance for repeated queries
|
||||
to [`/api/v1/query`](https://docs.victoriametrics.com/keyconcepts/#instant-query) and [`/api/v1/query_range`](https://docs.victoriametrics.com/keyconcepts/#range-query)
|
||||
with the increasing `time`, `start` and `end` query args.
|
||||
|
||||
This cache may work incorrectly when ingesting historical data into VictoriaMetrics. See [these docs](#backfilling) for details.
|
||||
|
||||
The rollup cache can be disabled either globally by running VictoriaMetrics with `-search.disableCache` command-line flag
|
||||
or on a per-query basis by passing `nocache=1` query arg to `/api/v1/query` and `/api/v1/query_range`.
|
||||
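A sketch of the per-query opt-out, assuming a single-node VictoriaMetrics on `localhost:8428` and `up` as a stand-in query:

```
# Bypass the rollup result cache for a single instant query.
curl 'http://localhost:8428/api/v1/query?query=up&nocache=1'
```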
|
||||
See also [cache removal docs](#cache-removal).
|
||||
|
||||
## Cache tuning
|
||||
|
||||
VictoriaMetrics uses various in-memory caches for faster data ingestion and query performance.
|
||||
|
@ -2344,11 +2367,12 @@ VictoriaMetrics accepts historical data in arbitrary order of time via [any supp
|
|||
See [how to backfill data with recording rules in vmalert](https://docs.victoriametrics.com/vmalert.html#rules-backfilling).
|
||||
Make sure that the configured `-retentionPeriod` covers timestamps for the backfilled data.
|
||||
|
||||
It is recommended disabling query cache with `-search.disableCache` command-line flag when writing
|
||||
It is recommended to disable the [query cache](#rollup-result-cache) with the `-search.disableCache` command-line flag when writing
|
||||
historical data with timestamps from the past, since the cache assumes that the data is written with
|
||||
the current timestamps. Query cache can be enabled after the backfilling is complete.
|
||||
|
||||
An alternative solution is to query [/internal/resetRollupResultCache](https://docs.victoriametrics.com/url-examples.html#internalresetrollupresultcache) handler after the backfilling is complete. This will reset the query cache, which could contain incomplete data cached during the backfilling.
|
||||
An alternative solution is to query [/internal/resetRollupResultCache](https://docs.victoriametrics.com/url-examples.html#internalresetrollupresultcache)
|
||||
after the backfilling is complete. This will reset the [query cache](#rollup-result-cache), which could contain incomplete data cached during the backfilling.
|
||||
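A sketch of the cache reset call, assuming the default single-node address; if `-search.resetCacheAuthKey` is set, the key is assumed to be passed via the `authKey` query arg, following the convention of the other `*AuthKey` flags:

```
# Reset the rollup result cache after backfilling is complete.
curl 'http://localhost:8428/internal/resetRollupResultCache'
```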
|
||||
Yet another solution is to increase `-search.cacheTimestampOffset` flag value in order to disable caching
|
||||
for data with timestamps close to the current time. Single-node VictoriaMetrics automatically resets response
|
||||
|
@ -2587,7 +2611,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-graphiteTrimTimestamp duration
|
||||
Trim timestamps for Graphite data to this duration. Minimum practical duration is 1s. Higher duration (i.e. 1m) may be used for reducing disk space usage for timestamp data (default 1s)
|
||||
-http.connTimeout duration
|
||||
Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s)
|
||||
Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem
|
||||
-http.disableResponseCompression
|
||||
Disable compression of HTTP responses to save CPU resources. By default, compression is enabled to save network bandwidth
|
||||
-http.header.csp default-src 'self'
|
||||
|
@ -2609,10 +2633,14 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
Flag value can be read from the given file when using -httpAuth.password=file:///abs/path/to/file or -httpAuth.password=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -httpAuth.password=http://host/path or -httpAuth.password=https://host/path
|
||||
-httpAuth.username string
|
||||
Username for HTTP server's Basic Auth. The authentication is disabled if empty. See also -httpAuth.password
|
||||
-httpListenAddr string
|
||||
TCP address to listen for http connections. See also -tls and -httpListenAddr.useProxyProtocol (default ":8428")
|
||||
-httpListenAddr.useProxyProtocol
|
||||
Whether to use proxy protocol for connections accepted at -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
|
||||
-httpListenAddr array
|
||||
TCP addresses to listen for incoming http requests. See also -tls and -httpListenAddr.useProxyProtocol
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-httpListenAddr.useProxyProtocol array
|
||||
Whether to use proxy protocol for connections accepted at the corresponding -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-import.maxLineLen size
|
||||
The maximum length in bytes of a single line accepted by /api/v1/import; the line length can be limited with 'max_rows_per_line' query arg passed to /api/v1/export
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 10485760)
|
||||
|
@ -2692,6 +2720,14 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-metricsAuthKey value
|
||||
Auth key for /metrics endpoint. It must be passed via authKey query arg. It overrides httpAuth.* settings
|
||||
Flag value can be read from the given file when using -metricsAuthKey=file:///abs/path/to/file or -metricsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -metricsAuthKey=http://host/path or -metricsAuthKey=https://host/path
|
||||
-mtls array
|
||||
Whether to require valid client certificate for https requests to the corresponding -httpListenAddr . This flag works only if -tls flag is set. See also -mtlsCAFile . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
|
||||
Supports array of values separated by comma or specified via multiple flags.
|
||||
Empty values are set to false.
|
||||
-mtlsCAFile array
|
||||
Optional path to TLS Root CA for verifying client certificates at the corresponding -httpListenAddr when -mtls is enabled. By default the host system TLS Root CA is used for client certificate verification. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
|
||||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
|
||||
-newrelic.maxInsertRequestSize size
|
||||
The maximum size in bytes of a single NewRelic request to /newrelic/infra/v2/metrics/events/bulk
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
|
||||
|
@ -2849,7 +2885,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-search.disableAutoCacheReset
|
||||
Whether to disable automatic response cache reset if a sample with timestamp outside -search.cacheTimestampOffset is inserted into VictoriaMetrics
|
||||
-search.disableCache
|
||||
Whether to disable response caching. This may be useful during data backfilling
|
||||
Whether to disable response caching. This may be useful when ingesting historical data. See https://docs.victoriametrics.com/#backfilling . See also -search.resetRollupResultCacheOnStartup
|
||||
-search.graphiteMaxPointsPerSeries int
|
||||
The maximum number of points per series Graphite render API can return (default 1000000)
|
||||
-search.graphiteStorageStep duration
|
||||
|
@ -2933,6 +2969,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-search.resetCacheAuthKey value
|
||||
Optional authKey for resetting rollup cache via /internal/resetRollupResultCache call
|
||||
Flag value can be read from the given file when using -search.resetCacheAuthKey=file:///abs/path/to/file or -search.resetCacheAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -search.resetCacheAuthKey=http://host/path or -search.resetCacheAuthKey=https://host/path
|
||||
-search.resetRollupResultCacheOnStartup
|
||||
Whether to reset rollup result cache on startup. See https://docs.victoriametrics.com/#rollup-result-cache . See also -search.disableCache
|
||||
-search.setLookbackToStep
|
||||
Whether to fix lookback interval to 'step' query arg value. If set to true, the query model becomes closer to InfluxDB data model. If set to true, then -search.maxLookback and -search.maxStalenessInterval are ignored
|
||||
-search.treatDotsAsIsInRegexps
|
||||
|
@ -2984,18 +3022,26 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
  Whether to drop all the input samples after the aggregation with -streamAggr.config. By default, only aggregated samples are dropped, while the remaining samples are stored in the database. See also -streamAggr.keepInput and https://docs.victoriametrics.com/stream-aggregation.html
-streamAggr.keepInput
  Whether to keep all the input samples after the aggregation with -streamAggr.config. By default, only aggregated samples are dropped, while the remaining samples are stored in the database. See also -streamAggr.dropInput and https://docs.victoriametrics.com/stream-aggregation.html
-tls
  Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
-tlsCertFile string
  Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
-tls array
  Whether to enable TLS for incoming HTTP requests at the given -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set. See also -mtls
  Supports array of values separated by comma or specified via multiple flags.
  Empty values are set to false.
-tlsCertFile array
  Path to file with TLS certificate for the corresponding -httpListenAddr if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
  Supports an array of values separated by comma or specified via multiple flags.
  Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-tlsCipherSuites array
  Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
  Supports an array of values separated by comma or specified via multiple flags.
  Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-tlsKeyFile string
  Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
-tlsMinVersion string
  Optional minimum TLS version to use for incoming requests over HTTPS if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
-tlsKeyFile array
  Path to file with TLS key for the corresponding -httpListenAddr if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
  Supports an array of values separated by comma or specified via multiple flags.
  Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-tlsMinVersion array
  Optional minimum TLS version to use for the corresponding -httpListenAddr if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
  Supports an array of values separated by comma or specified via multiple flags.
  Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-usePromCompatibleNaming
  Whether to replace characters unsupported by Prometheus with underscores in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
-version
@ -23,6 +23,7 @@ aliases:
<img src="logo.webp" width="300" alt="VictoriaMetrics logo">

VictoriaMetrics is a fast, cost-effective and scalable monitoring solution and time series database.
See [case studies for VictoriaMetrics](https://docs.victoriametrics.com/CaseStudies.html).

VictoriaMetrics is available in [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest),
[Docker images](https://hub.docker.com/r/victoriametrics/victoria-metrics/), [Snap packages](https://snapcraft.io/victoriametrics)
@ -110,40 +111,7 @@ VictoriaMetrics has the following prominent features:
* It can store data on [NFS-based storages](https://en.wikipedia.org/wiki/Network_File_System) such as [Amazon EFS](https://aws.amazon.com/efs/)
  and [Google Filestore](https://cloud.google.com/filestore).

See also [various Articles about VictoriaMetrics](https://docs.victoriametrics.com/Articles.html).

## Case studies and talks

Case studies:

* [AbiosGaming](https://docs.victoriametrics.com/CaseStudies.html#abiosgaming)
* [adidas](https://docs.victoriametrics.com/CaseStudies.html#adidas)
* [Adsterra](https://docs.victoriametrics.com/CaseStudies.html#adsterra)
* [ARNES](https://docs.victoriametrics.com/CaseStudies.html#arnes)
* [Brandwatch](https://docs.victoriametrics.com/CaseStudies.html#brandwatch)
* [CERN](https://docs.victoriametrics.com/CaseStudies.html#cern)
* [COLOPL](https://docs.victoriametrics.com/CaseStudies.html#colopl)
* [Criteo](https://docs.victoriametrics.com/CaseStudies.html#criteo)
* [Dig Security](https://docs.victoriametrics.com/CaseStudies.html#dig-security)
* [Fly.io](https://docs.victoriametrics.com/CaseStudies.html#flyio)
* [German Research Center for Artificial Intelligence](https://docs.victoriametrics.com/CaseStudies.html#german-research-center-for-artificial-intelligence)
* [Grammarly](https://docs.victoriametrics.com/CaseStudies.html#grammarly)
* [Groove X](https://docs.victoriametrics.com/CaseStudies.html#groove-x)
* [Idealo.de](https://docs.victoriametrics.com/CaseStudies.html#idealode)
* [MHI Vestas Offshore Wind](https://docs.victoriametrics.com/CaseStudies.html#mhi-vestas-offshore-wind)
* [Naver](https://docs.victoriametrics.com/CaseStudies.html#naver)
* [Razorpay](https://docs.victoriametrics.com/CaseStudies.html#razorpay)
* [Percona](https://docs.victoriametrics.com/CaseStudies.html#percona)
* [Roblox](https://docs.victoriametrics.com/CaseStudies.html#roblox)
* [Sensedia](https://docs.victoriametrics.com/CaseStudies.html#sensedia)
* [Smarkets](https://docs.victoriametrics.com/CaseStudies.html#smarkets)
* [Synthesio](https://docs.victoriametrics.com/CaseStudies.html#synthesio)
* [Wedos.com](https://docs.victoriametrics.com/CaseStudies.html#wedoscom)
* [Wix.com](https://docs.victoriametrics.com/CaseStudies.html#wixcom)
* [Zerodha](https://docs.victoriametrics.com/CaseStudies.html#zerodha)
* [zhihu](https://docs.victoriametrics.com/CaseStudies.html#zhihu)

See also [articles and slides about VictoriaMetrics from our users](https://docs.victoriametrics.com/Articles.html#third-party-articles-and-slides-about-victoriametrics)
See [case studies for VictoriaMetrics](https://docs.victoriametrics.com/CaseStudies.html) and [various Articles about VictoriaMetrics](https://docs.victoriametrics.com/Articles.html).

## Operation
@ -508,11 +476,18 @@ Prometheus doesn't drop data during VictoriaMetrics restart. See [this article](
## How to scrape Prometheus exporters such as [node-exporter](https://github.com/prometheus/node_exporter)

VictoriaMetrics can be used as a drop-in replacement for Prometheus for scraping targets configured in `prometheus.yml` config file according to [the specification](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file). Just set `-promscrape.config` command-line flag to the path to `prometheus.yml` config - and VictoriaMetrics should start scraping the configured targets. If the provided configuration file contains [unsupported options](https://docs.victoriametrics.com/vmagent.html#unsupported-prometheus-config-sections), then either delete them from the file or just pass `-promscrape.config.strictParse=false` command-line flag to VictoriaMetrics, so it will ignore unsupported options.
VictoriaMetrics can be used as a drop-in replacement for Prometheus for scraping targets configured in `prometheus.yml` config file
according to [the specification](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file).
Just set the `-promscrape.config` command-line flag to the path to the `prometheus.yml` config - and VictoriaMetrics should start scraping the configured targets.
If the provided configuration file contains [unsupported options](https://docs.victoriametrics.com/vmagent.html#unsupported-prometheus-config-sections),
then either delete them from the file or just pass the `-promscrape.config.strictParse=false` command-line flag to VictoriaMetrics, so it will ignore unsupported options.

The file pointed by `-promscrape.config` may contain `%{ENV_VAR}` placeholders, which are substituted by the corresponding `ENV_VAR` environment variable values.
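For example, a minimal sketch of such a setup might look as follows (the config path, target address and `NODE_EXPORTER_ADDR` variable are illustrative, not taken from the docs):

```yaml
# /etc/victoriametrics/prometheus.yml
scrape_configs:
  - job_name: node-exporter
    scrape_interval: 15s
    static_configs:
      - targets:
          # %{NODE_EXPORTER_ADDR} is substituted with the NODE_EXPORTER_ADDR environment variable value
          - "%{NODE_EXPORTER_ADDR}"
```

```sh
# Start VictoriaMetrics and point it at the scrape config above
NODE_EXPORTER_ADDR=localhost:9100 ./victoria-metrics -promscrape.config=/etc/victoriametrics/prometheus.yml
```
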
See [the list of supported service discovery types for Prometheus scrape targets](https://docs.victoriametrics.com/sd_configs.html).
See also:

- [scrape config examples](https://docs.victoriametrics.com/scrape_config_examples/)
- [the list of supported service discovery types for Prometheus scrape targets](https://docs.victoriametrics.com/sd_configs.html)

VictoriaMetrics also supports [importing data in Prometheus exposition format](#how-to-import-data-in-prometheus-exposition-format).
@ -520,8 +495,9 @@ See also [vmagent](https://docs.victoriametrics.com/vmagent.html), which can be
## How to send data from DataDog agent

VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/) or [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/)
via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics) at `/datadog/api/v2/series` path.
VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/), [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/) and
[DataDog Lambda Extension](https://docs.datadoghq.com/serverless/libraries_integrations/extension/)
via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics) at `/datadog/api/v2/series` or via "sketches" API at `/datadog/api/beta/sketches`.

### Sending metrics to VictoriaMetrics
@ -547,11 +523,11 @@ add the following line:
dd_url: http://victoriametrics:8428/datadog
```

[vmagent](https://docs.victoriametrics.com/vmagent.html) also can accept Datadog metrics format. Depending on where vmagent will forward data,
[vmagent](https://docs.victoriametrics.com/vmagent.html) can also accept DataDog metrics format. Depending on where vmagent will forward data,
pick [single-node or cluster URL](https://docs.victoriametrics.com/url-examples.html#datadog) formats.

### Sending metrics to Datadog and VictoriaMetrics
### Sending metrics to DataDog and VictoriaMetrics

DataDog allows configuring [Dual Shipping](https://docs.datadoghq.com/agent/guide/dual-shipping/) for metrics
sending via the ENV variable `DD_ADDITIONAL_ENDPOINTS` or via the configuration file option `additional_endpoints`.
@ -575,6 +551,19 @@ additional_endpoints:
- apikey
```
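The same dual shipping can alternatively be expressed via the `DD_ADDITIONAL_ENDPOINTS` environment variable mentioned above; a sketch, assuming the JSON map-of-URL-to-API-keys value format from DataDog's dual-shipping docs (the API key is a placeholder):

```sh
# Hypothetical example: ship metrics both to DataDog and to VictoriaMetrics
DD_ADDITIONAL_ENDPOINTS='{"http://victoriametrics:8428/datadog": ["apikey"]}'
```
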
### Send metrics via Serverless DataDog plugin

Disable logs (log ingestion is not supported by VictoriaMetrics) and set a custom endpoint in `serverless.yaml`:

```
custom:
  datadog:
    enableDDLogs: false # Disable DataDog logs, since they are not supported by VictoriaMetrics
    apiKey: fakekey # Set any key, otherwise the plugin fails
provider:
  environment:
    DD_DD_URL: <<vm-url>>/datadog # VictoriaMetrics endpoint for DataDog
```

### Send via cURL
@ -1057,7 +1046,7 @@ to your needs or when testing bugfixes.
### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.20.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.22.
1. Run `make victoria-metrics` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds `victoria-metrics` binary and puts it into the `bin` folder.
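For reference, the steps above roughly amount to the following commands (the clone location and the `-version` sanity check are only an illustration):

```sh
# Clone the repository and build the single-node binary
git clone https://github.com/VictoriaMetrics/VictoriaMetrics
cd VictoriaMetrics
make victoria-metrics
./bin/victoria-metrics -version
```
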
@ -1073,7 +1062,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
### Development ARM build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.20.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.22.
1. Run `make victoria-metrics-linux-arm` or `make victoria-metrics-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds `victoria-metrics-linux-arm` or `victoria-metrics-linux-arm64` binary respectively and puts it into the `bin` folder.
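For instance, a 64-bit ARM cross-build might look like this (the `file` check is merely an optional sanity check, not part of the documented steps):

```sh
# Cross-build for 64-bit ARM; the binary lands in ./bin and can be copied to the target host
make victoria-metrics-linux-arm64
file bin/victoria-metrics-linux-arm64   # should report an ARM aarch64 executable
```
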
@ -1087,7 +1076,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
`Pure Go` mode builds only Go code without [cgo](https://golang.org/cmd/cgo/) dependencies.

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.20.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.22.
1. Run `make victoria-metrics-pure` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds `victoria-metrics-pure` binary and puts it into the `bin` folder.
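A sketch of the pure Go build; the `go version -m` check is an optional way to confirm the cgo-free build on recent Go versions, not part of the documented steps:

```sh
make victoria-metrics-pure
go version -m bin/victoria-metrics-pure | grep CGO_ENABLED   # expect CGO_ENABLED=0 among the build settings
```
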
@ -1535,7 +1524,7 @@ Set HTTP request header `Content-Encoding: gzip` when sending gzip-compressed da
VictoriaMetrics accepts data in JSON line format at [/api/v1/import](#how-to-import-data-in-json-line-format)
and exports data in this format at [/api/v1/export](#how-to-export-data-in-json-line-format).

The format follows [JSON streaming concept](http://ndjson.org/), e.g. each line contains JSON object with metrics data in the following format:
The format follows the [JSON streaming concept](https://jsonlines.org/), i.e. each line contains a JSON object with metrics data in the following format:

```json
{
@ -1958,13 +1947,26 @@ Additionally, alerting can be set up with the following tools:
* With Promxy - see [the corresponding docs](https://github.com/jacksontj/promxy/blob/master/README.md#how-do-i-use-alertingrecording-rules-in-promxy).
* With Grafana - see [the corresponding docs](https://grafana.com/docs/alerting/rules/).

## mTLS protection

By default `VictoriaMetrics` accepts http requests at port `8428` (this port can be changed via the `-httpListenAddr` command-line flag).
[Enterprise version of VictoriaMetrics](https://docs.victoriametrics.com/enterprise.html) supports the ability to accept [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication)
requests at this port by specifying the `-tls` and `-mtls` command-line flags. For example, the following command runs `VictoriaMetrics`, which accepts only mTLS requests at port `8428`:

```
./victoria-metrics -tls -mtls
```

By default the system-wide [TLS Root CA](https://en.wikipedia.org/wiki/Root_certificate) is used for verifying client certificates if the `-mtls` command-line flag is specified.
It is possible to specify a custom TLS Root CA via the `-mtlsCAFile` command-line flag.
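For illustration, a fuller invocation together with a client request might look as follows (requires the Enterprise binary as noted above; all file paths are placeholders, not values from the docs):

```sh
# Serve HTTPS on -httpListenAddr and require client certificates signed by a custom CA
./victoria-metrics -tls -tlsCertFile=/path/to/server.crt -tlsKeyFile=/path/to/server.key \
  -mtls -mtlsCAFile=/path/to/clients-ca.crt

# A client then has to present a certificate signed by that CA
curl --cert /path/to/client.crt --key /path/to/client.key --cacert /path/to/server-ca.crt \
  https://localhost:8428/metrics
```
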
## Security

General security recommendations:

- All the VictoriaMetrics components must run in protected private networks without direct access from untrusted networks such as the Internet.
  The exception is [vmauth](https://docs.victoriametrics.com/vmauth.html) and [vmgateway](https://docs.victoriametrics.com/vmgateway.html),
  which are intended for serving public requests.
  which are intended for serving public requests and performing authorization with [TLS termination](https://en.wikipedia.org/wiki/TLS_termination_proxy).
- All the requests from untrusted networks to VictoriaMetrics components must go through an auth proxy such as [vmauth](https://docs.victoriametrics.com/vmauth.html)
  or [vmgateway](https://docs.victoriametrics.com/vmgateway.html). The proxy must be set up with proper authentication and authorization.
- Prefer using lists of allowed API endpoints, while disallowing access to other endpoints when configuring [vmauth](https://docs.victoriametrics.com/vmauth.html)
@ -1975,7 +1977,8 @@ General security recommendations:
VictoriaMetrics provides the following security-related command-line flags:

* `-tls`, `-tlsCertFile` and `-tlsKeyFile` for switching from HTTP to HTTPS.
* `-tls`, `-tlsCertFile` and `-tlsKeyFile` for switching from HTTP to HTTPS at `-httpListenAddr` (8428 by default).
* `-mtls` and `-mtlsCAFile` for enabling [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) for requests to `-httpListenAddr`. See [these docs](#mtls-protection).
* `-httpAuth.username` and `-httpAuth.password` for protecting all the HTTP endpoints
  with [HTTP Basic Authentication](https://en.wikipedia.org/wiki/Basic_access_authentication).
* `-deleteAuthKey` for protecting `/api/v1/admin/tsdb/delete_series` endpoint. See [how to delete time series](#how-to-delete-time-series).
@ -2020,12 +2023,17 @@ mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
## Monitoring

VictoriaMetrics exports internal metrics in Prometheus exposition format at `/metrics` page.
These metrics can be scraped via [vmagent](https://docs.victoriametrics.com/vmagent.html) or Prometheus.
These metrics can be scraped via [vmagent](https://docs.victoriametrics.com/vmagent.html) or any other Prometheus-compatible scraper.

If you use Google Cloud Managed Prometheus for scraping metrics from VictoriaMetrics components, then pass the `-metrics.exposeMetadata`
command-line flag to them, so they add `TYPE` and `HELP` comments for each exposed metric at the `/metrics` page.
See [these docs](https://cloud.google.com/stackdriver/docs/managed-prometheus/troubleshooting#missing-metric-type) for details.

Alternatively, single-node VictoriaMetrics can self-scrape the metrics when the `-selfScrapeInterval` command-line flag is
set to a duration greater than 0. For example, `-selfScrapeInterval=10s` would enable self-scraping of the `/metrics` page
at 10-second intervals.
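For instance, a minimal scrape job for vmagent or any Prometheus-compatible scraper could look like this (the target host is an example; scrape every instance directly rather than through a load balancer, as noted below):

```yaml
scrape_configs:
  - job_name: victoriametrics
    static_configs:
      - targets:
          - "victoriametrics:8428"   # scrapes http://victoriametrics:8428/metrics
```
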
_Please note, never use loadbalancer address for scraping metrics. All monitored components should be scraped directly by their address._
_Please note, never use a loadbalancer address for scraping metrics. All the monitored components should be scraped directly by their address._

Official Grafana dashboards are available for [single-node](https://grafana.com/grafana/dashboards/10229)
and [clustered](https://grafana.com/grafana/dashboards/11176) VictoriaMetrics.
@ -2288,6 +2296,21 @@ Sometimes it is needed to remove such caches on the next startup. This can be do
In this case VictoriaMetrics will automatically remove all the caches on the next start.
See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1447) for details.

It is also possible to remove the [rollup result cache](#rollup-result-cache) on startup by passing the `-search.resetRollupResultCacheOnStartup` command-line flag to VictoriaMetrics.

## Rollup result cache

VictoriaMetrics caches query responses by default. This increases performance for repeated queries
to [`/api/v1/query`](https://docs.victoriametrics.com/keyconcepts/#instant-query) and [`/api/v1/query_range`](https://docs.victoriametrics.com/keyconcepts/#range-query)
with increasing `time`, `start` and `end` query args.

This cache may work incorrectly when ingesting historical data into VictoriaMetrics. See [these docs](#backfilling) for details.

The rollup cache can be disabled either globally by running VictoriaMetrics with the `-search.disableCache` command-line flag
or on a per-query basis by passing the `nocache=1` query arg to `/api/v1/query` and `/api/v1/query_range`.
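For example, a single range query can bypass the rollup result cache like this (host, query and time range are illustrative):

```sh
curl 'http://localhost:8428/api/v1/query_range?query=up&start=2024-02-01T00:00:00Z&end=2024-02-02T00:00:00Z&step=5m&nocache=1'
```
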
See also [cache removal docs](#cache-removal).

## Cache tuning

VictoriaMetrics uses various in-memory caches for faster data ingestion and query performance.
@ -2352,11 +2375,12 @@ VictoriaMetrics accepts historical data in arbitrary order of time via [any supp
See [how to backfill data with recording rules in vmalert](https://docs.victoriametrics.com/vmalert.html#rules-backfilling).
Make sure that the configured `-retentionPeriod` covers timestamps for the backfilled data.

It is recommended disabling query cache with `-search.disableCache` command-line flag when writing
It is recommended to disable the [query cache](#rollup-result-cache) with the `-search.disableCache` command-line flag when writing
historical data with timestamps from the past, since the cache assumes that the data is written with
the current timestamps. The query cache can be re-enabled after the backfilling is complete.

An alternative solution is to query [/internal/resetRollupResultCache](https://docs.victoriametrics.com/url-examples.html#internalresetrollupresultcache) handler after the backfilling is complete. This will reset the query cache, which could contain incomplete data cached during the backfilling.
An alternative solution is to query [/internal/resetRollupResultCache](https://docs.victoriametrics.com/url-examples.html#internalresetrollupresultcache)
after the backfilling is complete. This will reset the [query cache](#rollup-result-cache), which could contain incomplete data cached during the backfilling.
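A sketch of that reset call (the host is an example; if `-search.resetCacheAuthKey` is configured, the key is presumably passed via the `authKey` query arg, by analogy with the other `*AuthKey` flags listed further below):

```sh
# Reset the rollup result cache once the backfilling has finished
curl 'http://localhost:8428/internal/resetRollupResultCache'
```
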
Yet another solution is to increase `-search.cacheTimestampOffset` flag value in order to disable caching
for data with timestamps close to the current time. Single-node VictoriaMetrics automatically resets response
@ -2595,7 +2619,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-graphiteTrimTimestamp duration
  Trim timestamps for Graphite data to this duration. Minimum practical duration is 1s. Higher duration (i.e. 1m) may be used for reducing disk space usage for timestamp data (default 1s)
-http.connTimeout duration
  Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s)
  Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem
-http.disableResponseCompression
  Disable compression of HTTP responses to save CPU resources. By default, compression is enabled to save network bandwidth
-http.header.csp default-src 'self'
@ -2617,10 +2641,14 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
  Flag value can be read from the given file when using -httpAuth.password=file:///abs/path/to/file or -httpAuth.password=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -httpAuth.password=http://host/path or -httpAuth.password=https://host/path
-httpAuth.username string
  Username for HTTP server's Basic Auth. The authentication is disabled if empty. See also -httpAuth.password
-httpListenAddr string
  TCP address to listen for http connections. See also -tls and -httpListenAddr.useProxyProtocol (default ":8428")
-httpListenAddr.useProxyProtocol
  Whether to use proxy protocol for connections accepted at -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
-httpListenAddr array
  TCP addresses to listen for incoming http requests. See also -tls and -httpListenAddr.useProxyProtocol
  Supports an array of values separated by comma or specified via multiple flags.
  Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-httpListenAddr.useProxyProtocol array
  Whether to use proxy protocol for connections accepted at the corresponding -httpListenAddr . See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . With enabled proxy protocol http server cannot serve regular /metrics endpoint. Use -pushmetrics.url for metrics pushing
  Supports array of values separated by comma or specified via multiple flags.
  Empty values are set to false.
-import.maxLineLen size
  The maximum length in bytes of a single line accepted by /api/v1/import; the line length can be limited with 'max_rows_per_line' query arg passed to /api/v1/export
  Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 10485760)
@ -2700,6 +2728,14 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-metricsAuthKey value
  Auth key for /metrics endpoint. It must be passed via authKey query arg. It overrides httpAuth.* settings
  Flag value can be read from the given file when using -metricsAuthKey=file:///abs/path/to/file or -metricsAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -metricsAuthKey=http://host/path or -metricsAuthKey=https://host/path
-mtls array
  Whether to require valid client certificate for https requests to the corresponding -httpListenAddr . This flag works only if -tls flag is set. See also -mtlsCAFile . This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
  Supports array of values separated by comma or specified via multiple flags.
  Empty values are set to false.
-mtlsCAFile array
  Optional path to TLS Root CA for verifying client certificates at the corresponding -httpListenAddr when -mtls is enabled. By default the host system TLS Root CA is used for client certificate verification. This flag is available only in Enterprise binaries. See https://docs.victoriametrics.com/enterprise.html
  Supports an array of values separated by comma or specified via multiple flags.
  Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-newrelic.maxInsertRequestSize size
  The maximum size in bytes of a single NewRelic request to /newrelic/infra/v2/metrics/events/bulk
  Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
@ -2857,7 +2893,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.disableAutoCacheReset
  Whether to disable automatic response cache reset if a sample with timestamp outside -search.cacheTimestampOffset is inserted into VictoriaMetrics
-search.disableCache
  Whether to disable response caching. This may be useful during data backfilling
  Whether to disable response caching. This may be useful when ingesting historical data. See https://docs.victoriametrics.com/#backfilling . See also -search.resetRollupResultCacheOnStartup
-search.graphiteMaxPointsPerSeries int
  The maximum number of points per series Graphite render API can return (default 1000000)
-search.graphiteStorageStep duration
@ -2941,6 +2977,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.resetCacheAuthKey value
  Optional authKey for resetting rollup cache via /internal/resetRollupResultCache call
  Flag value can be read from the given file when using -search.resetCacheAuthKey=file:///abs/path/to/file or -search.resetCacheAuthKey=file://./relative/path/to/file . Flag value can be read from the given http/https url when using -search.resetCacheAuthKey=http://host/path or -search.resetCacheAuthKey=https://host/path
-search.resetRollupResultCacheOnStartup
  Whether to reset rollup result cache on startup. See https://docs.victoriametrics.com/#rollup-result-cache . See also -search.disableCache
-search.setLookbackToStep
  Whether to fix lookback interval to 'step' query arg value. If set to true, the query model becomes closer to InfluxDB data model. If set to true, then -search.maxLookback and -search.maxStalenessInterval are ignored
-search.treatDotsAsIsInRegexps
@ -2992,18 +3030,26 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
  Whether to drop all the input samples after the aggregation with -streamAggr.config. By default, only aggregated samples are dropped, while the remaining samples are stored in the database. See also -streamAggr.keepInput and https://docs.victoriametrics.com/stream-aggregation.html
-streamAggr.keepInput
  Whether to keep all the input samples after the aggregation with -streamAggr.config. By default, only aggregated samples are dropped, while the remaining samples are stored in the database. See also -streamAggr.dropInput and https://docs.victoriametrics.com/stream-aggregation.html
-tls
  Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
-tlsCertFile string
  Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
-tls array
  Whether to enable TLS for incoming HTTP requests at the given -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set. See also -mtls
  Supports array of values separated by comma or specified via multiple flags.
  Empty values are set to false.
-tlsCertFile array
  Path to file with TLS certificate for the corresponding -httpListenAddr if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
  Supports an array of values separated by comma or specified via multiple flags.
  Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-tlsCipherSuites array
  Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
  Supports an array of values separated by comma or specified via multiple flags.
  Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-tlsKeyFile string
  Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
-tlsMinVersion string
  Optional minimum TLS version to use for incoming requests over HTTPS if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
-tlsKeyFile array
  Path to file with TLS key for the corresponding -httpListenAddr if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
  Supports an array of values separated by comma or specified via multiple flags.
  Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-tlsMinVersion array
  Optional minimum TLS version to use for the corresponding -httpListenAddr if -tls is set. Supported values: TLS10, TLS11, TLS12, TLS13
  Supports an array of values separated by comma or specified via multiple flags.
  Value can contain comma inside single-quoted or double-quoted string, {}, [] and () braces.
-usePromCompatibleNaming
  Whether to replace characters unsupported by Prometheus with underscores in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
-version
@ -296,6 +296,12 @@ There are the following most commons reasons for slow data ingestion in Victoria
which exceeds the interval between ingested samples for the same time series (aka `scrape_interval`).
See [this comment](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3976#issuecomment-1476883183) for more details.

1. If you see constant and abnormally high CPU usage of a VictoriaMetrics component, check the `CPU spent on GC` panel
   on the corresponding [Grafana dashboard](https://grafana.com/orgs/victoriametrics) in the `Resource usage` section. If the percentage of CPU time spent on garbage collection
   is high, then CPU usage of the component can be reduced at the cost of higher memory usage by changing the [GOGC](https://tip.golang.org/doc/gc-guide#GOGC) environment variable
   to higher values. By default VictoriaMetrics components use `GOGC=30`. Try running VictoriaMetrics components with `GOGC=100` and see whether this helps reduce CPU usage, as in the example below.
   Note that higher `GOGC` values may increase memory usage.
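A hedged illustration of that experiment for the single-node binary (the `-storageDataPath` value is just a placeholder):

```sh
# Run with a higher GOGC value to trade memory for less GC-related CPU usage
GOGC=100 ./victoria-metrics -storageDataPath=/var/lib/victoria-metrics
```
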
## Slow queries

Some queries may take more time and resources (CPU, RAM, network bandwidth) than others.
|
@ -84,7 +84,7 @@ Follow the following steps in order to build VictoriaLogs from source code:
|
|||
cd VictoriaMetrics
|
||||
```
|
||||
|
||||
- Build VictoriaLogs. The build command requires [Go 1.20](https://golang.org/doc/install).
|
||||
- Build VictoriaLogs. The build command requires [Go 1.22](https://golang.org/doc/install).
|
||||
|
||||
```sh
|
||||
make victoria-logs
|
||||
|
|