Merge branch 'master' into streaming-aggregation-ui

Alexander Marshalov authored on 2024-01-18 10:33:14 +01:00; committed by GitHub
commit 26144f6d87
GPG key ID: B5690EEEBB952194 (no known key found for this signature in database)
479 changed files with 25565 additions and 17399 deletions

View file

@@ -60,7 +60,7 @@ body:
For VictoriaMetrics health-state issues please provide full-length screenshots
of Grafana dashboards if possible:
-* [Grafana dashboard for single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229-victoriametrics/)
+* [Grafana dashboard for single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229-victoriametrics-single-node/)
* [Grafana dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176-victoriametrics-cluster/)
See how to setup monitoring here:

View file

@@ -175,7 +175,7 @@
END OF TERMS AND CONDITIONS
-Copyright 2019-2023 VictoriaMetrics, Inc.
+Copyright 2019-2024 VictoriaMetrics, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View file

@@ -22,17 +22,17 @@ The cluster version of VictoriaMetrics is available [here](https://docs.victoria
Learn more about [key concepts](https://docs.victoriametrics.com/keyConcepts.html) of VictoriaMetrics and follow the
[quick start guide](https://docs.victoriametrics.com/Quick-Start.html) for a better experience.
-There is also user-friendly database for logs - [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/).
+There is also a user-friendly database for logs - [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/).
-If you have questions about VictoriaMetrics, then feel free asking them at [VictoriaMetrics community Slack chat](https://slack.victoriametrics.com/).
+If you have questions about VictoriaMetrics, then feel free asking them in the [VictoriaMetrics community Slack chat](https://slack.victoriametrics.com/).
[Contact us](mailto:info@victoriametrics.com) if you need enterprise support for VictoriaMetrics.
See [features available in enterprise package](https://docs.victoriametrics.com/enterprise.html).
Enterprise binaries can be downloaded and evaluated for free
from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest).
-See how to request a free trial license [here](https://victoriametrics.com/products/enterprise/trial/).
+You can also [request a free trial license](https://victoriametrics.com/products/enterprise/trial/).
-VictoriaMetrics is developed at a fast pace, so it is recommended periodically checking the [CHANGELOG](https://docs.victoriametrics.com/CHANGELOG.html) and performing [regular upgrades](#how-to-upgrade-victoriametrics).
+VictoriaMetrics is developed at a fast pace, so it is recommended to check the [CHANGELOG](https://docs.victoriametrics.com/CHANGELOG.html) periodically, and to perform [regular upgrades](#how-to-upgrade-victoriametrics).
VictoriaMetrics has achieved security certifications for Database Software Development and Software-Based Monitoring Services. We apply strict security measures in everything we do. See our [Security page](https://victoriametrics.com/security/) for more details.
@@ -41,19 +41,19 @@ VictoriaMetrics has achieved security certifications for Database Software Devel
VictoriaMetrics has the following prominent features:
* It can be used as long-term storage for Prometheus. See [these docs](#prometheus-setup) for details.
-* It can be used as a drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
+* It can be used as a drop-in replacement for Prometheus in Grafana, because it supports the [Prometheus querying API](#prometheus-querying-api-usage).
-* It can be used as a drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
+* It can be used as a drop-in replacement for Graphite in Grafana, because it supports the [Graphite API](#graphite-api-usage).
VictoriaMetrics allows reducing infrastructure costs by more than 10x comparing to Graphite - see [this case study](https://docs.victoriametrics.com/CaseStudies.html#grammarly).
* It is easy to setup and operate:
* VictoriaMetrics consists of a single [small executable](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d)
without external dependencies.
* All the configuration is done via explicit command-line flags with reasonable defaults.
-* All the data is stored in a single directory pointed by `-storageDataPath` command-line flag.
+* All the data is stored in a single directory specified by the `-storageDataPath` command-line flag.
* Easy and fast backups from [instant snapshots](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282)
can be done with [vmbackup](https://docs.victoriametrics.com/vmbackup.html) / [vmrestore](https://docs.victoriametrics.com/vmrestore.html) tools.
See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-series-databases-533c1a927883) for more details.
-* It implements PromQL-like query language - [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html), which provides improved functionality on top of PromQL.
+* It implements a PromQL-like query language - [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html), which provides improved functionality on top of PromQL.
-* It provides global query view. Multiple Prometheus instances or any other data sources may ingest data into VictoriaMetrics. Later this data may be queried via a single query.
+* It provides a global query view. Multiple Prometheus instances or any other data sources may ingest data into VictoriaMetrics. Later this data may be queried via a single query.
* It provides high performance and good vertical and horizontal scalability for both
[data ingestion](https://medium.com/@valyala/high-cardinality-tsdb-benchmarks-victoriametrics-vs-timescaledb-vs-influxdb-13e6ee64dd6b)
and [data querying](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4).
@@ -62,9 +62,9 @@ VictoriaMetrics has the following prominent features:
and [up to 7x less RAM than Prometheus, Thanos or Cortex](https://valyala.medium.com/prometheus-vs-victoriametrics-benchmark-on-node-exporter-metrics-4ca29c75590f)
when dealing with millions of unique time series (aka [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality)).
* It is optimized for time series with [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate).
-* It provides high data compression, so up to 70x more data points may be stored into limited storage comparing to TimescaleDB
+* It provides high data compression: up to 70x more data points may be stored into limited storage compared with TimescaleDB
-according to [these benchmarks](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4)
+according to [these benchmarks](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4),
-and up to 7x less storage space is required compared to Prometheus, Thanos or Cortex
+and up to 7x less storage space is required compared to Prometheus, Thanos or Cortex.
according to [this benchmark](https://valyala.medium.com/prometheus-vs-victoriametrics-benchmark-on-node-exporter-metrics-4ca29c75590f).
* It is optimized for storage with high-latency IO and low IOPS (HDD and network storage in AWS, Google Cloud, Microsoft Azure, etc).
See [disk IO graphs from these benchmarks](https://medium.com/@valyala/high-cardinality-tsdb-benchmarks-victoriametrics-vs-timescaledb-vs-influxdb-13e6ee64dd6b).
@@ -75,7 +75,7 @@ VictoriaMetrics has the following prominent features:
from [PromCon 2019](https://promcon.io/2019-munich/talks/remote-write-storage-wars/).
* It protects the storage from data corruption on unclean shutdown (i.e. OOM, hardware reset or `kill -9`) thanks to
[the storage architecture](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).
-* It supports metrics' scraping, ingestion and [backfilling](#backfilling) via the following protocols:
+* It supports metrics scraping, ingestion and [backfilling](#backfilling) via the following protocols:
* [Metrics scraping from Prometheus exporters](#how-to-scrape-prometheus-exporters-such-as-node-exporter).
* [Prometheus remote write API](#prometheus-setup).
* [Prometheus exposition format](#how-to-import-data-in-prometheus-exposition-format).
@@ -95,7 +95,7 @@ VictoriaMetrics has the following prominent features:
[high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) issues via [series limiter](#cardinality-limiter).
* It ideally works with big amounts of time series data from APM, Kubernetes, IoT sensors, connected cars, industrial telemetry, financial data
and various [Enterprise workloads](https://docs.victoriametrics.com/enterprise.html).
-* It has open source [cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
+* It has an open source [cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
* It can store data on [NFS-based storages](https://en.wikipedia.org/wiki/Network_File_System) such as [Amazon EFS](https://aws.amazon.com/efs/)
and [Google Filestore](https://cloud.google.com/filestore).
@@ -138,7 +138,7 @@ See also [articles and slides about VictoriaMetrics from our users](https://docs
### Install
-To quickly try VictoriaMetrics, just download [VictoriaMetrics executable](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
+To quickly try VictoriaMetrics, just download the [VictoriaMetrics executable](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
or [Docker image](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and start it with the desired command-line flags.
See also [QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for additional information.
@@ -155,10 +155,10 @@ VictoriaMetrics can also be installed via these installation methods:
The following command-line flags are used the most:
-* `-storageDataPath` - VictoriaMetrics stores all the data in this directory. Default path is `victoria-metrics-data` in the current working directory.
+* `-storageDataPath` - VictoriaMetrics stores all the data in this directory. The default path is `victoria-metrics-data` in the current working directory.
* `-retentionPeriod` - retention for stored data. Older data is automatically deleted. Default retention is 1 month (31 days). The minimum retention period is 24h or 1d. See [these docs](#retention) for more details.
-Other flags have good enough default values, so set them only if you really need this. Pass `-help` to see [all the available flags with description and default values](#list-of-command-line-flags).
+Other flags have good enough default values, so set them only if you really need to. Pass `-help` to see [all the available flags with description and default values](#list-of-command-line-flags).
The following docs may be useful during initial VictoriaMetrics setup:
* [How to set up scraping of Prometheus-compatible targets](https://docs.victoriametrics.com/#how-to-scrape-prometheus-exporters-such-as-node-exporter)
@@ -172,9 +172,6 @@ VictoriaMetrics accepts [Prometheus querying API requests](#prometheus-querying-
It is recommended setting up [monitoring](#monitoring) for VictoriaMetrics.
-VictoriaMetrics is developed at a fast pace, so it is recommended periodically checking the [CHANGELOG](https://docs.victoriametrics.com/CHANGELOG.html) and performing [regular upgrades](#how-to-upgrade-victoriametrics).
### Environment variables
All the VictoriaMetrics components allow referring environment variables in `yaml` configuration files (such as `-promscrape.config`)
@@ -363,6 +360,8 @@ See more in [description](https://github.com/VictoriaMetrics/grafana-datasource#
Creating a datasource may require [specific permissions](https://grafana.com/docs/grafana/latest/administration/data-source-management/).
If you don't see an option to create a data source - try contacting system administrator.
+Grafana playground is available for viewing at our [sandbox](https://play-grafana.victoriametrics.com).
## How to upgrade VictoriaMetrics
VictoriaMetrics is developed at a fast pace, so it is recommended periodically checking [the CHANGELOG page](https://docs.victoriametrics.com/CHANGELOG.html) and performing regular upgrades.
@@ -516,10 +515,8 @@ See also [vmagent](https://docs.victoriametrics.com/vmagent.html), which can be
## How to send data from DataDog agent
-VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/)
-or [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/)
-via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics)
-at `/datadog/api/v1/series` path.
+VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/) or [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/)
+via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics) at `/datadog/api/v2/series` path.
### Sending metrics to VictoriaMetrics
@@ -531,6 +528,7 @@ or via [configuration file](https://docs.datadoghq.com/agent/guide/agent-configu
</p>
To configure DataDog agent via ENV variable add the following prefix:
<div class="with-copy" markdown="1">
```
@@ -545,14 +543,12 @@ To configure DataDog agent via [configuration file](https://github.com/DataDog/d
add the following line:
<div class="with-copy" markdown="1">
```
dd_url: http://victoriametrics:8428/datadog
```
</div>
-vmagent also can accept Datadog metrics format. Depending on where vmagent will forward data,
+[vmagent](https://docs.victoriametrics.com/vmagent.html) also can accept Datadog metrics format. Depending on where vmagent will forward data,
pick [single-node or cluster URL](https://docs.victoriametrics.com/url-examples.html#datadog) formats.
### Sending metrics to Datadog and VictoriaMetrics
@@ -593,8 +589,7 @@ additional_endpoints:
### Send via cURL
-See how to send data to VictoriaMetrics via
-[DataDog "submit metrics"](https://docs.victoriametrics.com/url-examples.html#datadogapiv1series) from command line.
+See how to send data to VictoriaMetrics via DataDog "submit metrics" API [here](https://docs.victoriametrics.com/url-examples.html#datadogapiv2series).
The imported data can be read via [export API](https://docs.victoriametrics.com/url-examples.html#apiv1export).
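For illustration, here is a minimal Go sketch (not part of this commit) that pushes a single sample to the new `/datadog/api/v2/series` endpoint. The listen address, metric name, resource and tag values are placeholders, and the payload shape follows the v2 "submit metrics" format handled by the new `datadogv2` parser:

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	// One sample in the DataDog v2 "submit metrics" payload shape:
	// timestamps are in seconds and resources map to labels on ingestion.
	payload := fmt.Sprintf(`{
  "series": [
    {
      "metric": "example.requests.total",
      "points": [{"timestamp": %d, "value": 1.23}],
      "resources": [{"name": "test-host", "type": "host"}],
      "tags": ["env:dev"]
    }
  ]
}`, time.Now().Unix())

	// Assumes a single-node VictoriaMetrics listening on localhost:8428.
	resp, err := http.Post("http://localhost:8428/datadog/api/v2/series",
		"application/json", bytes.NewBufferString(payload))
	if err != nil {
		log.Fatalf("cannot send DataDog v2 payload: %s", err)
	}
	defer resp.Body.Close()
	// A 2xx status means the sample was accepted for ingestion.
	log.Printf("response status: %s", resp.Status)
}
```

The ingested series can then be checked via the export API mentioned above.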
@@ -605,7 +600,7 @@ according to [DataDog metric naming recommendations](https://docs.datadoghq.com/
If you need accepting metric names as is without sanitizing, then pass `-datadog.sanitizeMetricName=false` command-line flag to VictoriaMetrics.
Extra labels may be added to all the written time series by passing `extra_label=name=value` query args.
-For example, `/datadog/api/v1/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics.
+For example, `/datadog/api/v2/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics.
DataDog agent sends the [configured tags](https://docs.datadoghq.com/getting_started/tagging/) to
undocumented endpoint - `/datadog/intake`. This endpoint isn't supported by VictoriaMetrics yet.
@@ -1217,6 +1212,7 @@ before actually deleting the metrics. By default, this query will only scan seri
adjust `start` and `end` to a suitable range to achieve match hits.
The `/api/v1/admin/tsdb/delete_series` handler may be protected with `authKey` if `-deleteAuthKey` command-line flag is set.
+Note that the handler accepts any HTTP method, so sending a `GET` request to `/api/v1/admin/tsdb/delete_series` will result in deletion of time series.
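As an illustration of the note above, here is a hypothetical Go sketch (not part of this commit) that calls the delete handler on a local single-node instance. The listen address and the series selector are placeholders; if `-deleteAuthKey` is set, an `authKey` query arg would also be required:

```go
package main

import (
	"io"
	"log"
	"net/http"
	"net/url"
)

func main() {
	// Series to delete are selected via the standard match[] series selector.
	params := url.Values{}
	params.Set("match[]", `{__name__="test_metric",job="test"}`)

	// The handler accepts any HTTP method; a plain GET is enough to trigger deletion,
	// which is why the endpoint should be protected with -deleteAuthKey in production.
	resp, err := http.Get("http://localhost:8428/api/v1/admin/tsdb/delete_series?" + params.Encode())
	if err != nil {
		log.Fatalf("cannot call delete_series: %s", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	log.Printf("status: %s, response: %s", resp.Status, body)
}
```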
The delete API is intended mainly for the following cases:
@@ -1772,6 +1768,10 @@ This aligns with the [staleness rules in Prometheus](https://prometheus.io/docs/
If multiple raw samples have **the same timestamp** on the given `-dedup.minScrapeInterval` discrete interval,
then the sample with **the biggest value** is kept.
+[Prometheus staleness markers](https://docs.victoriametrics.com/vmagent.html#prometheus-staleness-markers) are processed as any other value during de-duplication.
+If the raw sample with the biggest timestamp on `-dedup.minScrapeInterval` contains a stale marker, then it is kept after the deduplication.
+This allows properly preserving staleness markers during the de-duplication.
Please note, [labels](https://docs.victoriametrics.com/keyConcepts.html#labels) of raw samples should be identical
in order to be deduplicated. For example, this is why [HA pair of vmagents](https://docs.victoriametrics.com/vmagent.html#high-availability)
needs to be identically configured.
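To make the rules above concrete, here is a small illustrative Go sketch of the deduplication logic. It is a simplified model, not the actual VictoriaMetrics storage code, and the `Sample` and `dedup` names are hypothetical, used only for this example:

```go
package main

import "fmt"

// Sample is a simplified raw sample. In the real implementation a staleness
// marker is a special NaN value and participates in deduplication like any other value.
type Sample struct {
	TimestampMs int64
	Value       float64
}

// dedup keeps one sample per discrete interval: the sample with the biggest
// timestamp wins, and for equal timestamps the sample with the biggest value wins.
// A staleness marker with the biggest timestamp in the interval is therefore kept.
// The input must be sorted by timestamp in ascending order.
func dedup(samples []Sample, intervalMs int64) []Sample {
	if intervalMs <= 0 || len(samples) == 0 {
		return samples
	}
	result := []Sample{samples[0]}
	for _, s := range samples[1:] {
		last := &result[len(result)-1]
		if s.TimestampMs/intervalMs != last.TimestampMs/intervalMs {
			result = append(result, s) // a new interval starts
			continue
		}
		// Same interval: prefer the bigger timestamp, then the bigger value.
		if s.TimestampMs > last.TimestampMs ||
			(s.TimestampMs == last.TimestampMs && s.Value > last.Value) {
			*last = s
		}
	}
	return result
}

func main() {
	samples := []Sample{
		{TimestampMs: 1_000, Value: 1},
		{TimestampMs: 5_000, Value: 3},
		{TimestampMs: 5_000, Value: 7}, // same timestamp: the biggest value is kept
		{TimestampMs: 12_000, Value: 2},
	}
	// With a 10s interval (playing the role of -dedup.minScrapeInterval)
	// only two samples survive: {5000 7} and {12000 2}.
	fmt.Println(dedup(samples, 10_000))
}
```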
@@ -1855,7 +1855,7 @@ This increases overhead during data querying, since VictoriaMetrics needs to rea
bigger number of parts per each request. That's why it is recommended to have at least 20%
of free disk space under directory pointed by `-storageDataPath` command-line flag.
-Information about merging process is available in [the dashboard for single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229-victoriametrics/)
+Information about merging process is available in [the dashboard for single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229-victoriametrics-single-node/)
and [the dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176-victoriametrics-cluster/).
See more details in [monitoring docs](#monitoring).
@@ -2058,7 +2058,7 @@ with 10 seconds interval.
_Please note, never use loadbalancer address for scraping metrics. All monitored components should be scraped directly by their address._
-Official Grafana dashboards available for [single-node](https://grafana.com/grafana/dashboards/10229-victoriametrics/)
+Official Grafana dashboards available for [single-node](https://grafana.com/grafana/dashboards/10229-victoriametrics-single-node/)
and [clustered](https://grafana.com/grafana/dashboards/11176-victoriametrics-cluster/) VictoriaMetrics.
See an [alternative dashboard for clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11831)
created by community.
@@ -2329,7 +2329,7 @@ The following metrics for each type of cache are exported at [`/metrics` page](#
* `vm_cache_misses_total` - the number of cache misses
* `vm_cache_entries` - the number of entries in the cache
-Both Grafana dashboards for [single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229-victoriametrics/)
+Both Grafana dashboards for [single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229-victoriametrics-single-node/)
and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176-victoriametrics-cluster/)
contain `Caches` section with cache metrics visualized. The panels show the current
memory usage by each type of cache, and also a cache hit rate. If hit rate is close to 100%
@@ -2580,7 +2580,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-csvTrimTimestamp duration
Trim timestamps when importing csv data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 1s) may be used for reducing disk space usage for timestamp data (default 1ms)
-datadog.maxInsertRequestSize size
-The maximum size in bytes of a single DataDog POST request to /api/v1/series
+The maximum size in bytes of a single DataDog POST request to /datadog/api/v2/series
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
-datadog.sanitizeMetricName
Sanitize metric names for the ingested DataDog data to comply with DataDog behaviour described at https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics (default true)
@@ -2709,7 +2709,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-loggerWarnsPerSecondLimit int
Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit
-maxConcurrentInserts int
-The maximum number of concurrent insert requests. Default value should work for most cases, since it minimizes the memory usage. The default value can be increased when clients send data over slow networks. See also -insert.maxQueueDuration (default 32)
+The maximum number of concurrent insert requests. Default value should work for most cases, since it minimizes the memory usage. The default value can be increased when clients send data over slow networks. See also -insert.maxQueueDuration.
-maxInsertRequestSize size
The maximum size in bytes of a single Prometheus remote_write API request
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 33554432)

View file

@@ -37,7 +37,6 @@ func main() {
cgroup.SetGOGC(*gogc)
buildinfo.Init()
logger.Init()
-pushmetrics.Init()
logger.Infof("starting VictoriaLogs at %q...", *httpListenAddr)
startTime := time.Now()
@@ -49,8 +48,10 @@ func main() {
go httpserver.Serve(*httpListenAddr, *useProxyProtocol, requestHandler)
logger.Infof("started VictoriaLogs in %.3f seconds; see https://docs.victoriametrics.com/VictoriaLogs/", time.Since(startTime).Seconds())
+pushmetrics.Init()
sig := procutil.WaitForSigterm()
logger.Infof("received signal %s", sig)
+pushmetrics.Stop()
logger.Infof("gracefully shutting down webservice at %q", *httpListenAddr)
startTime = time.Now()

View file

@@ -48,7 +48,6 @@ func main() {
envflag.Parse()
buildinfo.Init()
logger.Init()
-pushmetrics.Init()
if promscrape.IsDryRun() {
*dryRun = true
@@ -74,13 +73,16 @@ func main() {
vmstorage.Init(promql.ResetRollupResultCacheIfNeeded)
vmselect.Init()
vminsert.Init()
startSelfScraper()
go httpserver.Serve(*httpListenAddr, *useProxyProtocol, requestHandler)
logger.Infof("started VictoriaMetrics in %.3f seconds", time.Since(startTime).Seconds())
+pushmetrics.Init()
sig := procutil.WaitForSigterm()
logger.Infof("received signal %s", sig)
+pushmetrics.Stop()
stopSelfScraper()
@@ -89,8 +91,8 @@ func main() {
if err := httpserver.Stop(*httpListenAddr); err != nil {
logger.Fatalf("cannot stop the webservice: %s", err)
}
-vminsert.Stop()
logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())
+vminsert.Stop()
vmstorage.Stop()
vmselect.Stop()

View file

@@ -12,6 +12,7 @@ import (
"os"
"path/filepath"
"reflect"
+"strconv"
"strings"
"testing"
"time"
@@ -54,15 +55,14 @@ var (
)
type test struct {
Name string `json:"name"`
Data []string `json:"data"`
InsertQuery string `json:"insert_query"`
Query []string `json:"query"`
ResultMetrics []Metric `json:"result_metrics"`
ResultSeries Series `json:"result_series"`
ResultQuery Query `json:"result_query"`
-ResultQueryRange QueryRange `json:"result_query_range"`
Issue string `json:"issue"`
}
type Metric struct {
@@ -80,42 +80,90 @@ type Series struct {
Status string `json:"status"`
Data []map[string]string `json:"data"`
}
type Query struct {
Status string `json:"status"`
-Data QueryData `json:"data"`
-}
-type QueryData struct {
-ResultType string `json:"resultType"`
-Result []QueryDataResult `json:"result"`
-}
-type QueryDataResult struct {
-Metric map[string]string `json:"metric"`
-Value []interface{} `json:"value"`
-}
-func (r *QueryDataResult) UnmarshalJSON(b []byte) error {
-type plain QueryDataResult
-return json.Unmarshal(testutil.PopulateTimeTpl(b, insertionTime), (*plain)(r))
-}
-type QueryRange struct {
-Status string `json:"status"`
-Data QueryRangeData `json:"data"`
-}
-type QueryRangeData struct {
-ResultType string `json:"resultType"`
-Result []QueryRangeDataResult `json:"result"`
-}
-type QueryRangeDataResult struct {
-Metric map[string]string `json:"metric"`
-Values [][]interface{} `json:"values"`
-}
-func (r *QueryRangeDataResult) UnmarshalJSON(b []byte) error {
-type plain QueryRangeDataResult
-return json.Unmarshal(testutil.PopulateTimeTpl(b, insertionTime), (*plain)(r))
-}
+Data struct {
+ResultType string `json:"resultType"`
+Result json.RawMessage `json:"result"`
+} `json:"data"`
+}
+const rtVector, rtMatrix = "vector", "matrix"
+func (q *Query) metrics() ([]Metric, error) {
+switch q.Data.ResultType {
+case rtVector:
+var r QueryInstant
+if err := json.Unmarshal(q.Data.Result, &r.Result); err != nil {
+return nil, err
+}
+return r.metrics()
+case rtMatrix:
+var r QueryRange
+if err := json.Unmarshal(q.Data.Result, &r.Result); err != nil {
+return nil, err
+}
+return r.metrics()
+default:
+return nil, fmt.Errorf("unknown result type %q", q.Data.ResultType)
+}
+}
+type QueryInstant struct {
+Result []struct {
+Labels map[string]string `json:"metric"`
+TV [2]interface{} `json:"value"`
+} `json:"result"`
+}
+func (q QueryInstant) metrics() ([]Metric, error) {
+result := make([]Metric, len(q.Result))
+for i, res := range q.Result {
+f, err := strconv.ParseFloat(res.TV[1].(string), 64)
+if err != nil {
+return nil, fmt.Errorf("metric %v, unable to parse float64 from %s: %w", res, res.TV[1], err)
+}
+var m Metric
+m.Metric = res.Labels
+m.Timestamps = append(m.Timestamps, int64(res.TV[0].(float64)))
+m.Values = append(m.Values, f)
+result[i] = m
+}
+return result, nil
+}
+type QueryRange struct {
+Result []struct {
+Metric map[string]string `json:"metric"`
+Values [][]interface{} `json:"values"`
+} `json:"result"`
+}
+func (q QueryRange) metrics() ([]Metric, error) {
+var result []Metric
+for i, res := range q.Result {
+var m Metric
+for _, tv := range res.Values {
+f, err := strconv.ParseFloat(tv[1].(string), 64)
+if err != nil {
+return nil, fmt.Errorf("metric %v, unable to parse float64 from %s: %w", res, tv[1], err)
+}
+m.Values = append(m.Values, f)
+m.Timestamps = append(m.Timestamps, int64(tv[0].(float64)))
+}
+if len(m.Values) < 1 || len(m.Timestamps) < 1 {
+return nil, fmt.Errorf("metric %v contains no values", res)
+}
+m.Metric = q.Result[i].Metric
+result = append(result, m)
+}
+return result, nil
+}
+func (q *Query) UnmarshalJSON(b []byte) error {
+type plain Query
+return json.Unmarshal(testutil.PopulateTimeTpl(b, insertionTime), (*plain)(q))
+}
func TestMain(m *testing.M) {
@@ -197,6 +245,9 @@ func TestWriteRead(t *testing.T) {
func testWrite(t *testing.T) {
t.Run("prometheus", func(t *testing.T) {
for _, test := range readIn("prometheus", t, insertionTime) {
+if test.Data == nil {
+continue
+}
s := newSuite(t)
r := testutil.WriteRequest{}
s.noError(json.Unmarshal([]byte(strings.Join(test.Data, "\n")), &r.Timeseries))
@@ -272,17 +323,19 @@ func testRead(t *testing.T) {
if err := checkSeriesResult(s, test.ResultSeries); err != nil {
t.Fatalf("Series. %s fails with error %s.%s", q, err, test.Issue)
}
-case strings.HasPrefix(q, "/api/v1/query_range"):
-queryResult := QueryRange{}
-httpReadStruct(t, testReadHTTPPath, q, &queryResult)
-if err := checkQueryRangeResult(queryResult, test.ResultQueryRange); err != nil {
-t.Fatalf("Query Range. %s fails with error %s.%s", q, err, test.Issue)
-}
case strings.HasPrefix(q, "/api/v1/query"):
queryResult := Query{}
httpReadStruct(t, testReadHTTPPath, q, &queryResult)
-if err := checkQueryResult(queryResult, test.ResultQuery); err != nil {
-t.Fatalf("Query. %s fails with error: %s.%s", q, err, test.Issue)
+gotMetrics, err := queryResult.metrics()
+if err != nil {
+t.Fatalf("failed to parse query response: %s", err)
+}
+expMetrics, err := test.ResultQuery.metrics()
+if err != nil {
+t.Fatalf("failed to parse expected response: %s", err)
+}
+if err := checkMetricsResult(gotMetrics, expMetrics); err != nil {
+t.Fatalf("%q fails with error %s.%s", q, err, test.Issue)
}
default:
t.Fatalf("unsupported read query %s", q)
@@ -417,60 +470,6 @@ func removeIfFoundSeries(r map[string]string, contains []map[string]string) []ma
return contains
}
func checkQueryResult(got, want Query) error {
if got.Status != want.Status {
return fmt.Errorf("status mismatch %q - %q", want.Status, got.Status)
}
if got.Data.ResultType != want.Data.ResultType {
return fmt.Errorf("result type mismatch %q - %q", want.Data.ResultType, got.Data.ResultType)
}
wantData := append([]QueryDataResult(nil), want.Data.Result...)
for _, r := range got.Data.Result {
wantData = removeIfFoundQueryData(r, wantData)
}
if len(wantData) > 0 {
return fmt.Errorf("expected query result %+v not found in %+v", wantData, got.Data.Result)
}
return nil
}
func removeIfFoundQueryData(r QueryDataResult, contains []QueryDataResult) []QueryDataResult {
for i, item := range contains {
if reflect.DeepEqual(r.Metric, item.Metric) && reflect.DeepEqual(r.Value[0], item.Value[0]) && reflect.DeepEqual(r.Value[1], item.Value[1]) {
contains[i] = contains[len(contains)-1]
return contains[:len(contains)-1]
}
}
return contains
}
func checkQueryRangeResult(got, want QueryRange) error {
if got.Status != want.Status {
return fmt.Errorf("status mismatch %q - %q", want.Status, got.Status)
}
if got.Data.ResultType != want.Data.ResultType {
return fmt.Errorf("result type mismatch %q - %q", want.Data.ResultType, got.Data.ResultType)
}
wantData := append([]QueryRangeDataResult(nil), want.Data.Result...)
for _, r := range got.Data.Result {
wantData = removeIfFoundQueryRangeData(r, wantData)
}
if len(wantData) > 0 {
return fmt.Errorf("expected query range result %+v not found in %+v", wantData, got.Data.Result)
}
return nil
}
func removeIfFoundQueryRangeData(r QueryRangeDataResult, contains []QueryRangeDataResult) []QueryRangeDataResult {
for i, item := range contains {
if reflect.DeepEqual(r.Metric, item.Metric) && reflect.DeepEqual(r.Values, item.Values) {
contains[i] = contains[len(contains)-1]
return contains[:len(contains)-1]
}
}
return contains
}
type suite struct{ t *testing.T }
func newSuite(t *testing.T) *suite { return &suite{t: t} }

View file

@@ -98,7 +98,7 @@ func addLabel(dst []prompb.Label, key, value string) []prompb.Label {
dst = append(dst, prompb.Label{})
}
lb := &dst[len(dst)-1]
-lb.Name = bytesutil.ToUnsafeBytes(key)
-lb.Value = bytesutil.ToUnsafeBytes(value)
+lb.Name = key
+lb.Value = value
return dst
}

View file

@@ -7,7 +7,7 @@
"not_nan_not_inf;item=y 3 {TIME_S-1m}",
"not_nan_not_inf;item=y 1 {TIME_S-2m}"],
"query": ["/api/v1/query_range?query=1/(not_nan_not_inf-1)!=inf!=nan&start={TIME_S-3m}&end={TIME_S}&step=60"],
-"result_query_range": {
+"result_query": {
"status":"success",
"data":{"resultType":"matrix",
"result":[

View file

@@ -6,7 +6,7 @@
"empty_label_match;foo=bar 2 {TIME_S-1m}",
"empty_label_match;foo=baz 3 {TIME_S-1m}"],
"query": ["/api/v1/query_range?query=empty_label_match{foo=~'bar|'}&start={TIME_S-1m}&end={TIME_S}&step=60"],
-"result_query_range": {
+"result_query": {
"status":"success",
"data":{"resultType":"matrix",
"result":[

View file

@@ -8,7 +8,7 @@
"max_lookback_set 4 {TIME_S-150s}"
],
"query": ["/api/v1/query_range?query=max_lookback_set&start={TIME_S-150s}&end={TIME_S}&step=10s&max_lookback=1s"],
-"result_query_range": {
+"result_query": {
"status":"success",
"data":{"resultType":"matrix",
"result":[{"metric":{"__name__":"max_lookback_set"},"values":[

View file

@@ -8,7 +8,7 @@
"max_lookback_unset 4 {TIME_S-150s}"
],
"query": ["/api/v1/query_range?query=max_lookback_unset&start={TIME_S-150s}&end={TIME_S}&step=10s"],
-"result_query_range": {
+"result_query": {
"status":"success",
"data":{"resultType":"matrix",
"result":[{"metric":{"__name__":"max_lookback_unset"},"values":[

View file

@@ -8,7 +8,7 @@
"not_nan_as_missing_data;item=y 3 {TIME_S-1m}"
],
"query": ["/api/v1/query_range?query=not_nan_as_missing_data>1&start={TIME_S-2m}&end={TIME_S}&step=60"],
-"result_query_range": {
+"result_query": {
"status":"success",
"data":{"resultType":"matrix",
"result":[

View file

@@ -0,0 +1,12 @@
{
"name": "instant query with look-behind window",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"foo\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS-60s}\"}]}]"],
"query": ["/api/v1/query?query=foo[5m]"],
"result_query": {
"status": "success",
"data":{
"resultType":"matrix",
"result":[{"metric":{"__name__":"foo"},"values":[["{TIME_S-60s}", "1"]]}]
}
}
}

View file

@@ -0,0 +1,11 @@
{
"name": "instant scalar query",
"query": ["/api/v1/query?query=42&time={TIME_S}"],
"result_query": {
"status": "success",
"data":{
"resultType":"vector",
"result":[{"metric":{},"value":["{TIME_S}", "42"]}]
}
}
}

View file

@@ -0,0 +1,13 @@
{
"name": "too big look-behind window",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5553",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"foo\"},{\"name\":\"issue\",\"value\":\"5553\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS-60s}\"}]}]"],
"query": ["/api/v1/query?query=foo{issue=\"5553\"}[100y]"],
"result_query": {
"status": "success",
"data":{
"resultType":"matrix",
"result":[{"metric":{"__name__":"foo", "issue": "5553"},"values":[["{TIME_S-60s}", "1"]]}]
}
}
}

View file

@@ -0,0 +1,18 @@
{
"name": "query range",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5553",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"bar\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS-60s}\"}, {\"value\":2,\"timestamp\":\"{TIME_MS-120s}\"}, {\"value\":1,\"timestamp\":\"{TIME_MS-180s}\"}]}]"],
"query": ["/api/v1/query_range?query=bar&step=30s&start={TIME_MS-180s}"],
"result_query": {
"status": "success",
"data":{
"resultType":"matrix",
"result":[
{
"metric":{"__name__":"bar"},
"values":[["{TIME_S-180s}", "1"],["{TIME_S-150s}", "1"],["{TIME_S-120s}", "2"],["{TIME_S-90s}", "2"], ["{TIME_S-60s}", "1"], ["{TIME_S-30s}", "1"], ["{TIME_S}", "1"]]
}
]
}
}
}

View file

@@ -1,4 +1,4 @@
-package datadog
+package datadogv1
import (
"net/http"
@@ -8,33 +8,32 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
-"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadog"
-"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadog/stream"
+"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
+"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1"
+"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
var (
-rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="datadog"}`)
-rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="datadog"}`)
-rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="datadog"}`)
+rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="datadogv1"}`)
+rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="datadogv1"}`)
+rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="datadogv1"}`)
)
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v1/series request.
+//
+// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
if err != nil {
return err
}
ce := req.Header.Get("Content-Encoding")
-return stream.Parse(req.Body, ce, func(series []datadog.Series) error {
+return stream.Parse(req.Body, ce, func(series []datadogv1.Series) error {
return insertRows(at, series, extraLabels)
})
}
-func insertRows(at *auth.Token, series []datadog.Series, extraLabels []prompbmarshal.Label) error {
+func insertRows(at *auth.Token, series []datadogv1.Series, extraLabels []prompbmarshal.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
@@ -63,7 +62,7 @@ func insertRows(at *auth.Token, series []datadog.Series, extraLabels []prompbmar
})
}
for _, tag := range ss.Tags {
-name, value := datadog.SplitTag(tag)
+name, value := datadogutils.SplitTag(tag)
if name == "host" {
name = "exported_host"
}

View file

@@ -0,0 +1,102 @@
package datadogv2
import (
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="datadogv2"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="datadogv2"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="datadogv2"}`)
)
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v2/series request.
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
if err != nil {
return err
}
ct := req.Header.Get("Content-Type")
ce := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, ce, ct, func(series []datadogv2.Series) error {
return insertRows(at, series, extraLabels)
})
}
func insertRows(at *auth.Token, series []datadogv2.Series, extraLabels []prompbmarshal.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
rowsTotal := 0
tssDst := ctx.WriteRequest.Timeseries[:0]
labels := ctx.Labels[:0]
samples := ctx.Samples[:0]
for i := range series {
ss := &series[i]
rowsTotal += len(ss.Points)
labelsLen := len(labels)
labels = append(labels, prompbmarshal.Label{
Name: "__name__",
Value: ss.Metric,
})
for _, rs := range ss.Resources {
labels = append(labels, prompbmarshal.Label{
Name: rs.Type,
Value: rs.Name,
})
}
if ss.SourceTypeName != "" {
labels = append(labels, prompbmarshal.Label{
Name: "source_type_name",
Value: ss.SourceTypeName,
})
}
for _, tag := range ss.Tags {
name, value := datadogutils.SplitTag(tag)
if name == "host" {
name = "exported_host"
}
labels = append(labels, prompbmarshal.Label{
Name: name,
Value: value,
})
}
labels = append(labels, extraLabels...)
samplesLen := len(samples)
for _, pt := range ss.Points {
samples = append(samples, prompbmarshal.Sample{
Timestamp: pt.Timestamp * 1000,
Value: pt.Value,
})
}
tssDst = append(tssDst, prompbmarshal.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[samplesLen:],
})
}
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
rowsInserted.Add(rowsTotal)
if at != nil {
rowsTenantInserted.Get(at).Add(rowsTotal)
}
rowsPerInsert.Update(float64(rowsTotal))
return nil
}

View file

@@ -12,7 +12,8 @@ import (
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/csvimport"
-"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadog"
+"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadogv1"
+"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/graphite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/native"
@@ -96,7 +97,6 @@ func main() {
remotewrite.InitSecretFlags()
buildinfo.Init()
logger.Init()
-pushmetrics.Init()
if promscrape.IsDryRun() {
if err := promscrape.CheckConfig(); err != nil {
@@ -147,8 +147,10 @@ func main() {
}
logger.Infof("started vmagent in %.3f seconds", time.Since(startTime).Seconds())
+pushmetrics.Init()
sig := procutil.WaitForSigterm()
logger.Infof("received signal %s", sig)
+pushmetrics.Stop()
startTime = time.Now()
if len(*httpListenAddr) > 0 {
@@ -345,9 +347,20 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/datadog/api/v1/series":
-datadogWriteRequests.Inc()
-if err := datadog.InsertHandlerForHTTP(nil, r); err != nil {
-datadogWriteErrors.Inc()
+datadogv1WriteRequests.Inc()
+if err := datadogv1.InsertHandlerForHTTP(nil, r); err != nil {
+datadogv1WriteErrors.Inc()
+httpserver.Errorf(w, r, "%s", err)
+return true
+}
+w.Header().Set("Content-Type", "application/json")
+w.WriteHeader(202)
+fmt.Fprintf(w, `{"status":"ok"}`)
+return true
+case "/datadog/api/v2/series":
+datadogv2WriteRequests.Inc()
+if err := datadogv2.InsertHandlerForHTTP(nil, r); err != nil {
+datadogv2WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
@@ -571,9 +584,19 @@ func processMultitenantRequest(w http.ResponseWriter, r *http.Request, path stri
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "datadog/api/v1/series":
-datadogWriteRequests.Inc()
-if err := datadog.InsertHandlerForHTTP(at, r); err != nil {
-datadogWriteErrors.Inc()
+datadogv1WriteRequests.Inc()
+if err := datadogv1.InsertHandlerForHTTP(at, r); err != nil {
+datadogv1WriteErrors.Inc()
+httpserver.Errorf(w, r, "%s", err)
+return true
+}
+w.WriteHeader(202)
+fmt.Fprintf(w, `{"status":"ok"}`)
+return true
+case "datadog/api/v2/series":
+datadogv2WriteRequests.Inc()
+if err := datadogv2.InsertHandlerForHTTP(at, r); err != nil {
+datadogv2WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
@ -631,8 +654,11 @@ var (
influxQueryRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/influx/query", protocol="influx"}`) influxQueryRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/influx/query", protocol="influx"}`)
datadogWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/series", protocol="datadog"}`) datadogv1WriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogWriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/datadog/api/v1/series", protocol="datadog"}`) datadogv1WriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogv2WriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v2/series", protocol="datadog"}`)
datadogv2WriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/datadog/api/v2/series", protocol="datadog"}`)
datadogValidateRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`) datadogValidateRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
datadogCheckRunRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`) datadogCheckRunRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)

View file

@ -6,7 +6,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth" "github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb" "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal" "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common" parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
@ -48,8 +47,8 @@ func insertRows(at *auth.Token, timeseries []prompb.TimeSeries, extraLabels []pr
for i := range ts.Labels { for i := range ts.Labels {
label := &ts.Labels[i] label := &ts.Labels[i]
labels = append(labels, prompbmarshal.Label{ labels = append(labels, prompbmarshal.Label{
Name: bytesutil.ToUnsafeString(label.Name), Name: label.Name,
Value: bytesutil.ToUnsafeString(label.Value), Value: label.Value,
}) })
} }
labels = append(labels, extraLabels...) labels = append(labels, extraLabels...)

View file

@ -58,8 +58,10 @@ var (
oauth2ClientID = flagutil.NewArrayString("remoteWrite.oauth2.clientID", "Optional OAuth2 clientID to use for the corresponding -remoteWrite.url") oauth2ClientID = flagutil.NewArrayString("remoteWrite.oauth2.clientID", "Optional OAuth2 clientID to use for the corresponding -remoteWrite.url")
oauth2ClientSecret = flagutil.NewArrayString("remoteWrite.oauth2.clientSecret", "Optional OAuth2 clientSecret to use for the corresponding -remoteWrite.url") oauth2ClientSecret = flagutil.NewArrayString("remoteWrite.oauth2.clientSecret", "Optional OAuth2 clientSecret to use for the corresponding -remoteWrite.url")
oauth2ClientSecretFile = flagutil.NewArrayString("remoteWrite.oauth2.clientSecretFile", "Optional OAuth2 clientSecretFile to use for the corresponding -remoteWrite.url") oauth2ClientSecretFile = flagutil.NewArrayString("remoteWrite.oauth2.clientSecretFile", "Optional OAuth2 clientSecretFile to use for the corresponding -remoteWrite.url")
oauth2TokenURL = flagutil.NewArrayString("remoteWrite.oauth2.tokenUrl", "Optional OAuth2 tokenURL to use for the corresponding -remoteWrite.url") oauth2EndpointParams = flagutil.NewArrayString("remoteWrite.oauth2.endpointParams", "Optional OAuth2 endpoint parameters to use for the corresponding -remoteWrite.url . "+
oauth2Scopes = flagutil.NewArrayString("remoteWrite.oauth2.scopes", "Optional OAuth2 scopes to use for the corresponding -remoteWrite.url. Scopes must be delimited by ';'") `The endpoint parameters must be set in JSON format: {"param1":"value1",...,"paramN":"valueN"}`)
oauth2TokenURL = flagutil.NewArrayString("remoteWrite.oauth2.tokenUrl", "Optional OAuth2 tokenURL to use for the corresponding -remoteWrite.url")
oauth2Scopes = flagutil.NewArrayString("remoteWrite.oauth2.scopes", "Optional OAuth2 scopes to use for the corresponding -remoteWrite.url. Scopes must be delimited by ';'")
awsUseSigv4 = flagutil.NewArrayBool("remoteWrite.aws.useSigv4", "Enables SigV4 request signing for the corresponding -remoteWrite.url. "+ awsUseSigv4 = flagutil.NewArrayBool("remoteWrite.aws.useSigv4", "Enables SigV4 request signing for the corresponding -remoteWrite.url. "+
"It is expected that other -remoteWrite.aws.* command-line flags are set if sigv4 request signing is enabled") "It is expected that other -remoteWrite.aws.* command-line flags are set if sigv4 request signing is enabled")
@ -234,10 +236,16 @@ func getAuthConfig(argIdx int) (*promauth.Config, error) {
clientSecret := oauth2ClientSecret.GetOptionalArg(argIdx) clientSecret := oauth2ClientSecret.GetOptionalArg(argIdx)
clientSecretFile := oauth2ClientSecretFile.GetOptionalArg(argIdx) clientSecretFile := oauth2ClientSecretFile.GetOptionalArg(argIdx)
if clientSecretFile != "" || clientSecret != "" { if clientSecretFile != "" || clientSecret != "" {
endpointParamsJSON := oauth2EndpointParams.GetOptionalArg(argIdx)
endpointParams, err := flagutil.ParseJSONMap(endpointParamsJSON)
if err != nil {
return nil, fmt.Errorf("cannot parse JSON for -remoteWrite.oauth2.endpointParams=%s: %w", endpointParamsJSON, err)
}
oauth2Cfg = &promauth.OAuth2Config{ oauth2Cfg = &promauth.OAuth2Config{
ClientID: oauth2ClientID.GetOptionalArg(argIdx), ClientID: oauth2ClientID.GetOptionalArg(argIdx),
ClientSecret: promauth.NewSecret(clientSecret), ClientSecret: promauth.NewSecret(clientSecret),
ClientSecretFile: clientSecretFile, ClientSecretFile: clientSecretFile,
EndpointParams: endpointParams,
TokenURL: oauth2TokenURL.GetOptionalArg(argIdx), TokenURL: oauth2TokenURL.GetOptionalArg(argIdx),
Scopes: strings.Split(oauth2Scopes.GetOptionalArg(argIdx), ";"), Scopes: strings.Split(oauth2Scopes.GetOptionalArg(argIdx), ";"),
} }
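A hedged sketch of how the new -remoteWrite.oauth2.endpointParams flag flows through getAuthConfig above: the flag value is a JSON object parsed by flagutil.ParseJSONMap and attached to promauth.OAuth2Config.EndpointParams. The `audience` parameter and the other literal values below are illustrative assumptions, not part of this change. The same flag shape is reused by -datasource.oauth2.endpointParams, -notifier.oauth2.endpointParams, -remoteRead.oauth2.endpointParams and -remoteWrite.oauth2.endpointParams in vmalert further below.

// sketch only; assumes the repository's flagutil and promauth packages
endpointParams, err := flagutil.ParseJSONMap(`{"audience":"https://example.com/api"}`) // hypothetical extra token-endpoint parameter
if err != nil {
	return nil, fmt.Errorf("cannot parse endpointParams: %w", err)
}
oauth2Cfg := &promauth.OAuth2Config{
	ClientID:       "vmagent",
	ClientSecret:   promauth.NewSecret("top-secret"),
	TokenURL:       "https://sso.example.com/oauth2/token",
	Scopes:         []string{"metrics.write"},
	EndpointParams: endpointParams, // sent as extra form values with every token request
}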

View file

@ -228,7 +228,7 @@ func tryPushWriteRequest(wr *prompbmarshal.WriteRequest, tryPushBlock func(block
return true return true
} }
bb := writeRequestBufPool.Get() bb := writeRequestBufPool.Get()
bb.B = prompbmarshal.MarshalWriteRequest(bb.B[:0], wr) bb.B = wr.MarshalProtobuf(bb.B[:0])
if len(bb.B) <= maxUnpackedBlockSize.IntN() { if len(bb.B) <= maxUnpackedBlockSize.IntN() {
zb := snappyBufPool.Get() zb := snappyBufPool.Get()
if isVMRemoteWrite { if isVMRemoteWrite {
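This hunk replaces the package-level prompbmarshal.MarshalWriteRequest helper with the WriteRequest.MarshalProtobuf method. A minimal sketch of the marshal-then-compress step performed here, assuming the repository's prompbmarshal package and the golang/snappy library already imported elsewhere in this patch:

// sketch only
var wr prompbmarshal.WriteRequest
wr.Timeseries = append(wr.Timeseries, tss...) // tss: time series prepared by the caller (placeholder)
raw := wr.MarshalProtobuf(nil)                // protobuf-encode the WriteRequest
block := snappy.Encode(nil, raw)              // Prometheus remote write expects snappy-compressed bodies
_ = block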

View file

@ -43,7 +43,7 @@ func testPushWriteRequest(t *testing.T, rowsCount, expectedBlockLenProm, expecte
} }
// Check Prometheus remote write // Check Prometheus remote write
f(false, expectedBlockLenProm, 0) f(false, expectedBlockLenProm, 3)
// Check VictoriaMetrics remote write // Check VictoriaMetrics remote write
f(true, expectedBlockLenVM, 15) f(true, expectedBlockLenVM, 15)

View file

@ -4,7 +4,6 @@ import (
"fmt" "fmt"
"testing" "testing"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/golang/snappy" "github.com/golang/snappy"
"github.com/klauspost/compress/s2" "github.com/klauspost/compress/s2"
) )
@ -22,7 +21,7 @@ func benchmarkCompressWriteRequest(b *testing.B, compressFunc func(dst, src []by
for _, rowsCount := range []int{1, 10, 100, 1e3, 1e4} { for _, rowsCount := range []int{1, 10, 100, 1e3, 1e4} {
b.Run(fmt.Sprintf("rows_%d", rowsCount), func(b *testing.B) { b.Run(fmt.Sprintf("rows_%d", rowsCount), func(b *testing.B) {
wr := newTestWriteRequest(rowsCount, 10) wr := newTestWriteRequest(rowsCount, 10)
data := prompbmarshal.MarshalWriteRequest(nil, wr) data := wr.MarshalProtobuf(nil)
b.ReportAllocs() b.ReportAllocs()
b.SetBytes(int64(rowsCount)) b.SetBytes(int64(rowsCount))
b.RunParallel(func(pb *testing.PB) { b.RunParallel(func(pb *testing.PB) {

View file

@ -276,7 +276,7 @@ func reloadRelabelConfigs() {
var ( var (
relabelConfigReloads = metrics.NewCounter(`vmagent_relabel_config_reloads_total`) relabelConfigReloads = metrics.NewCounter(`vmagent_relabel_config_reloads_total`)
relabelConfigReloadErrors = metrics.NewCounter(`vmagent_relabel_config_reloads_errors_total`) relabelConfigReloadErrors = metrics.NewCounter(`vmagent_relabel_config_reloads_errors_total`)
relabelConfigSuccess = metrics.NewCounter(`vmagent_relabel_config_last_reload_successful`) relabelConfigSuccess = metrics.NewGauge(`vmagent_relabel_config_last_reload_successful`, nil)
relabelConfigTimestamp = metrics.NewCounter(`vmagent_relabel_config_last_reload_success_timestamp_seconds`) relabelConfigTimestamp = metrics.NewCounter(`vmagent_relabel_config_last_reload_success_timestamp_seconds`)
) )
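Several `*_last_reload_successful` metrics are switched from Counter to Gauge in this commit, since the value has to be able to drop back to 0 after a failed reload. A hedged sketch of the intended usage, assuming the VictoriaMetrics/metrics Gauge API; reloadConfig is a hypothetical stand-in for the real reload function:

// sketch only
ok := metrics.NewGauge(`vmagent_relabel_config_last_reload_successful`, nil)
if err := reloadConfig(); err != nil { // reloadConfig is hypothetical
	ok.Set(0) // a Gauge may be reset to 0, while a Counter can only grow
} else {
	ok.Set(1)
}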

View file

@ -37,11 +37,13 @@ var (
tlsCAFile = flag.String("datasource.tlsCAFile", "", `Optional path to TLS CA file to use for verifying connections to -datasource.url. By default, system CA is used`) tlsCAFile = flag.String("datasource.tlsCAFile", "", `Optional path to TLS CA file to use for verifying connections to -datasource.url. By default, system CA is used`)
tlsServerName = flag.String("datasource.tlsServerName", "", `Optional TLS server name to use for connections to -datasource.url. By default, the server name from -datasource.url is used`) tlsServerName = flag.String("datasource.tlsServerName", "", `Optional TLS server name to use for connections to -datasource.url. By default, the server name from -datasource.url is used`)
oauth2ClientID = flag.String("datasource.oauth2.clientID", "", "Optional OAuth2 clientID to use for -datasource.url. ") oauth2ClientID = flag.String("datasource.oauth2.clientID", "", "Optional OAuth2 clientID to use for -datasource.url")
oauth2ClientSecret = flag.String("datasource.oauth2.clientSecret", "", "Optional OAuth2 clientSecret to use for -datasource.url.") oauth2ClientSecret = flag.String("datasource.oauth2.clientSecret", "", "Optional OAuth2 clientSecret to use for -datasource.url")
oauth2ClientSecretFile = flag.String("datasource.oauth2.clientSecretFile", "", "Optional OAuth2 clientSecretFile to use for -datasource.url. ") oauth2ClientSecretFile = flag.String("datasource.oauth2.clientSecretFile", "", "Optional OAuth2 clientSecretFile to use for -datasource.url")
oauth2TokenURL = flag.String("datasource.oauth2.tokenUrl", "", "Optional OAuth2 tokenURL to use for -datasource.url.") oauth2EndpointParams = flag.String("datasource.oauth2.endpointParams", "", "Optional OAuth2 endpoint parameters to use for -datasource.url . "+
oauth2Scopes = flag.String("datasource.oauth2.scopes", "", "Optional OAuth2 scopes to use for -datasource.url. Scopes must be delimited by ';'") `The endpoint parameters must be set in JSON format: {"param1":"value1",...,"paramN":"valueN"}`)
oauth2TokenURL = flag.String("datasource.oauth2.tokenUrl", "", "Optional OAuth2 tokenURL to use for -datasource.url")
oauth2Scopes = flag.String("datasource.oauth2.scopes", "", "Optional OAuth2 scopes to use for -datasource.url. Scopes must be delimited by ';'")
lookBack = flag.Duration("datasource.lookback", 0, `Will be deprecated soon, please adjust "-search.latencyOffset" at datasource side `+ lookBack = flag.Duration("datasource.lookback", 0, `Will be deprecated soon, please adjust "-search.latencyOffset" at datasource side `+
`or specify "latency_offset" in rule group's params. Lookback defines how far into the past to look when evaluating queries. `+ `or specify "latency_offset" in rule group's params. Lookback defines how far into the past to look when evaluating queries. `+
@ -108,10 +110,14 @@ func Init(extraParams url.Values) (QuerierBuilder, error) {
extraParams.Set("round_digits", fmt.Sprintf("%d", *roundDigits)) extraParams.Set("round_digits", fmt.Sprintf("%d", *roundDigits))
} }
endpointParams, err := flagutil.ParseJSONMap(*oauth2EndpointParams)
if err != nil {
return nil, fmt.Errorf("cannot parse JSON for -datasource.oauth2.endpointParams=%s: %w", *oauth2EndpointParams, err)
}
authCfg, err := utils.AuthConfig( authCfg, err := utils.AuthConfig(
utils.WithBasicAuth(*basicAuthUsername, *basicAuthPassword, *basicAuthPasswordFile), utils.WithBasicAuth(*basicAuthUsername, *basicAuthPassword, *basicAuthPasswordFile),
utils.WithBearer(*bearerToken, *bearerTokenFile), utils.WithBearer(*bearerToken, *bearerTokenFile),
utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes), utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes, endpointParams),
utils.WithHeaders(*headers)) utils.WithHeaders(*headers))
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to configure auth: %w", err) return nil, fmt.Errorf("failed to configure auth: %w", err)

View file

@ -96,7 +96,6 @@ func main() {
notifier.InitSecretFlags() notifier.InitSecretFlags()
buildinfo.Init() buildinfo.Init()
logger.Init() logger.Init()
pushmetrics.Init()
if !*remoteReadIgnoreRestoreErrors { if !*remoteReadIgnoreRestoreErrors {
logger.Warnf("flag `remoteRead.ignoreRestoreErrors` is deprecated and will be removed in next releases.") logger.Warnf("flag `remoteRead.ignoreRestoreErrors` is deprecated and will be removed in next releases.")
@ -182,8 +181,11 @@ func main() {
rh := &requestHandler{m: manager} rh := &requestHandler{m: manager}
go httpserver.Serve(*httpListenAddr, *useProxyProtocol, rh.handler) go httpserver.Serve(*httpListenAddr, *useProxyProtocol, rh.handler)
pushmetrics.Init()
sig := procutil.WaitForSigterm() sig := procutil.WaitForSigterm()
logger.Infof("service received signal %s", sig) logger.Infof("service received signal %s", sig)
pushmetrics.Stop()
if err := httpserver.Stop(*httpListenAddr); err != nil { if err := httpserver.Stop(*httpListenAddr); err != nil {
logger.Fatalf("cannot stop the webservice: %s", err) logger.Fatalf("cannot stop the webservice: %s", err)
} }
@ -194,7 +196,7 @@ func main() {
var ( var (
configReloads = metrics.NewCounter(`vmalert_config_last_reload_total`) configReloads = metrics.NewCounter(`vmalert_config_last_reload_total`)
configReloadErrors = metrics.NewCounter(`vmalert_config_last_reload_errors_total`) configReloadErrors = metrics.NewCounter(`vmalert_config_last_reload_errors_total`)
configSuccess = metrics.NewCounter(`vmalert_config_last_reload_successful`) configSuccess = metrics.NewGauge(`vmalert_config_last_reload_successful`, nil)
configTimestamp = metrics.NewCounter(`vmalert_config_last_reload_success_timestamp_seconds`) configTimestamp = metrics.NewCounter(`vmalert_config_last_reload_success_timestamp_seconds`)
) )

View file

@ -141,7 +141,7 @@ groups:
t.Fatalf("expected to have config error %s; got nil instead", cErr) t.Fatalf("expected to have config error %s; got nil instead", cErr)
} }
if cfgSuc != 0 { if cfgSuc != 0 {
t.Fatalf("expected to have metric configSuccess to be set to 0; got %d instead", cfgSuc) t.Fatalf("expected to have metric configSuccess to be set to 0; got %v instead", cfgSuc)
} }
return return
} }
@ -150,7 +150,7 @@ groups:
t.Fatalf("unexpected config error: %s", cErr) t.Fatalf("unexpected config error: %s", cErr)
} }
if cfgSuc != 1 { if cfgSuc != 1 {
t.Fatalf("expected to have metric configSuccess to be set to 1; got %d instead", cfgSuc) t.Fatalf("expected to have metric configSuccess to be set to 1; got %v instead", cfgSuc)
} }
} }

View file

@ -144,7 +144,7 @@ func NewAlertManager(alertManagerURL string, fn AlertURLGenerator, authCfg proma
aCfg, err := utils.AuthConfig( aCfg, err := utils.AuthConfig(
utils.WithBasicAuth(ba.Username, ba.Password.String(), ba.PasswordFile), utils.WithBasicAuth(ba.Username, ba.Password.String(), ba.PasswordFile),
utils.WithBearer(authCfg.BearerToken.String(), authCfg.BearerTokenFile), utils.WithBearer(authCfg.BearerToken.String(), authCfg.BearerTokenFile),
utils.WithOAuth(oauth.ClientID, oauth.ClientSecretFile, oauth.ClientSecretFile, oauth.TokenURL, strings.Join(oauth.Scopes, ";"))) utils.WithOAuth(oauth.ClientID, oauth.ClientSecretFile, oauth.ClientSecretFile, oauth.TokenURL, strings.Join(oauth.Scopes, ";"), oauth.EndpointParams))
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to configure auth: %w", err) return nil, fmt.Errorf("failed to configure auth: %w", err)
} }

View file

@ -46,6 +46,8 @@ var (
"If multiple args are set, then they are applied independently for the corresponding -notifier.url") "If multiple args are set, then they are applied independently for the corresponding -notifier.url")
oauth2ClientSecretFile = flagutil.NewArrayString("notifier.oauth2.clientSecretFile", "Optional OAuth2 clientSecretFile to use for -notifier.url. "+ oauth2ClientSecretFile = flagutil.NewArrayString("notifier.oauth2.clientSecretFile", "Optional OAuth2 clientSecretFile to use for -notifier.url. "+
"If multiple args are set, then they are applied independently for the corresponding -notifier.url") "If multiple args are set, then they are applied independently for the corresponding -notifier.url")
oauth2EndpointParams = flagutil.NewArrayString("notifier.oauth2.endpointParams", "Optional OAuth2 endpoint parameters to use for the corresponding -notifier.url . "+
`The endpoint parameters must be set in JSON format: {"param1":"value1",...,"paramN":"valueN"}`)
oauth2TokenURL = flagutil.NewArrayString("notifier.oauth2.tokenUrl", "Optional OAuth2 tokenURL to use for -notifier.url. "+ oauth2TokenURL = flagutil.NewArrayString("notifier.oauth2.tokenUrl", "Optional OAuth2 tokenURL to use for -notifier.url. "+
"If multiple args are set, then they are applied independently for the corresponding -notifier.url") "If multiple args are set, then they are applied independently for the corresponding -notifier.url")
oauth2Scopes = flagutil.NewArrayString("notifier.oauth2.scopes", "Optional OAuth2 scopes to use for -notifier.url. Scopes must be delimited by ';'. "+ oauth2Scopes = flagutil.NewArrayString("notifier.oauth2.scopes", "Optional OAuth2 scopes to use for -notifier.url. Scopes must be delimited by ';'. "+
@ -141,6 +143,11 @@ func InitSecretFlags() {
func notifiersFromFlags(gen AlertURLGenerator) ([]Notifier, error) { func notifiersFromFlags(gen AlertURLGenerator) ([]Notifier, error) {
var notifiers []Notifier var notifiers []Notifier
for i, addr := range *addrs { for i, addr := range *addrs {
endpointParamsJSON := oauth2EndpointParams.GetOptionalArg(i)
endpointParams, err := flagutil.ParseJSONMap(endpointParamsJSON)
if err != nil {
return nil, fmt.Errorf("cannot parse JSON for -notifier.oauth2.endpointParams=%s: %w", endpointParamsJSON, err)
}
authCfg := promauth.HTTPClientConfig{ authCfg := promauth.HTTPClientConfig{
TLSConfig: &promauth.TLSConfig{ TLSConfig: &promauth.TLSConfig{
CAFile: tlsCAFile.GetOptionalArg(i), CAFile: tlsCAFile.GetOptionalArg(i),
@ -160,6 +167,7 @@ func notifiersFromFlags(gen AlertURLGenerator) ([]Notifier, error) {
ClientID: oauth2ClientID.GetOptionalArg(i), ClientID: oauth2ClientID.GetOptionalArg(i),
ClientSecret: promauth.NewSecret(oauth2ClientSecret.GetOptionalArg(i)), ClientSecret: promauth.NewSecret(oauth2ClientSecret.GetOptionalArg(i)),
ClientSecretFile: oauth2ClientSecretFile.GetOptionalArg(i), ClientSecretFile: oauth2ClientSecretFile.GetOptionalArg(i),
EndpointParams: endpointParams,
Scopes: strings.Split(oauth2Scopes.GetOptionalArg(i), ";"), Scopes: strings.Split(oauth2Scopes.GetOptionalArg(i), ";"),
TokenURL: oauth2TokenURL.GetOptionalArg(i), TokenURL: oauth2TokenURL.GetOptionalArg(i),
}, },

View file

@ -41,8 +41,10 @@ var (
oauth2ClientID = flag.String("remoteRead.oauth2.clientID", "", "Optional OAuth2 clientID to use for -remoteRead.url.") oauth2ClientID = flag.String("remoteRead.oauth2.clientID", "", "Optional OAuth2 clientID to use for -remoteRead.url.")
oauth2ClientSecret = flag.String("remoteRead.oauth2.clientSecret", "", "Optional OAuth2 clientSecret to use for -remoteRead.url.") oauth2ClientSecret = flag.String("remoteRead.oauth2.clientSecret", "", "Optional OAuth2 clientSecret to use for -remoteRead.url.")
oauth2ClientSecretFile = flag.String("remoteRead.oauth2.clientSecretFile", "", "Optional OAuth2 clientSecretFile to use for -remoteRead.url.") oauth2ClientSecretFile = flag.String("remoteRead.oauth2.clientSecretFile", "", "Optional OAuth2 clientSecretFile to use for -remoteRead.url.")
oauth2TokenURL = flag.String("remoteRead.oauth2.tokenUrl", "", "Optional OAuth2 tokenURL to use for -remoteRead.url. ") oauth2EndpointParams = flag.String("remoteRead.oauth2.endpointParams", "", "Optional OAuth2 endpoint parameters to use for -remoteRead.url . "+
oauth2Scopes = flag.String("remoteRead.oauth2.scopes", "", "Optional OAuth2 scopes to use for -remoteRead.url. Scopes must be delimited by ';'.") `The endpoint parameters must be set in JSON format: {"param1":"value1",...,"paramN":"valueN"}`)
oauth2TokenURL = flag.String("remoteRead.oauth2.tokenUrl", "", "Optional OAuth2 tokenURL to use for -remoteRead.url. ")
oauth2Scopes = flag.String("remoteRead.oauth2.scopes", "", "Optional OAuth2 scopes to use for -remoteRead.url. Scopes must be delimited by ';'.")
) )
// InitSecretFlags must be called after flag.Parse and before any logging // InitSecretFlags must be called after flag.Parse and before any logging
@ -63,10 +65,14 @@ func Init() (datasource.QuerierBuilder, error) {
return nil, fmt.Errorf("failed to create transport: %w", err) return nil, fmt.Errorf("failed to create transport: %w", err)
} }
endpointParams, err := flagutil.ParseJSONMap(*oauth2EndpointParams)
if err != nil {
return nil, fmt.Errorf("cannot parse JSON for -remoteRead.oauth2.endpointParams=%s: %w", *oauth2EndpointParams, err)
}
authCfg, err := utils.AuthConfig( authCfg, err := utils.AuthConfig(
utils.WithBasicAuth(*basicAuthUsername, *basicAuthPassword, *basicAuthPasswordFile), utils.WithBasicAuth(*basicAuthUsername, *basicAuthPassword, *basicAuthPasswordFile),
utils.WithBearer(*bearerToken, *bearerTokenFile), utils.WithBearer(*bearerToken, *bearerTokenFile),
utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes), utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes, endpointParams),
utils.WithHeaders(*headers)) utils.WithHeaders(*headers))
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to configure auth: %w", err) return nil, fmt.Errorf("failed to configure auth: %w", err)

View file

@ -123,14 +123,12 @@ func (c *Client) Push(s prompbmarshal.TimeSeries) error {
case <-c.doneCh: case <-c.doneCh:
rwErrors.Inc() rwErrors.Inc()
droppedRows.Add(len(s.Samples)) droppedRows.Add(len(s.Samples))
droppedBytes.Add(s.Size())
return fmt.Errorf("client is closed") return fmt.Errorf("client is closed")
case c.input <- s: case c.input <- s:
return nil return nil
default: default:
rwErrors.Inc() rwErrors.Inc()
droppedRows.Add(len(s.Samples)) droppedRows.Add(len(s.Samples))
droppedBytes.Add(s.Size())
return fmt.Errorf("failed to push timeseries - queue is full (%d entries). "+ return fmt.Errorf("failed to push timeseries - queue is full (%d entries). "+
"Queue size is controlled by -remoteWrite.maxQueueSize flag", "Queue size is controlled by -remoteWrite.maxQueueSize flag",
c.maxQueueSize) c.maxQueueSize)
@ -195,7 +193,6 @@ var (
sentRows = metrics.NewCounter(`vmalert_remotewrite_sent_rows_total`) sentRows = metrics.NewCounter(`vmalert_remotewrite_sent_rows_total`)
sentBytes = metrics.NewCounter(`vmalert_remotewrite_sent_bytes_total`) sentBytes = metrics.NewCounter(`vmalert_remotewrite_sent_bytes_total`)
droppedRows = metrics.NewCounter(`vmalert_remotewrite_dropped_rows_total`) droppedRows = metrics.NewCounter(`vmalert_remotewrite_dropped_rows_total`)
droppedBytes = metrics.NewCounter(`vmalert_remotewrite_dropped_bytes_total`)
sendDuration = metrics.NewFloatCounter(`vmalert_remotewrite_send_duration_seconds_total`) sendDuration = metrics.NewFloatCounter(`vmalert_remotewrite_send_duration_seconds_total`)
bufferFlushDuration = metrics.NewHistogram(`vmalert_remotewrite_flush_duration_seconds`) bufferFlushDuration = metrics.NewHistogram(`vmalert_remotewrite_flush_duration_seconds`)
@ -211,15 +208,10 @@ func (c *Client) flush(ctx context.Context, wr *prompbmarshal.WriteRequest) {
if len(wr.Timeseries) < 1 { if len(wr.Timeseries) < 1 {
return return
} }
defer prompbmarshal.ResetWriteRequest(wr) defer wr.Reset()
defer bufferFlushDuration.UpdateDuration(time.Now()) defer bufferFlushDuration.UpdateDuration(time.Now())
data, err := wr.Marshal() data := wr.MarshalProtobuf(nil)
if err != nil {
logger.Errorf("failed to marshal WriteRequest: %s", err)
return
}
b := snappy.Encode(nil, data) b := snappy.Encode(nil, data)
retryInterval, maxRetryInterval := *retryMinInterval, *retryMaxTime retryInterval, maxRetryInterval := *retryMinInterval, *retryMaxTime
@ -276,8 +268,11 @@ L:
} }
rwErrors.Inc() rwErrors.Inc()
droppedRows.Add(len(wr.Timeseries)) rows := 0
droppedBytes.Add(len(b)) for _, ts := range wr.Timeseries {
rows += len(ts.Samples)
}
droppedRows.Add(rows)
logger.Errorf("attempts to send remote-write request failed - dropping %d time series", logger.Errorf("attempts to send remote-write request failed - dropping %d time series",
len(wr.Timeseries)) len(wr.Timeseries))
} }

View file

@ -140,7 +140,7 @@ func (rw *rwServer) handler(w http.ResponseWriter, r *http.Request) {
return return
} }
wr := &prompb.WriteRequest{} wr := &prompb.WriteRequest{}
if err := wr.Unmarshal(b); err != nil { if err := wr.UnmarshalProtobuf(b); err != nil {
rw.err(w, fmt.Errorf("unmarshal err: %w", err)) rw.err(w, fmt.Errorf("unmarshal err: %w", err))
return return
} }

View file

@ -49,10 +49,7 @@ func (c *DebugClient) Push(s prompbmarshal.TimeSeries) error {
c.wg.Add(1) c.wg.Add(1)
defer c.wg.Done() defer c.wg.Done()
wr := &prompbmarshal.WriteRequest{Timeseries: []prompbmarshal.TimeSeries{s}} wr := &prompbmarshal.WriteRequest{Timeseries: []prompbmarshal.TimeSeries{s}}
data, err := wr.Marshal() data := wr.MarshalProtobuf(nil)
if err != nil {
return fmt.Errorf("failed to marshal the given time series: %w", err)
}
return c.send(data) return c.send(data)
} }

View file

@ -41,11 +41,13 @@ var (
tlsServerName = flag.String("remoteWrite.tlsServerName", "", "Optional TLS server name to use for connections to -remoteWrite.url. "+ tlsServerName = flag.String("remoteWrite.tlsServerName", "", "Optional TLS server name to use for connections to -remoteWrite.url. "+
"By default, the server name from -remoteWrite.url is used") "By default, the server name from -remoteWrite.url is used")
oauth2ClientID = flag.String("remoteWrite.oauth2.clientID", "", "Optional OAuth2 clientID to use for -remoteWrite.url.") oauth2ClientID = flag.String("remoteWrite.oauth2.clientID", "", "Optional OAuth2 clientID to use for -remoteWrite.url")
oauth2ClientSecret = flag.String("remoteWrite.oauth2.clientSecret", "", "Optional OAuth2 clientSecret to use for -remoteWrite.url.") oauth2ClientSecret = flag.String("remoteWrite.oauth2.clientSecret", "", "Optional OAuth2 clientSecret to use for -remoteWrite.url")
oauth2ClientSecretFile = flag.String("remoteWrite.oauth2.clientSecretFile", "", "Optional OAuth2 clientSecretFile to use for -remoteWrite.url.") oauth2ClientSecretFile = flag.String("remoteWrite.oauth2.clientSecretFile", "", "Optional OAuth2 clientSecretFile to use for -remoteWrite.url")
oauth2TokenURL = flag.String("remoteWrite.oauth2.tokenUrl", "", "Optional OAuth2 tokenURL to use for -notifier.url.") oauth2EndpointParams = flag.String("remoteWrite.oauth2.endpointParams", "", "Optional OAuth2 endpoint parameters to use for -remoteWrite.url . "+
oauth2Scopes = flag.String("remoteWrite.oauth2.scopes", "", "Optional OAuth2 scopes to use for -notifier.url. Scopes must be delimited by ';'.") `The endpoint parameters must be set in JSON format: {"param1":"value1",...,"paramN":"valueN"}`)
oauth2TokenURL = flag.String("remoteWrite.oauth2.tokenUrl", "", "Optional OAuth2 tokenURL to use for -notifier.url.")
oauth2Scopes = flag.String("remoteWrite.oauth2.scopes", "", "Optional OAuth2 scopes to use for -notifier.url. Scopes must be delimited by ';'.")
) )
// InitSecretFlags must be called after flag.Parse and before any logging // InitSecretFlags must be called after flag.Parse and before any logging
@ -67,10 +69,14 @@ func Init(ctx context.Context) (*Client, error) {
return nil, fmt.Errorf("failed to create transport: %w", err) return nil, fmt.Errorf("failed to create transport: %w", err)
} }
endpointParams, err := flagutil.ParseJSONMap(*oauth2EndpointParams)
if err != nil {
return nil, fmt.Errorf("cannot parse JSON for -remoteWrite.oauth2.endpointParams=%s: %w", *oauth2EndpointParams, err)
}
authCfg, err := utils.AuthConfig( authCfg, err := utils.AuthConfig(
utils.WithBasicAuth(*basicAuthUsername, *basicAuthPassword, *basicAuthPasswordFile), utils.WithBasicAuth(*basicAuthUsername, *basicAuthPassword, *basicAuthPasswordFile),
utils.WithBearer(*bearerToken, *bearerTokenFile), utils.WithBearer(*bearerToken, *bearerTokenFile),
utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes), utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes, endpointParams),
utils.WithHeaders(*headers)) utils.WithHeaders(*headers))
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to configure auth: %w", err) return nil, fmt.Errorf("failed to configure auth: %w", err)

View file

@ -237,11 +237,30 @@ type labelSet struct {
origin map[string]string origin map[string]string
// processed labels includes origin labels // processed labels includes origin labels
// plus extra labels (group labels, service labels like alertNameLabel). // plus extra labels (group labels, service labels like alertNameLabel).
// in case of conflicts, extra labels are preferred. // in case of key conflicts, origin labels are renamed with prefix `exported_` and extra labels are preferred.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5161
// used as labels attached to notifier.Alert and ALERTS series written to remote storage. // used as labels attached to notifier.Alert and ALERTS series written to remote storage.
processed map[string]string processed map[string]string
} }
// add adds the value v with key k to both the origin and processed label sets.
// If k already exists in the processed set, the passed v wins.
// If k already exists in the origin set, the original value is kept there and is also copied
// to the processed set under the `exported_`-prefixed key, but only if v differs from origin[k].
func (ls *labelSet) add(k, v string) {
ls.processed[k] = v
ov, ok := ls.origin[k]
if !ok {
ls.origin[k] = v
return
}
if ov != v {
// copy value only if v and ov are different
key := fmt.Sprintf("exported_%s", k)
ls.processed[key] = ov
}
}
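A brief illustration of the conflict handling implemented by add above; the label names and values are made up:

// sketch only
ls := &labelSet{
	origin:    map[string]string{"job": "node-exporter"},
	processed: map[string]string{"job": "node-exporter"},
}
ls.add("job", "vmalert")
// ls.processed now holds {"job": "vmalert", "exported_job": "node-exporter"},
// while ls.origin stays {"job": "node-exporter"}.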
// toLabels converts labels from given Metric // toLabels converts labels from given Metric
// to labelSet which contains original and processed labels. // to labelSet which contains original and processed labels.
func (ar *AlertingRule) toLabels(m datasource.Metric, qFn templates.QueryFn) (*labelSet, error) { func (ar *AlertingRule) toLabels(m datasource.Metric, qFn templates.QueryFn) (*labelSet, error) {
@ -267,24 +286,14 @@ func (ar *AlertingRule) toLabels(m datasource.Metric, qFn templates.QueryFn) (*l
return nil, fmt.Errorf("failed to expand labels: %w", err) return nil, fmt.Errorf("failed to expand labels: %w", err)
} }
for k, v := range extraLabels { for k, v := range extraLabels {
ls.processed[k] = v ls.add(k, v)
if _, ok := ls.origin[k]; !ok {
ls.origin[k] = v
}
} }
// set additional labels to identify group and rule name // set additional labels to identify group and rule name
if ar.Name != "" { if ar.Name != "" {
ls.processed[alertNameLabel] = ar.Name ls.add(alertNameLabel, ar.Name)
if _, ok := ls.origin[alertNameLabel]; !ok {
ls.origin[alertNameLabel] = ar.Name
}
} }
if !*disableAlertGroupLabel && ar.GroupName != "" { if !*disableAlertGroupLabel && ar.GroupName != "" {
ls.processed[alertGroupNameLabel] = ar.GroupName ls.add(alertGroupNameLabel, ar.GroupName)
if _, ok := ls.origin[alertGroupNameLabel]; !ok {
ls.origin[alertGroupNameLabel] = ar.GroupName
}
} }
return ls, nil return ls, nil
} }
@ -414,8 +423,7 @@ func (ar *AlertingRule) exec(ctx context.Context, ts time.Time, limit int) ([]pr
} }
h := hash(ls.processed) h := hash(ls.processed)
if _, ok := updated[h]; ok { if _, ok := updated[h]; ok {
// duplicate may be caused by extra labels // duplicate may be caused by the removal of the `__name__` label
// conflicting with the metric labels
curState.Err = fmt.Errorf("labels %v: %w", ls.processed, errDuplicate) curState.Err = fmt.Errorf("labels %v: %w", ls.processed, errDuplicate)
return nil, curState.Err return nil, curState.Err
} }

View file

@ -768,14 +768,16 @@ func TestAlertingRule_Exec_Negative(t *testing.T) {
ar.q = fq ar.q = fq
// successful attempt // successful attempt
// label `job` will be overridden by the rule's extra label; the original value will be preserved as "exported_job"
fq.Add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "bar")) fq.Add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "bar"))
fq.Add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "baz"))
_, err := ar.exec(context.TODO(), time.Now(), 0) _, err := ar.exec(context.TODO(), time.Now(), 0)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
// label `job` will collide with rule extra label and will make both time series equal // label `__name__` will be omitted, which produces duplicate series here
fq.Add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "baz")) fq.Add(metricWithValueAndLabels(t, 1, "__name__", "foo_1", "job", "bar"))
_, err = ar.exec(context.TODO(), time.Now(), 0) _, err = ar.exec(context.TODO(), time.Now(), 0)
if !errors.Is(err, errDuplicate) { if !errors.Is(err, errDuplicate) {
t.Fatalf("expected to have %s error; got %s", errDuplicate, err) t.Fatalf("expected to have %s error; got %s", errDuplicate, err)
@ -899,20 +901,22 @@ func TestAlertingRule_Template(t *testing.T) {
metricWithValueAndLabels(t, 10, "__name__", "second", "instance", "bar", alertNameLabel, "override"), metricWithValueAndLabels(t, 10, "__name__", "second", "instance", "bar", alertNameLabel, "override"),
}, },
map[uint64]*notifier.Alert{ map[uint64]*notifier.Alert{
hash(map[string]string{alertNameLabel: "override label", "instance": "foo"}): { hash(map[string]string{alertNameLabel: "override label", "exported_alertname": "override", "instance": "foo"}): {
Labels: map[string]string{ Labels: map[string]string{
alertNameLabel: "override label", alertNameLabel: "override label",
"instance": "foo", "exported_alertname": "override",
"instance": "foo",
}, },
Annotations: map[string]string{ Annotations: map[string]string{
"summary": `first: Too high connection number for "foo"`, "summary": `first: Too high connection number for "foo"`,
"description": `override: It is 2 connections for "foo"`, "description": `override: It is 2 connections for "foo"`,
}, },
}, },
hash(map[string]string{alertNameLabel: "override label", "instance": "bar"}): { hash(map[string]string{alertNameLabel: "override label", "exported_alertname": "override", "instance": "bar"}): {
Labels: map[string]string{ Labels: map[string]string{
alertNameLabel: "override label", alertNameLabel: "override label",
"instance": "bar", "exported_alertname": "override",
"instance": "bar",
}, },
Annotations: map[string]string{ Annotations: map[string]string{
"summary": `second: Too high connection number for "bar"`, "summary": `second: Too high connection number for "bar"`,
@ -941,14 +945,18 @@ func TestAlertingRule_Template(t *testing.T) {
}, },
map[uint64]*notifier.Alert{ map[uint64]*notifier.Alert{
hash(map[string]string{ hash(map[string]string{
alertNameLabel: "OriginLabels", alertNameLabel: "OriginLabels",
alertGroupNameLabel: "Testing", "exported_alertname": "originAlertname",
"instance": "foo", alertGroupNameLabel: "Testing",
"exported_alertgroup": "originGroupname",
"instance": "foo",
}): { }): {
Labels: map[string]string{ Labels: map[string]string{
alertNameLabel: "OriginLabels", alertNameLabel: "OriginLabels",
alertGroupNameLabel: "Testing", "exported_alertname": "originAlertname",
"instance": "foo", alertGroupNameLabel: "Testing",
"exported_alertgroup": "originGroupname",
"instance": "foo",
}, },
Annotations: map[string]string{ Annotations: map[string]string{
"summary": `Alert "originAlertname(originGroupname)" for instance foo`, "summary": `Alert "originAlertname(originGroupname)" for instance foo`,
@ -1092,3 +1100,54 @@ func newTestAlertingRuleWithKeepFiring(name string, waitFor, keepFiringFor time.
rule.KeepFiringFor = keepFiringFor rule.KeepFiringFor = keepFiringFor
return rule return rule
} }
func TestAlertingRule_ToLabels(t *testing.T) {
metric := datasource.Metric{
Labels: []datasource.Label{
{Name: "instance", Value: "0.0.0.0:8800"},
{Name: "group", Value: "vmalert"},
{Name: "alertname", Value: "ConfigurationReloadFailure"},
},
Values: []float64{1},
Timestamps: []int64{time.Now().UnixNano()},
}
ar := &AlertingRule{
Labels: map[string]string{
"instance": "override", // this should override instance with new value
"group": "vmalert", // this shouldn't have effect since value in metric is equal
},
Expr: "sum(vmalert_alerting_rules_error) by(instance, group, alertname) > 0",
Name: "AlertingRulesError",
GroupName: "vmalert",
}
expectedOriginLabels := map[string]string{
"instance": "0.0.0.0:8800",
"group": "vmalert",
"alertname": "ConfigurationReloadFailure",
"alertgroup": "vmalert",
}
expectedProcessedLabels := map[string]string{
"instance": "override",
"exported_instance": "0.0.0.0:8800",
"alertname": "AlertingRulesError",
"exported_alertname": "ConfigurationReloadFailure",
"group": "vmalert",
"alertgroup": "vmalert",
}
ls, err := ar.toLabels(metric, nil)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if !reflect.DeepEqual(ls.origin, expectedOriginLabels) {
t.Errorf("origin labels mismatch, got: %v, want: %v", ls.origin, expectedOriginLabels)
}
if !reflect.DeepEqual(ls.processed, expectedProcessedLabels) {
t.Errorf("processed labels mismatch, got: %v, want: %v", ls.processed, expectedProcessedLabels)
}
}

View file

@ -194,6 +194,9 @@ func (rr *RecordingRule) toTimeSeries(m datasource.Metric) prompbmarshal.TimeSer
labels["__name__"] = rr.Name labels["__name__"] = rr.Name
// override existing labels with configured ones // override existing labels with configured ones
for k, v := range rr.Labels { for k, v := range rr.Labels {
if _, ok := labels[k]; ok && labels[k] != v {
labels[fmt.Sprintf("exported_%s", k)] = labels[k]
}
labels[k] = v labels[k] = v
} }
return newTimeSeries(m.Values, m.Timestamps, labels) return newTimeSeries(m.Values, m.Timestamps, labels)
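The `exported_` convention used for alerting rules now applies to recording rules as well. A hedged illustration of the override above, with made-up values mirroring the updated TestRecordingRule_Exec below:

// sketch only: a metric labelled {"source": "origin"} evaluated by a recording rule
// with `labels: {source: test}` is written as {"source": "test", "exported_source": "origin"}.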
@ -203,7 +206,7 @@ func (rr *RecordingRule) toTimeSeries(m datasource.Metric) prompbmarshal.TimeSer
func (rr *RecordingRule) updateWith(r Rule) error { func (rr *RecordingRule) updateWith(r Rule) error {
nr, ok := r.(*RecordingRule) nr, ok := r.(*RecordingRule)
if !ok { if !ok {
return fmt.Errorf("BUG: attempt to update recroding rule with wrong type %#v", r) return fmt.Errorf("BUG: attempt to update recording rule with wrong type %#v", r)
} }
rr.Expr = nr.Expr rr.Expr = nr.Expr
rr.Labels = nr.Labels rr.Labels = nr.Labels

View file

@ -61,7 +61,7 @@ func TestRecordingRule_Exec(t *testing.T) {
}, },
[]datasource.Metric{ []datasource.Metric{
metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "foo"), metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "foo"),
metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar"), metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar", "source", "origin"),
}, },
[]prompbmarshal.TimeSeries{ []prompbmarshal.TimeSeries{
newTimeSeries([]float64{2}, []int64{timestamp.UnixNano()}, map[string]string{ newTimeSeries([]float64{2}, []int64{timestamp.UnixNano()}, map[string]string{
@ -70,9 +70,10 @@ func TestRecordingRule_Exec(t *testing.T) {
"source": "test", "source": "test",
}), }),
newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{ newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": "job:foo", "__name__": "job:foo",
"job": "bar", "job": "bar",
"source": "test", "source": "test",
"exported_source": "origin",
}), }),
}, },
}, },
@ -254,10 +255,7 @@ func TestRecordingRule_ExecNegative(t *testing.T) {
fq.Add(metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "bar")) fq.Add(metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "bar"))
_, err = rr.exec(context.TODO(), time.Now(), 0) _, err = rr.exec(context.TODO(), time.Now(), 0)
if err == nil { if err != nil {
t.Fatalf("expected to get err; got nil") t.Fatal(err)
}
if !strings.Contains(err.Error(), errDuplicate.Error()) {
t.Fatalf("expected to get err %q; got %q insterad", errDuplicate, err)
} }
} }

View file

@ -45,13 +45,14 @@ func WithBearer(token, tokenFile string) AuthConfigOptions {
} }
// WithOAuth returns AuthConfigOptions and set OAuth params based on given params // WithOAuth returns AuthConfigOptions and set OAuth params based on given params
func WithOAuth(clientID, clientSecret, clientSecretFile, tokenURL, scopes string) AuthConfigOptions { func WithOAuth(clientID, clientSecret, clientSecretFile, tokenURL, scopes string, endpointParams map[string]string) AuthConfigOptions {
return func(config *promauth.HTTPClientConfig) { return func(config *promauth.HTTPClientConfig) {
if clientSecretFile != "" || clientSecret != "" { if clientSecretFile != "" || clientSecret != "" {
config.OAuth2 = &promauth.OAuth2Config{ config.OAuth2 = &promauth.OAuth2Config{
ClientID: clientID, ClientID: clientID,
ClientSecret: promauth.NewSecret(clientSecret), ClientSecret: promauth.NewSecret(clientSecret),
ClientSecretFile: clientSecretFile, ClientSecretFile: clientSecretFile,
EndpointParams: endpointParams,
TokenURL: tokenURL, TokenURL: tokenURL,
Scopes: strings.Split(scopes, ";"), Scopes: strings.Split(scopes, ";"),
} }

View file

@ -386,7 +386,7 @@ func (r *Regex) MarshalYAML() (interface{}, error) {
var ( var (
configReloads = metrics.NewCounter(`vmauth_config_last_reload_total`) configReloads = metrics.NewCounter(`vmauth_config_last_reload_total`)
configReloadErrors = metrics.NewCounter(`vmauth_config_last_reload_errors_total`) configReloadErrors = metrics.NewCounter(`vmauth_config_last_reload_errors_total`)
configSuccess = metrics.NewCounter(`vmauth_config_last_reload_successful`) configSuccess = metrics.NewGauge(`vmauth_config_last_reload_successful`, nil)
configTimestamp = metrics.NewCounter(`vmauth_config_last_reload_success_timestamp_seconds`) configTimestamp = metrics.NewCounter(`vmauth_config_last_reload_success_timestamp_seconds`)
) )

View file

@ -64,7 +64,6 @@ func main() {
envflag.Parse() envflag.Parse()
buildinfo.Init() buildinfo.Init()
logger.Init() logger.Init()
pushmetrics.Init()
logger.Infof("starting vmauth at %q...", *httpListenAddr) logger.Infof("starting vmauth at %q...", *httpListenAddr)
startTime := time.Now() startTime := time.Now()
@ -72,8 +71,10 @@ func main() {
go httpserver.Serve(*httpListenAddr, *useProxyProtocol, requestHandler) go httpserver.Serve(*httpListenAddr, *useProxyProtocol, requestHandler)
logger.Infof("started vmauth in %.3f seconds", time.Since(startTime).Seconds()) logger.Infof("started vmauth in %.3f seconds", time.Since(startTime).Seconds())
pushmetrics.Init()
sig := procutil.WaitForSigterm() sig := procutil.WaitForSigterm()
logger.Infof("received signal %s", sig) logger.Infof("received signal %s", sig)
pushmetrics.Stop()
startTime = time.Now() startTime = time.Now()
logger.Infof("gracefully shutting down webservice at %q", *httpListenAddr) logger.Infof("gracefully shutting down webservice at %q", *httpListenAddr)

View file

@ -47,7 +47,6 @@ func main() {
envflag.Parse() envflag.Parse()
buildinfo.Init() buildinfo.Init()
logger.Init() logger.Init()
pushmetrics.Init()
// Storing snapshot delete function to be able to call it in case // Storing snapshot delete function to be able to call it in case
// of error since logger.Fatal will exit the program without // of error since logger.Fatal will exit the program without
@ -96,11 +95,13 @@ func main() {
go httpserver.Serve(*httpListenAddr, false, nil) go httpserver.Serve(*httpListenAddr, false, nil)
pushmetrics.Init()
err := makeBackup() err := makeBackup()
deleteSnapshot() deleteSnapshot()
if err != nil { if err != nil {
logger.Fatalf("cannot create backup: %s", err) logger.Fatalf("cannot create backup: %s", err)
} }
pushmetrics.Stop()
startTime := time.Now() startTime := time.Now()
logger.Infof("gracefully shutting down http server for metrics at %q", *httpListenAddr) logger.Infof("gracefully shutting down http server for metrics at %q", *httpListenAddr)

View file

@ -330,17 +330,19 @@ const (
vmNativeDisableHTTPKeepAlive = "vm-native-disable-http-keep-alive" vmNativeDisableHTTPKeepAlive = "vm-native-disable-http-keep-alive"
vmNativeDisablePerMetricMigration = "vm-native-disable-per-metric-migration" vmNativeDisablePerMetricMigration = "vm-native-disable-per-metric-migration"
vmNativeSrcAddr = "vm-native-src-addr" vmNativeSrcAddr = "vm-native-src-addr"
vmNativeSrcUser = "vm-native-src-user" vmNativeSrcUser = "vm-native-src-user"
vmNativeSrcPassword = "vm-native-src-password" vmNativeSrcPassword = "vm-native-src-password"
vmNativeSrcHeaders = "vm-native-src-headers" vmNativeSrcHeaders = "vm-native-src-headers"
vmNativeSrcBearerToken = "vm-native-src-bearer-token" vmNativeSrcBearerToken = "vm-native-src-bearer-token"
vmNativeSrcInsecureSkipVerify = "vm-native-src-insecure-skip-verify"
vmNativeDstAddr = "vm-native-dst-addr" vmNativeDstAddr = "vm-native-dst-addr"
vmNativeDstUser = "vm-native-dst-user" vmNativeDstUser = "vm-native-dst-user"
vmNativeDstPassword = "vm-native-dst-password" vmNativeDstPassword = "vm-native-dst-password"
vmNativeDstHeaders = "vm-native-dst-headers" vmNativeDstHeaders = "vm-native-dst-headers"
vmNativeDstBearerToken = "vm-native-dst-bearer-token" vmNativeDstBearerToken = "vm-native-dst-bearer-token"
vmNativeDstInsecureSkipVerify = "vm-native-dst-insecure-skip-verify"
) )
var ( var (
@ -466,6 +468,16 @@ var (
"Non-binary export/import API is less efficient, but supports deduplication if it is configured on vm-native-src-addr side.", "Non-binary export/import API is less efficient, but supports deduplication if it is configured on vm-native-src-addr side.",
Value: false, Value: false,
}, },
&cli.BoolFlag{
Name: vmNativeSrcInsecureSkipVerify,
Usage: "Whether to skip TLS certificate verification when connecting to the source address",
Value: false,
},
&cli.BoolFlag{
Name: vmNativeDstInsecureSkipVerify,
Usage: "Whether to skip TLS certificate verification when connecting to the destination address",
Value: false,
},
} }
) )

View file

@ -2,6 +2,7 @@ package main
import ( import (
"context" "context"
"crypto/tls"
"fmt" "fmt"
"log" "log"
"net/http" "net/http"
@ -212,6 +213,7 @@ func main() {
var srcExtraLabels []string var srcExtraLabels []string
srcAddr := strings.Trim(c.String(vmNativeSrcAddr), "/") srcAddr := strings.Trim(c.String(vmNativeSrcAddr), "/")
srcInsecureSkipVerify := c.Bool(vmNativeSrcInsecureSkipVerify)
srcAuthConfig, err := auth.Generate( srcAuthConfig, err := auth.Generate(
auth.WithBasicAuth(c.String(vmNativeSrcUser), c.String(vmNativeSrcPassword)), auth.WithBasicAuth(c.String(vmNativeSrcUser), c.String(vmNativeSrcPassword)),
auth.WithBearer(c.String(vmNativeSrcBearerToken)), auth.WithBearer(c.String(vmNativeSrcBearerToken)),
@ -219,10 +221,16 @@ func main() {
if err != nil { if err != nil {
return fmt.Errorf("error initilize auth config for source: %s", srcAddr) return fmt.Errorf("error initilize auth config for source: %s", srcAddr)
} }
srcHTTPClient := &http.Client{Transport: &http.Transport{DisableKeepAlives: disableKeepAlive}} srcHTTPClient := &http.Client{Transport: &http.Transport{
DisableKeepAlives: disableKeepAlive,
TLSClientConfig: &tls.Config{
InsecureSkipVerify: srcInsecureSkipVerify,
},
}}
dstAddr := strings.Trim(c.String(vmNativeDstAddr), "/") dstAddr := strings.Trim(c.String(vmNativeDstAddr), "/")
dstExtraLabels := c.StringSlice(vmExtraLabel) dstExtraLabels := c.StringSlice(vmExtraLabel)
dstInsecureSkipVerify := c.Bool(vmNativeDstInsecureSkipVerify)
dstAuthConfig, err := auth.Generate( dstAuthConfig, err := auth.Generate(
auth.WithBasicAuth(c.String(vmNativeDstUser), c.String(vmNativeDstPassword)), auth.WithBasicAuth(c.String(vmNativeDstUser), c.String(vmNativeDstPassword)),
auth.WithBearer(c.String(vmNativeDstBearerToken)), auth.WithBearer(c.String(vmNativeDstBearerToken)),
@ -230,7 +238,12 @@ func main() {
if err != nil { if err != nil {
return fmt.Errorf("error initilize auth config for destination: %s", dstAddr) return fmt.Errorf("error initilize auth config for destination: %s", dstAddr)
} }
dstHTTPClient := &http.Client{Transport: &http.Transport{DisableKeepAlives: disableKeepAlive}} dstHTTPClient := &http.Client{Transport: &http.Transport{
DisableKeepAlives: disableKeepAlive,
TLSClientConfig: &tls.Config{
InsecureSkipVerify: dstInsecureSkipVerify,
},
}}
p := vmNativeProcessor{ p := vmNativeProcessor{
rateLimit: c.Int64(vmRateLimit), rateLimit: c.Int64(vmRateLimit),

View file

@ -266,10 +266,16 @@ func fillStorage(series []vm.TimeSeries) error {
for _, series := range series { for _, series := range series {
var labels []prompb.Label var labels []prompb.Label
for _, lp := range series.LabelPairs { for _, lp := range series.LabelPairs {
labels = append(labels, prompb.Label{Name: []byte(lp.Name), Value: []byte(lp.Value)}) labels = append(labels, prompb.Label{
Name: lp.Name,
Value: lp.Value,
})
} }
if series.Name != "" { if series.Name != "" {
labels = append(labels, prompb.Label{Name: []byte("__name__"), Value: []byte(series.Name)}) labels = append(labels, prompb.Label{
Name: "__name__",
Value: series.Name,
})
} }
mr := storage.MetricRow{} mr := storage.MetricRow{}
mr.MetricNameRaw = storage.MarshalMetricNameRaw(mr.MetricNameRaw[:0], labels) mr.MetricNameRaw = storage.MarshalMetricNameRaw(mr.MetricNameRaw[:0], labels)

View file

@ -27,12 +27,11 @@ type InsertCtx struct {
// Reset resets ctx for future fill with rowsLen rows. // Reset resets ctx for future fill with rowsLen rows.
func (ctx *InsertCtx) Reset(rowsLen int) { func (ctx *InsertCtx) Reset(rowsLen int) {
for i := range ctx.Labels { labels := ctx.Labels
label := &ctx.Labels[i] for i := range labels {
label.Name = nil labels[i] = prompb.Label{}
label.Value = nil
} }
ctx.Labels = ctx.Labels[:0] ctx.Labels = labels[:0]
mrs := ctx.mrs mrs := ctx.mrs
for i := range mrs { for i := range mrs {
@ -112,8 +111,8 @@ func (ctx *InsertCtx) AddLabelBytes(name, value []byte) {
ctx.Labels = append(ctx.Labels, prompb.Label{ ctx.Labels = append(ctx.Labels, prompb.Label{
// Do not copy name and value contents for performance reasons. // Do not copy name and value contents for performance reasons.
// This reduces GC overhead on the number of objects and allocations. // This reduces GC overhead on the number of objects and allocations.
Name: name, Name: bytesutil.ToUnsafeString(name),
Value: value, Value: bytesutil.ToUnsafeString(value),
}) })
} }
@ -130,8 +129,8 @@ func (ctx *InsertCtx) AddLabel(name, value string) {
ctx.Labels = append(ctx.Labels, prompb.Label{ ctx.Labels = append(ctx.Labels, prompb.Label{
// Do not copy name and value contents for performance reasons. // Do not copy name and value contents for performance reasons.
// This reduces GC overhead on the number of objects and allocations. // This reduces GC overhead on the number of objects and allocations.
Name: bytesutil.ToUnsafeBytes(name), Name: name,
Value: bytesutil.ToUnsafeBytes(value), Value: value,
}) })
} }
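prompb.Label now stores strings instead of byte slices, so AddLabelBytes relies on bytesutil.ToUnsafeString to keep the no-copy behaviour described in the comments above. A hedged sketch of the conversion and its caveat, assuming the repository's bytesutil package:

// sketch only
name := []byte("instance")
s := bytesutil.ToUnsafeString(name) // no allocation: s aliases name's backing array
// name must not be modified or recycled while s is still in use.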

View file

@ -38,7 +38,7 @@ var (
saCfgReloads = metrics.NewCounter(`vminsert_streamagg_config_reloads_total`) saCfgReloads = metrics.NewCounter(`vminsert_streamagg_config_reloads_total`)
saCfgReloadErr = metrics.NewCounter(`vminsert_streamagg_config_reloads_errors_total`) saCfgReloadErr = metrics.NewCounter(`vminsert_streamagg_config_reloads_errors_total`)
saCfgSuccess = metrics.NewCounter(`vminsert_streamagg_config_last_reload_successful`) saCfgSuccess = metrics.NewGauge(`vminsert_streamagg_config_last_reload_successful`, nil)
saCfgTimestamp = metrics.NewCounter(`vminsert_streamagg_config_last_reload_success_timestamp_seconds`) saCfgTimestamp = metrics.NewCounter(`vminsert_streamagg_config_last_reload_success_timestamp_seconds`)
sasGlobal atomic.Pointer[streamaggr.Aggregators] sasGlobal atomic.Pointer[streamaggr.Aggregators]

View file

@ -1,4 +1,4 @@
package datadog package datadogv1
import ( import (
"net/http" "net/http"
@ -7,31 +7,30 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel" "github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal" "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common" parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadog" "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadog/stream" "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1/stream"
"github.com/VictoriaMetrics/metrics" "github.com/VictoriaMetrics/metrics"
) )
var ( var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="datadog"}`) rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="datadogv1"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="datadog"}`) rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="datadogv1"}`)
) )
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v1/series request. // InsertHandlerForHTTP processes remote write for DataDog POST /api/v1/series request.
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(req *http.Request) error { func InsertHandlerForHTTP(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req) extraLabels, err := parserCommon.GetExtraLabels(req)
if err != nil { if err != nil {
return err return err
} }
ce := req.Header.Get("Content-Encoding") ce := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, ce, func(series []parser.Series) error { return stream.Parse(req.Body, ce, func(series []datadogv1.Series) error {
return insertRows(series, extraLabels) return insertRows(series, extraLabels)
}) })
} }
func insertRows(series []parser.Series, extraLabels []prompbmarshal.Label) error { func insertRows(series []datadogv1.Series, extraLabels []prompbmarshal.Label) error {
ctx := common.GetInsertCtx() ctx := common.GetInsertCtx()
defer common.PutInsertCtx(ctx) defer common.PutInsertCtx(ctx)
@ -54,7 +53,7 @@ func insertRows(series []parser.Series, extraLabels []prompbmarshal.Label) error
ctx.AddLabel("device", ss.Device) ctx.AddLabel("device", ss.Device)
} }
for _, tag := range ss.Tags { for _, tag := range ss.Tags {
name, value := parser.SplitTag(tag) name, value := datadogutils.SplitTag(tag)
if name == "host" { if name == "host" {
name = "exported_host" name = "exported_host"
} }
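The renamed datadogutils.SplitTag helper is assumed to keep the behaviour of the old parser.SplitTag, splitting a DataDog tag on the first colon:

// sketch only; assumed contract of the renamed helper
name, value := datadogutils.SplitTag("env:prod") // name == "env", value == "prod"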

View file

@ -0,0 +1,91 @@
package datadogv2
import (
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2/stream"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="datadogv2"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="datadogv2"}`)
)
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v2/series request.
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
if err != nil {
return err
}
ct := req.Header.Get("Content-Type")
ce := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, ce, ct, func(series []datadogv2.Series) error {
return insertRows(series, extraLabels)
})
}
func insertRows(series []datadogv2.Series, extraLabels []prompbmarshal.Label) error {
ctx := common.GetInsertCtx()
defer common.PutInsertCtx(ctx)
rowsLen := 0
for i := range series {
rowsLen += len(series[i].Points)
}
ctx.Reset(rowsLen)
rowsTotal := 0
hasRelabeling := relabel.HasRelabeling()
for i := range series {
ss := &series[i]
rowsTotal += len(ss.Points)
ctx.Labels = ctx.Labels[:0]
ctx.AddLabel("", ss.Metric)
for _, rs := range ss.Resources {
ctx.AddLabel(rs.Type, rs.Name)
}
for _, tag := range ss.Tags {
name, value := datadogutils.SplitTag(tag)
if name == "host" {
name = "exported_host"
}
ctx.AddLabel(name, value)
}
if ss.SourceTypeName != "" {
ctx.AddLabel("source_type_name", ss.SourceTypeName)
}
for j := range extraLabels {
label := &extraLabels[j]
ctx.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
continue
}
ctx.SortLabelsIfNeeded()
var metricNameRaw []byte
var err error
for _, pt := range ss.Points {
timestamp := pt.Timestamp * 1000
value := pt.Value
metricNameRaw, err = ctx.WriteDataPointExt(metricNameRaw, ctx.Labels, timestamp, value)
if err != nil {
return err
}
}
}
rowsInserted.Add(rowsTotal)
rowsPerInsert.Update(float64(rowsTotal))
return ctx.FlushBufs()
}
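A hypothetical end-to-end check for the new handler (not part of this commit): pushing one sample in the public DataDog /api/v2/series JSON layout to a single-node VictoriaMetrics listening on :8428. Field names follow the DataDog docs linked above; the address and port are assumptions to adjust for your setup.

    package main

    import (
        "fmt"
        "net/http"
        "strings"
    )

    func main() {
        // One series with one point; "resources" become labels (e.g. host="web-1"),
        // tags are split on ':' and the timestamp is in seconds (multiplied by 1000 in insertRows above).
        payload := `{"series":[{"metric":"system.load.1","points":[{"timestamp":1700000000,"value":0.5}],` +
            `"resources":[{"name":"web-1","type":"host"}],"tags":["env:test"]}]}`
        resp, err := http.Post("http://localhost:8428/datadog/api/v2/series", "application/json", strings.NewReader(payload))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // expect "202 Accepted", as returned by the request router
    }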

View file

@ -160,11 +160,9 @@ func (ctx *pushCtx) reset() {
originLabels := ctx.originLabels originLabels := ctx.originLabels
for i := range originLabels { for i := range originLabels {
label := &originLabels[i] originLabels[i] = prompb.Label{}
label.Name = nil
label.Value = nil
} }
ctx.originLabels = ctx.originLabels[:0] ctx.originLabels = originLabels[:0]
} }
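Note: prompb.Label now holds strings instead of byte slices, so the reset loop zeroes whole Label structs rather than nil-ing Name/Value; the effect is the same — references to the underlying data are dropped before the slice is truncated for reuse.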
func getPushCtx() *pushCtx { func getPushCtx() *pushCtx {

View file

@ -13,7 +13,8 @@ import (
vminsertCommon "github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common" vminsertCommon "github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/csvimport" "github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadog" "github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadogv1"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/graphite" "github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/graphite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/influx" "github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/native" "github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/native"
@ -247,9 +248,20 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
fmt.Fprintf(w, `{"status":"ok"}`) fmt.Fprintf(w, `{"status":"ok"}`)
return true return true
case "/datadog/api/v1/series": case "/datadog/api/v1/series":
datadogWriteRequests.Inc() datadogv1WriteRequests.Inc()
if err := datadog.InsertHandlerForHTTP(r); err != nil { if err := datadogv1.InsertHandlerForHTTP(r); err != nil {
datadogWriteErrors.Inc() datadogv1WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/datadog/api/v2/series":
datadogv2WriteRequests.Inc()
if err := datadogv2.InsertHandlerForHTTP(r); err != nil {
datadogv2WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err) httpserver.Errorf(w, r, "%s", err)
return true return true
} }
@ -375,8 +387,11 @@ var (
influxQueryRequests = metrics.NewCounter(`vm_http_requests_total{path="/influx/query", protocol="influx"}`) influxQueryRequests = metrics.NewCounter(`vm_http_requests_total{path="/influx/query", protocol="influx"}`)
datadogWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/series", protocol="datadog"}`) datadogv1WriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogWriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/datadog/api/v1/series", protocol="datadog"}`) datadogv1WriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogv2WriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v2/series", protocol="datadog"}`)
datadogv2WriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/datadog/api/v2/series", protocol="datadog"}`)
datadogValidateRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`) datadogValidateRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
datadogCheckRunRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`) datadogCheckRunRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)
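Only the Go identifiers change here (datadogWriteRequests becomes datadogv1WriteRequests, etc.); the exported series names and labels for the v1 path are untouched, so existing dashboards and alerts keep working. The v2 endpoint gets its own vm_http_requests_total / vm_http_request_errors_total series with path="/datadog/api/v2/series".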

View file

@ -46,7 +46,7 @@ func insertRows(timeseries []prompb.TimeSeries, extraLabels []prompbmarshal.Labe
ctx.Labels = ctx.Labels[:0] ctx.Labels = ctx.Labels[:0]
srcLabels := ts.Labels srcLabels := ts.Labels
for _, srcLabel := range srcLabels { for _, srcLabel := range srcLabels {
ctx.AddLabelBytes(srcLabel.Name, srcLabel.Value) ctx.AddLabel(srcLabel.Name, srcLabel.Value)
} }
for j := range extraLabels { for j := range extraLabels {
label := &extraLabels[j] label := &extraLabels[j]

View file

@ -5,7 +5,6 @@ import (
"fmt" "fmt"
"sync/atomic" "sync/atomic"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime" "github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger" "github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
@ -65,7 +64,7 @@ func Init() {
var ( var (
configReloads = metrics.NewCounter(`vm_relabel_config_reloads_total`) configReloads = metrics.NewCounter(`vm_relabel_config_reloads_total`)
configReloadErrors = metrics.NewCounter(`vm_relabel_config_reloads_errors_total`) configReloadErrors = metrics.NewCounter(`vm_relabel_config_reloads_errors_total`)
configSuccess = metrics.NewCounter(`vm_relabel_config_last_reload_successful`) configSuccess = metrics.NewGauge(`vm_relabel_config_last_reload_successful`, nil)
configTimestamp = metrics.NewCounter(`vm_relabel_config_last_reload_success_timestamp_seconds`) configTimestamp = metrics.NewCounter(`vm_relabel_config_last_reload_success_timestamp_seconds`)
) )
@ -118,11 +117,11 @@ func (ctx *Ctx) ApplyRelabeling(labels []prompb.Label) []prompb.Label {
// Convert labels to prompbmarshal.Label format suitable for relabeling. // Convert labels to prompbmarshal.Label format suitable for relabeling.
tmpLabels := ctx.tmpLabels[:0] tmpLabels := ctx.tmpLabels[:0]
for _, label := range labels { for _, label := range labels {
name := bytesutil.ToUnsafeString(label.Name) name := label.Name
if len(name) == 0 { if name == "" {
name = "__name__" name = "__name__"
} }
value := bytesutil.ToUnsafeString(label.Value) value := label.Value
tmpLabels = append(tmpLabels, prompbmarshal.Label{ tmpLabels = append(tmpLabels, prompbmarshal.Label{
Name: name, Name: name,
Value: value, Value: value,
@ -155,11 +154,11 @@ func (ctx *Ctx) ApplyRelabeling(labels []prompb.Label) []prompb.Label {
// Return back labels to the desired format. // Return back labels to the desired format.
dst := labels[:0] dst := labels[:0]
for _, label := range tmpLabels { for _, label := range tmpLabels {
name := bytesutil.ToUnsafeBytes(label.Name) name := label.Name
if label.Name == "__name__" { if label.Name == "__name__" {
name = nil name = ""
} }
value := bytesutil.ToUnsafeBytes(label.Value) value := label.Value
dst = append(dst, prompb.Label{ dst = append(dst, prompb.Label{
Name: name, Name: name,
Value: value, Value: value,

View file

@ -36,7 +36,6 @@ func main() {
envflag.Parse() envflag.Parse()
buildinfo.Init() buildinfo.Init()
logger.Init() logger.Init()
pushmetrics.Init()
go httpserver.Serve(*httpListenAddr, false, nil) go httpserver.Serve(*httpListenAddr, false, nil)
@ -54,9 +53,11 @@ func main() {
Dst: dstFS, Dst: dstFS,
SkipBackupCompleteCheck: *skipBackupCompleteCheck, SkipBackupCompleteCheck: *skipBackupCompleteCheck,
} }
pushmetrics.Init()
if err := a.Run(); err != nil { if err := a.Run(); err != nil {
logger.Fatalf("cannot restore from backup: %s", err) logger.Fatalf("cannot restore from backup: %s", err)
} }
pushmetrics.Stop()
srcFS.MustStop() srcFS.MustStop()
dstFS.MustStop() dstFS.MustStop()

View file

@ -123,13 +123,13 @@ func registerMetrics(startTime time.Time, w http.ResponseWriter, r *http.Request
// Convert parsed metric and tags to labels. // Convert parsed metric and tags to labels.
labels = append(labels[:0], prompb.Label{ labels = append(labels[:0], prompb.Label{
Name: []byte("__name__"), Name: "__name__",
Value: []byte(row.Metric), Value: row.Metric,
}) })
for _, tag := range row.Tags { for _, tag := range row.Tags {
labels = append(labels, prompb.Label{ labels = append(labels, prompb.Label{
Name: []byte(tag.Key), Name: tag.Key,
Value: []byte(tag.Value), Value: tag.Value,
}) })
} }
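These call-site changes (AddLabel instead of AddLabelBytes, plain string assignments instead of []byte(...) conversions) all follow from prompb.Label switching its Name and Value fields from []byte to string, which also makes the bytesutil unsafe conversions in the relabel code above unnecessary.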

View file

@ -3599,6 +3599,17 @@ func groupSeriesByNodes(ss []*series, nodes []graphiteql.Expr) map[string][]*ser
return m return m
} }
func getAbsoluteNodeIndex(index, size int) int {
// Handle the negative index case as Python does
if index < 0 {
index = size + index
}
if index < 0 || index >= size {
return -1
}
return index
}
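The helper mirrors Python's negative-list-index semantics. A quick sketch of the behaviour, matching the test added below:

    // For a three-node name such as "foo.bar.baz" (size = 3):
    // getAbsoluteNodeIndex(2, 3)  == 2   -> "baz"
    // getAbsoluteNodeIndex(-1, 3) == 2   -> "baz" (counted from the end)
    // getAbsoluteNodeIndex(-4, 3) == -1  -> out of range, the node is skipped by getNameFromNodes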
func getNameFromNodes(name string, tags map[string]string, nodes []graphiteql.Expr) string { func getNameFromNodes(name string, tags map[string]string, nodes []graphiteql.Expr) string {
if len(nodes) == 0 { if len(nodes) == 0 {
return "" return ""
@ -3609,7 +3620,7 @@ func getNameFromNodes(name string, tags map[string]string, nodes []graphiteql.Ex
for _, node := range nodes { for _, node := range nodes {
switch t := node.(type) { switch t := node.(type) {
case *graphiteql.NumberExpr: case *graphiteql.NumberExpr:
if n := int(t.N); n >= 0 && n < len(parts) { if n := getAbsoluteNodeIndex(int(t.N), len(parts)); n >= 0 {
dstParts = append(dstParts, parts[n]) dstParts = append(dstParts, parts[n])
} }
case *graphiteql.StringExpr: case *graphiteql.StringExpr:

View file

@ -79,3 +79,31 @@ func TestGraphiteToGolangRegexpReplace(t *testing.T) {
f(`a\d+`, `a\d+`) f(`a\d+`, `a\d+`)
f(`\1f\\oo\2`, `$1f\\oo$2`) f(`\1f\\oo\2`, `$1f\\oo$2`)
} }
func TestGetAbsoluteNodeIndex(t *testing.T) {
f := func(index, size, expectedIndex int) {
t.Helper()
absoluteIndex := getAbsoluteNodeIndex(index, size)
if absoluteIndex != expectedIndex {
t.Fatalf("unexpected result for getAbsoluteNodeIndex(%d, %d); got %d; want %d", index, size, expectedIndex, absoluteIndex)
}
}
f(1, 1, -1)
f(0, 1, 0)
f(-1, 3, 2)
f(-3, 1, -1)
f(-1, 1, 0)
f(-2, 1, -1)
f(3, 2, -1)
f(2, 2, -1)
f(1, 2, 1)
f(0, 2, 0)
f(-1, 2, 1)
f(-2, 2, 0)
f(-3, 2, -1)
f(-5, 2, -1)
f(-1, 100, 99)
f(-99, 100, 1)
f(-100, 100, 0)
f(-101, 100, -1)
}

View file

@ -718,6 +718,9 @@ func QueryHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWr
start -= offset start -= offset
end := start end := start
start = end - window start = end - window
if start < 0 {
start = 0
}
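The extra clamp keeps the effective start timestamp non-negative when the lookbehind window is larger than the requested query timestamp, so the start++ below can no longer operate on a negative start.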
// Do not include data point with a timestamp matching the lower boundary of the window as Prometheus does. // Do not include data point with a timestamp matching the lower boundary of the window as Prometheus does.
start++ start++
if end < start { if end < start {

View file

@ -651,13 +651,14 @@ func newAggrFuncTopK(isReverse bool) aggrFunc {
} }
afe := func(tss []*timeseries, modififer *metricsql.ModifierExpr) []*timeseries { afe := func(tss []*timeseries, modififer *metricsql.ModifierExpr) []*timeseries {
for n := range tss[0].Values { for n := range tss[0].Values {
lessFunc := lessWithNaNs
if isReverse {
lessFunc = greaterWithNaNs
}
sort.Slice(tss, func(i, j int) bool { sort.Slice(tss, func(i, j int) bool {
a := tss[i].Values[n] a := tss[i].Values[n]
b := tss[j].Values[n] b := tss[j].Values[n]
if isReverse { return lessFunc(a, b)
a, b = b, a
}
return lessWithNaNs(a, b)
}) })
fillNaNsAtIdx(n, ks[n], tss) fillNaNsAtIdx(n, ks[n], tss)
} }
@ -710,17 +711,19 @@ func getRangeTopKTimeseries(tss []*timeseries, modifier *metricsql.ModifierExpr,
value: value, value: value,
} }
} }
lessFunc := lessWithNaNs
if isReverse {
lessFunc = greaterWithNaNs
}
sort.Slice(maxs, func(i, j int) bool { sort.Slice(maxs, func(i, j int) bool {
a := maxs[i].value a := maxs[i].value
b := maxs[j].value b := maxs[j].value
if isReverse { return lessFunc(a, b)
a, b = b, a
}
return lessWithNaNs(a, b)
}) })
for i := range maxs { for i := range maxs {
tss[i] = maxs[i].ts tss[i] = maxs[i].ts
} }
remainingSumTS := getRemainingSumTimeseries(tss, modifier, ks, remainingSumTagName) remainingSumTS := getRemainingSumTimeseries(tss, modifier, ks, remainingSumTagName)
for i, k := range ks { for i, k := range ks {
fillNaNsAtIdx(i, k, tss) fillNaNsAtIdx(i, k, tss)
@ -1253,12 +1256,27 @@ func newAggrQuantileFunc(phis []float64) func(tss []*timeseries, modifier *metri
} }
func lessWithNaNs(a, b float64) bool { func lessWithNaNs(a, b float64) bool {
// consider NaNs to be smaller than non-NaNs
if math.IsNaN(a) { if math.IsNaN(a) {
return !math.IsNaN(b) return !math.IsNaN(b)
} }
if math.IsNaN(b) {
return false
}
return a < b return a < b
} }
func greaterWithNaNs(a, b float64) bool {
// consider NaNs to be bigger than non-NaNs
if math.IsNaN(a) {
return !math.IsNaN(b)
}
if math.IsNaN(b) {
return false
}
return a > b
}
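Previously the reverse order was produced by swapping the arguments passed to lessWithNaNs, which also changes where NaN values end up in the sorted slice. The dedicated lessWithNaNs / greaterWithNaNs comparators keep NaNs grouped at the front for both orderings, as exercised by TestSortWithNaNs below. A short sketch:

    vals := []float64{2, math.NaN(), 1, 3}
    sort.Slice(vals, func(i, j int) bool { return lessWithNaNs(vals[i], vals[j]) })    // -> [NaN, 1, 2, 3]
    sort.Slice(vals, func(i, j int) bool { return greaterWithNaNs(vals[i], vals[j]) }) // -> [NaN, 3, 2, 1]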
func floatToIntBounded(f float64) int { func floatToIntBounded(f float64) int {
if f > math.MaxInt { if f > math.MaxInt {
return math.MaxInt return math.MaxInt

View file

@ -2,9 +2,57 @@ package promql
import ( import (
"math" "math"
"sort"
"testing" "testing"
) )
func TestSortWithNaNs(t *testing.T) {
f := func(a []float64, ascExpected, descExpected []float64) {
t.Helper()
equalSlices := func(a, b []float64) bool {
for i := range a {
x := a[i]
y := b[i]
if math.IsNaN(x) {
return math.IsNaN(y)
}
if math.IsNaN(y) {
return false
}
if x != y {
return false
}
}
return true
}
aCopy := append([]float64{}, a...)
sort.Slice(aCopy, func(i, j int) bool {
return lessWithNaNs(aCopy[i], aCopy[j])
})
if !equalSlices(aCopy, ascExpected) {
t.Fatalf("unexpected slice after asc sorting; got\n%v\nwant\n%v", aCopy, ascExpected)
}
aCopy = append(aCopy[:0], a...)
sort.Slice(aCopy, func(i, j int) bool {
return greaterWithNaNs(aCopy[i], aCopy[j])
})
if !equalSlices(aCopy, descExpected) {
t.Fatalf("unexpected slice after desc sorting; got\n%v\nwant\n%v", aCopy, descExpected)
}
}
f(nil, nil, nil)
f([]float64{1}, []float64{1}, []float64{1})
f([]float64{1, nan, 3, 2}, []float64{nan, 1, 2, 3}, []float64{nan, 3, 2, 1})
f([]float64{nan}, []float64{nan}, []float64{nan})
f([]float64{nan, nan, nan}, []float64{nan, nan, nan}, []float64{nan, nan, nan})
f([]float64{nan, 1, nan}, []float64{nan, nan, 1}, []float64{nan, nan, 1})
f([]float64{nan, 1, 0, 2, nan}, []float64{nan, nan, 0, 1, 2}, []float64{nan, nan, 2, 1, 0})
}
func TestModeNoNaNs(t *testing.T) { func TestModeNoNaNs(t *testing.T) {
f := func(prevValue float64, a []float64, expectedResult float64) { f := func(prevValue float64, a []float64, expectedResult float64) {
t.Helper() t.Helper()

View file

@ -404,9 +404,15 @@ func binaryOpDefault(bfa *binaryOpFuncArg) ([]*timeseries, error) {
func binaryOpOr(bfa *binaryOpFuncArg) ([]*timeseries, error) { func binaryOpOr(bfa *binaryOpFuncArg) ([]*timeseries, error) {
mLeft, mRight := createTimeseriesMapByTagSet(bfa.be, bfa.left, bfa.right) mLeft, mRight := createTimeseriesMapByTagSet(bfa.be, bfa.left, bfa.right)
var rvs []*timeseries var rvs []*timeseries
for _, tss := range mLeft { for _, tss := range mLeft {
rvs = append(rvs, tss...) rvs = append(rvs, tss...)
} }
// Sort left-hand-side series by metric name as Prometheus does.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5393
sortSeriesByMetricName(rvs)
rvsLen := len(rvs)
for k, tssRight := range mRight { for k, tssRight := range mRight {
tssLeft := mLeft[k] tssLeft := mLeft[k]
if tssLeft == nil { if tssLeft == nil {
@ -415,6 +421,10 @@ func binaryOpOr(bfa *binaryOpFuncArg) ([]*timeseries, error) {
} }
fillLeftNaNsWithRightValues(tssLeft, tssRight) fillLeftNaNsWithRightValues(tssLeft, tssRight)
} }
// Sort the added right-hand-side series by metric name as Prometheus does.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5393
sortSeriesByMetricName(rvs[rvsLen:])
return rvs, nil return rvs, nil
} }
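Left-hand series are now sorted by metric name up front, and the series appended from the right-hand side (rvs[rvsLen:]) are sorted separately among themselves, so the overall ordering of the "or" result matches Prometheus; the new `series or series` test below exercises exactly this ordering.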

View file

@ -110,6 +110,7 @@ func maySortResults(e metricsql.Expr) bool {
case "sort", "sort_desc", case "sort", "sort_desc",
"sort_by_label", "sort_by_label_desc", "sort_by_label", "sort_by_label_desc",
"sort_by_label_numeric", "sort_by_label_numeric_desc": "sort_by_label_numeric", "sort_by_label_numeric_desc":
// Results already sorted
return false return false
} }
case *metricsql.AggrFuncExpr: case *metricsql.AggrFuncExpr:
@ -117,6 +118,7 @@ func maySortResults(e metricsql.Expr) bool {
case "topk", "bottomk", "outliersk", case "topk", "bottomk", "outliersk",
"topk_max", "topk_min", "topk_avg", "topk_median", "topk_last", "topk_max", "topk_min", "topk_avg", "topk_median", "topk_last",
"bottomk_max", "bottomk_min", "bottomk_avg", "bottomk_median", "bottomk_last": "bottomk_max", "bottomk_min", "bottomk_avg", "bottomk_median", "bottomk_last":
// Results already sorted
return false return false
} }
case *metricsql.BinaryOpExpr: case *metricsql.BinaryOpExpr:
@ -131,6 +133,10 @@ func maySortResults(e metricsql.Expr) bool {
func timeseriesToResult(tss []*timeseries, maySort bool) ([]netstorage.Result, error) { func timeseriesToResult(tss []*timeseries, maySort bool) ([]netstorage.Result, error) {
tss = removeEmptySeries(tss) tss = removeEmptySeries(tss)
if maySort {
sortSeriesByMetricName(tss)
}
result := make([]netstorage.Result, len(tss)) result := make([]netstorage.Result, len(tss))
m := make(map[string]struct{}, len(tss)) m := make(map[string]struct{}, len(tss))
bb := bbPool.Get() bb := bbPool.Get()
@ -151,15 +157,15 @@ func timeseriesToResult(tss []*timeseries, maySort bool) ([]netstorage.Result, e
} }
bbPool.Put(bb) bbPool.Put(bb)
if maySort {
sort.Slice(result, func(i, j int) bool {
return metricNameLess(&result[i].MetricName, &result[j].MetricName)
})
}
return result, nil return result, nil
} }
func sortSeriesByMetricName(tss []*timeseries) {
sort.Slice(tss, func(i, j int) bool {
return metricNameLess(&tss[i].MetricName, &tss[j].MetricName)
})
}
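Sorting is now applied to the timeseries themselves via sortSeriesByMetricName before they are converted to netstorage.Result, instead of sorting the converted results afterwards; this also lets binaryOpOr above reuse the same helper.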
func metricNameLess(a, b *storage.MetricName) bool { func metricNameLess(a, b *storage.MetricName) bool {
if string(a.MetricGroup) != string(b.MetricGroup) { if string(a.MetricGroup) != string(b.MetricGroup) {
return string(a.MetricGroup) < string(b.MetricGroup) return string(a.MetricGroup) < string(b.MetricGroup)

View file

@ -3049,6 +3049,51 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{r} resultExpected := []netstorage.Result{r}
f(q, resultExpected) f(q, resultExpected)
}) })
t.Run(`series or series`, func(t *testing.T) {
t.Parallel()
q := `(
label_set(time(), "x", "foo"),
label_set(time()+1, "x", "bar"),
) or (
label_set(time()+2, "x", "foo"),
label_set(time()+3, "x", "baz"),
)`
r1 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{1001, 1201, 1401, 1601, 1801, 2001},
Timestamps: timestampsExpected,
}
r1.MetricName.Tags = []storage.Tag{
{
Key: []byte("x"),
Value: []byte("bar"),
},
}
r2 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{1000, 1200, 1400, 1600, 1800, 2000},
Timestamps: timestampsExpected,
}
r2.MetricName.Tags = []storage.Tag{
{
Key: []byte("x"),
Value: []byte("foo"),
},
}
r3 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{1003, 1203, 1403, 1603, 1803, 2003},
Timestamps: timestampsExpected,
}
r3.MetricName.Tags = []storage.Tag{
{
Key: []byte("x"),
Value: []byte("baz"),
},
}
resultExpected := []netstorage.Result{r1, r2, r3}
f(q, resultExpected)
})
t.Run(`scalar or scalar`, func(t *testing.T) { t.Run(`scalar or scalar`, func(t *testing.T) {
t.Parallel() t.Parallel()
q := `time() > 1400 or 123` q := `time() > 1400 or 123`
@ -6545,7 +6590,7 @@ func TestExecSuccess(t *testing.T) {
}) })
t.Run(`bottomk(1)`, func(t *testing.T) { t.Run(`bottomk(1)`, func(t *testing.T) {
t.Parallel() t.Parallel()
q := `bottomk(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss"))` q := `bottomk(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss") or label_set(time()<100, "a", "b"))`
r1 := netstorage.Result{ r1 := netstorage.Result{
MetricName: metricNameExpected, MetricName: metricNameExpected,
Values: []float64{nan, nan, nan, 10, 10, 10}, Values: []float64{nan, nan, nan, 10, 10, 10},

View file

@ -2182,6 +2182,8 @@ func rollupFirst(rfa *rollupFuncArg) float64 {
return values[0] return values[0]
} }
var rollupLast = rollupDefault
func rollupDefault(rfa *rollupFuncArg) float64 { func rollupDefault(rfa *rollupFuncArg) float64 {
values := rfa.values values := rfa.values
if len(values) == 0 { if len(values) == 0 {
@ -2195,17 +2197,6 @@ func rollupDefault(rfa *rollupFuncArg) float64 {
return values[len(values)-1] return values[len(values)-1]
} }
func rollupLast(rfa *rollupFuncArg) float64 {
values := rfa.values
if len(values) == 0 {
// Do not take into account rfa.prevValue, since it may lead
// to inconsistent results comparing to Prometheus on broken time series
// with irregular data points.
return nan
}
return values[len(values)-1]
}
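rollupLast had become byte-for-byte identical to rollupDefault, so the duplicate body is dropped and rollupLast is kept as a function-value alias (var rollupLast = rollupDefault).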
func rollupDistinct(rfa *rollupFuncArg) float64 { func rollupDistinct(rfa *rollupFuncArg) float64 {
// There is no need in handling NaNs here, since they must be cleaned up // There is no need in handling NaNs here, since they must be cleaned up
// before calling rollup funcs. // before calling rollup funcs.

View file

@ -4,6 +4,7 @@ import (
"errors" "errors"
"flag" "flag"
"fmt" "fmt"
"io"
"net/http" "net/http"
"strings" "strings"
"sync" "sync"
@ -121,9 +122,17 @@ func Init(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
sizeBytes := tm.SmallSizeBytes + tm.BigSizeBytes sizeBytes := tm.SmallSizeBytes + tm.BigSizeBytes
logger.Infof("successfully opened storage %q in %.3f seconds; partsCount: %d; blocksCount: %d; rowsCount: %d; sizeBytes: %d", logger.Infof("successfully opened storage %q in %.3f seconds; partsCount: %d; blocksCount: %d; rowsCount: %d; sizeBytes: %d",
*DataPath, time.Since(startTime).Seconds(), partsCount, blocksCount, rowsCount, sizeBytes) *DataPath, time.Since(startTime).Seconds(), partsCount, blocksCount, rowsCount, sizeBytes)
registerStorageMetrics(Storage)
// register storage metrics
storageMetrics = metrics.NewSet()
storageMetrics.RegisterMetricsWriter(func(w io.Writer) {
writeStorageMetrics(w, strg)
})
metrics.RegisterSet(storageMetrics)
} }
var storageMetrics *metrics.Set
// Storage is a storage. // Storage is a storage.
// //
// Every storage call must be wrapped into WG.Add(1) ... WG.Done() // Every storage call must be wrapped into WG.Add(1) ... WG.Done()
@ -232,6 +241,10 @@ func GetSeriesCount(deadline uint64) (uint64, error) {
// Stop stops the vmstorage // Stop stops the vmstorage
func Stop() { func Stop() {
// deregister storage metrics
metrics.UnregisterSet(storageMetrics)
storageMetrics = nil
logger.Infof("gracefully closing the storage at %s", *DataPath) logger.Infof("gracefully closing the storage at %s", *DataPath)
startTime := time.Now() startTime := time.Now()
WG.WaitAndBlock() WG.WaitAndBlock()
@ -429,497 +442,194 @@ var (
snapshotsDeleteAllErrorsTotal = metrics.NewCounter(`vm_http_request_errors_total{path="/snapshot/delete_all"}`) snapshotsDeleteAllErrorsTotal = metrics.NewCounter(`vm_http_request_errors_total{path="/snapshot/delete_all"}`)
) )
func registerStorageMetrics(strg *storage.Storage) { func writeStorageMetrics(w io.Writer, strg *storage.Storage) {
mCache := &storage.Metrics{} var m storage.Metrics
var mCacheLock sync.Mutex strg.UpdateMetrics(&m)
var lastUpdateTime time.Time tm := &m.TableMetrics
idbm := &m.IndexDBMetrics
m := func() *storage.Metrics { metrics.WriteGaugeUint64(w, fmt.Sprintf(`vm_free_disk_space_bytes{path=%q}`, *DataPath), fs.MustGetFreeSpace(*DataPath))
mCacheLock.Lock() metrics.WriteGaugeUint64(w, fmt.Sprintf(`vm_free_disk_space_limit_bytes{path=%q}`, *DataPath), uint64(minFreeDiskSpaceBytes.N))
defer mCacheLock.Unlock()
if time.Since(lastUpdateTime) < time.Second { isReadOnly := 0
return mCache if strg.IsReadOnly() {
} isReadOnly = 1
var mc storage.Metrics
strg.UpdateMetrics(&mc)
mCache = &mc
lastUpdateTime = time.Now()
return mCache
}
tm := func() *storage.TableMetrics {
sm := m()
return &sm.TableMetrics
}
idbm := func() *storage.IndexDBMetrics {
sm := m()
return &sm.IndexDBMetrics
} }
metrics.WriteGaugeUint64(w, fmt.Sprintf(`vm_storage_is_read_only{path=%q}`, *DataPath), uint64(isReadOnly))
metrics.NewGauge(fmt.Sprintf(`vm_free_disk_space_bytes{path=%q}`, *DataPath), func() float64 { metrics.WriteGaugeUint64(w, `vm_active_merges{type="storage/inmemory"}`, tm.ActiveInmemoryMerges)
return float64(fs.MustGetFreeSpace(*DataPath)) metrics.WriteGaugeUint64(w, `vm_active_merges{type="storage/small"}`, tm.ActiveSmallMerges)
}) metrics.WriteGaugeUint64(w, `vm_active_merges{type="storage/big"}`, tm.ActiveBigMerges)
metrics.NewGauge(fmt.Sprintf(`vm_free_disk_space_limit_bytes{path=%q}`, *DataPath), func() float64 { metrics.WriteGaugeUint64(w, `vm_active_merges{type="indexdb/inmemory"}`, idbm.ActiveInmemoryMerges)
return float64(minFreeDiskSpaceBytes.N) metrics.WriteGaugeUint64(w, `vm_active_merges{type="indexdb/file"}`, idbm.ActiveFileMerges)
})
metrics.NewGauge(fmt.Sprintf(`vm_storage_is_read_only{path=%q}`, *DataPath), func() float64 {
if strg.IsReadOnly() {
return 1
}
return 0
})
metrics.NewGauge(`vm_active_merges{type="storage/inmemory"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_merges_total{type="storage/inmemory"}`, tm.InmemoryMergesCount)
return float64(tm().ActiveInmemoryMerges) metrics.WriteCounterUint64(w, `vm_merges_total{type="storage/small"}`, tm.SmallMergesCount)
}) metrics.WriteCounterUint64(w, `vm_merges_total{type="storage/big"}`, tm.BigMergesCount)
metrics.NewGauge(`vm_active_merges{type="storage/small"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_merges_total{type="indexdb/inmemory"}`, idbm.InmemoryMergesCount)
return float64(tm().ActiveSmallMerges) metrics.WriteCounterUint64(w, `vm_merges_total{type="indexdb/file"}`, idbm.FileMergesCount)
})
metrics.NewGauge(`vm_active_merges{type="storage/big"}`, func() float64 {
return float64(tm().ActiveBigMerges)
})
metrics.NewGauge(`vm_active_merges{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().ActiveInmemoryMerges)
})
metrics.NewGauge(`vm_active_merges{type="indexdb/file"}`, func() float64 {
return float64(idbm().ActiveFileMerges)
})
metrics.NewGauge(`vm_merges_total{type="storage/inmemory"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_rows_merged_total{type="storage/inmemory"}`, tm.InmemoryRowsMerged)
return float64(tm().InmemoryMergesCount) metrics.WriteCounterUint64(w, `vm_rows_merged_total{type="storage/small"}`, tm.SmallRowsMerged)
}) metrics.WriteCounterUint64(w, `vm_rows_merged_total{type="storage/big"}`, tm.BigRowsMerged)
metrics.NewGauge(`vm_merges_total{type="storage/small"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_rows_merged_total{type="indexdb/inmemory"}`, idbm.InmemoryItemsMerged)
return float64(tm().SmallMergesCount) metrics.WriteCounterUint64(w, `vm_rows_merged_total{type="indexdb/file"}`, idbm.FileItemsMerged)
})
metrics.NewGauge(`vm_merges_total{type="storage/big"}`, func() float64 {
return float64(tm().BigMergesCount)
})
metrics.NewGauge(`vm_merges_total{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemoryMergesCount)
})
metrics.NewGauge(`vm_merges_total{type="indexdb/file"}`, func() float64 {
return float64(idbm().FileMergesCount)
})
metrics.NewGauge(`vm_rows_merged_total{type="storage/inmemory"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_rows_deleted_total{type="storage/inmemory"}`, tm.InmemoryRowsDeleted)
return float64(tm().InmemoryRowsMerged) metrics.WriteCounterUint64(w, `vm_rows_deleted_total{type="storage/small"}`, tm.SmallRowsDeleted)
}) metrics.WriteCounterUint64(w, `vm_rows_deleted_total{type="storage/big"}`, tm.BigRowsDeleted)
metrics.NewGauge(`vm_rows_merged_total{type="storage/small"}`, func() float64 {
return float64(tm().SmallRowsMerged)
})
metrics.NewGauge(`vm_rows_merged_total{type="storage/big"}`, func() float64 {
return float64(tm().BigRowsMerged)
})
metrics.NewGauge(`vm_rows_merged_total{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemoryItemsMerged)
})
metrics.NewGauge(`vm_rows_merged_total{type="indexdb/file"}`, func() float64 {
return float64(idbm().FileItemsMerged)
})
metrics.NewGauge(`vm_rows_deleted_total{type="storage/inmemory"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_part_references{type="storage/inmemory"}`, tm.InmemoryPartsRefCount)
return float64(tm().InmemoryRowsDeleted) metrics.WriteGaugeUint64(w, `vm_part_references{type="storage/small"}`, tm.SmallPartsRefCount)
}) metrics.WriteGaugeUint64(w, `vm_part_references{type="storage/big"}`, tm.BigPartsRefCount)
metrics.NewGauge(`vm_rows_deleted_total{type="storage/small"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_partition_references{type="storage"}`, tm.PartitionsRefCount)
return float64(tm().SmallRowsDeleted) metrics.WriteGaugeUint64(w, `vm_object_references{type="indexdb"}`, idbm.IndexDBRefCount)
}) metrics.WriteGaugeUint64(w, `vm_part_references{type="indexdb"}`, idbm.PartsRefCount)
metrics.NewGauge(`vm_rows_deleted_total{type="storage/big"}`, func() float64 {
return float64(tm().BigRowsDeleted)
})
metrics.NewGauge(`vm_part_references{type="storage/inmemory"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_missing_tsids_for_metric_id_total`, idbm.MissingTSIDsForMetricID)
return float64(tm().InmemoryPartsRefCount) metrics.WriteCounterUint64(w, `vm_index_blocks_with_metric_ids_processed_total`, idbm.IndexBlocksWithMetricIDsProcessed)
}) metrics.WriteCounterUint64(w, `vm_index_blocks_with_metric_ids_incorrect_order_total`, idbm.IndexBlocksWithMetricIDsIncorrectOrder)
metrics.NewGauge(`vm_part_references{type="storage/small"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_composite_index_min_timestamp`, idbm.MinTimestampForCompositeIndex/1e3)
return float64(tm().SmallPartsRefCount) metrics.WriteCounterUint64(w, `vm_composite_filter_success_conversions_total`, idbm.CompositeFilterSuccessConversions)
}) metrics.WriteCounterUint64(w, `vm_composite_filter_missing_conversions_total`, idbm.CompositeFilterMissingConversions)
metrics.NewGauge(`vm_part_references{type="storage/big"}`, func() float64 {
return float64(tm().BigPartsRefCount)
})
metrics.NewGauge(`vm_partition_references{type="storage"}`, func() float64 {
return float64(tm().PartitionsRefCount)
})
metrics.NewGauge(`vm_object_references{type="indexdb"}`, func() float64 {
return float64(idbm().IndexDBRefCount)
})
metrics.NewGauge(`vm_part_references{type="indexdb"}`, func() float64 {
return float64(idbm().PartsRefCount)
})
metrics.NewGauge(`vm_missing_tsids_for_metric_id_total`, func() float64 { metrics.WriteCounterUint64(w, `vm_assisted_merges_total{type="storage/inmemory"}`, tm.InmemoryAssistedMerges)
return float64(idbm().MissingTSIDsForMetricID) metrics.WriteCounterUint64(w, `vm_assisted_merges_total{type="storage/small"}`, tm.SmallAssistedMerges)
})
metrics.NewGauge(`vm_index_blocks_with_metric_ids_processed_total`, func() float64 {
return float64(idbm().IndexBlocksWithMetricIDsProcessed)
})
metrics.NewGauge(`vm_index_blocks_with_metric_ids_incorrect_order_total`, func() float64 {
return float64(idbm().IndexBlocksWithMetricIDsIncorrectOrder)
})
metrics.NewGauge(`vm_composite_index_min_timestamp`, func() float64 {
return float64(idbm().MinTimestampForCompositeIndex) / 1e3
})
metrics.NewGauge(`vm_composite_filter_success_conversions_total`, func() float64 {
return float64(idbm().CompositeFilterSuccessConversions)
})
metrics.NewGauge(`vm_composite_filter_missing_conversions_total`, func() float64 {
return float64(idbm().CompositeFilterMissingConversions)
})
metrics.NewGauge(`vm_assisted_merges_total{type="storage/inmemory"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_assisted_merges_total{type="indexdb/inmemory"}`, idbm.InmemoryAssistedMerges)
return float64(tm().InmemoryAssistedMerges) metrics.WriteCounterUint64(w, `vm_assisted_merges_total{type="indexdb/file"}`, idbm.FileAssistedMerges)
})
metrics.NewGauge(`vm_assisted_merges_total{type="storage/small"}`, func() float64 {
return float64(tm().SmallAssistedMerges)
})
metrics.NewGauge(`vm_assisted_merges_total{type="indexdb/inmemory"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_indexdb_items_added_total`, idbm.ItemsAdded)
return float64(idbm().InmemoryAssistedMerges) metrics.WriteCounterUint64(w, `vm_indexdb_items_added_size_bytes_total`, idbm.ItemsAddedSizeBytes)
})
metrics.NewGauge(`vm_assisted_merges_total{type="indexdb/file"}`, func() float64 {
return float64(idbm().FileAssistedMerges)
})
metrics.NewGauge(`vm_indexdb_items_added_total`, func() float64 { metrics.WriteGaugeUint64(w, `vm_pending_rows{type="storage"}`, tm.PendingRows)
return float64(idbm().ItemsAdded) metrics.WriteGaugeUint64(w, `vm_pending_rows{type="indexdb"}`, idbm.PendingItems)
})
metrics.NewGauge(`vm_indexdb_items_added_size_bytes_total`, func() float64 {
return float64(idbm().ItemsAddedSizeBytes)
})
metrics.NewGauge(`vm_pending_rows{type="storage"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_parts{type="storage/inmemory"}`, tm.InmemoryPartsCount)
return float64(tm().PendingRows) metrics.WriteGaugeUint64(w, `vm_parts{type="storage/small"}`, tm.SmallPartsCount)
}) metrics.WriteGaugeUint64(w, `vm_parts{type="storage/big"}`, tm.BigPartsCount)
metrics.NewGauge(`vm_pending_rows{type="indexdb"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_parts{type="indexdb/inmemory"}`, idbm.InmemoryPartsCount)
return float64(idbm().PendingItems) metrics.WriteGaugeUint64(w, `vm_parts{type="indexdb/file"}`, idbm.FilePartsCount)
})
metrics.NewGauge(`vm_parts{type="storage/inmemory"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_blocks{type="storage/inmemory"}`, tm.InmemoryBlocksCount)
return float64(tm().InmemoryPartsCount) metrics.WriteGaugeUint64(w, `vm_blocks{type="storage/small"}`, tm.SmallBlocksCount)
}) metrics.WriteGaugeUint64(w, `vm_blocks{type="storage/big"}`, tm.BigBlocksCount)
metrics.NewGauge(`vm_parts{type="storage/small"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_blocks{type="indexdb/inmemory"}`, idbm.InmemoryBlocksCount)
return float64(tm().SmallPartsCount) metrics.WriteGaugeUint64(w, `vm_blocks{type="indexdb/file"}`, idbm.FileBlocksCount)
})
metrics.NewGauge(`vm_parts{type="storage/big"}`, func() float64 {
return float64(tm().BigPartsCount)
})
metrics.NewGauge(`vm_parts{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemoryPartsCount)
})
metrics.NewGauge(`vm_parts{type="indexdb/file"}`, func() float64 {
return float64(idbm().FilePartsCount)
})
metrics.NewGauge(`vm_blocks{type="storage/inmemory"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_data_size_bytes{type="storage/inmemory"}`, tm.InmemorySizeBytes)
return float64(tm().InmemoryBlocksCount) metrics.WriteGaugeUint64(w, `vm_data_size_bytes{type="storage/small"}`, tm.SmallSizeBytes)
}) metrics.WriteGaugeUint64(w, `vm_data_size_bytes{type="storage/big"}`, tm.BigSizeBytes)
metrics.NewGauge(`vm_blocks{type="storage/small"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_data_size_bytes{type="indexdb/inmemory"}`, idbm.InmemorySizeBytes)
return float64(tm().SmallBlocksCount) metrics.WriteGaugeUint64(w, `vm_data_size_bytes{type="indexdb/file"}`, idbm.FileSizeBytes)
})
metrics.NewGauge(`vm_blocks{type="storage/big"}`, func() float64 {
return float64(tm().BigBlocksCount)
})
metrics.NewGauge(`vm_blocks{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemoryBlocksCount)
})
metrics.NewGauge(`vm_blocks{type="indexdb/file"}`, func() float64 {
return float64(idbm().FileBlocksCount)
})
metrics.NewGauge(`vm_data_size_bytes{type="storage/inmemory"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_rows_added_to_storage_total`, m.RowsAddedTotal)
return float64(tm().InmemorySizeBytes) metrics.WriteCounterUint64(w, `vm_deduplicated_samples_total{type="merge"}`, m.DedupsDuringMerge)
})
metrics.NewGauge(`vm_data_size_bytes{type="storage/small"}`, func() float64 {
return float64(tm().SmallSizeBytes)
})
metrics.NewGauge(`vm_data_size_bytes{type="storage/big"}`, func() float64 {
return float64(tm().BigSizeBytes)
})
metrics.NewGauge(`vm_data_size_bytes{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemorySizeBytes)
})
metrics.NewGauge(`vm_data_size_bytes{type="indexdb/file"}`, func() float64 {
return float64(idbm().FileSizeBytes)
})
metrics.NewGauge(`vm_rows_added_to_storage_total`, func() float64 { metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="big_timestamp"}`, m.TooBigTimestampRows)
return float64(m().RowsAddedTotal) metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="small_timestamp"}`, m.TooSmallTimestampRows)
})
metrics.NewGauge(`vm_deduplicated_samples_total{type="merge"}`, func() float64 {
return float64(m().DedupsDuringMerge)
})
metrics.NewGauge(`vm_rows_ignored_total{reason="big_timestamp"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_timeseries_repopulated_total`, m.TimeseriesRepopulated)
return float64(m().TooBigTimestampRows) metrics.WriteCounterUint64(w, `vm_timeseries_precreated_total`, m.TimeseriesPreCreated)
}) metrics.WriteCounterUint64(w, `vm_new_timeseries_created_total`, m.NewTimeseriesCreated)
metrics.NewGauge(`vm_rows_ignored_total{reason="small_timestamp"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_slow_row_inserts_total`, m.SlowRowInserts)
return float64(m().TooSmallTimestampRows) metrics.WriteCounterUint64(w, `vm_slow_per_day_index_inserts_total`, m.SlowPerDayIndexInserts)
}) metrics.WriteCounterUint64(w, `vm_slow_metric_name_loads_total`, m.SlowMetricNameLoads)
metrics.NewGauge(`vm_timeseries_repopulated_total`, func() float64 {
return float64(m().TimeseriesRepopulated)
})
metrics.NewGauge(`vm_timeseries_precreated_total`, func() float64 {
return float64(m().TimeseriesPreCreated)
})
metrics.NewGauge(`vm_new_timeseries_created_total`, func() float64 {
return float64(m().NewTimeseriesCreated)
})
metrics.NewGauge(`vm_slow_row_inserts_total`, func() float64 {
return float64(m().SlowRowInserts)
})
metrics.NewGauge(`vm_slow_per_day_index_inserts_total`, func() float64 {
return float64(m().SlowPerDayIndexInserts)
})
metrics.NewGauge(`vm_slow_metric_name_loads_total`, func() float64 {
return float64(m().SlowMetricNameLoads)
})
if *maxHourlySeries > 0 { if *maxHourlySeries > 0 {
metrics.NewGauge(`vm_hourly_series_limit_current_series`, func() float64 { metrics.WriteGaugeUint64(w, `vm_hourly_series_limit_current_series`, m.HourlySeriesLimitCurrentSeries)
return float64(m().HourlySeriesLimitCurrentSeries) metrics.WriteGaugeUint64(w, `vm_hourly_series_limit_max_series`, m.HourlySeriesLimitMaxSeries)
}) metrics.WriteCounterUint64(w, `vm_hourly_series_limit_rows_dropped_total`, m.HourlySeriesLimitRowsDropped)
metrics.NewGauge(`vm_hourly_series_limit_max_series`, func() float64 {
return float64(m().HourlySeriesLimitMaxSeries)
})
metrics.NewGauge(`vm_hourly_series_limit_rows_dropped_total`, func() float64 {
return float64(m().HourlySeriesLimitRowsDropped)
})
} }
if *maxDailySeries > 0 { if *maxDailySeries > 0 {
metrics.NewGauge(`vm_daily_series_limit_current_series`, func() float64 { metrics.WriteGaugeUint64(w, `vm_daily_series_limit_current_series`, m.DailySeriesLimitCurrentSeries)
return float64(m().DailySeriesLimitCurrentSeries) metrics.WriteGaugeUint64(w, `vm_daily_series_limit_max_series`, m.DailySeriesLimitMaxSeries)
}) metrics.WriteCounterUint64(w, `vm_daily_series_limit_rows_dropped_total`, m.DailySeriesLimitRowsDropped)
metrics.NewGauge(`vm_daily_series_limit_max_series`, func() float64 {
return float64(m().DailySeriesLimitMaxSeries)
})
metrics.NewGauge(`vm_daily_series_limit_rows_dropped_total`, func() float64 {
return float64(m().DailySeriesLimitRowsDropped)
})
} }
metrics.NewGauge(`vm_timestamps_blocks_merged_total`, func() float64 { metrics.WriteCounterUint64(w, `vm_timestamps_blocks_merged_total`, m.TimestampsBlocksMerged)
return float64(m().TimestampsBlocksMerged) metrics.WriteCounterUint64(w, `vm_timestamps_bytes_saved_total`, m.TimestampsBytesSaved)
})
metrics.NewGauge(`vm_timestamps_bytes_saved_total`, func() float64 {
return float64(m().TimestampsBytesSaved)
})
metrics.NewGauge(`vm_rows{type="storage/inmemory"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_rows{type="storage/inmemory"}`, tm.InmemoryRowsCount)
return float64(tm().InmemoryRowsCount) metrics.WriteGaugeUint64(w, `vm_rows{type="storage/small"}`, tm.SmallRowsCount)
}) metrics.WriteGaugeUint64(w, `vm_rows{type="storage/big"}`, tm.BigRowsCount)
metrics.NewGauge(`vm_rows{type="storage/small"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_rows{type="indexdb/inmemory"}`, idbm.InmemoryItemsCount)
return float64(tm().SmallRowsCount) metrics.WriteGaugeUint64(w, `vm_rows{type="indexdb/file"}`, idbm.FileItemsCount)
})
metrics.NewGauge(`vm_rows{type="storage/big"}`, func() float64 {
return float64(tm().BigRowsCount)
})
metrics.NewGauge(`vm_rows{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemoryItemsCount)
})
metrics.NewGauge(`vm_rows{type="indexdb/file"}`, func() float64 {
return float64(idbm().FileItemsCount)
})
metrics.NewGauge(`vm_date_range_search_calls_total`, func() float64 { metrics.WriteCounterUint64(w, `vm_date_range_search_calls_total`, idbm.DateRangeSearchCalls)
return float64(idbm().DateRangeSearchCalls) metrics.WriteCounterUint64(w, `vm_date_range_hits_total`, idbm.DateRangeSearchHits)
}) metrics.WriteCounterUint64(w, `vm_global_search_calls_total`, idbm.GlobalSearchCalls)
metrics.NewGauge(`vm_date_range_hits_total`, func() float64 {
return float64(idbm().DateRangeSearchHits)
})
metrics.NewGauge(`vm_global_search_calls_total`, func() float64 {
return float64(idbm().GlobalSearchCalls)
})
metrics.NewGauge(`vm_missing_metric_names_for_metric_id_total`, func() float64 { metrics.WriteCounterUint64(w, `vm_missing_metric_names_for_metric_id_total`, idbm.MissingMetricNamesForMetricID)
return float64(idbm().MissingMetricNamesForMetricID)
})
metrics.NewGauge(`vm_date_metric_id_cache_syncs_total`, func() float64 { metrics.WriteCounterUint64(w, `vm_date_metric_id_cache_syncs_total`, m.DateMetricIDCacheSyncsCount)
return float64(m().DateMetricIDCacheSyncsCount) metrics.WriteCounterUint64(w, `vm_date_metric_id_cache_resets_total`, m.DateMetricIDCacheResetsCount)
})
metrics.NewGauge(`vm_date_metric_id_cache_resets_total`, func() float64 {
return float64(m().DateMetricIDCacheResetsCount)
})
metrics.NewGauge(`vm_cache_entries{type="storage/tsid"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/tsid"}`, m.TSIDCacheSize)
return float64(m().TSIDCacheSize) metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/metricIDs"}`, m.MetricIDCacheSize)
}) metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/metricName"}`, m.MetricNameCacheSize)
metrics.NewGauge(`vm_cache_entries{type="storage/metricIDs"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/date_metricID"}`, m.DateMetricIDCacheSize)
return float64(m().MetricIDCacheSize) metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/hour_metric_ids"}`, m.HourMetricIDCacheSize)
}) metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/next_day_metric_ids"}`, m.NextDayMetricIDCacheSize)
metrics.NewGauge(`vm_cache_entries{type="storage/metricName"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/indexBlocks"}`, tm.IndexBlocksCacheSize)
return float64(m().MetricNameCacheSize) metrics.WriteGaugeUint64(w, `vm_cache_entries{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheSize)
}) metrics.WriteGaugeUint64(w, `vm_cache_entries{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheSize)
metrics.NewGauge(`vm_cache_entries{type="storage/date_metricID"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_cache_entries{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheSize)
return float64(m().DateMetricIDCacheSize) metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/regexps"}`, uint64(storage.RegexpCacheSize()))
}) metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/regexpPrefixes"}`, uint64(storage.RegexpPrefixesCacheSize()))
metrics.NewGauge(`vm_cache_entries{type="storage/hour_metric_ids"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/prefetchedMetricIDs"}`, m.PrefetchedMetricIDsSize)
return float64(m().HourMetricIDCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="storage/next_day_metric_ids"}`, func() float64 {
return float64(m().NextDayMetricIDCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="storage/indexBlocks"}`, func() float64 {
return float64(tm().IndexBlocksCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="indexdb/dataBlocks"}`, func() float64 {
return float64(idbm().DataBlocksCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="indexdb/indexBlocks"}`, func() float64 {
return float64(idbm().IndexBlocksCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="indexdb/tagFiltersToMetricIDs"}`, func() float64 {
return float64(idbm().TagFiltersToMetricIDsCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="storage/regexps"}`, func() float64 {
return float64(storage.RegexpCacheSize())
})
metrics.NewGauge(`vm_cache_entries{type="storage/regexpPrefixes"}`, func() float64 {
return float64(storage.RegexpPrefixesCacheSize())
})
metrics.NewGauge(`vm_cache_entries{type="storage/prefetchedMetricIDs"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/tsid"}`, m.TSIDCacheSizeBytes)
return float64(m().PrefetchedMetricIDsSize) metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/metricIDs"}`, m.MetricIDCacheSizeBytes)
}) metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/metricName"}`, m.MetricNameCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/indexBlocks"}`, tm.IndexBlocksCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/date_metricID"}`, m.DateMetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/hour_metric_ids"}`, m.HourMetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/next_day_metric_ids"}`, m.NextDayMetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/regexps"}`, uint64(storage.RegexpCacheSizeBytes()))
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/regexpPrefixes"}`, uint64(storage.RegexpPrefixesCacheSizeBytes()))
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/prefetchedMetricIDs"}`, m.PrefetchedMetricIDsSizeBytes)
metrics.NewGauge(`vm_cache_size_bytes{type="storage/tsid"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/tsid"}`, m.TSIDCacheSizeMaxBytes)
return float64(m().TSIDCacheSizeBytes) metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/metricIDs"}`, m.MetricIDCacheSizeMaxBytes)
}) metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/metricName"}`, m.MetricNameCacheSizeMaxBytes)
metrics.NewGauge(`vm_cache_size_bytes{type="storage/metricIDs"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/indexBlocks"}`, tm.IndexBlocksCacheSizeMaxBytes)
return float64(m().MetricIDCacheSizeBytes) metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheSizeMaxBytes)
}) metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheSizeMaxBytes)
metrics.NewGauge(`vm_cache_size_bytes{type="storage/metricName"}`, func() float64 { metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheSizeMaxBytes)
return float64(m().MetricNameCacheSizeBytes) metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/regexps"}`, uint64(storage.RegexpCacheMaxSizeBytes()))
}) metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/regexpPrefixes"}`, uint64(storage.RegexpPrefixesCacheMaxSizeBytes()))
metrics.NewGauge(`vm_cache_size_bytes{type="storage/indexBlocks"}`, func() float64 {
return float64(tm().IndexBlocksCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="indexdb/dataBlocks"}`, func() float64 {
return float64(idbm().DataBlocksCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="indexdb/indexBlocks"}`, func() float64 {
return float64(idbm().IndexBlocksCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/date_metricID"}`, func() float64 {
return float64(m().DateMetricIDCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/hour_metric_ids"}`, func() float64 {
return float64(m().HourMetricIDCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/next_day_metric_ids"}`, func() float64 {
return float64(m().NextDayMetricIDCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="indexdb/tagFiltersToMetricIDs"}`, func() float64 {
return float64(idbm().TagFiltersToMetricIDsCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/regexps"}`, func() float64 {
return float64(storage.RegexpCacheSizeBytes())
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/regexpPrefixes"}`, func() float64 {
return float64(storage.RegexpPrefixesCacheSizeBytes())
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/prefetchedMetricIDs"}`, func() float64 {
return float64(m().PrefetchedMetricIDsSizeBytes)
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/tsid"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/tsid"}`, m.TSIDCacheRequests)
return float64(m().TSIDCacheSizeMaxBytes) metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/metricIDs"}`, m.MetricIDCacheRequests)
}) metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/metricName"}`, m.MetricNameCacheRequests)
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/metricIDs"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/indexBlocks"}`, tm.IndexBlocksCacheRequests)
return float64(m().MetricIDCacheSizeMaxBytes) metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheRequests)
}) metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheRequests)
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/metricName"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheRequests)
return float64(m().MetricNameCacheSizeMaxBytes) metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/regexps"}`, storage.RegexpCacheRequests())
}) metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/regexpPrefixes"}`, storage.RegexpPrefixesCacheRequests())
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/indexBlocks"}`, func() float64 {
return float64(tm().IndexBlocksCacheSizeMaxBytes)
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="indexdb/dataBlocks"}`, func() float64 {
return float64(idbm().DataBlocksCacheSizeMaxBytes)
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="indexdb/indexBlocks"}`, func() float64 {
return float64(idbm().IndexBlocksCacheSizeMaxBytes)
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="indexdb/tagFiltersToMetricIDs"}`, func() float64 {
return float64(idbm().TagFiltersToMetricIDsCacheSizeMaxBytes)
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/regexps"}`, func() float64 {
return float64(storage.RegexpCacheMaxSizeBytes())
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/regexpPrefixes"}`, func() float64 {
return float64(storage.RegexpPrefixesCacheMaxSizeBytes())
})
metrics.NewGauge(`vm_cache_requests_total{type="storage/tsid"}`, func() float64 { metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/tsid"}`, m.TSIDCacheMisses)
return float64(m().TSIDCacheRequests) metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/metricIDs"}`, m.MetricIDCacheMisses)
Removed (callback-based gauges registered via metrics.NewGauge):
})
metrics.NewGauge(`vm_cache_requests_total{type="storage/metricIDs"}`, func() float64 {
	return float64(m().MetricIDCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="storage/metricName"}`, func() float64 {
	return float64(m().MetricNameCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="storage/indexBlocks"}`, func() float64 {
	return float64(tm().IndexBlocksCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="indexdb/dataBlocks"}`, func() float64 {
	return float64(idbm().DataBlocksCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="indexdb/indexBlocks"}`, func() float64 {
	return float64(idbm().IndexBlocksCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="indexdb/tagFiltersToMetricIDs"}`, func() float64 {
	return float64(idbm().TagFiltersToMetricIDsCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="storage/regexps"}`, func() float64 {
	return float64(storage.RegexpCacheRequests())
})
metrics.NewGauge(`vm_cache_requests_total{type="storage/regexpPrefixes"}`, func() float64 {
	return float64(storage.RegexpPrefixesCacheRequests())
})
metrics.NewGauge(`vm_cache_misses_total{type="storage/tsid"}`, func() float64 {
	return float64(m().TSIDCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="storage/metricIDs"}`, func() float64 {
	return float64(m().MetricIDCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="storage/metricName"}`, func() float64 {
	return float64(m().MetricNameCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="storage/indexBlocks"}`, func() float64 {
	return float64(tm().IndexBlocksCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="indexdb/dataBlocks"}`, func() float64 {
	return float64(idbm().DataBlocksCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="indexdb/indexBlocks"}`, func() float64 {
	return float64(idbm().IndexBlocksCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="indexdb/tagFiltersToMetricIDs"}`, func() float64 {
	return float64(idbm().TagFiltersToMetricIDsCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="storage/regexps"}`, func() float64 {
	return float64(storage.RegexpCacheMisses())
})
metrics.NewGauge(`vm_cache_misses_total{type="storage/regexpPrefixes"}`, func() float64 {
	return float64(storage.RegexpPrefixesCacheMisses())
})
metrics.NewGauge(`vm_deleted_metrics_total{type="indexdb"}`, func() float64 {
	return float64(idbm().DeletedMetricsCount)
})
metrics.NewGauge(`vm_cache_collisions_total{type="storage/tsid"}`, func() float64 {
	return float64(m().TSIDCacheCollisions)
})
metrics.NewGauge(`vm_cache_collisions_total{type="storage/metricName"}`, func() float64 {
	return float64(m().MetricNameCacheCollisions)
})
metrics.NewGauge(`vm_next_retention_seconds`, func() float64 {
	return float64(m().NextRetentionSeconds)
})
}

Added (values written directly to the writer w):
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/metricName"}`, m.MetricNameCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/indexBlocks"}`, tm.IndexBlocksCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/regexps"}`, storage.RegexpCacheMisses())
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/regexpPrefixes"}`, storage.RegexpPrefixesCacheMisses())

metrics.WriteCounterUint64(w, `vm_deleted_metrics_total{type="indexdb"}`, idbm.DeletedMetricsCount)

metrics.WriteCounterUint64(w, `vm_cache_collisions_total{type="storage/tsid"}`, m.TSIDCacheCollisions)
metrics.WriteCounterUint64(w, `vm_cache_collisions_total{type="storage/metricName"}`, m.MetricNameCacheCollisions)

metrics.WriteGaugeUint64(w, `vm_next_retention_seconds`, m.NextRetentionSeconds)
}
func jsonResponseError(w http.ResponseWriter, err error) { func jsonResponseError(w http.ResponseWriter, err error) {

View file

@ -1,4 +1,4 @@
FROM golang:1.21.5 as build-web-stage FROM golang:1.21.6 as build-web-stage
COPY build /build COPY build /build
WORKDIR /build WORKDIR /build

View file

@ -22,6 +22,14 @@ vmui-logs-build: vmui-package-base-image
--entrypoint=/bin/bash \ --entrypoint=/bin/bash \
vmui-builder-image -c "npm install && npm run build:logs" vmui-builder-image -c "npm install && npm run build:logs"
vmui-anomaly-build: vmui-package-base-image
docker run --rm \
--user $(shell id -u):$(shell id -g) \
--mount type=bind,src="$(shell pwd)/app/vmui",dst=/build \
-w /build/packages/vmui \
--entrypoint=/bin/bash \
vmui-builder-image -c "npm install && npm run build:anomaly"
vmui-release: vmui-build vmui-release: vmui-build
docker build -t ${DOCKER_NAMESPACE}/vmui:latest -f app/vmui/Dockerfile-web ./app/vmui/packages/vmui docker build -t ${DOCKER_NAMESPACE}/vmui:latest -f app/vmui/Dockerfile-web ./app/vmui/packages/vmui
docker tag ${DOCKER_NAMESPACE}/vmui:latest ${DOCKER_NAMESPACE}/vmui:${PKG_TAG} docker tag ${DOCKER_NAMESPACE}/vmui:latest ${DOCKER_NAMESPACE}/vmui:${PKG_TAG}

View file

@ -14,10 +14,12 @@ module.exports = override(
new webpack.NormalModuleReplacementPlugin( new webpack.NormalModuleReplacementPlugin(
/\.\/App/, /\.\/App/,
function (resource) { function (resource) {
// eslint-disable-next-line no-undef if (process.env.REACT_APP_TYPE === "logs") {
if (process.env.REACT_APP_LOGS === "true") {
resource.request = "./AppLogs"; resource.request = "./AppLogs";
} }
if (process.env.REACT_APP_TYPE === "anomaly") {
resource.request = "./AppAnomaly";
}
} }
) )
) )
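For readers unfamiliar with the pattern in the hunk above, here is a minimal standalone sketch of how webpack's NormalModuleReplacementPlugin can route the "./App" import to a per-flavor entry module based on the REACT_APP_TYPE variable set by the npm scripts below. The appByType map and appTypePlugin names are illustrative, not part of the repository.

import webpack from "webpack";

// Map each UI flavor to its entry module; the default vmui app keeps "./App".
const appByType: Record<string, string> = {
  logs: "./AppLogs",
  anomaly: "./AppAnomaly",
};

// Rewrites requests for "./App" before webpack resolves them.
export const appTypePlugin = new webpack.NormalModuleReplacementPlugin(
  /\.\/App/,
  (resource: { request: string }) => {
    const replacement = appByType[process.env.REACT_APP_TYPE ?? ""];
    if (replacement) {
      resource.request = replacement; // e.g. "./AppAnomaly" when REACT_APP_TYPE=anomaly
    }
  }
);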

View file

@ -32,9 +32,11 @@
"scripts": { "scripts": {
"prestart": "npm run copy-metricsql-docs", "prestart": "npm run copy-metricsql-docs",
"start": "react-app-rewired start", "start": "react-app-rewired start",
"start:logs": "cross-env REACT_APP_LOGS=true npm run start", "start:logs": "cross-env REACT_APP_TYPE=logs npm run start",
"start:anomaly": "cross-env REACT_APP_TYPE=anomaly npm run start",
"build": "GENERATE_SOURCEMAP=false react-app-rewired build", "build": "GENERATE_SOURCEMAP=false react-app-rewired build",
"build:logs": "cross-env REACT_APP_LOGS=true npm run build", "build:logs": "cross-env REACT_APP_TYPE=logs npm run build",
"build:anomaly": "cross-env REACT_APP_TYPE=anomaly npm run build",
"lint": "eslint src --ext tsx,ts", "lint": "eslint src --ext tsx,ts",
"lint:fix": "eslint src --ext tsx,ts --fix", "lint:fix": "eslint src --ext tsx,ts --fix",
"analyze": "source-map-explorer 'build/static/js/*.js'", "analyze": "source-map-explorer 'build/static/js/*.js'",

View file

@ -0,0 +1,41 @@
import React, { FC, useState } from "preact/compat";
import { HashRouter, Route, Routes } from "react-router-dom";
import AppContextProvider from "./contexts/AppContextProvider";
import ThemeProvider from "./components/Main/ThemeProvider/ThemeProvider";
import AnomalyLayout from "./layouts/AnomalyLayout/AnomalyLayout";
import ExploreAnomaly from "./pages/ExploreAnomaly/ExploreAnomaly";
import router from "./router";
import CustomPanel from "./pages/CustomPanel";
const AppAnomaly: FC = () => {
const [loadedTheme, setLoadedTheme] = useState(false);
return <>
<HashRouter>
<AppContextProvider>
<>
<ThemeProvider onLoaded={setLoadedTheme}/>
{loadedTheme && (
<Routes>
<Route
path={"/"}
element={<AnomalyLayout/>}
>
<Route
path={"/"}
element={<ExploreAnomaly/>}
/>
<Route
path={router.query}
element={<CustomPanel/>}
/>
</Route>
</Routes>
)}
</>
</AppContextProvider>
</HashRouter>
</>;
};
export default AppAnomaly;

View file

@ -0,0 +1,85 @@
import React, { FC, useMemo } from "preact/compat";
import { ForecastType, SeriesItem } from "../../../../types";
import { anomalyColors } from "../../../../utils/color";
import "./style.scss";
type Props = {
series: SeriesItem[];
};
const titles: Partial<Record<ForecastType, string>> = {
[ForecastType.yhat]: "yhat",
[ForecastType.yhatLower]: "yhat_lower/_upper",
[ForecastType.yhatUpper]: "yhat_lower/_upper",
[ForecastType.anomaly]: "anomalies",
[ForecastType.training]: "training data",
[ForecastType.actual]: "y"
};
const LegendAnomaly: FC<Props> = ({ series }) => {
const uniqSeriesStyles = useMemo(() => {
const uniqSeries = series.reduce((accumulator, currentSeries) => {
const hasForecast = Object.prototype.hasOwnProperty.call(currentSeries, "forecast");
const isNotUpper = currentSeries.forecast !== ForecastType.yhatUpper;
const isUniqForecast = !accumulator.find(s => s.forecast === currentSeries.forecast);
if (hasForecast && isUniqForecast && isNotUpper) {
accumulator.push(currentSeries);
}
return accumulator;
}, [] as SeriesItem[]);
const trainingSeries = {
...uniqSeries[0],
forecast: ForecastType.training,
color: anomalyColors[ForecastType.training],
};
uniqSeries.splice(1, 0, trainingSeries);
return uniqSeries.map(s => ({
...s,
color: typeof s.stroke === "string" ? s.stroke : anomalyColors[s.forecast || ForecastType.actual],
}));
}, [series]);
const container = document.getElementById("legendAnomaly");
if (!container) return null;
return <>
<div className="vm-legend-anomaly">
{/* TODO: remove .filter() after the correct training data has been added */}
{uniqSeriesStyles.filter(f => f.forecast !== ForecastType.training).map((s, i) => (
<div
key={`${i}_${s.forecast}`}
className="vm-legend-anomaly-item"
>
<svg>
{s.forecast === ForecastType.anomaly ? (
<circle
cx="15"
cy="7"
r="4"
fill={s.color}
stroke={s.color}
strokeWidth="1.4"
/>
) : (
<line
x1="0"
y1="7"
x2="30"
y2="7"
stroke={s.color}
strokeWidth={s.width || 1}
strokeDasharray={s.dash?.join(",")}
/>
)}
</svg>
<div className="vm-legend-anomaly-item__title">{titles[s.forecast || ForecastType.actual]}</div>
</div>
))}
</div>
</>;
};
export default LegendAnomaly;

View file

@ -0,0 +1,23 @@
@use "src/styles/variables" as *;
.vm-legend-anomaly {
position: relative;
display: flex;
align-items: center;
justify-content: center;
flex-wrap: wrap;
gap: calc($padding-large * 2);
cursor: default;
&-item {
display: flex;
align-items: center;
justify-content: center;
gap: $padding-small;
svg {
width: 30px;
height: 14px;
}
}
}

View file

@ -5,14 +5,15 @@ import uPlot, {
Series as uPlotSeries, Series as uPlotSeries,
} from "uplot"; } from "uplot";
import { import {
getDefaultOptions,
addSeries, addSeries,
delSeries, delSeries,
getAxes,
getDefaultOptions,
getRangeX, getRangeX,
getRangeY, getRangeY,
getScales, getScales,
handleDestroy, handleDestroy,
getAxes, setBand,
setSelect setSelect
} from "../../../../utils/uplot"; } from "../../../../utils/uplot";
import { MetricResult } from "../../../../api/types"; import { MetricResult } from "../../../../api/types";
@ -39,6 +40,7 @@ export interface LineChartProps {
setPeriod: ({ from, to }: { from: Date, to: Date }) => void; setPeriod: ({ from, to }: { from: Date, to: Date }) => void;
layoutSize: ElementSize; layoutSize: ElementSize;
height?: number; height?: number;
anomalyView?: boolean;
} }
const LineChart: FC<LineChartProps> = ({ const LineChart: FC<LineChartProps> = ({
@ -50,7 +52,8 @@ const LineChart: FC<LineChartProps> = ({
unit, unit,
setPeriod, setPeriod,
layoutSize, layoutSize,
height height,
anomalyView
}) => { }) => {
const { isDarkTheme } = useAppState(); const { isDarkTheme } = useAppState();
@ -68,7 +71,7 @@ const LineChart: FC<LineChartProps> = ({
seriesFocus, seriesFocus,
setCursor, setCursor,
resetTooltips resetTooltips
} = useLineTooltip({ u: uPlotInst, metrics, series, unit }); } = useLineTooltip({ u: uPlotInst, metrics, series, unit, anomalyView });
const options: uPlotOptions = { const options: uPlotOptions = {
...getDefaultOptions({ width: layoutSize.width, height }), ...getDefaultOptions({ width: layoutSize.width, height }),
@ -82,6 +85,7 @@ const LineChart: FC<LineChartProps> = ({
setSelect: [setSelect(setPlotScale)], setSelect: [setSelect(setPlotScale)],
destroy: [handleDestroy], destroy: [handleDestroy],
}, },
bands: []
}; };
useEffect(() => { useEffect(() => {
@ -103,6 +107,7 @@ const LineChart: FC<LineChartProps> = ({
if (!uPlotInst) return; if (!uPlotInst) return;
delSeries(uPlotInst); delSeries(uPlotInst);
addSeries(uPlotInst, series); addSeries(uPlotInst, series);
setBand(uPlotInst, series);
uPlotInst.redraw(); uPlotInst.redraw();
}, [series]); }, [series]);
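The setBand helper imported from utils/uplot above is what shades the area between the yhat_upper and yhat_lower forecast series; its implementation is not part of this diff. The sketch below is only an assumption of how such a helper could register uPlot bands via uPlot's addBand API, with the SeriesItem/ForecastType shapes simplified and the fill color chosen arbitrarily.

import uPlot from "uplot";

// Simplified shapes; the real SeriesItem/ForecastType live in the vmui types.
type ForecastKind = "yhat_upper" | "yhat_lower" | string;
type BandSeriesItem = uPlot.Series & { forecast?: ForecastKind };

// Assumes `series` excludes uPlot's x (timestamps) series, so chart indexes are offset by 1.
export const setBand = (u: uPlot, series: BandSeriesItem[]): void => {
  const lowerIdx = series.findIndex(item => item.forecast === "yhat_lower");
  if (lowerIdx === -1) return;
  series.forEach((s, i) => {
    if (s.forecast !== "yhat_upper") return;
    u.addBand({
      series: [i + 1, lowerIdx + 1], // [upper, lower] chart series indexes
      fill: "rgba(71, 71, 255, 0.15)",
    });
  });
};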

View file

@ -17,11 +17,14 @@ import ThemeControl from "../ThemeControl/ThemeControl";
import useDeviceDetect from "../../../hooks/useDeviceDetect"; import useDeviceDetect from "../../../hooks/useDeviceDetect";
import useBoolean from "../../../hooks/useBoolean"; import useBoolean from "../../../hooks/useBoolean";
import { getTenantIdFromUrl } from "../../../utils/tenants"; import { getTenantIdFromUrl } from "../../../utils/tenants";
import { AppType } from "../../../types/appType";
const title = "Settings"; const title = "Settings";
const { REACT_APP_TYPE } = process.env;
const isLogsApp = REACT_APP_TYPE === AppType.logs;
const GlobalSettings: FC = () => { const GlobalSettings: FC = () => {
const { REACT_APP_LOGS } = process.env;
const { isMobile } = useDeviceDetect(); const { isMobile } = useDeviceDetect();
const appModeEnable = getAppModeEnable(); const appModeEnable = getAppModeEnable();
@ -77,7 +80,7 @@ const GlobalSettings: FC = () => {
const controls = [ const controls = [
{ {
show: !appModeEnable && !REACT_APP_LOGS, show: !appModeEnable && !isLogsApp,
component: <ServerConfigurator component: <ServerConfigurator
stateServerUrl={stateServerUrl} stateServerUrl={stateServerUrl}
serverUrl={serverUrl} serverUrl={serverUrl}
@ -86,7 +89,7 @@ const GlobalSettings: FC = () => {
/> />
}, },
{ {
show: !REACT_APP_LOGS, show: !isLogsApp,
component: <LimitsConfigurator component: <LimitsConfigurator
limits={limits} limits={limits}
onChange={setLimits} onChange={setLimits}
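AppType is imported from ../../types/appType in this and several later hunks, but the new module itself is not visible in this diff. Based on the comparisons against REACT_APP_TYPE, it presumably looks roughly like the following; treat this as an assumption rather than the literal file contents.

// Presumed shape of src/types/appType.ts (not shown in this diff).
// Only the "logs" and "anomaly" members are confirmed by the hunks above;
// the member for the default vmui flavor is assumed.
export enum AppType {
  victoriametrics = "victoriametrics", // assumed default flavor
  logs = "logs",
  anomaly = "anomaly",
}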

View file

@ -16,9 +16,9 @@ export interface ServerConfiguratorProps {
} }
const fields: {label: string, type: DisplayType}[] = [ const fields: {label: string, type: DisplayType}[] = [
{ label: "Graph", type: "chart" }, { label: "Graph", type: DisplayType.chart },
{ label: "JSON", type: "code" }, { label: "JSON", type: DisplayType.code },
{ label: "Table", type: "table" } { label: "Table", type: DisplayType.table }
]; ];
const LimitsConfigurator: FC<ServerConfiguratorProps> = ({ limits, onChange , onEnter }) => { const LimitsConfigurator: FC<ServerConfiguratorProps> = ({ limits, onChange , onEnter }) => {

View file

@ -2,6 +2,11 @@ import React, { FC, useEffect, useState } from "preact/compat";
import { ErrorTypes } from "../../../../types"; import { ErrorTypes } from "../../../../types";
import TextField from "../../../Main/TextField/TextField"; import TextField from "../../../Main/TextField/TextField";
import { isValidHttpUrl } from "../../../../utils/url"; import { isValidHttpUrl } from "../../../../utils/url";
import Button from "../../../Main/Button/Button";
import { StorageIcon } from "../../../Main/Icons";
import Tooltip from "../../../Main/Tooltip/Tooltip";
import { getFromStorage, removeFromStorage, saveToStorage } from "../../../../utils/storage";
import useBoolean from "../../../../hooks/useBoolean";
export interface ServerConfiguratorProps { export interface ServerConfiguratorProps {
serverUrl: string serverUrl: string
@ -10,13 +15,21 @@ export interface ServerConfiguratorProps {
onEnter: () => void onEnter: () => void
} }
const tooltipSave = {
enable: "Enable to save the modified server URL to local storage, preventing reset upon page refresh.",
disable: "Disable to stop saving the server URL to local storage, reverting to the default URL on page refresh."
};
const ServerConfigurator: FC<ServerConfiguratorProps> = ({ const ServerConfigurator: FC<ServerConfiguratorProps> = ({
serverUrl, serverUrl,
stateServerUrl, stateServerUrl,
onChange , onChange ,
onEnter onEnter
}) => { }) => {
const {
value: enabledStorage,
toggle: handleToggleStorage,
} = useBoolean(!!getFromStorage("SERVER_URL"));
const [error, setError] = useState(""); const [error, setError] = useState("");
const onChangeServer = (val: string) => { const onChangeServer = (val: string) => {
@ -30,16 +43,39 @@ const ServerConfigurator: FC<ServerConfiguratorProps> = ({
if (!isValidHttpUrl(stateServerUrl)) setError(ErrorTypes.validServer); if (!isValidHttpUrl(stateServerUrl)) setError(ErrorTypes.validServer);
}, [stateServerUrl]); }, [stateServerUrl]);
useEffect(() => {
if (enabledStorage) {
saveToStorage("SERVER_URL", serverUrl);
} else {
removeFromStorage(["SERVER_URL"]);
}
}, [enabledStorage]);
return ( return (
<TextField <div>
autofocus <div className="vm-server-configurator__title">
label="Server URL" Server URL
value={serverUrl} </div>
error={error} <div className="vm-server-configurator-url">
onChange={onChangeServer} <TextField
onEnter={onEnter} autofocus
inputmode="url" value={serverUrl}
/> error={error}
onChange={onChangeServer}
onEnter={onEnter}
inputmode="url"
/>
<Tooltip title={enabledStorage ? tooltipSave.disable : tooltipSave.enable}>
<Button
className="vm-server-configurator-url__button"
variant="text"
color={enabledStorage ? "primary" : "gray"}
onClick={handleToggleStorage}
startIcon={<StorageIcon/>}
/>
</Tooltip>
</div>
</div>
); );
}; };
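The storage helpers used above (getFromStorage, saveToStorage, removeFromStorage) come from utils/storage and are not shown in this diff. A minimal sketch of thin localStorage wrappers matching the call sites above could look like this; the repository's actual implementation may differ, for example by restricting keys to a known union type.

// Hypothetical minimal localStorage wrappers matching the call sites above.
export const saveToStorage = (key: string, value: string): void => {
  window.localStorage.setItem(key, value);
};

export const getFromStorage = (key: string): string | null => {
  return window.localStorage.getItem(key);
};

export const removeFromStorage = (keys: string[]): void => {
  keys.forEach(key => window.localStorage.removeItem(key));
};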

View file

@ -21,6 +21,12 @@
&__input { &__input {
width: 100%; width: 100%;
&_flex {
display: flex;
align-items: flex-start;
gap: $padding-global;
}
} }
&__title { &__title {
@ -33,6 +39,16 @@
margin-bottom: $padding-global; margin-bottom: $padding-global;
} }
&-url {
display: flex;
align-items: flex-start;
gap: $padding-small;
&__button {
margin-top: $padding-small;
}
}
&-footer { &-footer {
display: flex; display: flex;
align-items: center; align-items: center;

View file

@ -6,12 +6,11 @@ import { useTimeDispatch, useTimeState } from "../../../state/time/TimeStateCont
import { AxisRange } from "../../../state/graph/reducer"; import { AxisRange } from "../../../state/graph/reducer";
import Spinner from "../../Main/Spinner/Spinner"; import Spinner from "../../Main/Spinner/Spinner";
import Alert from "../../Main/Alert/Alert"; import Alert from "../../Main/Alert/Alert";
import Button from "../../Main/Button/Button";
import "./style.scss"; import "./style.scss";
import classNames from "classnames"; import classNames from "classnames";
import useDeviceDetect from "../../../hooks/useDeviceDetect"; import useDeviceDetect from "../../../hooks/useDeviceDetect";
import { getDurationFromMilliseconds, getSecondsFromDuration, getStepFromDuration } from "../../../utils/time"; import { getDurationFromMilliseconds, getSecondsFromDuration, getStepFromDuration } from "../../../utils/time";
import useBoolean from "../../../hooks/useBoolean"; import WarningLimitSeries from "../../../pages/CustomPanel/WarningLimitSeries/WarningLimitSeries";
interface ExploreMetricItemGraphProps { interface ExploreMetricItemGraphProps {
name: string, name: string,
@ -40,12 +39,9 @@ const ExploreMetricItem: FC<ExploreMetricItemGraphProps> = ({
const stepSeconds = getSecondsFromDuration(customStep); const stepSeconds = getSecondsFromDuration(customStep);
const heatmapStep = getDurationFromMilliseconds(stepSeconds * 10 * 1000); const heatmapStep = getDurationFromMilliseconds(stepSeconds * 10 * 1000);
const [isHeatmap, setIsHeatmap] = useState(false); const [isHeatmap, setIsHeatmap] = useState(false);
const [showAllSeries, setShowAllSeries] = useState(false);
const step = isHeatmap && customStep === defaultStep ? heatmapStep : customStep; const step = isHeatmap && customStep === defaultStep ? heatmapStep : customStep;
const {
value: showAllSeries,
setTrue: handleShowAll,
} = useBoolean(false);
const query = useMemo(() => { const query = useMemo(() => {
const params = Object.entries({ job, instance }) const params = Object.entries({ job, instance })
@ -99,18 +95,13 @@ with (q = ${queryBase}) (
{isLoading && <Spinner />} {isLoading && <Spinner />}
{error && <Alert variant="error">{error}</Alert>} {error && <Alert variant="error">{error}</Alert>}
{queryErrors[0] && <Alert variant="error">{queryErrors[0]}</Alert>} {queryErrors[0] && <Alert variant="error">{queryErrors[0]}</Alert>}
{warning && <Alert variant="warning"> {warning && (
<div className="vm-explore-metrics-graph__warning"> <WarningLimitSeries
<p>{warning}</p> warning={warning}
<Button query={[query]}
color="warning" onChange={setShowAllSeries}
variant="outlined" />
onClick={handleShowAll} )}
>
Show all
</Button>
</div>
</Alert>}
{graphData && period && ( {graphData && period && (
<GraphView <GraphView
data={graphData} data={graphData}

File diff suppressed because one or more lines are too long

View file

@ -18,6 +18,7 @@ interface SelectProps {
clearable?: boolean clearable?: boolean
searchable?: boolean searchable?: boolean
autofocus?: boolean autofocus?: boolean
disabled?: boolean
onChange: (value: string) => void onChange: (value: string) => void
} }
@ -30,6 +31,7 @@ const Select: FC<SelectProps> = ({
clearable = false, clearable = false,
searchable = false, searchable = false,
autofocus, autofocus,
disabled,
onChange onChange
}) => { }) => {
const { isDarkTheme } = useAppState(); const { isDarkTheme } = useAppState();
@ -64,11 +66,12 @@ const Select: FC<SelectProps> = ({
}; };
const handleFocus = () => { const handleFocus = () => {
if (disabled) return;
setOpenList(true); setOpenList(true);
}; };
const handleToggleList = (e: MouseEvent<HTMLDivElement>) => { const handleToggleList = (e: MouseEvent<HTMLDivElement>) => {
if (e.target instanceof HTMLInputElement) return; if (e.target instanceof HTMLInputElement || disabled) return;
setOpenList(prev => !prev); setOpenList(prev => !prev);
}; };
@ -112,7 +115,8 @@ const Select: FC<SelectProps> = ({
<div <div
className={classNames({ className={classNames({
"vm-select": true, "vm-select": true,
"vm-select_dark": isDarkTheme "vm-select_dark": isDarkTheme,
"vm-select_disabled": disabled
})} })}
> >
<div <div

View file

@ -126,4 +126,18 @@
max-height: calc(($vh * 100) - 70px); max-height: calc(($vh * 100) - 70px);
} }
} }
&_disabled {
* {
cursor: not-allowed;
}
.vm-select-input {
&-content {
input {
color: $color-text-disabled;
}
}
}
}
} }

View file

@ -24,6 +24,7 @@ import { promValueToNumber } from "../../../utils/metric";
import useDeviceDetect from "../../../hooks/useDeviceDetect"; import useDeviceDetect from "../../../hooks/useDeviceDetect";
import useElementSize from "../../../hooks/useElementSize"; import useElementSize from "../../../hooks/useElementSize";
import { ChartTooltipProps } from "../../Chart/ChartTooltip/ChartTooltip"; import { ChartTooltipProps } from "../../Chart/ChartTooltip/ChartTooltip";
import LegendAnomaly from "../../Chart/Line/LegendAnomaly/LegendAnomaly";
export interface GraphViewProps { export interface GraphViewProps {
data?: MetricResult[]; data?: MetricResult[];
@ -34,11 +35,12 @@ export interface GraphViewProps {
yaxis: YaxisState; yaxis: YaxisState;
unit?: string; unit?: string;
showLegend?: boolean; showLegend?: boolean;
setYaxisLimits: (val: AxisRange) => void setYaxisLimits: (val: AxisRange) => void;
setPeriod: ({ from, to }: { from: Date, to: Date }) => void setPeriod: ({ from, to }: { from: Date, to: Date }) => void;
fullWidth?: boolean fullWidth?: boolean;
height?: number height?: number;
isHistogram?: boolean isHistogram?: boolean;
anomalyView?: boolean;
} }
const GraphView: FC<GraphViewProps> = ({ const GraphView: FC<GraphViewProps> = ({
@ -54,7 +56,8 @@ const GraphView: FC<GraphViewProps> = ({
alias = [], alias = [],
fullWidth = true, fullWidth = true,
height, height,
isHistogram isHistogram,
anomalyView,
}) => { }) => {
const { isMobile } = useDeviceDetect(); const { isMobile } = useDeviceDetect();
const { timezone } = useTimeState(); const { timezone } = useTimeState();
@ -69,8 +72,8 @@ const GraphView: FC<GraphViewProps> = ({
const [legendValue, setLegendValue] = useState<ChartTooltipProps | null>(null); const [legendValue, setLegendValue] = useState<ChartTooltipProps | null>(null);
const getSeriesItem = useMemo(() => { const getSeriesItem = useMemo(() => {
return getSeriesItemContext(data, hideSeries, alias); return getSeriesItemContext(data, hideSeries, alias, anomalyView);
}, [data, hideSeries, alias]); }, [data, hideSeries, alias, anomalyView]);
const setLimitsYaxis = (values: { [key: string]: number[] }) => { const setLimitsYaxis = (values: { [key: string]: number[] }) => {
const limits = getLimitsYAxis(values, !isHistogram); const limits = getLimitsYAxis(values, !isHistogram);
@ -148,7 +151,7 @@ const GraphView: FC<GraphViewProps> = ({
const range = getMinMaxBuffer(getMinFromArray(resultAsNumber), getMaxFromArray(resultAsNumber)); const range = getMinMaxBuffer(getMinFromArray(resultAsNumber), getMaxFromArray(resultAsNumber));
const rangeStep = Math.abs(range[1] - range[0]); const rangeStep = Math.abs(range[1] - range[0]);
return (avg > rangeStep * 1e10) ? results.map(() => avg) : results; return (avg > rangeStep * 1e10) && !anomalyView ? results.map(() => avg) : results;
}); });
timeDataSeries.unshift(timeSeries); timeDataSeries.unshift(timeSeries);
setLimitsYaxis(tempValues); setLimitsYaxis(tempValues);
@ -192,6 +195,7 @@ const GraphView: FC<GraphViewProps> = ({
setPeriod={setPeriod} setPeriod={setPeriod}
layoutSize={containerSize} layoutSize={containerSize}
height={height} height={height}
anomalyView={anomalyView}
/> />
)} )}
{isHistogram && ( {isHistogram && (
@ -206,7 +210,7 @@ const GraphView: FC<GraphViewProps> = ({
onChangeLegend={setLegendValue} onChangeLegend={setLegendValue}
/> />
)} )}
{!isHistogram && showLegend && ( {!isHistogram && !anomalyView && showLegend && (
<Legend <Legend
labels={legend} labels={legend}
query={query} query={query}
@ -221,6 +225,11 @@ const GraphView: FC<GraphViewProps> = ({
legendValue={legendValue} legendValue={legendValue}
/> />
)} )}
{anomalyView && showLegend && (
<LegendAnomaly
series={series as SeriesItem[]}
/>
)}
</div> </div>
); );
}; };

View file

@ -7,6 +7,46 @@ export interface NavigationItem {
submenu?: NavigationItem[], submenu?: NavigationItem[],
} }
const explore = {
label: "Explore",
submenu: [
{
label: routerOptions[router.metrics].title,
value: router.metrics,
},
{
label: routerOptions[router.cardinality].title,
value: router.cardinality,
},
{
label: routerOptions[router.topQueries].title,
value: router.topQueries,
},
{
label: routerOptions[router.activeQueries].title,
value: router.activeQueries,
},
]
};
const tools = {
label: "Tools",
submenu: [
{
label: routerOptions[router.trace].title,
value: router.trace,
},
{
label: routerOptions[router.withTemplate].title,
value: router.withTemplate,
},
{
label: routerOptions[router.relabel].title,
value: router.relabel,
},
]
};
export const logsNavigation: NavigationItem[] = [ export const logsNavigation: NavigationItem[] = [
{ {
label: routerOptions[router.logs].title, label: routerOptions[router.logs].title,
@ -14,47 +54,22 @@ export const logsNavigation: NavigationItem[] = [
}, },
]; ];
export const anomalyNavigation: NavigationItem[] = [
{
label: routerOptions[router.anomaly].title,
value: router.home,
},
{
label: routerOptions[router.home].title,
value: router.query,
}
];
export const defaultNavigation: NavigationItem[] = [ export const defaultNavigation: NavigationItem[] = [
{ {
label: routerOptions[router.home].title, label: routerOptions[router.home].title,
value: router.home, value: router.home,
}, },
{ explore,
label: "Explore", tools,
submenu: [
{
label: routerOptions[router.metrics].title,
value: router.metrics,
},
{
label: routerOptions[router.cardinality].title,
value: router.cardinality,
},
{
label: routerOptions[router.topQueries].title,
value: router.topQueries,
},
{
label: routerOptions[router.activeQueries].title,
value: router.activeQueries,
},
]
},
{
label: "Tools",
submenu: [
{
label: routerOptions[router.trace].title,
value: router.trace,
},
{
label: routerOptions[router.withTemplate].title,
value: router.withTemplate,
},
{
label: routerOptions[router.relabel].title,
value: router.relabel,
},
]
}
]; ];

View file

@ -14,9 +14,10 @@ interface LineTooltipHook {
metrics: MetricResult[]; metrics: MetricResult[];
series: uPlotSeries[]; series: uPlotSeries[];
unit?: string; unit?: string;
anomalyView?: boolean;
} }
const useLineTooltip = ({ u, metrics, series, unit }: LineTooltipHook) => { const useLineTooltip = ({ u, metrics, series, unit, anomalyView }: LineTooltipHook) => {
const [showTooltip, setShowTooltip] = useState(false); const [showTooltip, setShowTooltip] = useState(false);
const [tooltipIdx, setTooltipIdx] = useState({ seriesIdx: -1, dataIdx: -1 }); const [tooltipIdx, setTooltipIdx] = useState({ seriesIdx: -1, dataIdx: -1 });
const [stickyTooltips, setStickyToolTips] = useState<ChartTooltipProps[]>([]); const [stickyTooltips, setStickyToolTips] = useState<ChartTooltipProps[]>([]);
@ -60,14 +61,14 @@ const useLineTooltip = ({ u, metrics, series, unit }: LineTooltipHook) => {
point, point,
u: u, u: u,
id: `${seriesIdx}_${dataIdx}`, id: `${seriesIdx}_${dataIdx}`,
title: groups.size > 1 ? `Query ${group}` : "", title: groups.size > 1 && !anomalyView ? `Query ${group}` : "",
dates: [date ? dayjs(date * 1000).tz().format(DATE_FULL_TIMEZONE_FORMAT) : "-"], dates: [date ? dayjs(date * 1000).tz().format(DATE_FULL_TIMEZONE_FORMAT) : "-"],
value: formatPrettyNumber(value, min, max), value: formatPrettyNumber(value, min, max),
info: getMetricName(metricItem), info: getMetricName(metricItem),
statsFormatted: seriesItem?.statsFormatted, statsFormatted: seriesItem?.statsFormatted,
marker: `${seriesItem?.stroke}`, marker: `${seriesItem?.stroke}`,
}; };
}, [u, tooltipIdx, metrics, series, unit]); }, [u, tooltipIdx, metrics, series, unit, anomalyView]);
const handleClick = useCallback(() => { const handleClick = useCallback(() => {
if (!showTooltip) return; if (!showTooltip) return;

View file

@ -4,9 +4,8 @@ import { getQueryRangeUrl, getQueryUrl } from "../api/query-range";
import { useAppState } from "../state/common/StateContext"; import { useAppState } from "../state/common/StateContext";
import { InstantMetricResult, MetricBase, MetricResult, QueryStats } from "../api/types"; import { InstantMetricResult, MetricBase, MetricResult, QueryStats } from "../api/types";
import { isValidHttpUrl } from "../utils/url"; import { isValidHttpUrl } from "../utils/url";
import { ErrorTypes, SeriesLimits } from "../types"; import { DisplayType, ErrorTypes, SeriesLimits } from "../types";
import debounce from "lodash.debounce"; import debounce from "lodash.debounce";
import { DisplayType } from "../pages/CustomPanel/DisplayTypeSwitch";
import Trace from "../components/TraceQuery/Trace"; import Trace from "../components/TraceQuery/Trace";
import { useQueryState } from "../state/query/QueryStateContext"; import { useQueryState } from "../state/query/QueryStateContext";
import { useTimeState } from "../state/time/TimeStateContext"; import { useTimeState } from "../state/time/TimeStateContext";
@ -90,7 +89,7 @@ export const useFetchQuery = ({
const controller = new AbortController(); const controller = new AbortController();
setFetchQueue([...fetchQueue, controller]); setFetchQueue([...fetchQueue, controller]);
try { try {
const isDisplayChart = displayType === "chart"; const isDisplayChart = displayType === DisplayType.chart;
const defaultLimit = showAllSeries ? Infinity : (+stateSeriesLimits[displayType] || Infinity); const defaultLimit = showAllSeries ? Infinity : (+stateSeriesLimits[displayType] || Infinity);
let seriesLimit = defaultLimit; let seriesLimit = defaultLimit;
const tempData: MetricBase[] = []; const tempData: MetricBase[] = [];
@ -165,7 +164,7 @@ export const useFetchQuery = ({
setQueryErrors([]); setQueryErrors([]);
setQueryStats([]); setQueryStats([]);
const expr = predefinedQuery ?? query; const expr = predefinedQuery ?? query;
const displayChart = (display || displayType) === "chart"; const displayChart = (display || displayType) === DisplayType.chart;
if (!period) return; if (!period) return;
if (!serverUrl) { if (!serverUrl) {
setError(ErrorTypes.emptyServer); setError(ErrorTypes.emptyServer);

View file

@ -0,0 +1,59 @@
import Header from "../Header/Header";
import React, { FC, useEffect } from "preact/compat";
import { Outlet, useLocation, useSearchParams } from "react-router-dom";
import qs from "qs";
import "../MainLayout/style.scss";
import { getAppModeEnable } from "../../utils/app-mode";
import classNames from "classnames";
import Footer from "../Footer/Footer";
import { routerOptions } from "../../router";
import { useFetchDashboards } from "../../pages/PredefinedPanels/hooks/useFetchDashboards";
import useDeviceDetect from "../../hooks/useDeviceDetect";
import ControlsAnomalyLayout from "./ControlsAnomalyLayout";
const AnomalyLayout: FC = () => {
const appModeEnable = getAppModeEnable();
const { isMobile } = useDeviceDetect();
const { pathname } = useLocation();
const [searchParams, setSearchParams] = useSearchParams();
useFetchDashboards();
const setDocumentTitle = () => {
const defaultTitle = "vmui for vmanomaly";
const routeTitle = routerOptions[pathname]?.title;
document.title = routeTitle ? `${routeTitle} - ${defaultTitle}` : defaultTitle;
};
// support old links that still pass state via search params
const redirectSearchToHashParams = () => {
const { search, href } = window.location;
if (search) {
const query = qs.parse(search, { ignoreQueryPrefix: true });
Object.entries(query).forEach(([key, value]) => searchParams.set(key, value as string));
setSearchParams(searchParams);
window.location.search = "";
}
const newHref = href.replace(/\/\?#\//, "/#/");
if (newHref !== href) window.location.replace(newHref);
};
useEffect(setDocumentTitle, [pathname]);
useEffect(redirectSearchToHashParams, []);
return <section className="vm-container">
<Header controlsComponent={ControlsAnomalyLayout}/>
<div
className={classNames({
"vm-container-body": true,
"vm-container-body_mobile": isMobile,
"vm-container-body_app": appModeEnable
})}
>
<Outlet/>
</div>
{!appModeEnable && <Footer/>}
</section>;
};
export default AnomalyLayout;
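To make the redirectSearchToHashParams behaviour above concrete, here is a small self-contained illustration of the URL rewrite it performs on a legacy link; the example URL is hypothetical.

// Illustrative only: mirrors the rewrites done by redirectSearchToHashParams.
// Query-string parameters (if any) are first copied into the hash router's search params,
// then the leftover "/?#/" prefix from old links is collapsed to "/#/".
const legacy = "https://example.com/vmui/?#/";
const rewritten = legacy.replace(/\/\?#\//, "/#/");
console.log(rewritten); // "https://example.com/vmui/#/"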

View file

@ -0,0 +1,38 @@
import React, { FC } from "preact/compat";
import classNames from "classnames";
import TenantsConfiguration
from "../../components/Configurators/GlobalSettings/TenantsConfiguration/TenantsConfiguration";
import StepConfigurator from "../../components/Configurators/StepConfigurator/StepConfigurator";
import { TimeSelector } from "../../components/Configurators/TimeRangeSettings/TimeSelector/TimeSelector";
import CardinalityDatePicker from "../../components/Configurators/CardinalityDatePicker/CardinalityDatePicker";
import { ExecutionControls } from "../../components/Configurators/TimeRangeSettings/ExecutionControls/ExecutionControls";
import GlobalSettings from "../../components/Configurators/GlobalSettings/GlobalSettings";
import ShortcutKeys from "../../components/Main/ShortcutKeys/ShortcutKeys";
import { ControlsProps } from "../Header/HeaderControls/HeaderControls";
const ControlsAnomalyLayout: FC<ControlsProps> = ({
displaySidebar,
isMobile,
headerSetup,
accountIds
}) => {
return (
<div
className={classNames({
"vm-header-controls": true,
"vm-header-controls_mobile": isMobile,
})}
>
{headerSetup?.tenant && <TenantsConfiguration accountIds={accountIds || []}/>}
{headerSetup?.stepControl && <StepConfigurator/>}
{headerSetup?.timeSelector && <TimeSelector/>}
{headerSetup?.cardinalityDatePicker && <CardinalityDatePicker/>}
{headerSetup?.executionControls && <ExecutionControls/>}
<GlobalSettings/>
{!displaySidebar && <ShortcutKeys/>}
</div>
);
};
export default ControlsAnomalyLayout;

View file

@ -2,7 +2,7 @@ import React, { FC, useMemo } from "preact/compat";
import { useNavigate } from "react-router-dom"; import { useNavigate } from "react-router-dom";
import router from "../../router"; import router from "../../router";
import { getAppModeEnable, getAppModeParams } from "../../utils/app-mode"; import { getAppModeEnable, getAppModeParams } from "../../utils/app-mode";
import { LogoIcon, LogoLogsIcon } from "../../components/Main/Icons"; import { LogoAnomalyIcon, LogoIcon, LogoLogsIcon } from "../../components/Main/Icons";
import { getCssVariable } from "../../utils/theme"; import { getCssVariable } from "../../utils/theme";
import "./style.scss"; import "./style.scss";
import classNames from "classnames"; import classNames from "classnames";
@ -13,13 +13,26 @@ import HeaderControls, { ControlsProps } from "./HeaderControls/HeaderControls";
import useDeviceDetect from "../../hooks/useDeviceDetect"; import useDeviceDetect from "../../hooks/useDeviceDetect";
import useWindowSize from "../../hooks/useWindowSize"; import useWindowSize from "../../hooks/useWindowSize";
import { ComponentType } from "react"; import { ComponentType } from "react";
import { AppType } from "../../types/appType";
export interface HeaderProps { export interface HeaderProps {
controlsComponent: ComponentType<ControlsProps> controlsComponent: ComponentType<ControlsProps>
} }
const { REACT_APP_TYPE } = process.env;
const isCustomApp = REACT_APP_TYPE === AppType.logs || REACT_APP_TYPE === AppType.anomaly;
const Logo = () => {
switch (REACT_APP_TYPE) {
case AppType.logs:
return <LogoLogsIcon/>;
case AppType.anomaly:
return <LogoAnomalyIcon/>;
default:
return <LogoIcon/>;
}
};
const Header: FC<HeaderProps> = ({ controlsComponent }) => { const Header: FC<HeaderProps> = ({ controlsComponent }) => {
const { REACT_APP_LOGS } = process.env;
const { isMobile } = useDeviceDetect(); const { isMobile } = useDeviceDetect();
const windowSize = useWindowSize(); const windowSize = useWindowSize();
@ -70,12 +83,12 @@ const Header: FC<HeaderProps> = ({ controlsComponent }) => {
<div <div
className={classNames({ className={classNames({
"vm-header-logo": true, "vm-header-logo": true,
"vm-header-logo_logs": REACT_APP_LOGS "vm-header-logo_logs": isCustomApp
})} })}
onClick={onClickLogo} onClick={onClickLogo}
style={{ color }} style={{ color }}
> >
{REACT_APP_LOGS ? <LogoLogsIcon/> : <LogoIcon/>} {<Logo/>}
</div> </div>
)} )}
<HeaderNav <HeaderNav
@ -89,12 +102,12 @@ const Header: FC<HeaderProps> = ({ controlsComponent }) => {
className={classNames({ className={classNames({
"vm-header-logo": true, "vm-header-logo": true,
"vm-header-logo_mobile": true, "vm-header-logo_mobile": true,
"vm-header-logo_logs": REACT_APP_LOGS "vm-header-logo_logs": isCustomApp
})} })}
onClick={onClickLogo} onClick={onClickLogo}
style={{ color }} style={{ color }}
> >
{REACT_APP_LOGS ? <LogoLogsIcon/> : <LogoIcon/>} {<Logo/>}
</div> </div>
)} )}
<HeaderControls <HeaderControls

View file

@ -8,7 +8,8 @@ import "./style.scss";
import NavItem from "./NavItem"; import NavItem from "./NavItem";
import NavSubItem from "./NavSubItem"; import NavSubItem from "./NavSubItem";
import classNames from "classnames"; import classNames from "classnames";
import { defaultNavigation, logsNavigation } from "../../../constants/navigation"; import { anomalyNavigation, defaultNavigation, logsNavigation } from "../../../constants/navigation";
import { AppType } from "../../../types/appType";
interface HeaderNavProps { interface HeaderNavProps {
color: string color: string
@ -17,21 +18,29 @@ interface HeaderNavProps {
} }
const HeaderNav: FC<HeaderNavProps> = ({ color, background, direction }) => { const HeaderNav: FC<HeaderNavProps> = ({ color, background, direction }) => {
const { REACT_APP_LOGS } = process.env;
const appModeEnable = getAppModeEnable(); const appModeEnable = getAppModeEnable();
const { dashboardsSettings } = useDashboardsState(); const { dashboardsSettings } = useDashboardsState();
const { pathname } = useLocation(); const { pathname } = useLocation();
const [activeMenu, setActiveMenu] = useState(pathname); const [activeMenu, setActiveMenu] = useState(pathname);
const menu = useMemo(() => REACT_APP_LOGS ? logsNavigation : ([ const menu = useMemo(() => {
...defaultNavigation, switch (process.env.REACT_APP_TYPE) {
{ case AppType.logs:
label: routerOptions[router.dashboards].title, return logsNavigation;
value: router.dashboards, case AppType.anomaly:
hide: appModeEnable || !dashboardsSettings.length, return anomalyNavigation;
default:
return ([
...defaultNavigation,
{
label: routerOptions[router.dashboards].title,
value: router.dashboards,
hide: appModeEnable || !dashboardsSettings.length,
}
].filter(r => !r.hide));
} }
].filter(r => !r.hide)), [appModeEnable, dashboardsSettings]); }, [appModeEnable, dashboardsSettings]);
useEffect(() => { useEffect(() => {
setActiveMenu(pathname); setActiveMenu(pathname);

View file

@ -8,17 +8,20 @@ import MenuBurger from "../../../components/Main/MenuBurger/MenuBurger";
import useDeviceDetect from "../../../hooks/useDeviceDetect"; import useDeviceDetect from "../../../hooks/useDeviceDetect";
import "./style.scss"; import "./style.scss";
import useBoolean from "../../../hooks/useBoolean"; import useBoolean from "../../../hooks/useBoolean";
import { AppType } from "../../../types/appType";
interface SidebarHeaderProps { interface SidebarHeaderProps {
background: string background: string
color: string color: string
} }
const { REACT_APP_TYPE } = process.env;
const isLogsApp = REACT_APP_TYPE === AppType.logs;
const SidebarHeader: FC<SidebarHeaderProps> = ({ const SidebarHeader: FC<SidebarHeaderProps> = ({
background, background,
color, color,
}) => { }) => {
const { REACT_APP_LOGS } = process.env;
const { pathname } = useLocation(); const { pathname } = useLocation();
const { isMobile } = useDeviceDetect(); const { isMobile } = useDeviceDetect();
@ -61,7 +64,7 @@ const SidebarHeader: FC<SidebarHeaderProps> = ({
/> />
</div> </div>
<div className="vm-header-sidebar-menu-settings"> <div className="vm-header-sidebar-menu-settings">
{!isMobile && !REACT_APP_LOGS && <ShortcutKeys showTitle={true}/>} {!isMobile && !isLogsApp && <ShortcutKeys showTitle={true}/>}
</div> </div>
</div> </div>
</div>; </div>;

View file

@ -1,7 +1,7 @@
import Header from "../Header/Header"; import Header from "../Header/Header";
import React, { FC, useEffect } from "preact/compat"; import React, { FC, useEffect } from "preact/compat";
import { Outlet, useLocation } from "react-router-dom"; import { Outlet, useLocation } from "react-router-dom";
import "./style.scss"; import "../MainLayout/style.scss";
import { getAppModeEnable } from "../../utils/app-mode"; import { getAppModeEnable } from "../../utils/app-mode";
import classNames from "classnames"; import classNames from "classnames";
import Footer from "../Footer/Footer"; import Footer from "../Footer/Footer";

View file

@ -1,27 +0,0 @@
@use "src/styles/variables" as *;
.vm-container {
display: flex;
flex-direction: column;
min-height: calc(($vh * 100) - var(--scrollbar-height));
&-body {
flex-grow: 1;
min-height: 100%;
padding: $padding-medium;
background-color: $color-background-body;
&_mobile {
padding: $padding-small 0 0;
}
@media (max-width: 768px) {
padding: $padding-small 0 0;
}
&_app {
padding: $padding-small 0;
background-color: transparent;
}
}
}

View file

@ -6,13 +6,12 @@ import "./style.scss";
import { getAppModeEnable } from "../../utils/app-mode"; import { getAppModeEnable } from "../../utils/app-mode";
import classNames from "classnames"; import classNames from "classnames";
import Footer from "../Footer/Footer"; import Footer from "../Footer/Footer";
import router, { routerOptions } from "../../router"; import { routerOptions } from "../../router";
import { useFetchDashboards } from "../../pages/PredefinedPanels/hooks/useFetchDashboards"; import { useFetchDashboards } from "../../pages/PredefinedPanels/hooks/useFetchDashboards";
import useDeviceDetect from "../../hooks/useDeviceDetect"; import useDeviceDetect from "../../hooks/useDeviceDetect";
import ControlsMainLayout from "./ControlsMainLayout"; import ControlsMainLayout from "./ControlsMainLayout";
const MainLayout: FC = () => { const MainLayout: FC = () => {
const { REACT_APP_LOGS } = process.env;
const appModeEnable = getAppModeEnable(); const appModeEnable = getAppModeEnable();
const { isMobile } = useDeviceDetect(); const { isMobile } = useDeviceDetect();
const { pathname } = useLocation(); const { pathname } = useLocation();
@ -22,7 +21,7 @@ const MainLayout: FC = () => {
const setDocumentTitle = () => { const setDocumentTitle = () => {
const defaultTitle = "vmui"; const defaultTitle = "vmui";
const routeTitle = REACT_APP_LOGS ? routerOptions[router.logs]?.title : routerOptions[pathname]?.title; const routeTitle = routerOptions[pathname]?.title;
document.title = routeTitle ? `${routeTitle} - ${defaultTitle}` : defaultTitle; document.title = routeTitle ? `${routeTitle} - ${defaultTitle}` : defaultTitle;
}; };

View file

@ -112,7 +112,7 @@ const CardinalityConfigurator: FC<CardinalityTotalsProps> = ({ isPrometheus, isC
{isCluster && {isCluster &&
<div className="vm-cardinality-configurator-bottom-helpful"> <div className="vm-cardinality-configurator-bottom-helpful">
<Hyperlink <Hyperlink
href="https://docs.victoriametrics.com/#cardinality-explorer-statistic-inaccurancy" href="https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cardinality-explorer-statistic-inaccuracy"
withIcon={true} withIcon={true}
> >
<WikiIcon/> <WikiIcon/>

View file

@ -0,0 +1,72 @@
import React, { FC } from "react";
import GraphView from "../../../components/Views/GraphView/GraphView";
import GraphTips from "../../../components/Chart/GraphTips/GraphTips";
import GraphSettings from "../../../components/Configurators/GraphSettings/GraphSettings";
import { AxisRange } from "../../../state/graph/reducer";
import { useTimeDispatch, useTimeState } from "../../../state/time/TimeStateContext";
import { useGraphDispatch, useGraphState } from "../../../state/graph/GraphStateContext";
import useDeviceDetect from "../../../hooks/useDeviceDetect";
import { useQueryState } from "../../../state/query/QueryStateContext";
import { MetricResult } from "../../../api/types";
import { createPortal } from "preact/compat";
type Props = {
isHistogram: boolean;
graphData: MetricResult[];
controlsRef: React.RefObject<HTMLDivElement>;
anomalyView?: boolean;
}
const GraphTab: FC<Props> = ({ isHistogram, graphData, controlsRef, anomalyView }) => {
const { isMobile } = useDeviceDetect();
const { customStep, yaxis } = useGraphState();
const { period } = useTimeState();
const { query } = useQueryState();
const timeDispatch = useTimeDispatch();
const graphDispatch = useGraphDispatch();
const setYaxisLimits = (limits: AxisRange) => {
graphDispatch({ type: "SET_YAXIS_LIMITS", payload: limits });
};
const toggleEnableLimits = () => {
graphDispatch({ type: "TOGGLE_ENABLE_YAXIS_LIMITS" });
};
const setPeriod = ({ from, to }: {from: Date, to: Date}) => {
timeDispatch({ type: "SET_PERIOD", payload: { from, to } });
};
const controls = (
<div className="vm-custom-panel-body-header__graph-controls">
<GraphTips/>
<GraphSettings
yaxis={yaxis}
setYaxisLimits={setYaxisLimits}
toggleEnableLimits={toggleEnableLimits}
/>
</div>
);
return (
<>
{controlsRef.current && createPortal(controls, controlsRef.current)}
<GraphView
data={graphData}
period={period}
customStep={customStep}
query={query}
yaxis={yaxis}
setYaxisLimits={setYaxisLimits}
setPeriod={setPeriod}
height={isMobile ? window.innerHeight * 0.5 : 500}
isHistogram={isHistogram}
anomalyView={anomalyView}
/>
</>
);
};
export default GraphTab;

View file

@ -0,0 +1,47 @@
import React, { FC } from "react";
import { InstantMetricResult } from "../../../api/types";
import { createPortal, useMemo, useState } from "preact/compat";
import TableView from "../../../components/Views/TableView/TableView";
import TableSettings from "../../../components/Table/TableSettings/TableSettings";
import { getColumns } from "../../../hooks/useSortedCategories";
import { useCustomPanelDispatch, useCustomPanelState } from "../../../state/customPanel/CustomPanelStateContext";
type Props = {
liveData: InstantMetricResult[];
controlsRef: React.RefObject<HTMLDivElement>;
}
const TableTab: FC<Props> = ({ liveData, controlsRef }) => {
const { tableCompact } = useCustomPanelState();
const customPanelDispatch = useCustomPanelDispatch();
const [displayColumns, setDisplayColumns] = useState<string[]>();
const columns = useMemo(() => getColumns(liveData || []).map(c => c.key), [liveData]);
const toggleTableCompact = () => {
customPanelDispatch({ type: "TOGGLE_TABLE_COMPACT" });
};
const controls = (
<TableSettings
columns={columns}
defaultColumns={displayColumns}
onChangeColumns={setDisplayColumns}
tableCompact={tableCompact}
toggleTableCompact={toggleTableCompact}
/>
);
return (
<>
{controlsRef.current && createPortal(controls, controlsRef.current)}
<TableView
data={liveData}
displayColumns={displayColumns}
/>
</>
);
};
export default TableTab;

View file

@ -0,0 +1,45 @@
import React, { FC, RefObject } from "react";
import GraphTab from "./GraphTab";
import JsonView from "../../../components/Views/JsonView/JsonView";
import TableTab from "./TableTab";
import { InstantMetricResult, MetricResult } from "../../../api/types";
import { DisplayType } from "../../../types";
type Props = {
graphData?: MetricResult[];
liveData?: InstantMetricResult[];
isHistogram: boolean;
displayType: DisplayType;
controlsRef: RefObject<HTMLDivElement>;
}
const CustomPanelTabs: FC<Props> = ({
graphData,
liveData,
isHistogram,
displayType,
controlsRef
}) => {
if (displayType === DisplayType.code && liveData) {
return <JsonView data={liveData} />;
}
if (displayType === DisplayType.table && liveData) {
return <TableTab
liveData={liveData}
controlsRef={controlsRef}
/>;
}
if (displayType === DisplayType.chart && graphData) {
return <GraphTab
graphData={graphData}
isHistogram={isHistogram}
controlsRef={controlsRef}
/>;
}
return null;
};
export default CustomPanelTabs;

View file

@ -0,0 +1,43 @@
import { useCustomPanelState } from "../../../state/customPanel/CustomPanelStateContext";
import TracingsView from "../../../components/TraceQuery/TracingsView";
import React, { FC, useEffect, useState } from "preact/compat";
import Trace from "../../../components/TraceQuery/Trace";
import { DisplayType } from "../../../types";
type Props = {
traces?: Trace[];
displayType: DisplayType;
}
const CustomPanelTraces: FC<Props> = ({ traces, displayType }) => {
const { isTracingEnabled } = useCustomPanelState();
const [tracesState, setTracesState] = useState<Trace[]>([]);
const handleTraceDelete = (trace: Trace) => {
const updatedTraces = tracesState.filter((data) => data.idValue !== trace.idValue);
setTracesState([...updatedTraces]);
};
useEffect(() => {
if (traces) {
setTracesState([...tracesState, ...traces]);
}
}, [traces]);
useEffect(() => {
setTracesState([]);
}, [displayType]);
return <>
{isTracingEnabled && (
<div className="vm-custom-panel__trace">
<TracingsView
traces={tracesState}
onDeleteClick={handleTraceDelete}
/>
</div>
)}
</>;
};
export default CustomPanelTraces;

View file

@ -2,8 +2,7 @@ import React, { FC } from "preact/compat";
import { useCustomPanelDispatch, useCustomPanelState } from "../../state/customPanel/CustomPanelStateContext"; import { useCustomPanelDispatch, useCustomPanelState } from "../../state/customPanel/CustomPanelStateContext";
import { ChartIcon, CodeIcon, TableIcon } from "../../components/Main/Icons"; import { ChartIcon, CodeIcon, TableIcon } from "../../components/Main/Icons";
import Tabs from "../../components/Main/Tabs/Tabs"; import Tabs from "../../components/Main/Tabs/Tabs";
import { DisplayType } from "../../types";
export type DisplayType = "table" | "chart" | "code";
type DisplayTab = { type DisplayTab = {
value: DisplayType value: DisplayType
@ -13,9 +12,9 @@ type DisplayTab = {
} }
export const displayTypeTabs: DisplayTab[] = [ export const displayTypeTabs: DisplayTab[] = [
{ value: "chart", icon: <ChartIcon/>, label: "Graph", prometheusCode: 0 }, { value: DisplayType.chart, icon: <ChartIcon/>, label: "Graph", prometheusCode: 0 },
{ value: "code", icon: <CodeIcon/>, label: "JSON", prometheusCode: 3 }, { value: DisplayType.code, icon: <CodeIcon/>, label: "JSON", prometheusCode: 3 },
{ value: "table", icon: <TableIcon/>, label: "Table", prometheusCode: 1 } { value: DisplayType.table, icon: <TableIcon/>, label: "Table", prometheusCode: 1 }
]; ];
export const DisplayTypeSwitch: FC = () => { export const DisplayTypeSwitch: FC = () => {
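The hunk above replaces the local DisplayType string union with an enum imported from ../../types. The enum itself is not shown in this section; given the removed union and the members referenced here and in useFetchQuery, it is presumably a string enum along these lines (an assumption, not the literal file contents).

// Presumed addition to src/types, replacing the removed
// `export type DisplayType = "table" | "chart" | "code"` union.
export enum DisplayType {
  table = "table",
  chart = "chart",
  code = "code",
}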

Some files were not shown because too many files have changed in this diff