Merge branch 'master' into streaming-aggregation-ui

commit 26144f6d87
Author: Alexander Marshalov, 2024-01-18 10:33:14 +01:00 (committed by GitHub)
479 changed files with 25565 additions and 17399 deletions


@ -60,7 +60,7 @@ body:
For VictoriaMetrics health-state issues please provide full-length screenshots
of Grafana dashboards if possible:
* [Grafana dashboard for single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229-victoriametrics/)
* [Grafana dashboard for single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229-victoriametrics-single-node/)
* [Grafana dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176-victoriametrics-cluster/)
See how to setup monitoring here:


@ -175,7 +175,7 @@
END OF TERMS AND CONDITIONS
Copyright 2019-2023 VictoriaMetrics, Inc.
Copyright 2019-2024 VictoriaMetrics, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.


@ -22,17 +22,17 @@ The cluster version of VictoriaMetrics is available [here](https://docs.victoria
Learn more about [key concepts](https://docs.victoriametrics.com/keyConcepts.html) of VictoriaMetrics and follow the
[quick start guide](https://docs.victoriametrics.com/Quick-Start.html) for a better experience.
There is also user-friendly database for logs - [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/).
There is also a user-friendly database for logs - [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/).
If you have questions about VictoriaMetrics, then feel free asking them at [VictoriaMetrics community Slack chat](https://slack.victoriametrics.com/).
If you have questions about VictoriaMetrics, then feel free asking them in the [VictoriaMetrics community Slack chat](https://slack.victoriametrics.com/).
[Contact us](mailto:info@victoriametrics.com) if you need enterprise support for VictoriaMetrics.
See [features available in enterprise package](https://docs.victoriametrics.com/enterprise.html).
Enterprise binaries can be downloaded and evaluated for free
from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest).
See how to request a free trial license [here](https://victoriametrics.com/products/enterprise/trial/).
You can also [request a free trial license](https://victoriametrics.com/products/enterprise/trial/).
VictoriaMetrics is developed at a fast pace, so it is recommended periodically checking the [CHANGELOG](https://docs.victoriametrics.com/CHANGELOG.html) and performing [regular upgrades](#how-to-upgrade-victoriametrics).
VictoriaMetrics is developed at a fast pace, so it is recommended to check the [CHANGELOG](https://docs.victoriametrics.com/CHANGELOG.html) periodically, and to perform [regular upgrades](#how-to-upgrade-victoriametrics).
VictoriaMetrics has achieved security certifications for Database Software Development and Software-Based Monitoring Services. We apply strict security measures in everything we do. See our [Security page](https://victoriametrics.com/security/) for more details.
@ -41,19 +41,19 @@ VictoriaMetrics has achieved security certifications for Database Software Devel
VictoriaMetrics has the following prominent features:
* It can be used as long-term storage for Prometheus. See [these docs](#prometheus-setup) for details.
* It can be used as a drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as a drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
* It can be used as a drop-in replacement for Prometheus in Grafana, because it supports the [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as a drop-in replacement for Graphite in Grafana, because it supports the [Graphite API](#graphite-api-usage).
VictoriaMetrics allows reducing infrastructure costs by more than 10x comparing to Graphite - see [this case study](https://docs.victoriametrics.com/CaseStudies.html#grammarly).
* It is easy to setup and operate:
* VictoriaMetrics consists of a single [small executable](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d)
without external dependencies.
* All the configuration is done via explicit command-line flags with reasonable defaults.
* All the data is stored in a single directory pointed by `-storageDataPath` command-line flag.
* All the data is stored in a single directory specified by the `-storageDataPath` command-line flag.
* Easy and fast backups from [instant snapshots](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282)
can be done with [vmbackup](https://docs.victoriametrics.com/vmbackup.html) / [vmrestore](https://docs.victoriametrics.com/vmrestore.html) tools.
See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-series-databases-533c1a927883) for more details.
* It implements PromQL-like query language - [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html), which provides improved functionality on top of PromQL.
* It provides global query view. Multiple Prometheus instances or any other data sources may ingest data into VictoriaMetrics. Later this data may be queried via a single query.
* It implements a PromQL-like query language - [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html), which provides improved functionality on top of PromQL.
* It provides a global query view. Multiple Prometheus instances or any other data sources may ingest data into VictoriaMetrics. Later this data may be queried via a single query.
* It provides high performance and good vertical and horizontal scalability for both
[data ingestion](https://medium.com/@valyala/high-cardinality-tsdb-benchmarks-victoriametrics-vs-timescaledb-vs-influxdb-13e6ee64dd6b)
and [data querying](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4).
@ -62,9 +62,9 @@ VictoriaMetrics has the following prominent features:
and [up to 7x less RAM than Prometheus, Thanos or Cortex](https://valyala.medium.com/prometheus-vs-victoriametrics-benchmark-on-node-exporter-metrics-4ca29c75590f)
when dealing with millions of unique time series (aka [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality)).
* It is optimized for time series with [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate).
* It provides high data compression, so up to 70x more data points may be stored into limited storage comparing to TimescaleDB
according to [these benchmarks](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4)
and up to 7x less storage space is required compared to Prometheus, Thanos or Cortex
* It provides high data compression: up to 70x more data points may be stored into limited storage compared with TimescaleDB
according to [these benchmarks](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4),
and up to 7x less storage space is required compared to Prometheus, Thanos or Cortex.
according to [this benchmark](https://valyala.medium.com/prometheus-vs-victoriametrics-benchmark-on-node-exporter-metrics-4ca29c75590f).
* It is optimized for storage with high-latency IO and low IOPS (HDD and network storage in AWS, Google Cloud, Microsoft Azure, etc).
See [disk IO graphs from these benchmarks](https://medium.com/@valyala/high-cardinality-tsdb-benchmarks-victoriametrics-vs-timescaledb-vs-influxdb-13e6ee64dd6b).
@ -75,7 +75,7 @@ VictoriaMetrics has the following prominent features:
from [PromCon 2019](https://promcon.io/2019-munich/talks/remote-write-storage-wars/).
* It protects the storage from data corruption on unclean shutdown (i.e. OOM, hardware reset or `kill -9`) thanks to
[the storage architecture](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).
* It supports metrics' scraping, ingestion and [backfilling](#backfilling) via the following protocols:
* It supports metrics scraping, ingestion and [backfilling](#backfilling) via the following protocols:
* [Metrics scraping from Prometheus exporters](#how-to-scrape-prometheus-exporters-such-as-node-exporter).
* [Prometheus remote write API](#prometheus-setup).
* [Prometheus exposition format](#how-to-import-data-in-prometheus-exposition-format).
@ -95,7 +95,7 @@ VictoriaMetrics has the following prominent features:
[high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) issues via [series limiter](#cardinality-limiter).
* It ideally works with big amounts of time series data from APM, Kubernetes, IoT sensors, connected cars, industrial telemetry, financial data
and various [Enterprise workloads](https://docs.victoriametrics.com/enterprise.html).
* It has open source [cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
* It has an open source [cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
* It can store data on [NFS-based storages](https://en.wikipedia.org/wiki/Network_File_System) such as [Amazon EFS](https://aws.amazon.com/efs/)
and [Google Filestore](https://cloud.google.com/filestore).
@ -138,7 +138,7 @@ See also [articles and slides about VictoriaMetrics from our users](https://docs
### Install
To quickly try VictoriaMetrics, just download [VictoriaMetrics executable](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
To quickly try VictoriaMetrics, just download the [VictoriaMetrics executable](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
or [Docker image](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and start it with the desired command-line flags.
See also [QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for additional information.
@ -155,10 +155,10 @@ VictoriaMetrics can also be installed via these installation methods:
The following command-line flags are used the most:
* `-storageDataPath` - VictoriaMetrics stores all the data in this directory. Default path is `victoria-metrics-data` in the current working directory.
* `-storageDataPath` - VictoriaMetrics stores all the data in this directory. The default path is `victoria-metrics-data` in the current working directory.
* `-retentionPeriod` - retention for stored data. Older data is automatically deleted. Default retention is 1 month (31 days). The minimum retention period is 24h or 1d. See [these docs](#retention) for more details.
Other flags have good enough default values, so set them only if you really need this. Pass `-help` to see [all the available flags with description and default values](#list-of-command-line-flags).
Other flags have good enough default values, so set them only if you really need to. Pass `-help` to see [all the available flags with description and default values](#list-of-command-line-flags).
The following docs may be useful during initial VictoriaMetrics setup:
* [How to set up scraping of Prometheus-compatible targets](https://docs.victoriametrics.com/#how-to-scrape-prometheus-exporters-such-as-node-exporter)
@ -172,9 +172,6 @@ VictoriaMetrics accepts [Prometheus querying API requests](#prometheus-querying-
It is recommended setting up [monitoring](#monitoring) for VictoriaMetrics.
VictoriaMetrics is developed at a fast pace, so it is recommended periodically checking the [CHANGELOG](https://docs.victoriametrics.com/CHANGELOG.html) and performing [regular upgrades](#how-to-upgrade-victoriametrics).
### Environment variables
All the VictoriaMetrics components allow referring environment variables in `yaml` configuration files (such as `-promscrape.config`)
@ -363,6 +360,8 @@ See more in [description](https://github.com/VictoriaMetrics/grafana-datasource#
Creating a datasource may require [specific permissions](https://grafana.com/docs/grafana/latest/administration/data-source-management/).
If you don't see an option to create a data source - try contacting system administrator.
Grafana playground is available for viewing at our [sandbox](https://play-grafana.victoriametrics.com).
## How to upgrade VictoriaMetrics
VictoriaMetrics is developed at a fast pace, so it is recommended periodically checking [the CHANGELOG page](https://docs.victoriametrics.com/CHANGELOG.html) and performing regular upgrades.
@ -516,10 +515,8 @@ See also [vmagent](https://docs.victoriametrics.com/vmagent.html), which can be
## How to send data from DataDog agent
VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/)
or [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/)
via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics)
at `/datadog/api/v1/series` path.
VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/) or [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/)
via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics) at `/datadog/api/v2/series` path.
### Sending metrics to VictoriaMetrics
@ -531,6 +528,7 @@ or via [configuration file](https://docs.datadoghq.com/agent/guide/agent-configu
</p>
To configure DataDog agent via ENV variable add the following prefix:
<div class="with-copy" markdown="1">
```
@ -545,14 +543,12 @@ To configure DataDog agent via [configuration file](https://github.com/DataDog/d
add the following line:
<div class="with-copy" markdown="1">
```
dd_url: http://victoriametrics:8428/datadog
```
</div>
vmagent also can accept Datadog metrics format. Depending on where vmagent will forward data,
[vmagent](https://docs.victoriametrics.com/vmagent.html) also can accept Datadog metrics format. Depending on where vmagent will forward data,
pick [single-node or cluster URL](https://docs.victoriametrics.com/url-examples.html#datadog) formats.
### Sending metrics to Datadog and VictoriaMetrics
@ -593,8 +589,7 @@ additional_endpoints:
### Send via cURL
See how to send data to VictoriaMetrics via
[DataDog "submit metrics"](https://docs.victoriametrics.com/url-examples.html#datadogapiv1series) from command line.
See how to send data to VictoriaMetrics via DataDog "submit metrics" API [here](https://docs.victoriametrics.com/url-examples.html#datadogapiv2series).
The imported data can be read via [export API](https://docs.victoriametrics.com/url-examples.html#apiv1export).
@ -605,7 +600,7 @@ according to [DataDog metric naming recommendations](https://docs.datadoghq.com/
If you need accepting metric names as is without sanitizing, then pass `-datadog.sanitizeMetricName=false` command-line flag to VictoriaMetrics.
Extra labels may be added to all the written time series by passing `extra_label=name=value` query args.
For example, `/datadog/api/v1/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics.
For example, `/datadog/api/v2/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics.
DataDog agent sends the [configured tags](https://docs.datadoghq.com/getting_started/tagging/) to
undocumented endpoint - `/datadog/intake`. This endpoint isn't supported by VictoriaMetrics yet.
@ -1217,6 +1212,7 @@ before actually deleting the metrics. By default, this query will only scan seri
adjust `start` and `end` to a suitable range to achieve match hits.
The `/api/v1/admin/tsdb/delete_series` handler may be protected with `authKey` if `-deleteAuthKey` command-line flag is set.
Note that handler accepts any HTTP method, so sending a `GET` request to `/api/v1/admin/tsdb/delete_series` will result in deletion of time series.
The delete API is intended mainly for the following cases:
@ -1772,6 +1768,10 @@ This aligns with the [staleness rules in Prometheus](https://prometheus.io/docs/
If multiple raw samples have **the same timestamp** on the given `-dedup.minScrapeInterval` discrete interval,
then the sample with **the biggest value** is kept.
[Prometheus staleness markers](https://docs.victoriametrics.com/vmagent.html#prometheus-staleness-markers) are processed as any other value during de-duplication.
If the raw sample with the biggest timestamp on the `-dedup.minScrapeInterval` interval contains a stale marker, then it is kept after the de-duplication.
This properly preserves staleness markers during de-duplication.
Please note, [labels](https://docs.victoriametrics.com/keyConcepts.html#labels) of raw samples should be identical
in order to be deduplicated. For example, this is why [HA pair of vmagents](https://docs.victoriametrics.com/vmagent.html#high-availability)
needs to be identically configured.
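To make the de-duplication rules above concrete, here is a small Go sketch (not the actual VictoriaMetrics implementation) that picks the sample to keep from the raw samples falling into one `-dedup.minScrapeInterval` interval; the `sample` type and `pickSample` function are illustrative only:

```go
package main

import "fmt"

// sample is a simplified raw sample used only for this illustration.
type sample struct {
	timestampMs int64
	value       float64
}

// pickSample keeps the sample with the biggest timestamp within the interval;
// among samples sharing that timestamp, the biggest value wins. Because the
// newest sample always wins, a staleness marker that happens to be the newest
// sample is preserved, as described above.
func pickSample(samples []sample) sample {
	kept := samples[0]
	for _, s := range samples[1:] {
		if s.timestampMs > kept.timestampMs ||
			(s.timestampMs == kept.timestampMs && s.value > kept.value) {
			kept = s
		}
	}
	return kept
}

func main() {
	samples := []sample{
		{timestampMs: 1000, value: 1},
		{timestampMs: 1000, value: 3},
		{timestampMs: 2000, value: 2},
	}
	fmt.Println(pickSample(samples)) // {2000 2}: the newest sample is kept
}
```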
@ -1855,7 +1855,7 @@ This increases overhead during data querying, since VictoriaMetrics needs to rea
bigger number of parts per each request. That's why it is recommended to have at least 20%
of free disk space under directory pointed by `-storageDataPath` command-line flag.
Information about merging process is available in [the dashboard for single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229-victoriametrics/)
Information about merging process is available in [the dashboard for single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229-victoriametrics-single-node/)
and [the dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176-victoriametrics-cluster/).
See more details in [monitoring docs](#monitoring).
@ -2058,7 +2058,7 @@ with 10 seconds interval.
_Please note, never use loadbalancer address for scraping metrics. All monitored components should be scraped directly by their address._
Official Grafana dashboards available for [single-node](https://grafana.com/grafana/dashboards/10229-victoriametrics/)
Official Grafana dashboards available for [single-node](https://grafana.com/grafana/dashboards/10229-victoriametrics-single-node/)
and [clustered](https://grafana.com/grafana/dashboards/11176-victoriametrics-cluster/) VictoriaMetrics.
See an [alternative dashboard for clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11831)
created by community.
@ -2329,7 +2329,7 @@ The following metrics for each type of cache are exported at [`/metrics` page](#
* `vm_cache_misses_total` - the number of cache misses
* `vm_cache_entries` - the number of entries in the cache
Both Grafana dashboards for [single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229-victoriametrics/)
Both Grafana dashboards for [single-node VictoriaMetrics](https://grafana.com/grafana/dashboards/10229-victoriametrics-single-node/)
and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176-victoriametrics-cluster/)
contain `Caches` section with cache metrics visualized. The panels show the current
memory usage by each type of cache, and also a cache hit rate. If hit rate is close to 100%
@ -2580,7 +2580,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-csvTrimTimestamp duration
Trim timestamps when importing csv data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 1s) may be used for reducing disk space usage for timestamp data (default 1ms)
-datadog.maxInsertRequestSize size
The maximum size in bytes of a single DataDog POST request to /api/v1/series
The maximum size in bytes of a single DataDog POST request to /datadog/api/v2/series
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 67108864)
-datadog.sanitizeMetricName
Sanitize metric names for the ingested DataDog data to comply with DataDog behaviour described at https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics (default true)
@ -2709,7 +2709,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-loggerWarnsPerSecondLimit int
Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit
-maxConcurrentInserts int
The maximum number of concurrent insert requests. Default value should work for most cases, since it minimizes the memory usage. The default value can be increased when clients send data over slow networks. See also -insert.maxQueueDuration (default 32)
The maximum number of concurrent insert requests. Default value should work for most cases, since it minimizes the memory usage. The default value can be increased when clients send data over slow networks. See also -insert.maxQueueDuration.
-maxInsertRequestSize size
The maximum size in bytes of a single Prometheus remote_write API request
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 33554432)


@ -37,7 +37,6 @@ func main() {
cgroup.SetGOGC(*gogc)
buildinfo.Init()
logger.Init()
pushmetrics.Init()
logger.Infof("starting VictoriaLogs at %q...", *httpListenAddr)
startTime := time.Now()
@ -49,8 +48,10 @@ func main() {
go httpserver.Serve(*httpListenAddr, *useProxyProtocol, requestHandler)
logger.Infof("started VictoriaLogs in %.3f seconds; see https://docs.victoriametrics.com/VictoriaLogs/", time.Since(startTime).Seconds())
pushmetrics.Init()
sig := procutil.WaitForSigterm()
logger.Infof("received signal %s", sig)
pushmetrics.Stop()
logger.Infof("gracefully shutting down webservice at %q", *httpListenAddr)
startTime = time.Now()


@ -48,7 +48,6 @@ func main() {
envflag.Parse()
buildinfo.Init()
logger.Init()
pushmetrics.Init()
if promscrape.IsDryRun() {
*dryRun = true
@ -74,13 +73,16 @@ func main() {
vmstorage.Init(promql.ResetRollupResultCacheIfNeeded)
vmselect.Init()
vminsert.Init()
startSelfScraper()
go httpserver.Serve(*httpListenAddr, *useProxyProtocol, requestHandler)
logger.Infof("started VictoriaMetrics in %.3f seconds", time.Since(startTime).Seconds())
pushmetrics.Init()
sig := procutil.WaitForSigterm()
logger.Infof("received signal %s", sig)
pushmetrics.Stop()
stopSelfScraper()
@ -89,8 +91,8 @@ func main() {
if err := httpserver.Stop(*httpListenAddr); err != nil {
logger.Fatalf("cannot stop the webservice: %s", err)
}
vminsert.Stop()
logger.Infof("successfully shut down the webservice in %.3f seconds", time.Since(startTime).Seconds())
vminsert.Stop()
vmstorage.Stop()
vmselect.Stop()


@ -12,6 +12,7 @@ import (
"os"
"path/filepath"
"reflect"
"strconv"
"strings"
"testing"
"time"
@ -54,15 +55,14 @@ var (
)
type test struct {
Name string `json:"name"`
Data []string `json:"data"`
InsertQuery string `json:"insert_query"`
Query []string `json:"query"`
ResultMetrics []Metric `json:"result_metrics"`
ResultSeries Series `json:"result_series"`
ResultQuery Query `json:"result_query"`
ResultQueryRange QueryRange `json:"result_query_range"`
Issue string `json:"issue"`
Name string `json:"name"`
Data []string `json:"data"`
InsertQuery string `json:"insert_query"`
Query []string `json:"query"`
ResultMetrics []Metric `json:"result_metrics"`
ResultSeries Series `json:"result_series"`
ResultQuery Query `json:"result_query"`
Issue string `json:"issue"`
}
type Metric struct {
@ -80,42 +80,90 @@ type Series struct {
Status string `json:"status"`
Data []map[string]string `json:"data"`
}
type Query struct {
Status string `json:"status"`
Data QueryData `json:"data"`
}
type QueryData struct {
ResultType string `json:"resultType"`
Result []QueryDataResult `json:"result"`
Status string `json:"status"`
Data struct {
ResultType string `json:"resultType"`
Result json.RawMessage `json:"result"`
} `json:"data"`
}
type QueryDataResult struct {
Metric map[string]string `json:"metric"`
Value []interface{} `json:"value"`
const rtVector, rtMatrix = "vector", "matrix"
func (q *Query) metrics() ([]Metric, error) {
switch q.Data.ResultType {
case rtVector:
var r QueryInstant
if err := json.Unmarshal(q.Data.Result, &r.Result); err != nil {
return nil, err
}
return r.metrics()
case rtMatrix:
var r QueryRange
if err := json.Unmarshal(q.Data.Result, &r.Result); err != nil {
return nil, err
}
return r.metrics()
default:
return nil, fmt.Errorf("unknown result type %q", q.Data.ResultType)
}
}
func (r *QueryDataResult) UnmarshalJSON(b []byte) error {
type plain QueryDataResult
return json.Unmarshal(testutil.PopulateTimeTpl(b, insertionTime), (*plain)(r))
type QueryInstant struct {
Result []struct {
Labels map[string]string `json:"metric"`
TV [2]interface{} `json:"value"`
} `json:"result"`
}
func (q QueryInstant) metrics() ([]Metric, error) {
result := make([]Metric, len(q.Result))
for i, res := range q.Result {
f, err := strconv.ParseFloat(res.TV[1].(string), 64)
if err != nil {
return nil, fmt.Errorf("metric %v, unable to parse float64 from %s: %w", res, res.TV[1], err)
}
var m Metric
m.Metric = res.Labels
m.Timestamps = append(m.Timestamps, int64(res.TV[0].(float64)))
m.Values = append(m.Values, f)
result[i] = m
}
return result, nil
}
type QueryRange struct {
Status string `json:"status"`
Data QueryRangeData `json:"data"`
}
type QueryRangeData struct {
ResultType string `json:"resultType"`
Result []QueryRangeDataResult `json:"result"`
Result []struct {
Metric map[string]string `json:"metric"`
Values [][]interface{} `json:"values"`
} `json:"result"`
}
type QueryRangeDataResult struct {
Metric map[string]string `json:"metric"`
Values [][]interface{} `json:"values"`
func (q QueryRange) metrics() ([]Metric, error) {
var result []Metric
for i, res := range q.Result {
var m Metric
for _, tv := range res.Values {
f, err := strconv.ParseFloat(tv[1].(string), 64)
if err != nil {
return nil, fmt.Errorf("metric %v, unable to parse float64 from %s: %w", res, tv[1], err)
}
m.Values = append(m.Values, f)
m.Timestamps = append(m.Timestamps, int64(tv[0].(float64)))
}
if len(m.Values) < 1 || len(m.Timestamps) < 1 {
return nil, fmt.Errorf("metric %v contains no values", res)
}
m.Metric = q.Result[i].Metric
result = append(result, m)
}
return result, nil
}
func (r *QueryRangeDataResult) UnmarshalJSON(b []byte) error {
type plain QueryRangeDataResult
return json.Unmarshal(testutil.PopulateTimeTpl(b, insertionTime), (*plain)(r))
func (q *Query) UnmarshalJSON(b []byte) error {
type plain Query
return json.Unmarshal(testutil.PopulateTimeTpl(b, insertionTime), (*plain)(q))
}
func TestMain(m *testing.M) {
@ -197,6 +245,9 @@ func TestWriteRead(t *testing.T) {
func testWrite(t *testing.T) {
t.Run("prometheus", func(t *testing.T) {
for _, test := range readIn("prometheus", t, insertionTime) {
if test.Data == nil {
continue
}
s := newSuite(t)
r := testutil.WriteRequest{}
s.noError(json.Unmarshal([]byte(strings.Join(test.Data, "\n")), &r.Timeseries))
@ -272,17 +323,19 @@ func testRead(t *testing.T) {
if err := checkSeriesResult(s, test.ResultSeries); err != nil {
t.Fatalf("Series. %s fails with error %s.%s", q, err, test.Issue)
}
case strings.HasPrefix(q, "/api/v1/query_range"):
queryResult := QueryRange{}
httpReadStruct(t, testReadHTTPPath, q, &queryResult)
if err := checkQueryRangeResult(queryResult, test.ResultQueryRange); err != nil {
t.Fatalf("Query Range. %s fails with error %s.%s", q, err, test.Issue)
}
case strings.HasPrefix(q, "/api/v1/query"):
queryResult := Query{}
httpReadStruct(t, testReadHTTPPath, q, &queryResult)
if err := checkQueryResult(queryResult, test.ResultQuery); err != nil {
t.Fatalf("Query. %s fails with error: %s.%s", q, err, test.Issue)
gotMetrics, err := queryResult.metrics()
if err != nil {
t.Fatalf("failed to parse query response: %s", err)
}
expMetrics, err := test.ResultQuery.metrics()
if err != nil {
t.Fatalf("failed to parse expected response: %s", err)
}
if err := checkMetricsResult(gotMetrics, expMetrics); err != nil {
t.Fatalf("%q fails with error %s.%s", q, err, test.Issue)
}
default:
t.Fatalf("unsupported read query %s", q)
@ -417,60 +470,6 @@ func removeIfFoundSeries(r map[string]string, contains []map[string]string) []ma
return contains
}
func checkQueryResult(got, want Query) error {
if got.Status != want.Status {
return fmt.Errorf("status mismatch %q - %q", want.Status, got.Status)
}
if got.Data.ResultType != want.Data.ResultType {
return fmt.Errorf("result type mismatch %q - %q", want.Data.ResultType, got.Data.ResultType)
}
wantData := append([]QueryDataResult(nil), want.Data.Result...)
for _, r := range got.Data.Result {
wantData = removeIfFoundQueryData(r, wantData)
}
if len(wantData) > 0 {
return fmt.Errorf("expected query result %+v not found in %+v", wantData, got.Data.Result)
}
return nil
}
func removeIfFoundQueryData(r QueryDataResult, contains []QueryDataResult) []QueryDataResult {
for i, item := range contains {
if reflect.DeepEqual(r.Metric, item.Metric) && reflect.DeepEqual(r.Value[0], item.Value[0]) && reflect.DeepEqual(r.Value[1], item.Value[1]) {
contains[i] = contains[len(contains)-1]
return contains[:len(contains)-1]
}
}
return contains
}
func checkQueryRangeResult(got, want QueryRange) error {
if got.Status != want.Status {
return fmt.Errorf("status mismatch %q - %q", want.Status, got.Status)
}
if got.Data.ResultType != want.Data.ResultType {
return fmt.Errorf("result type mismatch %q - %q", want.Data.ResultType, got.Data.ResultType)
}
wantData := append([]QueryRangeDataResult(nil), want.Data.Result...)
for _, r := range got.Data.Result {
wantData = removeIfFoundQueryRangeData(r, wantData)
}
if len(wantData) > 0 {
return fmt.Errorf("expected query range result %+v not found in %+v", wantData, got.Data.Result)
}
return nil
}
func removeIfFoundQueryRangeData(r QueryRangeDataResult, contains []QueryRangeDataResult) []QueryRangeDataResult {
for i, item := range contains {
if reflect.DeepEqual(r.Metric, item.Metric) && reflect.DeepEqual(r.Values, item.Values) {
contains[i] = contains[len(contains)-1]
return contains[:len(contains)-1]
}
}
return contains
}
type suite struct{ t *testing.T }
func newSuite(t *testing.T) *suite { return &suite{t: t} }


@ -98,7 +98,7 @@ func addLabel(dst []prompb.Label, key, value string) []prompb.Label {
dst = append(dst, prompb.Label{})
}
lb := &dst[len(dst)-1]
lb.Name = bytesutil.ToUnsafeBytes(key)
lb.Value = bytesutil.ToUnsafeBytes(value)
lb.Name = key
lb.Value = value
return dst
}


@ -7,7 +7,7 @@
"not_nan_not_inf;item=y 3 {TIME_S-1m}",
"not_nan_not_inf;item=y 1 {TIME_S-2m}"],
"query": ["/api/v1/query_range?query=1/(not_nan_not_inf-1)!=inf!=nan&start={TIME_S-3m}&end={TIME_S}&step=60"],
"result_query_range": {
"result_query": {
"status":"success",
"data":{"resultType":"matrix",
"result":[


@ -6,7 +6,7 @@
"empty_label_match;foo=bar 2 {TIME_S-1m}",
"empty_label_match;foo=baz 3 {TIME_S-1m}"],
"query": ["/api/v1/query_range?query=empty_label_match{foo=~'bar|'}&start={TIME_S-1m}&end={TIME_S}&step=60"],
"result_query_range": {
"result_query": {
"status":"success",
"data":{"resultType":"matrix",
"result":[


@ -8,7 +8,7 @@
"max_lookback_set 4 {TIME_S-150s}"
],
"query": ["/api/v1/query_range?query=max_lookback_set&start={TIME_S-150s}&end={TIME_S}&step=10s&max_lookback=1s"],
"result_query_range": {
"result_query": {
"status":"success",
"data":{"resultType":"matrix",
"result":[{"metric":{"__name__":"max_lookback_set"},"values":[


@ -8,7 +8,7 @@
"max_lookback_unset 4 {TIME_S-150s}"
],
"query": ["/api/v1/query_range?query=max_lookback_unset&start={TIME_S-150s}&end={TIME_S}&step=10s"],
"result_query_range": {
"result_query": {
"status":"success",
"data":{"resultType":"matrix",
"result":[{"metric":{"__name__":"max_lookback_unset"},"values":[


@ -8,7 +8,7 @@
"not_nan_as_missing_data;item=y 3 {TIME_S-1m}"
],
"query": ["/api/v1/query_range?query=not_nan_as_missing_data>1&start={TIME_S-2m}&end={TIME_S}&step=60"],
"result_query_range": {
"result_query": {
"status":"success",
"data":{"resultType":"matrix",
"result":[


@ -0,0 +1,12 @@
{
"name": "instant query with look-behind window",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"foo\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS-60s}\"}]}]"],
"query": ["/api/v1/query?query=foo[5m]"],
"result_query": {
"status": "success",
"data":{
"resultType":"matrix",
"result":[{"metric":{"__name__":"foo"},"values":[["{TIME_S-60s}", "1"]]}]
}
}
}


@ -0,0 +1,11 @@
{
"name": "instant scalar query",
"query": ["/api/v1/query?query=42&time={TIME_S}"],
"result_query": {
"status": "success",
"data":{
"resultType":"vector",
"result":[{"metric":{},"value":["{TIME_S}", "42"]}]
}
}
}


@ -0,0 +1,13 @@
{
"name": "too big look-behind window",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5553",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"foo\"},{\"name\":\"issue\",\"value\":\"5553\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS-60s}\"}]}]"],
"query": ["/api/v1/query?query=foo{issue=\"5553\"}[100y]"],
"result_query": {
"status": "success",
"data":{
"resultType":"matrix",
"result":[{"metric":{"__name__":"foo", "issue": "5553"},"values":[["{TIME_S-60s}", "1"]]}]
}
}
}


@ -0,0 +1,18 @@
{
"name": "query range",
"issue": "https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5553",
"data": ["[{\"labels\":[{\"name\":\"__name__\",\"value\":\"bar\"}],\"samples\":[{\"value\":1,\"timestamp\":\"{TIME_MS-60s}\"}, {\"value\":2,\"timestamp\":\"{TIME_MS-120s}\"}, {\"value\":1,\"timestamp\":\"{TIME_MS-180s}\"}]}]"],
"query": ["/api/v1/query_range?query=bar&step=30s&start={TIME_MS-180s}"],
"result_query": {
"status": "success",
"data":{
"resultType":"matrix",
"result":[
{
"metric":{"__name__":"bar"},
"values":[["{TIME_S-180s}", "1"],["{TIME_S-150s}", "1"],["{TIME_S-120s}", "2"],["{TIME_S-90s}", "2"], ["{TIME_S-60s}", "1"], ["{TIME_S-30s}", "1"], ["{TIME_S}", "1"]]
}
]
}
}
}


@ -1,4 +1,4 @@
package datadog
package datadogv1
import (
"net/http"
@ -8,33 +8,32 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadog"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadog/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="datadog"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="datadog"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="datadog"}`)
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="datadogv1"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="datadogv1"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="datadogv1"}`)
)
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v1/series request.
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
if err != nil {
return err
}
ce := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, ce, func(series []datadog.Series) error {
return stream.Parse(req.Body, ce, func(series []datadogv1.Series) error {
return insertRows(at, series, extraLabels)
})
}
func insertRows(at *auth.Token, series []datadog.Series, extraLabels []prompbmarshal.Label) error {
func insertRows(at *auth.Token, series []datadogv1.Series, extraLabels []prompbmarshal.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
@ -63,7 +62,7 @@ func insertRows(at *auth.Token, series []datadog.Series, extraLabels []prompbmar
})
}
for _, tag := range ss.Tags {
name, value := datadog.SplitTag(tag)
name, value := datadogutils.SplitTag(tag)
if name == "host" {
name = "exported_host"
}


@ -0,0 +1,102 @@
package datadogv2
import (
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="datadogv2"}`)
rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="datadogv2"}`)
rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="datadogv2"}`)
)
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v2/series request.
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
if err != nil {
return err
}
ct := req.Header.Get("Content-Type")
ce := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, ce, ct, func(series []datadogv2.Series) error {
return insertRows(at, series, extraLabels)
})
}
func insertRows(at *auth.Token, series []datadogv2.Series, extraLabels []prompbmarshal.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)
rowsTotal := 0
tssDst := ctx.WriteRequest.Timeseries[:0]
labels := ctx.Labels[:0]
samples := ctx.Samples[:0]
for i := range series {
ss := &series[i]
rowsTotal += len(ss.Points)
labelsLen := len(labels)
labels = append(labels, prompbmarshal.Label{
Name: "__name__",
Value: ss.Metric,
})
for _, rs := range ss.Resources {
labels = append(labels, prompbmarshal.Label{
Name: rs.Type,
Value: rs.Name,
})
}
if ss.SourceTypeName != "" {
labels = append(labels, prompbmarshal.Label{
Name: "source_type_name",
Value: ss.SourceTypeName,
})
}
for _, tag := range ss.Tags {
name, value := datadogutils.SplitTag(tag)
if name == "host" {
name = "exported_host"
}
labels = append(labels, prompbmarshal.Label{
Name: name,
Value: value,
})
}
labels = append(labels, extraLabels...)
samplesLen := len(samples)
for _, pt := range ss.Points {
samples = append(samples, prompbmarshal.Sample{
Timestamp: pt.Timestamp * 1000,
Value: pt.Value,
})
}
tssDst = append(tssDst, prompbmarshal.TimeSeries{
Labels: labels[labelsLen:],
Samples: samples[samplesLen:],
})
}
ctx.WriteRequest.Timeseries = tssDst
ctx.Labels = labels
ctx.Samples = samples
if !remotewrite.TryPush(at, &ctx.WriteRequest) {
return remotewrite.ErrQueueFullHTTPRetry
}
rowsInserted.Add(rowsTotal)
if at != nil {
rowsTenantInserted.Get(at).Add(rowsTotal)
}
rowsPerInsert.Update(float64(rowsTotal))
return nil
}


@ -12,7 +12,8 @@ import (
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadog"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadogv1"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/graphite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/native"
@ -96,7 +97,6 @@ func main() {
remotewrite.InitSecretFlags()
buildinfo.Init()
logger.Init()
pushmetrics.Init()
if promscrape.IsDryRun() {
if err := promscrape.CheckConfig(); err != nil {
@ -147,8 +147,10 @@ func main() {
}
logger.Infof("started vmagent in %.3f seconds", time.Since(startTime).Seconds())
pushmetrics.Init()
sig := procutil.WaitForSigterm()
logger.Infof("received signal %s", sig)
pushmetrics.Stop()
startTime = time.Now()
if len(*httpListenAddr) > 0 {
@ -345,9 +347,20 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/datadog/api/v1/series":
datadogWriteRequests.Inc()
if err := datadog.InsertHandlerForHTTP(nil, r); err != nil {
datadogWriteErrors.Inc()
datadogv1WriteRequests.Inc()
if err := datadogv1.InsertHandlerForHTTP(nil, r); err != nil {
datadogv1WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/datadog/api/v2/series":
datadogv2WriteRequests.Inc()
if err := datadogv2.InsertHandlerForHTTP(nil, r); err != nil {
datadogv2WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
@ -571,9 +584,19 @@ func processMultitenantRequest(w http.ResponseWriter, r *http.Request, path stri
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "datadog/api/v1/series":
datadogWriteRequests.Inc()
if err := datadog.InsertHandlerForHTTP(at, r); err != nil {
datadogWriteErrors.Inc()
datadogv1WriteRequests.Inc()
if err := datadogv1.InsertHandlerForHTTP(at, r); err != nil {
datadogv1WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "datadog/api/v2/series":
datadogv2WriteRequests.Inc()
if err := datadogv2.InsertHandlerForHTTP(at, r); err != nil {
datadogv2WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
@ -631,8 +654,11 @@ var (
influxQueryRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/influx/query", protocol="influx"}`)
datadogWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogWriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogv1WriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogv1WriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogv2WriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v2/series", protocol="datadog"}`)
datadogv2WriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/datadog/api/v2/series", protocol="datadog"}`)
datadogValidateRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
datadogCheckRunRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)


@ -6,7 +6,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
@ -48,8 +47,8 @@ func insertRows(at *auth.Token, timeseries []prompb.TimeSeries, extraLabels []pr
for i := range ts.Labels {
label := &ts.Labels[i]
labels = append(labels, prompbmarshal.Label{
Name: bytesutil.ToUnsafeString(label.Name),
Value: bytesutil.ToUnsafeString(label.Value),
Name: label.Name,
Value: label.Value,
})
}
labels = append(labels, extraLabels...)


@ -58,8 +58,10 @@ var (
oauth2ClientID = flagutil.NewArrayString("remoteWrite.oauth2.clientID", "Optional OAuth2 clientID to use for the corresponding -remoteWrite.url")
oauth2ClientSecret = flagutil.NewArrayString("remoteWrite.oauth2.clientSecret", "Optional OAuth2 clientSecret to use for the corresponding -remoteWrite.url")
oauth2ClientSecretFile = flagutil.NewArrayString("remoteWrite.oauth2.clientSecretFile", "Optional OAuth2 clientSecretFile to use for the corresponding -remoteWrite.url")
oauth2TokenURL = flagutil.NewArrayString("remoteWrite.oauth2.tokenUrl", "Optional OAuth2 tokenURL to use for the corresponding -remoteWrite.url")
oauth2Scopes = flagutil.NewArrayString("remoteWrite.oauth2.scopes", "Optional OAuth2 scopes to use for the corresponding -remoteWrite.url. Scopes must be delimited by ';'")
oauth2EndpointParams = flagutil.NewArrayString("remoteWrite.oauth2.endpointParams", "Optional OAuth2 endpoint parameters to use for the corresponding -remoteWrite.url . "+
`The endpoint parameters must be set in JSON format: {"param1":"value1",...,"paramN":"valueN"}`)
oauth2TokenURL = flagutil.NewArrayString("remoteWrite.oauth2.tokenUrl", "Optional OAuth2 tokenURL to use for the corresponding -remoteWrite.url")
oauth2Scopes = flagutil.NewArrayString("remoteWrite.oauth2.scopes", "Optional OAuth2 scopes to use for the corresponding -remoteWrite.url. Scopes must be delimited by ';'")
awsUseSigv4 = flagutil.NewArrayBool("remoteWrite.aws.useSigv4", "Enables SigV4 request signing for the corresponding -remoteWrite.url. "+
"It is expected that other -remoteWrite.aws.* command-line flags are set if sigv4 request signing is enabled")
@ -234,10 +236,16 @@ func getAuthConfig(argIdx int) (*promauth.Config, error) {
clientSecret := oauth2ClientSecret.GetOptionalArg(argIdx)
clientSecretFile := oauth2ClientSecretFile.GetOptionalArg(argIdx)
if clientSecretFile != "" || clientSecret != "" {
endpointParamsJSON := oauth2EndpointParams.GetOptionalArg(argIdx)
endpointParams, err := flagutil.ParseJSONMap(endpointParamsJSON)
if err != nil {
return nil, fmt.Errorf("cannot parse JSON for -remoteWrite.oauth2.endpointParams=%s: %w", endpointParamsJSON, err)
}
oauth2Cfg = &promauth.OAuth2Config{
ClientID: oauth2ClientID.GetOptionalArg(argIdx),
ClientSecret: promauth.NewSecret(clientSecret),
ClientSecretFile: clientSecretFile,
EndpointParams: endpointParams,
TokenURL: oauth2TokenURL.GetOptionalArg(argIdx),
Scopes: strings.Split(oauth2Scopes.GetOptionalArg(argIdx), ";"),
}
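The endpoint parameters are plain JSON, e.g. `-remoteWrite.oauth2.endpointParams='{"audience":"https://example.com/api"}'` (the parameter name and value here are only examples). Below is a rough sketch of how such a flag value can be turned into a parameter map; `parseJSONMap` is a hypothetical stand-in, and the real `flagutil.ParseJSONMap` may differ in signature and details:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseJSONMap is a hypothetical stand-in for flagutil.ParseJSONMap:
// an empty value means "no endpoint parameters", otherwise the value
// must be a JSON object with string keys and string values.
func parseJSONMap(s string) (map[string]string, error) {
	if s == "" {
		return nil, nil
	}
	var m map[string]string
	if err := json.Unmarshal([]byte(s), &m); err != nil {
		return nil, err
	}
	return m, nil
}

func main() {
	m, err := parseJSONMap(`{"audience":"https://example.com/api"}`)
	fmt.Println(m, err) // map[audience:https://example.com/api] <nil>
}
```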


@ -228,7 +228,7 @@ func tryPushWriteRequest(wr *prompbmarshal.WriteRequest, tryPushBlock func(block
return true
}
bb := writeRequestBufPool.Get()
bb.B = prompbmarshal.MarshalWriteRequest(bb.B[:0], wr)
bb.B = wr.MarshalProtobuf(bb.B[:0])
if len(bb.B) <= maxUnpackedBlockSize.IntN() {
zb := snappyBufPool.Get()
if isVMRemoteWrite {


@ -43,7 +43,7 @@ func testPushWriteRequest(t *testing.T, rowsCount, expectedBlockLenProm, expecte
}
// Check Prometheus remote write
f(false, expectedBlockLenProm, 0)
f(false, expectedBlockLenProm, 3)
// Check VictoriaMetrics remote write
f(true, expectedBlockLenVM, 15)


@ -4,7 +4,6 @@ import (
"fmt"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/golang/snappy"
"github.com/klauspost/compress/s2"
)
@ -22,7 +21,7 @@ func benchmarkCompressWriteRequest(b *testing.B, compressFunc func(dst, src []by
for _, rowsCount := range []int{1, 10, 100, 1e3, 1e4} {
b.Run(fmt.Sprintf("rows_%d", rowsCount), func(b *testing.B) {
wr := newTestWriteRequest(rowsCount, 10)
data := prompbmarshal.MarshalWriteRequest(nil, wr)
data := wr.MarshalProtobuf(nil)
b.ReportAllocs()
b.SetBytes(int64(rowsCount))
b.RunParallel(func(pb *testing.PB) {


@ -276,7 +276,7 @@ func reloadRelabelConfigs() {
var (
relabelConfigReloads = metrics.NewCounter(`vmagent_relabel_config_reloads_total`)
relabelConfigReloadErrors = metrics.NewCounter(`vmagent_relabel_config_reloads_errors_total`)
relabelConfigSuccess = metrics.NewCounter(`vmagent_relabel_config_last_reload_successful`)
relabelConfigSuccess = metrics.NewGauge(`vmagent_relabel_config_last_reload_successful`, nil)
relabelConfigTimestamp = metrics.NewCounter(`vmagent_relabel_config_last_reload_success_timestamp_seconds`)
)


@ -37,11 +37,13 @@ var (
tlsCAFile = flag.String("datasource.tlsCAFile", "", `Optional path to TLS CA file to use for verifying connections to -datasource.url. By default, system CA is used`)
tlsServerName = flag.String("datasource.tlsServerName", "", `Optional TLS server name to use for connections to -datasource.url. By default, the server name from -datasource.url is used`)
oauth2ClientID = flag.String("datasource.oauth2.clientID", "", "Optional OAuth2 clientID to use for -datasource.url. ")
oauth2ClientSecret = flag.String("datasource.oauth2.clientSecret", "", "Optional OAuth2 clientSecret to use for -datasource.url.")
oauth2ClientSecretFile = flag.String("datasource.oauth2.clientSecretFile", "", "Optional OAuth2 clientSecretFile to use for -datasource.url. ")
oauth2TokenURL = flag.String("datasource.oauth2.tokenUrl", "", "Optional OAuth2 tokenURL to use for -datasource.url.")
oauth2Scopes = flag.String("datasource.oauth2.scopes", "", "Optional OAuth2 scopes to use for -datasource.url. Scopes must be delimited by ';'")
oauth2ClientID = flag.String("datasource.oauth2.clientID", "", "Optional OAuth2 clientID to use for -datasource.url")
oauth2ClientSecret = flag.String("datasource.oauth2.clientSecret", "", "Optional OAuth2 clientSecret to use for -datasource.url")
oauth2ClientSecretFile = flag.String("datasource.oauth2.clientSecretFile", "", "Optional OAuth2 clientSecretFile to use for -datasource.url")
oauth2EndpointParams = flag.String("datasource.oauth2.endpointParams", "", "Optional OAuth2 endpoint parameters to use for -datasource.url . "+
`The endpoint parameters must be set in JSON format: {"param1":"value1",...,"paramN":"valueN"}`)
oauth2TokenURL = flag.String("datasource.oauth2.tokenUrl", "", "Optional OAuth2 tokenURL to use for -datasource.url")
oauth2Scopes = flag.String("datasource.oauth2.scopes", "", "Optional OAuth2 scopes to use for -datasource.url. Scopes must be delimited by ';'")
lookBack = flag.Duration("datasource.lookback", 0, `Will be deprecated soon, please adjust "-search.latencyOffset" at datasource side `+
`or specify "latency_offset" in rule group's params. Lookback defines how far into the past to look when evaluating queries. `+
@ -108,10 +110,14 @@ func Init(extraParams url.Values) (QuerierBuilder, error) {
extraParams.Set("round_digits", fmt.Sprintf("%d", *roundDigits))
}
endpointParams, err := flagutil.ParseJSONMap(*oauth2EndpointParams)
if err != nil {
return nil, fmt.Errorf("cannot parse JSON for -datasource.oauth2.endpointParams=%s: %w", *oauth2EndpointParams, err)
}
authCfg, err := utils.AuthConfig(
utils.WithBasicAuth(*basicAuthUsername, *basicAuthPassword, *basicAuthPasswordFile),
utils.WithBearer(*bearerToken, *bearerTokenFile),
utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes),
utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes, endpointParams),
utils.WithHeaders(*headers))
if err != nil {
return nil, fmt.Errorf("failed to configure auth: %w", err)


@ -96,7 +96,6 @@ func main() {
notifier.InitSecretFlags()
buildinfo.Init()
logger.Init()
pushmetrics.Init()
if !*remoteReadIgnoreRestoreErrors {
logger.Warnf("flag `remoteRead.ignoreRestoreErrors` is deprecated and will be removed in next releases.")
@ -182,8 +181,11 @@ func main() {
rh := &requestHandler{m: manager}
go httpserver.Serve(*httpListenAddr, *useProxyProtocol, rh.handler)
pushmetrics.Init()
sig := procutil.WaitForSigterm()
logger.Infof("service received signal %s", sig)
pushmetrics.Stop()
if err := httpserver.Stop(*httpListenAddr); err != nil {
logger.Fatalf("cannot stop the webservice: %s", err)
}
@ -194,7 +196,7 @@ func main() {
var (
configReloads = metrics.NewCounter(`vmalert_config_last_reload_total`)
configReloadErrors = metrics.NewCounter(`vmalert_config_last_reload_errors_total`)
configSuccess = metrics.NewCounter(`vmalert_config_last_reload_successful`)
configSuccess = metrics.NewGauge(`vmalert_config_last_reload_successful`, nil)
configTimestamp = metrics.NewCounter(`vmalert_config_last_reload_success_timestamp_seconds`)
)


@ -141,7 +141,7 @@ groups:
t.Fatalf("expected to have config error %s; got nil instead", cErr)
}
if cfgSuc != 0 {
t.Fatalf("expected to have metric configSuccess to be set to 0; got %d instead", cfgSuc)
t.Fatalf("expected to have metric configSuccess to be set to 0; got %v instead", cfgSuc)
}
return
}
@ -150,7 +150,7 @@ groups:
t.Fatalf("unexpected config error: %s", cErr)
}
if cfgSuc != 1 {
t.Fatalf("expected to have metric configSuccess to be set to 1; got %d instead", cfgSuc)
t.Fatalf("expected to have metric configSuccess to be set to 1; got %v instead", cfgSuc)
}
}


@ -144,7 +144,7 @@ func NewAlertManager(alertManagerURL string, fn AlertURLGenerator, authCfg proma
aCfg, err := utils.AuthConfig(
utils.WithBasicAuth(ba.Username, ba.Password.String(), ba.PasswordFile),
utils.WithBearer(authCfg.BearerToken.String(), authCfg.BearerTokenFile),
utils.WithOAuth(oauth.ClientID, oauth.ClientSecretFile, oauth.ClientSecretFile, oauth.TokenURL, strings.Join(oauth.Scopes, ";")))
utils.WithOAuth(oauth.ClientID, oauth.ClientSecretFile, oauth.ClientSecretFile, oauth.TokenURL, strings.Join(oauth.Scopes, ";"), oauth.EndpointParams))
if err != nil {
return nil, fmt.Errorf("failed to configure auth: %w", err)
}


@ -46,6 +46,8 @@ var (
"If multiple args are set, then they are applied independently for the corresponding -notifier.url")
oauth2ClientSecretFile = flagutil.NewArrayString("notifier.oauth2.clientSecretFile", "Optional OAuth2 clientSecretFile to use for -notifier.url. "+
"If multiple args are set, then they are applied independently for the corresponding -notifier.url")
oauth2EndpointParams = flagutil.NewArrayString("notifier.oauth2.endpointParams", "Optional OAuth2 endpoint parameters to use for the corresponding -notifier.url . "+
`The endpoint parameters must be set in JSON format: {"param1":"value1",...,"paramN":"valueN"}`)
oauth2TokenURL = flagutil.NewArrayString("notifier.oauth2.tokenUrl", "Optional OAuth2 tokenURL to use for -notifier.url. "+
"If multiple args are set, then they are applied independently for the corresponding -notifier.url")
oauth2Scopes = flagutil.NewArrayString("notifier.oauth2.scopes", "Optional OAuth2 scopes to use for -notifier.url. Scopes must be delimited by ';'. "+
@ -141,6 +143,11 @@ func InitSecretFlags() {
func notifiersFromFlags(gen AlertURLGenerator) ([]Notifier, error) {
var notifiers []Notifier
for i, addr := range *addrs {
endpointParamsJSON := oauth2EndpointParams.GetOptionalArg(i)
endpointParams, err := flagutil.ParseJSONMap(endpointParamsJSON)
if err != nil {
return nil, fmt.Errorf("cannot parse JSON for -notifier.oauth2.endpointParams=%s: %w", endpointParamsJSON, err)
}
authCfg := promauth.HTTPClientConfig{
TLSConfig: &promauth.TLSConfig{
CAFile: tlsCAFile.GetOptionalArg(i),
@ -160,6 +167,7 @@ func notifiersFromFlags(gen AlertURLGenerator) ([]Notifier, error) {
ClientID: oauth2ClientID.GetOptionalArg(i),
ClientSecret: promauth.NewSecret(oauth2ClientSecret.GetOptionalArg(i)),
ClientSecretFile: oauth2ClientSecretFile.GetOptionalArg(i),
EndpointParams: endpointParams,
Scopes: strings.Split(oauth2Scopes.GetOptionalArg(i), ";"),
TokenURL: oauth2TokenURL.GetOptionalArg(i),
},


@ -41,8 +41,10 @@ var (
oauth2ClientID = flag.String("remoteRead.oauth2.clientID", "", "Optional OAuth2 clientID to use for -remoteRead.url.")
oauth2ClientSecret = flag.String("remoteRead.oauth2.clientSecret", "", "Optional OAuth2 clientSecret to use for -remoteRead.url.")
oauth2ClientSecretFile = flag.String("remoteRead.oauth2.clientSecretFile", "", "Optional OAuth2 clientSecretFile to use for -remoteRead.url.")
oauth2TokenURL = flag.String("remoteRead.oauth2.tokenUrl", "", "Optional OAuth2 tokenURL to use for -remoteRead.url. ")
oauth2Scopes = flag.String("remoteRead.oauth2.scopes", "", "Optional OAuth2 scopes to use for -remoteRead.url. Scopes must be delimited by ';'.")
oauth2EndpointParams = flag.String("remoteRead.oauth2.endpointParams", "", "Optional OAuth2 endpoint parameters to use for -remoteRead.url . "+
`The endpoint parameters must be set in JSON format: {"param1":"value1",...,"paramN":"valueN"}`)
oauth2TokenURL = flag.String("remoteRead.oauth2.tokenUrl", "", "Optional OAuth2 tokenURL to use for -remoteRead.url. ")
oauth2Scopes = flag.String("remoteRead.oauth2.scopes", "", "Optional OAuth2 scopes to use for -remoteRead.url. Scopes must be delimited by ';'.")
)
// InitSecretFlags must be called after flag.Parse and before any logging
@ -63,10 +65,14 @@ func Init() (datasource.QuerierBuilder, error) {
return nil, fmt.Errorf("failed to create transport: %w", err)
}
endpointParams, err := flagutil.ParseJSONMap(*oauth2EndpointParams)
if err != nil {
return nil, fmt.Errorf("cannot parse JSON for -remoteRead.oauth2.endpointParams=%s: %w", *oauth2EndpointParams, err)
}
authCfg, err := utils.AuthConfig(
utils.WithBasicAuth(*basicAuthUsername, *basicAuthPassword, *basicAuthPasswordFile),
utils.WithBearer(*bearerToken, *bearerTokenFile),
utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes),
utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes, endpointParams),
utils.WithHeaders(*headers))
if err != nil {
return nil, fmt.Errorf("failed to configure auth: %w", err)

View file

@ -123,14 +123,12 @@ func (c *Client) Push(s prompbmarshal.TimeSeries) error {
case <-c.doneCh:
rwErrors.Inc()
droppedRows.Add(len(s.Samples))
droppedBytes.Add(s.Size())
return fmt.Errorf("client is closed")
case c.input <- s:
return nil
default:
rwErrors.Inc()
droppedRows.Add(len(s.Samples))
droppedBytes.Add(s.Size())
return fmt.Errorf("failed to push timeseries - queue is full (%d entries). "+
"Queue size is controlled by -remoteWrite.maxQueueSize flag",
c.maxQueueSize)
@ -195,7 +193,6 @@ var (
sentRows = metrics.NewCounter(`vmalert_remotewrite_sent_rows_total`)
sentBytes = metrics.NewCounter(`vmalert_remotewrite_sent_bytes_total`)
droppedRows = metrics.NewCounter(`vmalert_remotewrite_dropped_rows_total`)
droppedBytes = metrics.NewCounter(`vmalert_remotewrite_dropped_bytes_total`)
sendDuration = metrics.NewFloatCounter(`vmalert_remotewrite_send_duration_seconds_total`)
bufferFlushDuration = metrics.NewHistogram(`vmalert_remotewrite_flush_duration_seconds`)
@ -211,15 +208,10 @@ func (c *Client) flush(ctx context.Context, wr *prompbmarshal.WriteRequest) {
if len(wr.Timeseries) < 1 {
return
}
defer prompbmarshal.ResetWriteRequest(wr)
defer wr.Reset()
defer bufferFlushDuration.UpdateDuration(time.Now())
data, err := wr.Marshal()
if err != nil {
logger.Errorf("failed to marshal WriteRequest: %s", err)
return
}
data := wr.MarshalProtobuf(nil)
b := snappy.Encode(nil, data)
retryInterval, maxRetryInterval := *retryMinInterval, *retryMaxTime
@ -276,8 +268,11 @@ L:
}
rwErrors.Inc()
droppedRows.Add(len(wr.Timeseries))
droppedBytes.Add(len(b))
rows := 0
for _, ts := range wr.Timeseries {
rows += len(ts.Samples)
}
droppedRows.Add(rows)
logger.Errorf("attempts to send remote-write request failed - dropping %d time series",
len(wr.Timeseries))
}

View file

@ -140,7 +140,7 @@ func (rw *rwServer) handler(w http.ResponseWriter, r *http.Request) {
return
}
wr := &prompb.WriteRequest{}
if err := wr.Unmarshal(b); err != nil {
if err := wr.UnmarshalProtobuf(b); err != nil {
rw.err(w, fmt.Errorf("unmarhsal err: %w", err))
return
}

View file

@ -49,10 +49,7 @@ func (c *DebugClient) Push(s prompbmarshal.TimeSeries) error {
c.wg.Add(1)
defer c.wg.Done()
wr := &prompbmarshal.WriteRequest{Timeseries: []prompbmarshal.TimeSeries{s}}
data, err := wr.Marshal()
if err != nil {
return fmt.Errorf("failed to marshal the given time series: %w", err)
}
data := wr.MarshalProtobuf(nil)
return c.send(data)
}

View file

@ -41,11 +41,13 @@ var (
tlsServerName = flag.String("remoteWrite.tlsServerName", "", "Optional TLS server name to use for connections to -remoteWrite.url. "+
"By default, the server name from -remoteWrite.url is used")
oauth2ClientID = flag.String("remoteWrite.oauth2.clientID", "", "Optional OAuth2 clientID to use for -remoteWrite.url.")
oauth2ClientSecret = flag.String("remoteWrite.oauth2.clientSecret", "", "Optional OAuth2 clientSecret to use for -remoteWrite.url.")
oauth2ClientSecretFile = flag.String("remoteWrite.oauth2.clientSecretFile", "", "Optional OAuth2 clientSecretFile to use for -remoteWrite.url.")
oauth2TokenURL = flag.String("remoteWrite.oauth2.tokenUrl", "", "Optional OAuth2 tokenURL to use for -notifier.url.")
oauth2Scopes = flag.String("remoteWrite.oauth2.scopes", "", "Optional OAuth2 scopes to use for -notifier.url. Scopes must be delimited by ';'.")
oauth2ClientID = flag.String("remoteWrite.oauth2.clientID", "", "Optional OAuth2 clientID to use for -remoteWrite.url")
oauth2ClientSecret = flag.String("remoteWrite.oauth2.clientSecret", "", "Optional OAuth2 clientSecret to use for -remoteWrite.url")
oauth2ClientSecretFile = flag.String("remoteWrite.oauth2.clientSecretFile", "", "Optional OAuth2 clientSecretFile to use for -remoteWrite.url")
oauth2EndpointParams = flag.String("remoteWrite.oauth2.endpointParams", "", "Optional OAuth2 endpoint parameters to use for -remoteWrite.url . "+
`The endpoint parameters must be set in JSON format: {"param1":"value1",...,"paramN":"valueN"}`)
oauth2TokenURL          = flag.String("remoteWrite.oauth2.tokenUrl", "", "Optional OAuth2 tokenURL to use for -remoteWrite.url.")
oauth2Scopes            = flag.String("remoteWrite.oauth2.scopes", "", "Optional OAuth2 scopes to use for -remoteWrite.url. Scopes must be delimited by ';'.")
)
// InitSecretFlags must be called after flag.Parse and before any logging
@ -67,10 +69,14 @@ func Init(ctx context.Context) (*Client, error) {
return nil, fmt.Errorf("failed to create transport: %w", err)
}
endpointParams, err := flagutil.ParseJSONMap(*oauth2EndpointParams)
if err != nil {
return nil, fmt.Errorf("cannot parse JSON for -remoteWrite.oauth2.endpointParams=%s: %w", *oauth2EndpointParams, err)
}
authCfg, err := utils.AuthConfig(
utils.WithBasicAuth(*basicAuthUsername, *basicAuthPassword, *basicAuthPasswordFile),
utils.WithBearer(*bearerToken, *bearerTokenFile),
utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes),
utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes, endpointParams),
utils.WithHeaders(*headers))
if err != nil {
return nil, fmt.Errorf("failed to configure auth: %w", err)

View file

@ -237,11 +237,30 @@ type labelSet struct {
origin map[string]string
// processed labels includes origin labels
// plus extra labels (group labels, service labels like alertNameLabel).
// in case of conflicts, extra labels are preferred.
// in case of key conflicts, origin labels are renamed with prefix `exported_` and extra labels are preferred.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5161
// used as labels attached to notifier.Alert and ALERTS series written to remote storage.
processed map[string]string
}
// add adds a value v with key k to origin and processed label sets.
// On k conflicts in processed set, the passed v is preferred.
// On k conflicts in origin set, the original value is preferred and copied
// to processed with `exported_%k` key. The copy happens only if passed v isn't equal to origin[k] value.
func (ls *labelSet) add(k, v string) {
ls.processed[k] = v
ov, ok := ls.origin[k]
if !ok {
ls.origin[k] = v
return
}
if ov != v {
// copy value only if v and ov are different
key := fmt.Sprintf("exported_%s", k)
ls.processed[key] = ov
}
}
// toLabels converts labels from given Metric
// to labelSet which contains original and processed labels.
func (ar *AlertingRule) toLabels(m datasource.Metric, qFn templates.QueryFn) (*labelSet, error) {
@ -267,24 +286,14 @@ func (ar *AlertingRule) toLabels(m datasource.Metric, qFn templates.QueryFn) (*l
return nil, fmt.Errorf("failed to expand labels: %w", err)
}
for k, v := range extraLabels {
ls.processed[k] = v
if _, ok := ls.origin[k]; !ok {
ls.origin[k] = v
}
ls.add(k, v)
}
// set additional labels to identify group and rule name
if ar.Name != "" {
ls.processed[alertNameLabel] = ar.Name
if _, ok := ls.origin[alertNameLabel]; !ok {
ls.origin[alertNameLabel] = ar.Name
}
ls.add(alertNameLabel, ar.Name)
}
if !*disableAlertGroupLabel && ar.GroupName != "" {
ls.processed[alertGroupNameLabel] = ar.GroupName
if _, ok := ls.origin[alertGroupNameLabel]; !ok {
ls.origin[alertGroupNameLabel] = ar.GroupName
}
ls.add(alertGroupNameLabel, ar.GroupName)
}
return ls, nil
}
@ -414,8 +423,7 @@ func (ar *AlertingRule) exec(ctx context.Context, ts time.Time, limit int) ([]pr
}
h := hash(ls.processed)
if _, ok := updated[h]; ok {
// duplicate may be caused by extra labels
// conflicting with the metric labels
// duplicate may be caused by the removal of the `__name__` label
curState.Err = fmt.Errorf("labels %v: %w", ls.processed, errDuplicate)
return nil, curState.Err
}
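
The conflict-resolution rule introduced by `labelSet.add` lets an extra label (group label, `alertname`, `alertgroup`, or a rule label) win in the processed set, while a differing original value from the query result is preserved under an `exported_<name>` key. A small self-contained sketch of that rule, assuming plain maps instead of the `labelSet` type:

package main

import "fmt"

// addLabel applies the same rule as labelSet.add: v wins for key k in the
// processed set, and a differing original value is kept as "exported_"+k.
func addLabel(origin, processed map[string]string, k, v string) {
	processed[k] = v
	ov, ok := origin[k]
	if !ok {
		origin[k] = v
		return
	}
	if ov != v {
		processed["exported_"+k] = ov
	}
}

func main() {
	origin := map[string]string{"job": "node", "instance": "host:9100"}
	processed := map[string]string{"job": "node", "instance": "host:9100"}

	addLabel(origin, processed, "job", "blackbox") // conflicting value -> exported_job
	addLabel(origin, processed, "env", "prod")     // new key -> no conflict

	fmt.Println(processed) // map[env:prod exported_job:node instance:host:9100 job:blackbox]
}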

View file

@ -768,14 +768,16 @@ func TestAlertingRule_Exec_Negative(t *testing.T) {
ar.q = fq
// successful attempt
// label `job` will be overridden by the rule's extra label; the original value will be preserved as "exported_job"
fq.Add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "bar"))
fq.Add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "baz"))
_, err := ar.exec(context.TODO(), time.Now(), 0)
if err != nil {
t.Fatal(err)
}
// label `job` will collide with rule extra label and will make both time series equal
fq.Add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "baz"))
// label `__name__` will be omitted, resulting in duplicate series here
fq.Add(metricWithValueAndLabels(t, 1, "__name__", "foo_1", "job", "bar"))
_, err = ar.exec(context.TODO(), time.Now(), 0)
if !errors.Is(err, errDuplicate) {
t.Fatalf("expected to have %s error; got %s", errDuplicate, err)
@ -899,20 +901,22 @@ func TestAlertingRule_Template(t *testing.T) {
metricWithValueAndLabels(t, 10, "__name__", "second", "instance", "bar", alertNameLabel, "override"),
},
map[uint64]*notifier.Alert{
hash(map[string]string{alertNameLabel: "override label", "instance": "foo"}): {
hash(map[string]string{alertNameLabel: "override label", "exported_alertname": "override", "instance": "foo"}): {
Labels: map[string]string{
alertNameLabel: "override label",
"instance": "foo",
alertNameLabel: "override label",
"exported_alertname": "override",
"instance": "foo",
},
Annotations: map[string]string{
"summary": `first: Too high connection number for "foo"`,
"description": `override: It is 2 connections for "foo"`,
},
},
hash(map[string]string{alertNameLabel: "override label", "instance": "bar"}): {
hash(map[string]string{alertNameLabel: "override label", "exported_alertname": "override", "instance": "bar"}): {
Labels: map[string]string{
alertNameLabel: "override label",
"instance": "bar",
alertNameLabel: "override label",
"exported_alertname": "override",
"instance": "bar",
},
Annotations: map[string]string{
"summary": `second: Too high connection number for "bar"`,
@ -941,14 +945,18 @@ func TestAlertingRule_Template(t *testing.T) {
},
map[uint64]*notifier.Alert{
hash(map[string]string{
alertNameLabel: "OriginLabels",
alertGroupNameLabel: "Testing",
"instance": "foo",
alertNameLabel: "OriginLabels",
"exported_alertname": "originAlertname",
alertGroupNameLabel: "Testing",
"exported_alertgroup": "originGroupname",
"instance": "foo",
}): {
Labels: map[string]string{
alertNameLabel: "OriginLabels",
alertGroupNameLabel: "Testing",
"instance": "foo",
alertNameLabel: "OriginLabels",
"exported_alertname": "originAlertname",
alertGroupNameLabel: "Testing",
"exported_alertgroup": "originGroupname",
"instance": "foo",
},
Annotations: map[string]string{
"summary": `Alert "originAlertname(originGroupname)" for instance foo`,
@ -1092,3 +1100,54 @@ func newTestAlertingRuleWithKeepFiring(name string, waitFor, keepFiringFor time.
rule.KeepFiringFor = keepFiringFor
return rule
}
func TestAlertingRule_ToLabels(t *testing.T) {
metric := datasource.Metric{
Labels: []datasource.Label{
{Name: "instance", Value: "0.0.0.0:8800"},
{Name: "group", Value: "vmalert"},
{Name: "alertname", Value: "ConfigurationReloadFailure"},
},
Values: []float64{1},
Timestamps: []int64{time.Now().UnixNano()},
}
ar := &AlertingRule{
Labels: map[string]string{
"instance": "override", // this should override instance with new value
"group": "vmalert", // this shouldn't have effect since value in metric is equal
},
Expr: "sum(vmalert_alerting_rules_error) by(instance, group, alertname) > 0",
Name: "AlertingRulesError",
GroupName: "vmalert",
}
expectedOriginLabels := map[string]string{
"instance": "0.0.0.0:8800",
"group": "vmalert",
"alertname": "ConfigurationReloadFailure",
"alertgroup": "vmalert",
}
expectedProcessedLabels := map[string]string{
"instance": "override",
"exported_instance": "0.0.0.0:8800",
"alertname": "AlertingRulesError",
"exported_alertname": "ConfigurationReloadFailure",
"group": "vmalert",
"alertgroup": "vmalert",
}
ls, err := ar.toLabels(metric, nil)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if !reflect.DeepEqual(ls.origin, expectedOriginLabels) {
t.Errorf("origin labels mismatch, got: %v, want: %v", ls.origin, expectedOriginLabels)
}
if !reflect.DeepEqual(ls.processed, expectedProcessedLabels) {
t.Errorf("processed labels mismatch, got: %v, want: %v", ls.processed, expectedProcessedLabels)
}
}

View file

@ -194,6 +194,9 @@ func (rr *RecordingRule) toTimeSeries(m datasource.Metric) prompbmarshal.TimeSer
labels["__name__"] = rr.Name
// override existing labels with configured ones
for k, v := range rr.Labels {
if _, ok := labels[k]; ok && labels[k] != v {
labels[fmt.Sprintf("exported_%s", k)] = labels[k]
}
labels[k] = v
}
return newTimeSeries(m.Values, m.Timestamps, labels)
@ -203,7 +206,7 @@ func (rr *RecordingRule) toTimeSeries(m datasource.Metric) prompbmarshal.TimeSer
func (rr *RecordingRule) updateWith(r Rule) error {
nr, ok := r.(*RecordingRule)
if !ok {
return fmt.Errorf("BUG: attempt to update recroding rule with wrong type %#v", r)
return fmt.Errorf("BUG: attempt to update recording rule with wrong type %#v", r)
}
rr.Expr = nr.Expr
rr.Labels = nr.Labels

View file

@ -61,7 +61,7 @@ func TestRecordingRule_Exec(t *testing.T) {
},
[]datasource.Metric{
metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "foo"),
metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar"),
metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar", "source", "origin"),
},
[]prompbmarshal.TimeSeries{
newTimeSeries([]float64{2}, []int64{timestamp.UnixNano()}, map[string]string{
@ -70,9 +70,10 @@ func TestRecordingRule_Exec(t *testing.T) {
"source": "test",
}),
newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": "job:foo",
"job": "bar",
"source": "test",
"__name__": "job:foo",
"job": "bar",
"source": "test",
"exported_source": "origin",
}),
},
},
@ -254,10 +255,7 @@ func TestRecordingRule_ExecNegative(t *testing.T) {
fq.Add(metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "bar"))
_, err = rr.exec(context.TODO(), time.Now(), 0)
if err == nil {
t.Fatalf("expected to get err; got nil")
}
if !strings.Contains(err.Error(), errDuplicate.Error()) {
t.Fatalf("expected to get err %q; got %q insterad", errDuplicate, err)
if err != nil {
t.Fatal(err)
}
}

View file

@ -45,13 +45,14 @@ func WithBearer(token, tokenFile string) AuthConfigOptions {
}
// WithOAuth returns AuthConfigOptions and set OAuth params based on given params
func WithOAuth(clientID, clientSecret, clientSecretFile, tokenURL, scopes string) AuthConfigOptions {
func WithOAuth(clientID, clientSecret, clientSecretFile, tokenURL, scopes string, endpointParams map[string]string) AuthConfigOptions {
return func(config *promauth.HTTPClientConfig) {
if clientSecretFile != "" || clientSecret != "" {
config.OAuth2 = &promauth.OAuth2Config{
ClientID: clientID,
ClientSecret: promauth.NewSecret(clientSecret),
ClientSecretFile: clientSecretFile,
EndpointParams: endpointParams,
TokenURL: tokenURL,
Scopes: strings.Split(scopes, ";"),
}

View file

@ -386,7 +386,7 @@ func (r *Regex) MarshalYAML() (interface{}, error) {
var (
configReloads = metrics.NewCounter(`vmauth_config_last_reload_total`)
configReloadErrors = metrics.NewCounter(`vmauth_config_last_reload_errors_total`)
configSuccess = metrics.NewCounter(`vmauth_config_last_reload_successful`)
configSuccess = metrics.NewGauge(`vmauth_config_last_reload_successful`, nil)
configTimestamp = metrics.NewCounter(`vmauth_config_last_reload_success_timestamp_seconds`)
)

View file

@ -64,7 +64,6 @@ func main() {
envflag.Parse()
buildinfo.Init()
logger.Init()
pushmetrics.Init()
logger.Infof("starting vmauth at %q...", *httpListenAddr)
startTime := time.Now()
@ -72,8 +71,10 @@ func main() {
go httpserver.Serve(*httpListenAddr, *useProxyProtocol, requestHandler)
logger.Infof("started vmauth in %.3f seconds", time.Since(startTime).Seconds())
pushmetrics.Init()
sig := procutil.WaitForSigterm()
logger.Infof("received signal %s", sig)
pushmetrics.Stop()
startTime = time.Now()
logger.Infof("gracefully shutting down webservice at %q", *httpListenAddr)

View file

@ -47,7 +47,6 @@ func main() {
envflag.Parse()
buildinfo.Init()
logger.Init()
pushmetrics.Init()
// Storing snapshot delete function to be able to call it in case
// of error since logger.Fatal will exit the program without
@ -96,11 +95,13 @@ func main() {
go httpserver.Serve(*httpListenAddr, false, nil)
pushmetrics.Init()
err := makeBackup()
deleteSnapshot()
if err != nil {
logger.Fatalf("cannot create backup: %s", err)
}
pushmetrics.Stop()
startTime := time.Now()
logger.Infof("gracefully shutting down http server for metrics at %q", *httpListenAddr)

View file

@ -330,17 +330,19 @@ const (
vmNativeDisableHTTPKeepAlive = "vm-native-disable-http-keep-alive"
vmNativeDisablePerMetricMigration = "vm-native-disable-per-metric-migration"
vmNativeSrcAddr = "vm-native-src-addr"
vmNativeSrcUser = "vm-native-src-user"
vmNativeSrcPassword = "vm-native-src-password"
vmNativeSrcHeaders = "vm-native-src-headers"
vmNativeSrcBearerToken = "vm-native-src-bearer-token"
vmNativeSrcAddr = "vm-native-src-addr"
vmNativeSrcUser = "vm-native-src-user"
vmNativeSrcPassword = "vm-native-src-password"
vmNativeSrcHeaders = "vm-native-src-headers"
vmNativeSrcBearerToken = "vm-native-src-bearer-token"
vmNativeSrcInsecureSkipVerify = "vm-native-src-insecure-skip-verify"
vmNativeDstAddr = "vm-native-dst-addr"
vmNativeDstUser = "vm-native-dst-user"
vmNativeDstPassword = "vm-native-dst-password"
vmNativeDstHeaders = "vm-native-dst-headers"
vmNativeDstBearerToken = "vm-native-dst-bearer-token"
vmNativeDstAddr = "vm-native-dst-addr"
vmNativeDstUser = "vm-native-dst-user"
vmNativeDstPassword = "vm-native-dst-password"
vmNativeDstHeaders = "vm-native-dst-headers"
vmNativeDstBearerToken = "vm-native-dst-bearer-token"
vmNativeDstInsecureSkipVerify = "vm-native-dst-insecure-skip-verify"
)
var (
@ -466,6 +468,16 @@ var (
"Non-binary export/import API is less efficient, but supports deduplication if it is configured on vm-native-src-addr side.",
Value: false,
},
&cli.BoolFlag{
Name: vmNativeSrcInsecureSkipVerify,
Usage: "Whether to skip TLS certificate verification when connecting to the source address",
Value: false,
},
&cli.BoolFlag{
Name: vmNativeDstInsecureSkipVerify,
Usage: "Whether to skip TLS certificate verification when connecting to the destination address",
Value: false,
},
}
)

View file

@ -2,6 +2,7 @@ package main
import (
"context"
"crypto/tls"
"fmt"
"log"
"net/http"
@ -212,6 +213,7 @@ func main() {
var srcExtraLabels []string
srcAddr := strings.Trim(c.String(vmNativeSrcAddr), "/")
srcInsecureSkipVerify := c.Bool(vmNativeSrcInsecureSkipVerify)
srcAuthConfig, err := auth.Generate(
auth.WithBasicAuth(c.String(vmNativeSrcUser), c.String(vmNativeSrcPassword)),
auth.WithBearer(c.String(vmNativeSrcBearerToken)),
@ -219,10 +221,16 @@ func main() {
if err != nil {
return fmt.Errorf("error initilize auth config for source: %s", srcAddr)
}
srcHTTPClient := &http.Client{Transport: &http.Transport{DisableKeepAlives: disableKeepAlive}}
srcHTTPClient := &http.Client{Transport: &http.Transport{
DisableKeepAlives: disableKeepAlive,
TLSClientConfig: &tls.Config{
InsecureSkipVerify: srcInsecureSkipVerify,
},
}}
dstAddr := strings.Trim(c.String(vmNativeDstAddr), "/")
dstExtraLabels := c.StringSlice(vmExtraLabel)
dstInsecureSkipVerify := c.Bool(vmNativeDstInsecureSkipVerify)
dstAuthConfig, err := auth.Generate(
auth.WithBasicAuth(c.String(vmNativeDstUser), c.String(vmNativeDstPassword)),
auth.WithBearer(c.String(vmNativeDstBearerToken)),
@ -230,7 +238,12 @@ func main() {
if err != nil {
return fmt.Errorf("error initilize auth config for destination: %s", dstAddr)
}
dstHTTPClient := &http.Client{Transport: &http.Transport{DisableKeepAlives: disableKeepAlive}}
dstHTTPClient := &http.Client{Transport: &http.Transport{
DisableKeepAlives: disableKeepAlive,
TLSClientConfig: &tls.Config{
InsecureSkipVerify: dstInsecureSkipVerify,
},
}}
p := vmNativeProcessor{
rateLimit: c.Int64(vmRateLimit),

View file

@ -266,10 +266,16 @@ func fillStorage(series []vm.TimeSeries) error {
for _, series := range series {
var labels []prompb.Label
for _, lp := range series.LabelPairs {
labels = append(labels, prompb.Label{Name: []byte(lp.Name), Value: []byte(lp.Value)})
labels = append(labels, prompb.Label{
Name: lp.Name,
Value: lp.Value,
})
}
if series.Name != "" {
labels = append(labels, prompb.Label{Name: []byte("__name__"), Value: []byte(series.Name)})
labels = append(labels, prompb.Label{
Name: "__name__",
Value: series.Name,
})
}
mr := storage.MetricRow{}
mr.MetricNameRaw = storage.MarshalMetricNameRaw(mr.MetricNameRaw[:0], labels)

View file

@ -27,12 +27,11 @@ type InsertCtx struct {
// Reset resets ctx for future fill with rowsLen rows.
func (ctx *InsertCtx) Reset(rowsLen int) {
for i := range ctx.Labels {
label := &ctx.Labels[i]
label.Name = nil
label.Value = nil
labels := ctx.Labels
for i := range labels {
labels[i] = prompb.Label{}
}
ctx.Labels = ctx.Labels[:0]
ctx.Labels = labels[:0]
mrs := ctx.mrs
for i := range mrs {
@ -112,8 +111,8 @@ func (ctx *InsertCtx) AddLabelBytes(name, value []byte) {
ctx.Labels = append(ctx.Labels, prompb.Label{
// Do not copy name and value contents for performance reasons.
// This reduces GC overhead on the number of objects and allocations.
Name: name,
Value: value,
Name: bytesutil.ToUnsafeString(name),
Value: bytesutil.ToUnsafeString(value),
})
}
@ -130,8 +129,8 @@ func (ctx *InsertCtx) AddLabel(name, value string) {
ctx.Labels = append(ctx.Labels, prompb.Label{
// Do not copy name and value contents for performance reasons.
// This reduces GC overhead on the number of objects and allocations.
Name: bytesutil.ToUnsafeBytes(name),
Value: bytesutil.ToUnsafeBytes(value),
Name: name,
Value: value,
})
}
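
`AddLabelBytes` now converts incoming byte slices to strings via `bytesutil.ToUnsafeString`, keeping the zero-copy behavior while `prompb.Label` switches from `[]byte` to `string` fields. A hedged sketch of that zero-copy idea, assuming Go 1.20+ `unsafe` helpers (the resulting string must not outlive the byte slice or observe later mutations of it):

package main

import (
	"fmt"
	"unsafe"
)

// toUnsafeString illustrates the assumed technique behind bytesutil.ToUnsafeString:
// reinterpret the byte slice's backing array as a string without copying.
func toUnsafeString(b []byte) string {
	if len(b) == 0 {
		return ""
	}
	return unsafe.String(unsafe.SliceData(b), len(b))
}

func main() {
	raw := []byte("instance")
	s := toUnsafeString(raw) // shares raw's memory; no allocation
	fmt.Println(s)
}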

View file

@ -38,7 +38,7 @@ var (
saCfgReloads = metrics.NewCounter(`vminsert_streamagg_config_reloads_total`)
saCfgReloadErr = metrics.NewCounter(`vminsert_streamagg_config_reloads_errors_total`)
saCfgSuccess = metrics.NewCounter(`vminsert_streamagg_config_last_reload_successful`)
saCfgSuccess = metrics.NewGauge(`vminsert_streamagg_config_last_reload_successful`, nil)
saCfgTimestamp = metrics.NewCounter(`vminsert_streamagg_config_last_reload_success_timestamp_seconds`)
sasGlobal atomic.Pointer[streamaggr.Aggregators]

View file

@ -1,4 +1,4 @@
package datadog
package datadogv1
import (
"net/http"
@ -7,31 +7,30 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadog"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadog/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1/stream"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="datadog"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="datadog"}`)
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="datadogv1"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="datadogv1"}`)
)
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v1/series request.
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
if err != nil {
return err
}
ce := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, ce, func(series []parser.Series) error {
return stream.Parse(req.Body, ce, func(series []datadogv1.Series) error {
return insertRows(series, extraLabels)
})
}
func insertRows(series []parser.Series, extraLabels []prompbmarshal.Label) error {
func insertRows(series []datadogv1.Series, extraLabels []prompbmarshal.Label) error {
ctx := common.GetInsertCtx()
defer common.PutInsertCtx(ctx)
@ -54,7 +53,7 @@ func insertRows(series []parser.Series, extraLabels []prompbmarshal.Label) error
ctx.AddLabel("device", ss.Device)
}
for _, tag := range ss.Tags {
name, value := parser.SplitTag(tag)
name, value := datadogutils.SplitTag(tag)
if name == "host" {
name = "exported_host"
}

View file

@ -0,0 +1,91 @@
package datadogv2
import (
"net/http"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2/stream"
"github.com/VictoriaMetrics/metrics"
)
var (
rowsInserted = metrics.NewCounter(`vm_rows_inserted_total{type="datadogv2"}`)
rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="datadogv2"}`)
)
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v2/series request.
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
if err != nil {
return err
}
ct := req.Header.Get("Content-Type")
ce := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, ce, ct, func(series []datadogv2.Series) error {
return insertRows(series, extraLabels)
})
}
func insertRows(series []datadogv2.Series, extraLabels []prompbmarshal.Label) error {
ctx := common.GetInsertCtx()
defer common.PutInsertCtx(ctx)
rowsLen := 0
for i := range series {
rowsLen += len(series[i].Points)
}
ctx.Reset(rowsLen)
rowsTotal := 0
hasRelabeling := relabel.HasRelabeling()
for i := range series {
ss := &series[i]
rowsTotal += len(ss.Points)
ctx.Labels = ctx.Labels[:0]
ctx.AddLabel("", ss.Metric)
for _, rs := range ss.Resources {
ctx.AddLabel(rs.Type, rs.Name)
}
for _, tag := range ss.Tags {
name, value := datadogutils.SplitTag(tag)
if name == "host" {
name = "exported_host"
}
ctx.AddLabel(name, value)
}
if ss.SourceTypeName != "" {
ctx.AddLabel("source_type_name", ss.SourceTypeName)
}
for j := range extraLabels {
label := &extraLabels[j]
ctx.AddLabel(label.Name, label.Value)
}
if hasRelabeling {
ctx.ApplyRelabeling()
}
if len(ctx.Labels) == 0 {
// Skip metric without labels.
continue
}
ctx.SortLabelsIfNeeded()
var metricNameRaw []byte
var err error
for _, pt := range ss.Points {
timestamp := pt.Timestamp * 1000
value := pt.Value
metricNameRaw, err = ctx.WriteDataPointExt(metricNameRaw, ctx.Labels, timestamp, value)
if err != nil {
return err
}
}
}
rowsInserted.Add(rowsTotal)
rowsPerInsert.Update(float64(rowsTotal))
return ctx.FlushBufs()
}

View file

@ -160,11 +160,9 @@ func (ctx *pushCtx) reset() {
originLabels := ctx.originLabels
for i := range originLabels {
label := &originLabels[i]
label.Name = nil
label.Value = nil
originLabels[i] = prompb.Label{}
}
ctx.originLabels = ctx.originLabels[:0]
ctx.originLabels = originLabels[:0]
}
func getPushCtx() *pushCtx {

View file

@ -13,7 +13,8 @@ import (
vminsertCommon "github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadog"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadogv1"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/graphite"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/native"
@ -247,9 +248,20 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/datadog/api/v1/series":
datadogWriteRequests.Inc()
if err := datadog.InsertHandlerForHTTP(r); err != nil {
datadogWriteErrors.Inc()
datadogv1WriteRequests.Inc()
if err := datadogv1.InsertHandlerForHTTP(r); err != nil {
datadogv1WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/datadog/api/v2/series":
datadogv2WriteRequests.Inc()
if err := datadogv2.InsertHandlerForHTTP(r); err != nil {
datadogv2WriteErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
@ -375,8 +387,11 @@ var (
influxQueryRequests = metrics.NewCounter(`vm_http_requests_total{path="/influx/query", protocol="influx"}`)
datadogWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogWriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogv1WriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogv1WriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/datadog/api/v1/series", protocol="datadog"}`)
datadogv2WriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v2/series", protocol="datadog"}`)
datadogv2WriteErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/datadog/api/v2/series", protocol="datadog"}`)
datadogValidateRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
datadogCheckRunRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)

View file

@ -46,7 +46,7 @@ func insertRows(timeseries []prompb.TimeSeries, extraLabels []prompbmarshal.Labe
ctx.Labels = ctx.Labels[:0]
srcLabels := ts.Labels
for _, srcLabel := range srcLabels {
ctx.AddLabelBytes(srcLabel.Name, srcLabel.Value)
ctx.AddLabel(srcLabel.Name, srcLabel.Value)
}
for j := range extraLabels {
label := &extraLabels[j]

View file

@ -5,7 +5,6 @@ import (
"fmt"
"sync/atomic"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
@ -65,7 +64,7 @@ func Init() {
var (
configReloads = metrics.NewCounter(`vm_relabel_config_reloads_total`)
configReloadErrors = metrics.NewCounter(`vm_relabel_config_reloads_errors_total`)
configSuccess = metrics.NewCounter(`vm_relabel_config_last_reload_successful`)
configSuccess = metrics.NewGauge(`vm_relabel_config_last_reload_successful`, nil)
configTimestamp = metrics.NewCounter(`vm_relabel_config_last_reload_success_timestamp_seconds`)
)
@ -118,11 +117,11 @@ func (ctx *Ctx) ApplyRelabeling(labels []prompb.Label) []prompb.Label {
// Convert labels to prompbmarshal.Label format suitable for relabeling.
tmpLabels := ctx.tmpLabels[:0]
for _, label := range labels {
name := bytesutil.ToUnsafeString(label.Name)
if len(name) == 0 {
name := label.Name
if name == "" {
name = "__name__"
}
value := bytesutil.ToUnsafeString(label.Value)
value := label.Value
tmpLabels = append(tmpLabels, prompbmarshal.Label{
Name: name,
Value: value,
@ -155,11 +154,11 @@ func (ctx *Ctx) ApplyRelabeling(labels []prompb.Label) []prompb.Label {
// Return back labels to the desired format.
dst := labels[:0]
for _, label := range tmpLabels {
name := bytesutil.ToUnsafeBytes(label.Name)
name := label.Name
if label.Name == "__name__" {
name = nil
name = ""
}
value := bytesutil.ToUnsafeBytes(label.Value)
value := label.Value
dst = append(dst, prompb.Label{
Name: name,
Value: value,

View file

@ -36,7 +36,6 @@ func main() {
envflag.Parse()
buildinfo.Init()
logger.Init()
pushmetrics.Init()
go httpserver.Serve(*httpListenAddr, false, nil)
@ -54,9 +53,11 @@ func main() {
Dst: dstFS,
SkipBackupCompleteCheck: *skipBackupCompleteCheck,
}
pushmetrics.Init()
if err := a.Run(); err != nil {
logger.Fatalf("cannot restore from backup: %s", err)
}
pushmetrics.Stop()
srcFS.MustStop()
dstFS.MustStop()

View file

@ -123,13 +123,13 @@ func registerMetrics(startTime time.Time, w http.ResponseWriter, r *http.Request
// Convert parsed metric and tags to labels.
labels = append(labels[:0], prompb.Label{
Name: []byte("__name__"),
Value: []byte(row.Metric),
Name: "__name__",
Value: row.Metric,
})
for _, tag := range row.Tags {
labels = append(labels, prompb.Label{
Name: []byte(tag.Key),
Value: []byte(tag.Value),
Name: tag.Key,
Value: tag.Value,
})
}

View file

@ -3599,6 +3599,17 @@ func groupSeriesByNodes(ss []*series, nodes []graphiteql.Expr) map[string][]*ser
return m
}
func getAbsoluteNodeIndex(index, size int) int {
// Handle the negative index case as Python does
if index < 0 {
index = size + index
}
if index < 0 || index >= size {
return -1
}
return index
}
func getNameFromNodes(name string, tags map[string]string, nodes []graphiteql.Expr) string {
if len(nodes) == 0 {
return ""
@ -3609,7 +3620,7 @@ func getNameFromNodes(name string, tags map[string]string, nodes []graphiteql.Ex
for _, node := range nodes {
switch t := node.(type) {
case *graphiteql.NumberExpr:
if n := int(t.N); n >= 0 && n < len(parts) {
if n := getAbsoluteNodeIndex(int(t.N), len(parts)); n >= 0 {
dstParts = append(dstParts, parts[n])
}
case *graphiteql.StringExpr:

View file

@ -79,3 +79,31 @@ func TestGraphiteToGolangRegexpReplace(t *testing.T) {
f(`a\d+`, `a\d+`)
f(`\1f\\oo\2`, `$1f\\oo$2`)
}
func TestGetAbsoluteNodeIndex(t *testing.T) {
f := func(index, size, expectedIndex int) {
t.Helper()
absoluteIndex := getAbsoluteNodeIndex(index, size)
if absoluteIndex != expectedIndex {
t.Fatalf("unexpected result for getAbsoluteNodeIndex(%d, %d); got %d; want %d", index, size, expectedIndex, absoluteIndex)
}
}
f(1, 1, -1)
f(0, 1, 0)
f(-1, 3, 2)
f(-3, 1, -1)
f(-1, 1, 0)
f(-2, 1, -1)
f(3, 2, -1)
f(2, 2, -1)
f(1, 2, 1)
f(0, 2, 0)
f(-1, 2, 1)
f(-2, 2, 0)
f(-3, 2, -1)
f(-5, 2, -1)
f(-1, 100, 99)
f(-99, 100, 1)
f(-100, 100, 0)
f(-101, 100, -1)
}

View file

@ -718,6 +718,9 @@ func QueryHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWr
start -= offset
end := start
start = end - window
if start < 0 {
start = 0
}
// Do not include data point with a timestamp matching the lower boundary of the window as Prometheus does.
start++
if end < start {

View file

@ -651,13 +651,14 @@ func newAggrFuncTopK(isReverse bool) aggrFunc {
}
afe := func(tss []*timeseries, modififer *metricsql.ModifierExpr) []*timeseries {
for n := range tss[0].Values {
lessFunc := lessWithNaNs
if isReverse {
lessFunc = greaterWithNaNs
}
sort.Slice(tss, func(i, j int) bool {
a := tss[i].Values[n]
b := tss[j].Values[n]
if isReverse {
a, b = b, a
}
return lessWithNaNs(a, b)
return lessFunc(a, b)
})
fillNaNsAtIdx(n, ks[n], tss)
}
@ -710,17 +711,19 @@ func getRangeTopKTimeseries(tss []*timeseries, modifier *metricsql.ModifierExpr,
value: value,
}
}
lessFunc := lessWithNaNs
if isReverse {
lessFunc = greaterWithNaNs
}
sort.Slice(maxs, func(i, j int) bool {
a := maxs[i].value
b := maxs[j].value
if isReverse {
a, b = b, a
}
return lessWithNaNs(a, b)
return lessFunc(a, b)
})
for i := range maxs {
tss[i] = maxs[i].ts
}
remainingSumTS := getRemainingSumTimeseries(tss, modifier, ks, remainingSumTagName)
for i, k := range ks {
fillNaNsAtIdx(i, k, tss)
@ -1253,12 +1256,27 @@ func newAggrQuantileFunc(phis []float64) func(tss []*timeseries, modifier *metri
}
func lessWithNaNs(a, b float64) bool {
// consider NaNs to be smaller than non-NaNs
if math.IsNaN(a) {
return !math.IsNaN(b)
}
if math.IsNaN(b) {
return false
}
return a < b
}
func greaterWithNaNs(a, b float64) bool {
// consider NaNs to be bigger than non-NaNs
if math.IsNaN(a) {
return !math.IsNaN(b)
}
if math.IsNaN(b) {
return false
}
return a > b
}
func floatToIntBounded(f float64) int {
if f > math.MaxInt {
return math.MaxInt
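
One apparent effect of the dedicated `greaterWithNaNs` comparator above, compared with the previous operand swap in the `topk`/`bottomk` sort callbacks, is that NaN values now land first in both ascending and descending order (with the swap they ended up last when sorting in reverse), matching the expectations in `TestSortWithNaNs` below. A self-contained sketch of both comparators in action:

package main

import (
	"fmt"
	"math"
	"sort"
)

// lessWithNaNs and greaterWithNaNs both treat NaN as the "extreme" value that
// sorts first, so NaN placement is identical for ascending and descending order.
func lessWithNaNs(a, b float64) bool {
	if math.IsNaN(a) {
		return !math.IsNaN(b)
	}
	if math.IsNaN(b) {
		return false
	}
	return a < b
}

func greaterWithNaNs(a, b float64) bool {
	if math.IsNaN(a) {
		return !math.IsNaN(b)
	}
	if math.IsNaN(b) {
		return false
	}
	return a > b
}

func main() {
	nan := math.NaN()

	asc := []float64{1, nan, 3, 2}
	sort.Slice(asc, func(i, j int) bool { return lessWithNaNs(asc[i], asc[j]) })
	fmt.Println(asc) // [NaN 1 2 3]

	desc := []float64{1, nan, 3, 2}
	sort.Slice(desc, func(i, j int) bool { return greaterWithNaNs(desc[i], desc[j]) })
	fmt.Println(desc) // [NaN 3 2 1]
}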

View file

@ -2,9 +2,57 @@ package promql
import (
"math"
"sort"
"testing"
)
func TestSortWithNaNs(t *testing.T) {
f := func(a []float64, ascExpected, descExpected []float64) {
t.Helper()
equalSlices := func(a, b []float64) bool {
for i := range a {
x := a[i]
y := b[i]
if math.IsNaN(x) {
return math.IsNaN(y)
}
if math.IsNaN(y) {
return false
}
if x != y {
return false
}
}
return true
}
aCopy := append([]float64{}, a...)
sort.Slice(aCopy, func(i, j int) bool {
return lessWithNaNs(aCopy[i], aCopy[j])
})
if !equalSlices(aCopy, ascExpected) {
t.Fatalf("unexpected slice after asc sorting; got\n%v\nwant\n%v", aCopy, ascExpected)
}
aCopy = append(aCopy[:0], a...)
sort.Slice(aCopy, func(i, j int) bool {
return greaterWithNaNs(aCopy[i], aCopy[j])
})
if !equalSlices(aCopy, descExpected) {
t.Fatalf("unexpected slice after desc sorting; got\n%v\nwant\n%v", aCopy, descExpected)
}
}
f(nil, nil, nil)
f([]float64{1}, []float64{1}, []float64{1})
f([]float64{1, nan, 3, 2}, []float64{nan, 1, 2, 3}, []float64{nan, 3, 2, 1})
f([]float64{nan}, []float64{nan}, []float64{nan})
f([]float64{nan, nan, nan}, []float64{nan, nan, nan}, []float64{nan, nan, nan})
f([]float64{nan, 1, nan}, []float64{nan, nan, 1}, []float64{nan, nan, 1})
f([]float64{nan, 1, 0, 2, nan}, []float64{nan, nan, 0, 1, 2}, []float64{nan, nan, 2, 1, 0})
}
func TestModeNoNaNs(t *testing.T) {
f := func(prevValue float64, a []float64, expectedResult float64) {
t.Helper()

View file

@ -404,9 +404,15 @@ func binaryOpDefault(bfa *binaryOpFuncArg) ([]*timeseries, error) {
func binaryOpOr(bfa *binaryOpFuncArg) ([]*timeseries, error) {
mLeft, mRight := createTimeseriesMapByTagSet(bfa.be, bfa.left, bfa.right)
var rvs []*timeseries
for _, tss := range mLeft {
rvs = append(rvs, tss...)
}
// Sort left-hand-side series by metric name as Prometheus does.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5393
sortSeriesByMetricName(rvs)
rvsLen := len(rvs)
for k, tssRight := range mRight {
tssLeft := mLeft[k]
if tssLeft == nil {
@ -415,6 +421,10 @@ func binaryOpOr(bfa *binaryOpFuncArg) ([]*timeseries, error) {
}
fillLeftNaNsWithRightValues(tssLeft, tssRight)
}
// Sort the added right-hand-side series by metric name as Prometheus does.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5393
sortSeriesByMetricName(rvs[rvsLen:])
return rvs, nil
}

View file

@ -110,6 +110,7 @@ func maySortResults(e metricsql.Expr) bool {
case "sort", "sort_desc",
"sort_by_label", "sort_by_label_desc",
"sort_by_label_numeric", "sort_by_label_numeric_desc":
// Results already sorted
return false
}
case *metricsql.AggrFuncExpr:
@ -117,6 +118,7 @@ func maySortResults(e metricsql.Expr) bool {
case "topk", "bottomk", "outliersk",
"topk_max", "topk_min", "topk_avg", "topk_median", "topk_last",
"bottomk_max", "bottomk_min", "bottomk_avg", "bottomk_median", "bottomk_last":
// Results already sorted
return false
}
case *metricsql.BinaryOpExpr:
@ -131,6 +133,10 @@ func maySortResults(e metricsql.Expr) bool {
func timeseriesToResult(tss []*timeseries, maySort bool) ([]netstorage.Result, error) {
tss = removeEmptySeries(tss)
if maySort {
sortSeriesByMetricName(tss)
}
result := make([]netstorage.Result, len(tss))
m := make(map[string]struct{}, len(tss))
bb := bbPool.Get()
@ -151,15 +157,15 @@ func timeseriesToResult(tss []*timeseries, maySort bool) ([]netstorage.Result, e
}
bbPool.Put(bb)
if maySort {
sort.Slice(result, func(i, j int) bool {
return metricNameLess(&result[i].MetricName, &result[j].MetricName)
})
}
return result, nil
}
func sortSeriesByMetricName(tss []*timeseries) {
sort.Slice(tss, func(i, j int) bool {
return metricNameLess(&tss[i].MetricName, &tss[j].MetricName)
})
}
func metricNameLess(a, b *storage.MetricName) bool {
if string(a.MetricGroup) != string(b.MetricGroup) {
return string(a.MetricGroup) < string(b.MetricGroup)

View file

@ -3049,6 +3049,51 @@ func TestExecSuccess(t *testing.T) {
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
t.Run(`series or series`, func(t *testing.T) {
t.Parallel()
q := `(
label_set(time(), "x", "foo"),
label_set(time()+1, "x", "bar"),
) or (
label_set(time()+2, "x", "foo"),
label_set(time()+3, "x", "baz"),
)`
r1 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{1001, 1201, 1401, 1601, 1801, 2001},
Timestamps: timestampsExpected,
}
r1.MetricName.Tags = []storage.Tag{
{
Key: []byte("x"),
Value: []byte("bar"),
},
}
r2 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{1000, 1200, 1400, 1600, 1800, 2000},
Timestamps: timestampsExpected,
}
r2.MetricName.Tags = []storage.Tag{
{
Key: []byte("x"),
Value: []byte("foo"),
},
}
r3 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{1003, 1203, 1403, 1603, 1803, 2003},
Timestamps: timestampsExpected,
}
r3.MetricName.Tags = []storage.Tag{
{
Key: []byte("x"),
Value: []byte("baz"),
},
}
resultExpected := []netstorage.Result{r1, r2, r3}
f(q, resultExpected)
})
t.Run(`scalar or scalar`, func(t *testing.T) {
t.Parallel()
q := `time() > 1400 or 123`
@ -6545,7 +6590,7 @@ func TestExecSuccess(t *testing.T) {
})
t.Run(`bottomk(1)`, func(t *testing.T) {
t.Parallel()
q := `bottomk(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss"))`
q := `bottomk(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss") or label_set(time()<100, "a", "b"))`
r1 := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{nan, nan, nan, 10, 10, 10},

View file

@ -2182,6 +2182,8 @@ func rollupFirst(rfa *rollupFuncArg) float64 {
return values[0]
}
var rollupLast = rollupDefault
func rollupDefault(rfa *rollupFuncArg) float64 {
values := rfa.values
if len(values) == 0 {
@ -2195,17 +2197,6 @@ func rollupDefault(rfa *rollupFuncArg) float64 {
return values[len(values)-1]
}
func rollupLast(rfa *rollupFuncArg) float64 {
values := rfa.values
if len(values) == 0 {
// Do not take into account rfa.prevValue, since it may lead
// to inconsistent results comparing to Prometheus on broken time series
// with irregular data points.
return nan
}
return values[len(values)-1]
}
func rollupDistinct(rfa *rollupFuncArg) float64 {
// There is no need in handling NaNs here, since they must be cleaned up
// before calling rollup funcs.

View file

@ -4,6 +4,7 @@ import (
"errors"
"flag"
"fmt"
"io"
"net/http"
"strings"
"sync"
@ -121,9 +122,17 @@ func Init(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
sizeBytes := tm.SmallSizeBytes + tm.BigSizeBytes
logger.Infof("successfully opened storage %q in %.3f seconds; partsCount: %d; blocksCount: %d; rowsCount: %d; sizeBytes: %d",
*DataPath, time.Since(startTime).Seconds(), partsCount, blocksCount, rowsCount, sizeBytes)
registerStorageMetrics(Storage)
// register storage metrics
storageMetrics = metrics.NewSet()
storageMetrics.RegisterMetricsWriter(func(w io.Writer) {
writeStorageMetrics(w, strg)
})
metrics.RegisterSet(storageMetrics)
}
var storageMetrics *metrics.Set
// Storage is a storage.
//
// Every storage call must be wrapped into WG.Add(1) ... WG.Done()
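
This vmstorage change replaces the per-metric `metrics.NewGauge` closures with a single metrics writer registered on a dedicated `metrics.Set`, so one `storage.Metrics` snapshot per scrape feeds all storage metrics and the set can be unregistered in `Stop()`. A minimal sketch of that pattern, assuming the `github.com/VictoriaMetrics/metrics` API as it appears in this diff (`NewSet`, `RegisterMetricsWriter`, `RegisterSet`, `UnregisterSet`, `WriteGaugeUint64`, `WriteCounterUint64`) plus `WritePrometheus` for output; names such as `appMetrics` and `collectSnapshot` are illustrative only:

package main

import (
	"io"
	"os"

	"github.com/VictoriaMetrics/metrics"
)

type snapshot struct {
	activeMerges uint64
	rowsAdded    uint64
}

// collectSnapshot stands in for strg.UpdateMetrics(&m): one cheap snapshot
// per scrape instead of one callback per exported metric.
func collectSnapshot() snapshot {
	return snapshot{activeMerges: 2, rowsAdded: 12345}
}

func main() {
	appMetrics := metrics.NewSet()
	appMetrics.RegisterMetricsWriter(func(w io.Writer) {
		m := collectSnapshot()
		metrics.WriteGaugeUint64(w, `app_active_merges{type="storage/small"}`, m.activeMerges)
		metrics.WriteCounterUint64(w, `app_rows_added_total`, m.rowsAdded)
	})
	metrics.RegisterSet(appMetrics)
	defer metrics.UnregisterSet(appMetrics) // mirrors the deregistration in Stop()

	// Assumption: WritePrometheus renders the default set plus all registered sets.
	metrics.WritePrometheus(os.Stdout, false)
}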
@ -232,6 +241,10 @@ func GetSeriesCount(deadline uint64) (uint64, error) {
// Stop stops the vmstorage
func Stop() {
// deregister storage metrics
metrics.UnregisterSet(storageMetrics)
storageMetrics = nil
logger.Infof("gracefully closing the storage at %s", *DataPath)
startTime := time.Now()
WG.WaitAndBlock()
@ -429,497 +442,194 @@ var (
snapshotsDeleteAllErrorsTotal = metrics.NewCounter(`vm_http_request_errors_total{path="/snapshot/delete_all"}`)
)
func registerStorageMetrics(strg *storage.Storage) {
mCache := &storage.Metrics{}
var mCacheLock sync.Mutex
var lastUpdateTime time.Time
func writeStorageMetrics(w io.Writer, strg *storage.Storage) {
var m storage.Metrics
strg.UpdateMetrics(&m)
tm := &m.TableMetrics
idbm := &m.IndexDBMetrics
m := func() *storage.Metrics {
mCacheLock.Lock()
defer mCacheLock.Unlock()
if time.Since(lastUpdateTime) < time.Second {
return mCache
}
var mc storage.Metrics
strg.UpdateMetrics(&mc)
mCache = &mc
lastUpdateTime = time.Now()
return mCache
}
tm := func() *storage.TableMetrics {
sm := m()
return &sm.TableMetrics
}
idbm := func() *storage.IndexDBMetrics {
sm := m()
return &sm.IndexDBMetrics
metrics.WriteGaugeUint64(w, fmt.Sprintf(`vm_free_disk_space_bytes{path=%q}`, *DataPath), fs.MustGetFreeSpace(*DataPath))
metrics.WriteGaugeUint64(w, fmt.Sprintf(`vm_free_disk_space_limit_bytes{path=%q}`, *DataPath), uint64(minFreeDiskSpaceBytes.N))
isReadOnly := 0
if strg.IsReadOnly() {
isReadOnly = 1
}
metrics.WriteGaugeUint64(w, fmt.Sprintf(`vm_storage_is_read_only{path=%q}`, *DataPath), uint64(isReadOnly))
metrics.NewGauge(fmt.Sprintf(`vm_free_disk_space_bytes{path=%q}`, *DataPath), func() float64 {
return float64(fs.MustGetFreeSpace(*DataPath))
})
metrics.NewGauge(fmt.Sprintf(`vm_free_disk_space_limit_bytes{path=%q}`, *DataPath), func() float64 {
return float64(minFreeDiskSpaceBytes.N)
})
metrics.NewGauge(fmt.Sprintf(`vm_storage_is_read_only{path=%q}`, *DataPath), func() float64 {
if strg.IsReadOnly() {
return 1
}
return 0
})
metrics.WriteGaugeUint64(w, `vm_active_merges{type="storage/inmemory"}`, tm.ActiveInmemoryMerges)
metrics.WriteGaugeUint64(w, `vm_active_merges{type="storage/small"}`, tm.ActiveSmallMerges)
metrics.WriteGaugeUint64(w, `vm_active_merges{type="storage/big"}`, tm.ActiveBigMerges)
metrics.WriteGaugeUint64(w, `vm_active_merges{type="indexdb/inmemory"}`, idbm.ActiveInmemoryMerges)
metrics.WriteGaugeUint64(w, `vm_active_merges{type="indexdb/file"}`, idbm.ActiveFileMerges)
metrics.NewGauge(`vm_active_merges{type="storage/inmemory"}`, func() float64 {
return float64(tm().ActiveInmemoryMerges)
})
metrics.NewGauge(`vm_active_merges{type="storage/small"}`, func() float64 {
return float64(tm().ActiveSmallMerges)
})
metrics.NewGauge(`vm_active_merges{type="storage/big"}`, func() float64 {
return float64(tm().ActiveBigMerges)
})
metrics.NewGauge(`vm_active_merges{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().ActiveInmemoryMerges)
})
metrics.NewGauge(`vm_active_merges{type="indexdb/file"}`, func() float64 {
return float64(idbm().ActiveFileMerges)
})
metrics.WriteCounterUint64(w, `vm_merges_total{type="storage/inmemory"}`, tm.InmemoryMergesCount)
metrics.WriteCounterUint64(w, `vm_merges_total{type="storage/small"}`, tm.SmallMergesCount)
metrics.WriteCounterUint64(w, `vm_merges_total{type="storage/big"}`, tm.BigMergesCount)
metrics.WriteCounterUint64(w, `vm_merges_total{type="indexdb/inmemory"}`, idbm.InmemoryMergesCount)
metrics.WriteCounterUint64(w, `vm_merges_total{type="indexdb/file"}`, idbm.FileMergesCount)
metrics.NewGauge(`vm_merges_total{type="storage/inmemory"}`, func() float64 {
return float64(tm().InmemoryMergesCount)
})
metrics.NewGauge(`vm_merges_total{type="storage/small"}`, func() float64 {
return float64(tm().SmallMergesCount)
})
metrics.NewGauge(`vm_merges_total{type="storage/big"}`, func() float64 {
return float64(tm().BigMergesCount)
})
metrics.NewGauge(`vm_merges_total{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemoryMergesCount)
})
metrics.NewGauge(`vm_merges_total{type="indexdb/file"}`, func() float64 {
return float64(idbm().FileMergesCount)
})
metrics.WriteCounterUint64(w, `vm_rows_merged_total{type="storage/inmemory"}`, tm.InmemoryRowsMerged)
metrics.WriteCounterUint64(w, `vm_rows_merged_total{type="storage/small"}`, tm.SmallRowsMerged)
metrics.WriteCounterUint64(w, `vm_rows_merged_total{type="storage/big"}`, tm.BigRowsMerged)
metrics.WriteCounterUint64(w, `vm_rows_merged_total{type="indexdb/inmemory"}`, idbm.InmemoryItemsMerged)
metrics.WriteCounterUint64(w, `vm_rows_merged_total{type="indexdb/file"}`, idbm.FileItemsMerged)
metrics.NewGauge(`vm_rows_merged_total{type="storage/inmemory"}`, func() float64 {
return float64(tm().InmemoryRowsMerged)
})
metrics.NewGauge(`vm_rows_merged_total{type="storage/small"}`, func() float64 {
return float64(tm().SmallRowsMerged)
})
metrics.NewGauge(`vm_rows_merged_total{type="storage/big"}`, func() float64 {
return float64(tm().BigRowsMerged)
})
metrics.NewGauge(`vm_rows_merged_total{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemoryItemsMerged)
})
metrics.NewGauge(`vm_rows_merged_total{type="indexdb/file"}`, func() float64 {
return float64(idbm().FileItemsMerged)
})
metrics.WriteCounterUint64(w, `vm_rows_deleted_total{type="storage/inmemory"}`, tm.InmemoryRowsDeleted)
metrics.WriteCounterUint64(w, `vm_rows_deleted_total{type="storage/small"}`, tm.SmallRowsDeleted)
metrics.WriteCounterUint64(w, `vm_rows_deleted_total{type="storage/big"}`, tm.BigRowsDeleted)
metrics.NewGauge(`vm_rows_deleted_total{type="storage/inmemory"}`, func() float64 {
return float64(tm().InmemoryRowsDeleted)
})
metrics.NewGauge(`vm_rows_deleted_total{type="storage/small"}`, func() float64 {
return float64(tm().SmallRowsDeleted)
})
metrics.NewGauge(`vm_rows_deleted_total{type="storage/big"}`, func() float64 {
return float64(tm().BigRowsDeleted)
})
metrics.WriteGaugeUint64(w, `vm_part_references{type="storage/inmemory"}`, tm.InmemoryPartsRefCount)
metrics.WriteGaugeUint64(w, `vm_part_references{type="storage/small"}`, tm.SmallPartsRefCount)
metrics.WriteGaugeUint64(w, `vm_part_references{type="storage/big"}`, tm.BigPartsRefCount)
metrics.WriteGaugeUint64(w, `vm_partition_references{type="storage"}`, tm.PartitionsRefCount)
metrics.WriteGaugeUint64(w, `vm_object_references{type="indexdb"}`, idbm.IndexDBRefCount)
metrics.WriteGaugeUint64(w, `vm_part_references{type="indexdb"}`, idbm.PartsRefCount)
metrics.NewGauge(`vm_part_references{type="storage/inmemory"}`, func() float64 {
return float64(tm().InmemoryPartsRefCount)
})
metrics.NewGauge(`vm_part_references{type="storage/small"}`, func() float64 {
return float64(tm().SmallPartsRefCount)
})
metrics.NewGauge(`vm_part_references{type="storage/big"}`, func() float64 {
return float64(tm().BigPartsRefCount)
})
metrics.NewGauge(`vm_partition_references{type="storage"}`, func() float64 {
return float64(tm().PartitionsRefCount)
})
metrics.NewGauge(`vm_object_references{type="indexdb"}`, func() float64 {
return float64(idbm().IndexDBRefCount)
})
metrics.NewGauge(`vm_part_references{type="indexdb"}`, func() float64 {
return float64(idbm().PartsRefCount)
})
metrics.WriteCounterUint64(w, `vm_missing_tsids_for_metric_id_total`, idbm.MissingTSIDsForMetricID)
metrics.WriteCounterUint64(w, `vm_index_blocks_with_metric_ids_processed_total`, idbm.IndexBlocksWithMetricIDsProcessed)
metrics.WriteCounterUint64(w, `vm_index_blocks_with_metric_ids_incorrect_order_total`, idbm.IndexBlocksWithMetricIDsIncorrectOrder)
metrics.WriteGaugeUint64(w, `vm_composite_index_min_timestamp`, idbm.MinTimestampForCompositeIndex/1e3)
metrics.WriteCounterUint64(w, `vm_composite_filter_success_conversions_total`, idbm.CompositeFilterSuccessConversions)
metrics.WriteCounterUint64(w, `vm_composite_filter_missing_conversions_total`, idbm.CompositeFilterMissingConversions)
metrics.NewGauge(`vm_missing_tsids_for_metric_id_total`, func() float64 {
return float64(idbm().MissingTSIDsForMetricID)
})
metrics.NewGauge(`vm_index_blocks_with_metric_ids_processed_total`, func() float64 {
return float64(idbm().IndexBlocksWithMetricIDsProcessed)
})
metrics.NewGauge(`vm_index_blocks_with_metric_ids_incorrect_order_total`, func() float64 {
return float64(idbm().IndexBlocksWithMetricIDsIncorrectOrder)
})
metrics.NewGauge(`vm_composite_index_min_timestamp`, func() float64 {
return float64(idbm().MinTimestampForCompositeIndex) / 1e3
})
metrics.NewGauge(`vm_composite_filter_success_conversions_total`, func() float64 {
return float64(idbm().CompositeFilterSuccessConversions)
})
metrics.NewGauge(`vm_composite_filter_missing_conversions_total`, func() float64 {
return float64(idbm().CompositeFilterMissingConversions)
})
metrics.WriteCounterUint64(w, `vm_assisted_merges_total{type="storage/inmemory"}`, tm.InmemoryAssistedMerges)
metrics.WriteCounterUint64(w, `vm_assisted_merges_total{type="storage/small"}`, tm.SmallAssistedMerges)
metrics.NewGauge(`vm_assisted_merges_total{type="storage/inmemory"}`, func() float64 {
return float64(tm().InmemoryAssistedMerges)
})
metrics.NewGauge(`vm_assisted_merges_total{type="storage/small"}`, func() float64 {
return float64(tm().SmallAssistedMerges)
})
metrics.WriteCounterUint64(w, `vm_assisted_merges_total{type="indexdb/inmemory"}`, idbm.InmemoryAssistedMerges)
metrics.WriteCounterUint64(w, `vm_assisted_merges_total{type="indexdb/file"}`, idbm.FileAssistedMerges)
metrics.NewGauge(`vm_assisted_merges_total{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemoryAssistedMerges)
})
metrics.NewGauge(`vm_assisted_merges_total{type="indexdb/file"}`, func() float64 {
return float64(idbm().FileAssistedMerges)
})
metrics.WriteCounterUint64(w, `vm_indexdb_items_added_total`, idbm.ItemsAdded)
metrics.WriteCounterUint64(w, `vm_indexdb_items_added_size_bytes_total`, idbm.ItemsAddedSizeBytes)
metrics.NewGauge(`vm_indexdb_items_added_total`, func() float64 {
return float64(idbm().ItemsAdded)
})
metrics.NewGauge(`vm_indexdb_items_added_size_bytes_total`, func() float64 {
return float64(idbm().ItemsAddedSizeBytes)
})
metrics.WriteGaugeUint64(w, `vm_pending_rows{type="storage"}`, tm.PendingRows)
metrics.WriteGaugeUint64(w, `vm_pending_rows{type="indexdb"}`, idbm.PendingItems)
metrics.NewGauge(`vm_pending_rows{type="storage"}`, func() float64 {
return float64(tm().PendingRows)
})
metrics.NewGauge(`vm_pending_rows{type="indexdb"}`, func() float64 {
return float64(idbm().PendingItems)
})
metrics.WriteGaugeUint64(w, `vm_parts{type="storage/inmemory"}`, tm.InmemoryPartsCount)
metrics.WriteGaugeUint64(w, `vm_parts{type="storage/small"}`, tm.SmallPartsCount)
metrics.WriteGaugeUint64(w, `vm_parts{type="storage/big"}`, tm.BigPartsCount)
metrics.WriteGaugeUint64(w, `vm_parts{type="indexdb/inmemory"}`, idbm.InmemoryPartsCount)
metrics.WriteGaugeUint64(w, `vm_parts{type="indexdb/file"}`, idbm.FilePartsCount)
metrics.NewGauge(`vm_parts{type="storage/inmemory"}`, func() float64 {
return float64(tm().InmemoryPartsCount)
})
metrics.NewGauge(`vm_parts{type="storage/small"}`, func() float64 {
return float64(tm().SmallPartsCount)
})
metrics.NewGauge(`vm_parts{type="storage/big"}`, func() float64 {
return float64(tm().BigPartsCount)
})
metrics.NewGauge(`vm_parts{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemoryPartsCount)
})
metrics.NewGauge(`vm_parts{type="indexdb/file"}`, func() float64 {
return float64(idbm().FilePartsCount)
})
metrics.WriteGaugeUint64(w, `vm_blocks{type="storage/inmemory"}`, tm.InmemoryBlocksCount)
metrics.WriteGaugeUint64(w, `vm_blocks{type="storage/small"}`, tm.SmallBlocksCount)
metrics.WriteGaugeUint64(w, `vm_blocks{type="storage/big"}`, tm.BigBlocksCount)
metrics.WriteGaugeUint64(w, `vm_blocks{type="indexdb/inmemory"}`, idbm.InmemoryBlocksCount)
metrics.WriteGaugeUint64(w, `vm_blocks{type="indexdb/file"}`, idbm.FileBlocksCount)
metrics.NewGauge(`vm_blocks{type="storage/inmemory"}`, func() float64 {
return float64(tm().InmemoryBlocksCount)
})
metrics.NewGauge(`vm_blocks{type="storage/small"}`, func() float64 {
return float64(tm().SmallBlocksCount)
})
metrics.NewGauge(`vm_blocks{type="storage/big"}`, func() float64 {
return float64(tm().BigBlocksCount)
})
metrics.NewGauge(`vm_blocks{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemoryBlocksCount)
})
metrics.NewGauge(`vm_blocks{type="indexdb/file"}`, func() float64 {
return float64(idbm().FileBlocksCount)
})
metrics.WriteGaugeUint64(w, `vm_data_size_bytes{type="storage/inmemory"}`, tm.InmemorySizeBytes)
metrics.WriteGaugeUint64(w, `vm_data_size_bytes{type="storage/small"}`, tm.SmallSizeBytes)
metrics.WriteGaugeUint64(w, `vm_data_size_bytes{type="storage/big"}`, tm.BigSizeBytes)
metrics.WriteGaugeUint64(w, `vm_data_size_bytes{type="indexdb/inmemory"}`, idbm.InmemorySizeBytes)
metrics.WriteGaugeUint64(w, `vm_data_size_bytes{type="indexdb/file"}`, idbm.FileSizeBytes)
metrics.NewGauge(`vm_data_size_bytes{type="storage/inmemory"}`, func() float64 {
return float64(tm().InmemorySizeBytes)
})
metrics.NewGauge(`vm_data_size_bytes{type="storage/small"}`, func() float64 {
return float64(tm().SmallSizeBytes)
})
metrics.NewGauge(`vm_data_size_bytes{type="storage/big"}`, func() float64 {
return float64(tm().BigSizeBytes)
})
metrics.NewGauge(`vm_data_size_bytes{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemorySizeBytes)
})
metrics.NewGauge(`vm_data_size_bytes{type="indexdb/file"}`, func() float64 {
return float64(idbm().FileSizeBytes)
})
metrics.WriteCounterUint64(w, `vm_rows_added_to_storage_total`, m.RowsAddedTotal)
metrics.WriteCounterUint64(w, `vm_deduplicated_samples_total{type="merge"}`, m.DedupsDuringMerge)
metrics.NewGauge(`vm_rows_added_to_storage_total`, func() float64 {
return float64(m().RowsAddedTotal)
})
metrics.NewGauge(`vm_deduplicated_samples_total{type="merge"}`, func() float64 {
return float64(m().DedupsDuringMerge)
})
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="big_timestamp"}`, m.TooBigTimestampRows)
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="small_timestamp"}`, m.TooSmallTimestampRows)
metrics.NewGauge(`vm_rows_ignored_total{reason="big_timestamp"}`, func() float64 {
return float64(m().TooBigTimestampRows)
})
metrics.NewGauge(`vm_rows_ignored_total{reason="small_timestamp"}`, func() float64 {
return float64(m().TooSmallTimestampRows)
})
metrics.NewGauge(`vm_timeseries_repopulated_total`, func() float64 {
return float64(m().TimeseriesRepopulated)
})
metrics.NewGauge(`vm_timeseries_precreated_total`, func() float64 {
return float64(m().TimeseriesPreCreated)
})
metrics.NewGauge(`vm_new_timeseries_created_total`, func() float64 {
return float64(m().NewTimeseriesCreated)
})
metrics.NewGauge(`vm_slow_row_inserts_total`, func() float64 {
return float64(m().SlowRowInserts)
})
metrics.NewGauge(`vm_slow_per_day_index_inserts_total`, func() float64 {
return float64(m().SlowPerDayIndexInserts)
})
metrics.NewGauge(`vm_slow_metric_name_loads_total`, func() float64 {
return float64(m().SlowMetricNameLoads)
})
metrics.WriteCounterUint64(w, `vm_timeseries_repopulated_total`, m.TimeseriesRepopulated)
metrics.WriteCounterUint64(w, `vm_timeseries_precreated_total`, m.TimeseriesPreCreated)
metrics.WriteCounterUint64(w, `vm_new_timeseries_created_total`, m.NewTimeseriesCreated)
metrics.WriteCounterUint64(w, `vm_slow_row_inserts_total`, m.SlowRowInserts)
metrics.WriteCounterUint64(w, `vm_slow_per_day_index_inserts_total`, m.SlowPerDayIndexInserts)
metrics.WriteCounterUint64(w, `vm_slow_metric_name_loads_total`, m.SlowMetricNameLoads)
if *maxHourlySeries > 0 {
metrics.NewGauge(`vm_hourly_series_limit_current_series`, func() float64 {
return float64(m().HourlySeriesLimitCurrentSeries)
})
metrics.NewGauge(`vm_hourly_series_limit_max_series`, func() float64 {
return float64(m().HourlySeriesLimitMaxSeries)
})
metrics.NewGauge(`vm_hourly_series_limit_rows_dropped_total`, func() float64 {
return float64(m().HourlySeriesLimitRowsDropped)
})
metrics.WriteGaugeUint64(w, `vm_hourly_series_limit_current_series`, m.HourlySeriesLimitCurrentSeries)
metrics.WriteGaugeUint64(w, `vm_hourly_series_limit_max_series`, m.HourlySeriesLimitMaxSeries)
metrics.WriteCounterUint64(w, `vm_hourly_series_limit_rows_dropped_total`, m.HourlySeriesLimitRowsDropped)
}
if *maxDailySeries > 0 {
metrics.NewGauge(`vm_daily_series_limit_current_series`, func() float64 {
return float64(m().DailySeriesLimitCurrentSeries)
})
metrics.NewGauge(`vm_daily_series_limit_max_series`, func() float64 {
return float64(m().DailySeriesLimitMaxSeries)
})
metrics.NewGauge(`vm_daily_series_limit_rows_dropped_total`, func() float64 {
return float64(m().DailySeriesLimitRowsDropped)
})
metrics.WriteGaugeUint64(w, `vm_daily_series_limit_current_series`, m.DailySeriesLimitCurrentSeries)
metrics.WriteGaugeUint64(w, `vm_daily_series_limit_max_series`, m.DailySeriesLimitMaxSeries)
metrics.WriteCounterUint64(w, `vm_daily_series_limit_rows_dropped_total`, m.DailySeriesLimitRowsDropped)
}
metrics.NewGauge(`vm_timestamps_blocks_merged_total`, func() float64 {
return float64(m().TimestampsBlocksMerged)
})
metrics.NewGauge(`vm_timestamps_bytes_saved_total`, func() float64 {
return float64(m().TimestampsBytesSaved)
})
metrics.WriteCounterUint64(w, `vm_timestamps_blocks_merged_total`, m.TimestampsBlocksMerged)
metrics.WriteCounterUint64(w, `vm_timestamps_bytes_saved_total`, m.TimestampsBytesSaved)
metrics.NewGauge(`vm_rows{type="storage/inmemory"}`, func() float64 {
return float64(tm().InmemoryRowsCount)
})
metrics.NewGauge(`vm_rows{type="storage/small"}`, func() float64 {
return float64(tm().SmallRowsCount)
})
metrics.NewGauge(`vm_rows{type="storage/big"}`, func() float64 {
return float64(tm().BigRowsCount)
})
metrics.NewGauge(`vm_rows{type="indexdb/inmemory"}`, func() float64 {
return float64(idbm().InmemoryItemsCount)
})
metrics.NewGauge(`vm_rows{type="indexdb/file"}`, func() float64 {
return float64(idbm().FileItemsCount)
})
metrics.WriteGaugeUint64(w, `vm_rows{type="storage/inmemory"}`, tm.InmemoryRowsCount)
metrics.WriteGaugeUint64(w, `vm_rows{type="storage/small"}`, tm.SmallRowsCount)
metrics.WriteGaugeUint64(w, `vm_rows{type="storage/big"}`, tm.BigRowsCount)
metrics.WriteGaugeUint64(w, `vm_rows{type="indexdb/inmemory"}`, idbm.InmemoryItemsCount)
metrics.WriteGaugeUint64(w, `vm_rows{type="indexdb/file"}`, idbm.FileItemsCount)
metrics.NewGauge(`vm_date_range_search_calls_total`, func() float64 {
return float64(idbm().DateRangeSearchCalls)
})
metrics.NewGauge(`vm_date_range_hits_total`, func() float64 {
return float64(idbm().DateRangeSearchHits)
})
metrics.NewGauge(`vm_global_search_calls_total`, func() float64 {
return float64(idbm().GlobalSearchCalls)
})
metrics.WriteCounterUint64(w, `vm_date_range_search_calls_total`, idbm.DateRangeSearchCalls)
metrics.WriteCounterUint64(w, `vm_date_range_hits_total`, idbm.DateRangeSearchHits)
metrics.WriteCounterUint64(w, `vm_global_search_calls_total`, idbm.GlobalSearchCalls)
metrics.NewGauge(`vm_missing_metric_names_for_metric_id_total`, func() float64 {
return float64(idbm().MissingMetricNamesForMetricID)
})
metrics.WriteCounterUint64(w, `vm_missing_metric_names_for_metric_id_total`, idbm.MissingMetricNamesForMetricID)
metrics.NewGauge(`vm_date_metric_id_cache_syncs_total`, func() float64 {
return float64(m().DateMetricIDCacheSyncsCount)
})
metrics.NewGauge(`vm_date_metric_id_cache_resets_total`, func() float64 {
return float64(m().DateMetricIDCacheResetsCount)
})
metrics.WriteCounterUint64(w, `vm_date_metric_id_cache_syncs_total`, m.DateMetricIDCacheSyncsCount)
metrics.WriteCounterUint64(w, `vm_date_metric_id_cache_resets_total`, m.DateMetricIDCacheResetsCount)
metrics.NewGauge(`vm_cache_entries{type="storage/tsid"}`, func() float64 {
return float64(m().TSIDCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="storage/metricIDs"}`, func() float64 {
return float64(m().MetricIDCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="storage/metricName"}`, func() float64 {
return float64(m().MetricNameCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="storage/date_metricID"}`, func() float64 {
return float64(m().DateMetricIDCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="storage/hour_metric_ids"}`, func() float64 {
return float64(m().HourMetricIDCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="storage/next_day_metric_ids"}`, func() float64 {
return float64(m().NextDayMetricIDCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="storage/indexBlocks"}`, func() float64 {
return float64(tm().IndexBlocksCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="indexdb/dataBlocks"}`, func() float64 {
return float64(idbm().DataBlocksCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="indexdb/indexBlocks"}`, func() float64 {
return float64(idbm().IndexBlocksCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="indexdb/tagFiltersToMetricIDs"}`, func() float64 {
return float64(idbm().TagFiltersToMetricIDsCacheSize)
})
metrics.NewGauge(`vm_cache_entries{type="storage/regexps"}`, func() float64 {
return float64(storage.RegexpCacheSize())
})
metrics.NewGauge(`vm_cache_entries{type="storage/regexpPrefixes"}`, func() float64 {
return float64(storage.RegexpPrefixesCacheSize())
})
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/tsid"}`, m.TSIDCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/metricIDs"}`, m.MetricIDCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/metricName"}`, m.MetricNameCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/date_metricID"}`, m.DateMetricIDCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/hour_metric_ids"}`, m.HourMetricIDCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/next_day_metric_ids"}`, m.NextDayMetricIDCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/indexBlocks"}`, tm.IndexBlocksCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheSize)
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/regexps"}`, uint64(storage.RegexpCacheSize()))
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/regexpPrefixes"}`, uint64(storage.RegexpPrefixesCacheSize()))
metrics.WriteGaugeUint64(w, `vm_cache_entries{type="storage/prefetchedMetricIDs"}`, m.PrefetchedMetricIDsSize)
metrics.NewGauge(`vm_cache_entries{type="storage/prefetchedMetricIDs"}`, func() float64 {
return float64(m().PrefetchedMetricIDsSize)
})
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/tsid"}`, m.TSIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/metricIDs"}`, m.MetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/metricName"}`, m.MetricNameCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/indexBlocks"}`, tm.IndexBlocksCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/date_metricID"}`, m.DateMetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/hour_metric_ids"}`, m.HourMetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/next_day_metric_ids"}`, m.NextDayMetricIDCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheSizeBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/regexps"}`, uint64(storage.RegexpCacheSizeBytes()))
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/regexpPrefixes"}`, uint64(storage.RegexpPrefixesCacheSizeBytes()))
metrics.WriteGaugeUint64(w, `vm_cache_size_bytes{type="storage/prefetchedMetricIDs"}`, m.PrefetchedMetricIDsSizeBytes)
metrics.NewGauge(`vm_cache_size_bytes{type="storage/tsid"}`, func() float64 {
return float64(m().TSIDCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/metricIDs"}`, func() float64 {
return float64(m().MetricIDCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/metricName"}`, func() float64 {
return float64(m().MetricNameCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/indexBlocks"}`, func() float64 {
return float64(tm().IndexBlocksCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="indexdb/dataBlocks"}`, func() float64 {
return float64(idbm().DataBlocksCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="indexdb/indexBlocks"}`, func() float64 {
return float64(idbm().IndexBlocksCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/date_metricID"}`, func() float64 {
return float64(m().DateMetricIDCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/hour_metric_ids"}`, func() float64 {
return float64(m().HourMetricIDCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/next_day_metric_ids"}`, func() float64 {
return float64(m().NextDayMetricIDCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="indexdb/tagFiltersToMetricIDs"}`, func() float64 {
return float64(idbm().TagFiltersToMetricIDsCacheSizeBytes)
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/regexps"}`, func() float64 {
return float64(storage.RegexpCacheSizeBytes())
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/regexpPrefixes"}`, func() float64 {
return float64(storage.RegexpPrefixesCacheSizeBytes())
})
metrics.NewGauge(`vm_cache_size_bytes{type="storage/prefetchedMetricIDs"}`, func() float64 {
return float64(m().PrefetchedMetricIDsSizeBytes)
})
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/tsid"}`, m.TSIDCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/metricIDs"}`, m.MetricIDCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/metricName"}`, m.MetricNameCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/indexBlocks"}`, tm.IndexBlocksCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheSizeMaxBytes)
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/regexps"}`, uint64(storage.RegexpCacheMaxSizeBytes()))
metrics.WriteGaugeUint64(w, `vm_cache_size_max_bytes{type="storage/regexpPrefixes"}`, uint64(storage.RegexpPrefixesCacheMaxSizeBytes()))
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/tsid"}`, func() float64 {
return float64(m().TSIDCacheSizeMaxBytes)
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/metricIDs"}`, func() float64 {
return float64(m().MetricIDCacheSizeMaxBytes)
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/metricName"}`, func() float64 {
return float64(m().MetricNameCacheSizeMaxBytes)
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/indexBlocks"}`, func() float64 {
return float64(tm().IndexBlocksCacheSizeMaxBytes)
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="indexdb/dataBlocks"}`, func() float64 {
return float64(idbm().DataBlocksCacheSizeMaxBytes)
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="indexdb/indexBlocks"}`, func() float64 {
return float64(idbm().IndexBlocksCacheSizeMaxBytes)
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="indexdb/tagFiltersToMetricIDs"}`, func() float64 {
return float64(idbm().TagFiltersToMetricIDsCacheSizeMaxBytes)
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/regexps"}`, func() float64 {
return float64(storage.RegexpCacheMaxSizeBytes())
})
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/regexpPrefixes"}`, func() float64 {
return float64(storage.RegexpPrefixesCacheMaxSizeBytes())
})
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/tsid"}`, m.TSIDCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/metricIDs"}`, m.MetricIDCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/metricName"}`, m.MetricNameCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/indexBlocks"}`, tm.IndexBlocksCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheRequests)
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/regexps"}`, storage.RegexpCacheRequests())
metrics.WriteCounterUint64(w, `vm_cache_requests_total{type="storage/regexpPrefixes"}`, storage.RegexpPrefixesCacheRequests())
metrics.NewGauge(`vm_cache_requests_total{type="storage/tsid"}`, func() float64 {
return float64(m().TSIDCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="storage/metricIDs"}`, func() float64 {
return float64(m().MetricIDCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="storage/metricName"}`, func() float64 {
return float64(m().MetricNameCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="storage/indexBlocks"}`, func() float64 {
return float64(tm().IndexBlocksCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="indexdb/dataBlocks"}`, func() float64 {
return float64(idbm().DataBlocksCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="indexdb/indexBlocks"}`, func() float64 {
return float64(idbm().IndexBlocksCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="indexdb/tagFiltersToMetricIDs"}`, func() float64 {
return float64(idbm().TagFiltersToMetricIDsCacheRequests)
})
metrics.NewGauge(`vm_cache_requests_total{type="storage/regexps"}`, func() float64 {
return float64(storage.RegexpCacheRequests())
})
metrics.NewGauge(`vm_cache_requests_total{type="storage/regexpPrefixes"}`, func() float64 {
return float64(storage.RegexpPrefixesCacheRequests())
})
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/tsid"}`, m.TSIDCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/metricIDs"}`, m.MetricIDCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/metricName"}`, m.MetricNameCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/indexBlocks"}`, tm.IndexBlocksCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="indexdb/dataBlocks"}`, idbm.DataBlocksCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="indexdb/indexBlocks"}`, idbm.IndexBlocksCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="indexdb/tagFiltersToMetricIDs"}`, idbm.TagFiltersToMetricIDsCacheMisses)
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/regexps"}`, storage.RegexpCacheMisses())
metrics.WriteCounterUint64(w, `vm_cache_misses_total{type="storage/regexpPrefixes"}`, storage.RegexpPrefixesCacheMisses())
metrics.NewGauge(`vm_cache_misses_total{type="storage/tsid"}`, func() float64 {
return float64(m().TSIDCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="storage/metricIDs"}`, func() float64 {
return float64(m().MetricIDCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="storage/metricName"}`, func() float64 {
return float64(m().MetricNameCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="storage/indexBlocks"}`, func() float64 {
return float64(tm().IndexBlocksCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="indexdb/dataBlocks"}`, func() float64 {
return float64(idbm().DataBlocksCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="indexdb/indexBlocks"}`, func() float64 {
return float64(idbm().IndexBlocksCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="indexdb/tagFiltersToMetricIDs"}`, func() float64 {
return float64(idbm().TagFiltersToMetricIDsCacheMisses)
})
metrics.NewGauge(`vm_cache_misses_total{type="storage/regexps"}`, func() float64 {
return float64(storage.RegexpCacheMisses())
})
metrics.NewGauge(`vm_cache_misses_total{type="storage/regexpPrefixes"}`, func() float64 {
return float64(storage.RegexpPrefixesCacheMisses())
})
metrics.WriteCounterUint64(w, `vm_deleted_metrics_total{type="indexdb"}`, idbm.DeletedMetricsCount)
metrics.NewGauge(`vm_deleted_metrics_total{type="indexdb"}`, func() float64 {
return float64(idbm().DeletedMetricsCount)
})
metrics.WriteCounterUint64(w, `vm_cache_collisions_total{type="storage/tsid"}`, m.TSIDCacheCollisions)
metrics.WriteCounterUint64(w, `vm_cache_collisions_total{type="storage/metricName"}`, m.MetricNameCacheCollisions)
metrics.NewGauge(`vm_cache_collisions_total{type="storage/tsid"}`, func() float64 {
return float64(m().TSIDCacheCollisions)
})
metrics.NewGauge(`vm_cache_collisions_total{type="storage/metricName"}`, func() float64 {
return float64(m().MetricNameCacheCollisions)
})
metrics.NewGauge(`vm_next_retention_seconds`, func() float64 {
return float64(m().NextRetentionSeconds)
})
metrics.WriteGaugeUint64(w, `vm_next_retention_seconds`, m.NextRetentionSeconds)
}
func jsonResponseError(w http.ResponseWriter, err error) {

View file

@@ -1,4 +1,4 @@
FROM golang:1.21.5 as build-web-stage
FROM golang:1.21.6 as build-web-stage
COPY build /build
WORKDIR /build

View file

@@ -22,6 +22,14 @@ vmui-logs-build: vmui-package-base-image
--entrypoint=/bin/bash \
vmui-builder-image -c "npm install && npm run build:logs"
vmui-anomaly-build: vmui-package-base-image
docker run --rm \
--user $(shell id -u):$(shell id -g) \
--mount type=bind,src="$(shell pwd)/app/vmui",dst=/build \
-w /build/packages/vmui \
--entrypoint=/bin/bash \
vmui-builder-image -c "npm install && npm run build:anomaly"
vmui-release: vmui-build
docker build -t ${DOCKER_NAMESPACE}/vmui:latest -f app/vmui/Dockerfile-web ./app/vmui/packages/vmui
docker tag ${DOCKER_NAMESPACE}/vmui:latest ${DOCKER_NAMESPACE}/vmui:${PKG_TAG}

View file

@@ -14,10 +14,12 @@ module.exports = override(
new webpack.NormalModuleReplacementPlugin(
/\.\/App/,
function (resource) {
// eslint-disable-next-line no-undef
if (process.env.REACT_APP_LOGS === "true") {
if (process.env.REACT_APP_TYPE === "logs") {
resource.request = "./AppLogs";
}
if (process.env.REACT_APP_TYPE === "anomaly") {
resource.request = "./AppAnomaly";
}
}
)
)

View file

@@ -32,9 +32,11 @@
"scripts": {
"prestart": "npm run copy-metricsql-docs",
"start": "react-app-rewired start",
"start:logs": "cross-env REACT_APP_LOGS=true npm run start",
"start:logs": "cross-env REACT_APP_TYPE=logs npm run start",
"start:anomaly": "cross-env REACT_APP_TYPE=anomaly npm run start",
"build": "GENERATE_SOURCEMAP=false react-app-rewired build",
"build:logs": "cross-env REACT_APP_LOGS=true npm run build",
"build:logs": "cross-env REACT_APP_TYPE=logs npm run build",
"build:anomaly": "cross-env REACT_APP_TYPE=anomaly npm run build",
"lint": "eslint src --ext tsx,ts",
"lint:fix": "eslint src --ext tsx,ts --fix",
"analyze": "source-map-explorer 'build/static/js/*.js'",

View file

@@ -0,0 +1,41 @@
import React, { FC, useState } from "preact/compat";
import { HashRouter, Route, Routes } from "react-router-dom";
import AppContextProvider from "./contexts/AppContextProvider";
import ThemeProvider from "./components/Main/ThemeProvider/ThemeProvider";
import AnomalyLayout from "./layouts/AnomalyLayout/AnomalyLayout";
import ExploreAnomaly from "./pages/ExploreAnomaly/ExploreAnomaly";
import router from "./router";
import CustomPanel from "./pages/CustomPanel";
const AppAnomaly: FC = () => {
const [loadedTheme, setLoadedTheme] = useState(false);
return <>
<HashRouter>
<AppContextProvider>
<>
<ThemeProvider onLoaded={setLoadedTheme}/>
{loadedTheme && (
<Routes>
<Route
path={"/"}
element={<AnomalyLayout/>}
>
<Route
path={"/"}
element={<ExploreAnomaly/>}
/>
<Route
path={router.query}
element={<CustomPanel/>}
/>
</Route>
</Routes>
)}
</>
</AppContextProvider>
</HashRouter>
</>;
};
export default AppAnomaly;

View file

@@ -0,0 +1,85 @@
import React, { FC, useMemo } from "preact/compat";
import { ForecastType, SeriesItem } from "../../../../types";
import { anomalyColors } from "../../../../utils/color";
import "./style.scss";
type Props = {
series: SeriesItem[];
};
const titles: Partial<Record<ForecastType, string>> = {
[ForecastType.yhat]: "yhat",
[ForecastType.yhatLower]: "yhat_lower/_upper",
[ForecastType.yhatUpper]: "yhat_lower/_upper",
[ForecastType.anomaly]: "anomalies",
[ForecastType.training]: "training data",
[ForecastType.actual]: "y"
};
const LegendAnomaly: FC<Props> = ({ series }) => {
const uniqSeriesStyles = useMemo(() => {
const uniqSeries = series.reduce((accumulator, currentSeries) => {
const hasForecast = Object.prototype.hasOwnProperty.call(currentSeries, "forecast");
const isNotUpper = currentSeries.forecast !== ForecastType.yhatUpper;
const isUniqForecast = !accumulator.find(s => s.forecast === currentSeries.forecast);
if (hasForecast && isUniqForecast && isNotUpper) {
accumulator.push(currentSeries);
}
return accumulator;
}, [] as SeriesItem[]);
const trainingSeries = {
...uniqSeries[0],
forecast: ForecastType.training,
color: anomalyColors[ForecastType.training],
};
uniqSeries.splice(1, 0, trainingSeries);
return uniqSeries.map(s => ({
...s,
color: typeof s.stroke === "string" ? s.stroke : anomalyColors[s.forecast || ForecastType.actual],
}));
}, [series]);
const container = document.getElementById("legendAnomaly");
if (!container) return null;
return <>
<div className="vm-legend-anomaly">
{/* TODO: remove .filter() after the correct training data has been added */}
{uniqSeriesStyles.filter(f => f.forecast !== ForecastType.training).map((s, i) => (
<div
key={`${i}_${s.forecast}`}
className="vm-legend-anomaly-item"
>
<svg>
{s.forecast === ForecastType.anomaly ? (
<circle
cx="15"
cy="7"
r="4"
fill={s.color}
stroke={s.color}
strokeWidth="1.4"
/>
) : (
<line
x1="0"
y1="7"
x2="30"
y2="7"
stroke={s.color}
strokeWidth={s.width || 1}
strokeDasharray={s.dash?.join(",")}
/>
)}
</svg>
<div className="vm-legend-anomaly-item__title">{titles[s.forecast || ForecastType.actual]}</div>
</div>
))}
</div>
</>;
};
export default LegendAnomaly;

View file

@@ -0,0 +1,23 @@
@use "src/styles/variables" as *;
.vm-legend-anomaly {
position: relative;
display: flex;
align-items: center;
justify-content: center;
flex-wrap: wrap;
gap: calc($padding-large * 2);
cursor: default;
&-item {
display: flex;
align-items: center;
justify-content: center;
gap: $padding-small;
svg {
width: 30px;
height: 14px;
}
}
}

View file

@@ -5,14 +5,15 @@ import uPlot, {
Series as uPlotSeries,
} from "uplot";
import {
getDefaultOptions,
addSeries,
delSeries,
getAxes,
getDefaultOptions,
getRangeX,
getRangeY,
getScales,
handleDestroy,
getAxes,
setBand,
setSelect
} from "../../../../utils/uplot";
import { MetricResult } from "../../../../api/types";
@@ -39,6 +40,7 @@ export interface LineChartProps {
setPeriod: ({ from, to }: { from: Date, to: Date }) => void;
layoutSize: ElementSize;
height?: number;
anomalyView?: boolean;
}
const LineChart: FC<LineChartProps> = ({
@@ -50,7 +52,8 @@ const LineChart: FC<LineChartProps> = ({
unit,
setPeriod,
layoutSize,
height
height,
anomalyView
}) => {
const { isDarkTheme } = useAppState();
@@ -68,7 +71,7 @@ const LineChart: FC<LineChartProps> = ({
seriesFocus,
setCursor,
resetTooltips
} = useLineTooltip({ u: uPlotInst, metrics, series, unit });
} = useLineTooltip({ u: uPlotInst, metrics, series, unit, anomalyView });
const options: uPlotOptions = {
...getDefaultOptions({ width: layoutSize.width, height }),
@@ -82,6 +85,7 @@ const LineChart: FC<LineChartProps> = ({
setSelect: [setSelect(setPlotScale)],
destroy: [handleDestroy],
},
bands: []
};
useEffect(() => {
@@ -103,6 +107,7 @@ const LineChart: FC<LineChartProps> = ({
if (!uPlotInst) return;
delSeries(uPlotInst);
addSeries(uPlotInst, series);
setBand(uPlotInst, series);
uPlotInst.redraw();
}, [series]);

View file

@@ -17,11 +17,14 @@ import ThemeControl from "../ThemeControl/ThemeControl";
import useDeviceDetect from "../../../hooks/useDeviceDetect";
import useBoolean from "../../../hooks/useBoolean";
import { getTenantIdFromUrl } from "../../../utils/tenants";
import { AppType } from "../../../types/appType";
const title = "Settings";
const { REACT_APP_TYPE } = process.env;
const isLogsApp = REACT_APP_TYPE === AppType.logs;
const GlobalSettings: FC = () => {
const { REACT_APP_LOGS } = process.env;
const { isMobile } = useDeviceDetect();
const appModeEnable = getAppModeEnable();
@@ -77,7 +80,7 @@ const GlobalSettings: FC = () => {
const controls = [
{
show: !appModeEnable && !REACT_APP_LOGS,
show: !appModeEnable && !isLogsApp,
component: <ServerConfigurator
stateServerUrl={stateServerUrl}
serverUrl={serverUrl}
@@ -86,7 +89,7 @@
/>
},
{
show: !REACT_APP_LOGS,
show: !isLogsApp,
component: <LimitsConfigurator
limits={limits}
onChange={setLimits}

View file

@@ -16,9 +16,9 @@ export interface ServerConfiguratorProps {
}
const fields: {label: string, type: DisplayType}[] = [
{ label: "Graph", type: "chart" },
{ label: "JSON", type: "code" },
{ label: "Table", type: "table" }
{ label: "Graph", type: DisplayType.chart },
{ label: "JSON", type: DisplayType.code },
{ label: "Table", type: DisplayType.table }
];
const LimitsConfigurator: FC<ServerConfiguratorProps> = ({ limits, onChange , onEnter }) => {

View file

@@ -2,6 +2,11 @@ import React, { FC, useEffect, useState } from "preact/compat";
import { ErrorTypes } from "../../../../types";
import TextField from "../../../Main/TextField/TextField";
import { isValidHttpUrl } from "../../../../utils/url";
import Button from "../../../Main/Button/Button";
import { StorageIcon } from "../../../Main/Icons";
import Tooltip from "../../../Main/Tooltip/Tooltip";
import { getFromStorage, removeFromStorage, saveToStorage } from "../../../../utils/storage";
import useBoolean from "../../../../hooks/useBoolean";
export interface ServerConfiguratorProps {
serverUrl: string
@@ -10,13 +15,21 @@ export interface ServerConfiguratorProps {
onEnter: () => void
}
const tooltipSave = {
enable: "Enable to save the modified server URL to local storage, preventing reset upon page refresh.",
disable: "Disable to stop saving the server URL to local storage, reverting to the default URL on page refresh."
};
const ServerConfigurator: FC<ServerConfiguratorProps> = ({
serverUrl,
stateServerUrl,
onChange ,
onEnter
}) => {
const {
value: enabledStorage,
toggle: handleToggleStorage,
} = useBoolean(!!getFromStorage("SERVER_URL"));
const [error, setError] = useState("");
const onChangeServer = (val: string) => {
@@ -30,16 +43,39 @@ const ServerConfigurator: FC<ServerConfiguratorProps> = ({
if (!isValidHttpUrl(stateServerUrl)) setError(ErrorTypes.validServer);
}, [stateServerUrl]);
useEffect(() => {
if (enabledStorage) {
saveToStorage("SERVER_URL", serverUrl);
} else {
removeFromStorage(["SERVER_URL"]);
}
}, [enabledStorage]);
return (
<TextField
autofocus
label="Server URL"
value={serverUrl}
error={error}
onChange={onChangeServer}
onEnter={onEnter}
inputmode="url"
/>
<div>
<div className="vm-server-configurator__title">
Server URL
</div>
<div className="vm-server-configurator-url">
<TextField
autofocus
value={serverUrl}
error={error}
onChange={onChangeServer}
onEnter={onEnter}
inputmode="url"
/>
<Tooltip title={enabledStorage ? tooltipSave.disable : tooltipSave.enable}>
<Button
className="vm-server-configurator-url__button"
variant="text"
color={enabledStorage ? "primary" : "gray"}
onClick={handleToggleStorage}
startIcon={<StorageIcon/>}
/>
</Tooltip>
</div>
</div>
);
};

View file

@@ -21,6 +21,12 @@
&__input {
width: 100%;
&_flex {
display: flex;
align-items: flex-start;
gap: $padding-global;
}
}
&__title {
@@ -33,6 +39,16 @@
margin-bottom: $padding-global;
}
&-url {
display: flex;
align-items: flex-start;
gap: $padding-small;
&__button {
margin-top: $padding-small;
}
}
&-footer {
display: flex;
align-items: center;

View file

@@ -6,12 +6,11 @@ import { useTimeDispatch, useTimeState } from "../../../state/time/TimeStateCont
import { AxisRange } from "../../../state/graph/reducer";
import Spinner from "../../Main/Spinner/Spinner";
import Alert from "../../Main/Alert/Alert";
import Button from "../../Main/Button/Button";
import "./style.scss";
import classNames from "classnames";
import useDeviceDetect from "../../../hooks/useDeviceDetect";
import { getDurationFromMilliseconds, getSecondsFromDuration, getStepFromDuration } from "../../../utils/time";
import useBoolean from "../../../hooks/useBoolean";
import WarningLimitSeries from "../../../pages/CustomPanel/WarningLimitSeries/WarningLimitSeries";
interface ExploreMetricItemGraphProps {
name: string,
@@ -40,12 +39,9 @@ const ExploreMetricItem: FC<ExploreMetricItemGraphProps> = ({
const stepSeconds = getSecondsFromDuration(customStep);
const heatmapStep = getDurationFromMilliseconds(stepSeconds * 10 * 1000);
const [isHeatmap, setIsHeatmap] = useState(false);
const [showAllSeries, setShowAllSeries] = useState(false);
const step = isHeatmap && customStep === defaultStep ? heatmapStep : customStep;
const {
value: showAllSeries,
setTrue: handleShowAll,
} = useBoolean(false);
const query = useMemo(() => {
const params = Object.entries({ job, instance })
@@ -99,18 +95,13 @@ with (q = ${queryBase}) (
{isLoading && <Spinner />}
{error && <Alert variant="error">{error}</Alert>}
{queryErrors[0] && <Alert variant="error">{queryErrors[0]}</Alert>}
{warning && <Alert variant="warning">
<div className="vm-explore-metrics-graph__warning">
<p>{warning}</p>
<Button
color="warning"
variant="outlined"
onClick={handleShowAll}
>
Show all
</Button>
</div>
</Alert>}
{warning && (
<WarningLimitSeries
warning={warning}
query={[query]}
onChange={setShowAllSeries}
/>
)}
{graphData && period && (
<GraphView
data={graphData}

File diff suppressed because one or more lines are too long

View file

@@ -18,6 +18,7 @@ interface SelectProps {
clearable?: boolean
searchable?: boolean
autofocus?: boolean
disabled?: boolean
onChange: (value: string) => void
}
@@ -30,6 +31,7 @@ const Select: FC<SelectProps> = ({
clearable = false,
searchable = false,
autofocus,
disabled,
onChange
}) => {
const { isDarkTheme } = useAppState();
@@ -64,11 +66,12 @@ const Select: FC<SelectProps> = ({
};
const handleFocus = () => {
if (disabled) return;
setOpenList(true);
};
const handleToggleList = (e: MouseEvent<HTMLDivElement>) => {
if (e.target instanceof HTMLInputElement) return;
if (e.target instanceof HTMLInputElement || disabled) return;
setOpenList(prev => !prev);
};
@@ -112,7 +115,8 @@ const Select: FC<SelectProps> = ({
<div
className={classNames({
"vm-select": true,
"vm-select_dark": isDarkTheme
"vm-select_dark": isDarkTheme,
"vm-select_disabled": disabled
})}
>
<div

View file

@@ -126,4 +126,18 @@
max-height: calc(($vh * 100) - 70px);
}
}
&_disabled {
* {
cursor: not-allowed;
}
.vm-select-input {
&-content {
input {
color: $color-text-disabled;
}
}
}
}
}

View file

@@ -24,6 +24,7 @@ import { promValueToNumber } from "../../../utils/metric";
import useDeviceDetect from "../../../hooks/useDeviceDetect";
import useElementSize from "../../../hooks/useElementSize";
import { ChartTooltipProps } from "../../Chart/ChartTooltip/ChartTooltip";
import LegendAnomaly from "../../Chart/Line/LegendAnomaly/LegendAnomaly";
export interface GraphViewProps {
data?: MetricResult[];
@@ -34,11 +35,12 @@ export interface GraphViewProps {
yaxis: YaxisState;
unit?: string;
showLegend?: boolean;
setYaxisLimits: (val: AxisRange) => void
setPeriod: ({ from, to }: { from: Date, to: Date }) => void
fullWidth?: boolean
height?: number
isHistogram?: boolean
setYaxisLimits: (val: AxisRange) => void;
setPeriod: ({ from, to }: { from: Date, to: Date }) => void;
fullWidth?: boolean;
height?: number;
isHistogram?: boolean;
anomalyView?: boolean;
}
const GraphView: FC<GraphViewProps> = ({
@@ -54,7 +56,8 @@ const GraphView: FC<GraphViewProps> = ({
alias = [],
fullWidth = true,
height,
isHistogram
isHistogram,
anomalyView,
}) => {
const { isMobile } = useDeviceDetect();
const { timezone } = useTimeState();
@@ -69,8 +72,8 @@ const GraphView: FC<GraphViewProps> = ({
const [legendValue, setLegendValue] = useState<ChartTooltipProps | null>(null);
const getSeriesItem = useMemo(() => {
return getSeriesItemContext(data, hideSeries, alias);
}, [data, hideSeries, alias]);
return getSeriesItemContext(data, hideSeries, alias, anomalyView);
}, [data, hideSeries, alias, anomalyView]);
const setLimitsYaxis = (values: { [key: string]: number[] }) => {
const limits = getLimitsYAxis(values, !isHistogram);
@@ -148,7 +151,7 @@ const GraphView: FC<GraphViewProps> = ({
const range = getMinMaxBuffer(getMinFromArray(resultAsNumber), getMaxFromArray(resultAsNumber));
const rangeStep = Math.abs(range[1] - range[0]);
return (avg > rangeStep * 1e10) ? results.map(() => avg) : results;
return (avg > rangeStep * 1e10) && !anomalyView ? results.map(() => avg) : results;
});
timeDataSeries.unshift(timeSeries);
setLimitsYaxis(tempValues);
@@ -192,6 +195,7 @@ const GraphView: FC<GraphViewProps> = ({
setPeriod={setPeriod}
layoutSize={containerSize}
height={height}
anomalyView={anomalyView}
/>
)}
{isHistogram && (
@@ -206,7 +210,7 @@ const GraphView: FC<GraphViewProps> = ({
onChangeLegend={setLegendValue}
/>
)}
{!isHistogram && showLegend && (
{!isHistogram && !anomalyView && showLegend && (
<Legend
labels={legend}
query={query}
@@ -221,6 +225,11 @@
legendValue={legendValue}
/>
)}
{anomalyView && showLegend && (
<LegendAnomaly
series={series as SeriesItem[]}
/>
)}
</div>
);
};

View file

@@ -7,6 +7,46 @@ export interface NavigationItem {
submenu?: NavigationItem[],
}
const explore = {
label: "Explore",
submenu: [
{
label: routerOptions[router.metrics].title,
value: router.metrics,
},
{
label: routerOptions[router.cardinality].title,
value: router.cardinality,
},
{
label: routerOptions[router.topQueries].title,
value: router.topQueries,
},
{
label: routerOptions[router.activeQueries].title,
value: router.activeQueries,
},
]
};
const tools = {
label: "Tools",
submenu: [
{
label: routerOptions[router.trace].title,
value: router.trace,
},
{
label: routerOptions[router.withTemplate].title,
value: router.withTemplate,
},
{
label: routerOptions[router.relabel].title,
value: router.relabel,
},
]
};
export const logsNavigation: NavigationItem[] = [
{
label: routerOptions[router.logs].title,
@@ -14,47 +54,22 @@ export const logsNavigation: NavigationItem[] = [
},
];
export const anomalyNavigation: NavigationItem[] = [
{
label: routerOptions[router.anomaly].title,
value: router.home,
},
{
label: routerOptions[router.home].title,
value: router.query,
}
];
export const defaultNavigation: NavigationItem[] = [
{
label: routerOptions[router.home].title,
value: router.home,
},
{
label: "Explore",
submenu: [
{
label: routerOptions[router.metrics].title,
value: router.metrics,
},
{
label: routerOptions[router.cardinality].title,
value: router.cardinality,
},
{
label: routerOptions[router.topQueries].title,
value: router.topQueries,
},
{
label: routerOptions[router.activeQueries].title,
value: router.activeQueries,
},
]
},
{
label: "Tools",
submenu: [
{
label: routerOptions[router.trace].title,
value: router.trace,
},
{
label: routerOptions[router.withTemplate].title,
value: router.withTemplate,
},
{
label: routerOptions[router.relabel].title,
value: router.relabel,
},
]
}
explore,
tools,
];

View file

@@ -14,9 +14,10 @@ interface LineTooltipHook {
metrics: MetricResult[];
series: uPlotSeries[];
unit?: string;
anomalyView?: boolean;
}
const useLineTooltip = ({ u, metrics, series, unit }: LineTooltipHook) => {
const useLineTooltip = ({ u, metrics, series, unit, anomalyView }: LineTooltipHook) => {
const [showTooltip, setShowTooltip] = useState(false);
const [tooltipIdx, setTooltipIdx] = useState({ seriesIdx: -1, dataIdx: -1 });
const [stickyTooltips, setStickyToolTips] = useState<ChartTooltipProps[]>([]);
@@ -60,14 +61,14 @@ const useLineTooltip = ({ u, metrics, series, unit }: LineTooltipHook) => {
point,
u: u,
id: `${seriesIdx}_${dataIdx}`,
title: groups.size > 1 ? `Query ${group}` : "",
title: groups.size > 1 && !anomalyView ? `Query ${group}` : "",
dates: [date ? dayjs(date * 1000).tz().format(DATE_FULL_TIMEZONE_FORMAT) : "-"],
value: formatPrettyNumber(value, min, max),
info: getMetricName(metricItem),
statsFormatted: seriesItem?.statsFormatted,
marker: `${seriesItem?.stroke}`,
};
}, [u, tooltipIdx, metrics, series, unit]);
}, [u, tooltipIdx, metrics, series, unit, anomalyView]);
const handleClick = useCallback(() => {
if (!showTooltip) return;

View file

@@ -4,9 +4,8 @@ import { getQueryRangeUrl, getQueryUrl } from "../api/query-range";
import { useAppState } from "../state/common/StateContext";
import { InstantMetricResult, MetricBase, MetricResult, QueryStats } from "../api/types";
import { isValidHttpUrl } from "../utils/url";
import { ErrorTypes, SeriesLimits } from "../types";
import { DisplayType, ErrorTypes, SeriesLimits } from "../types";
import debounce from "lodash.debounce";
import { DisplayType } from "../pages/CustomPanel/DisplayTypeSwitch";
import Trace from "../components/TraceQuery/Trace";
import { useQueryState } from "../state/query/QueryStateContext";
import { useTimeState } from "../state/time/TimeStateContext";
@@ -90,7 +89,7 @@ export const useFetchQuery = ({
const controller = new AbortController();
setFetchQueue([...fetchQueue, controller]);
try {
const isDisplayChart = displayType === "chart";
const isDisplayChart = displayType === DisplayType.chart;
const defaultLimit = showAllSeries ? Infinity : (+stateSeriesLimits[displayType] || Infinity);
let seriesLimit = defaultLimit;
const tempData: MetricBase[] = [];
@@ -165,7 +164,7 @@ export const useFetchQuery = ({
setQueryErrors([]);
setQueryStats([]);
const expr = predefinedQuery ?? query;
const displayChart = (display || displayType) === "chart";
const displayChart = (display || displayType) === DisplayType.chart;
if (!period) return;
if (!serverUrl) {
setError(ErrorTypes.emptyServer);

View file

@@ -0,0 +1,59 @@
import Header from "../Header/Header";
import React, { FC, useEffect } from "preact/compat";
import { Outlet, useLocation, useSearchParams } from "react-router-dom";
import qs from "qs";
import "../MainLayout/style.scss";
import { getAppModeEnable } from "../../utils/app-mode";
import classNames from "classnames";
import Footer from "../Footer/Footer";
import { routerOptions } from "../../router";
import { useFetchDashboards } from "../../pages/PredefinedPanels/hooks/useFetchDashboards";
import useDeviceDetect from "../../hooks/useDeviceDetect";
import ControlsAnomalyLayout from "./ControlsAnomalyLayout";
const AnomalyLayout: FC = () => {
const appModeEnable = getAppModeEnable();
const { isMobile } = useDeviceDetect();
const { pathname } = useLocation();
const [searchParams, setSearchParams] = useSearchParams();
useFetchDashboards();
const setDocumentTitle = () => {
const defaultTitle = "vmui for vmanomaly";
const routeTitle = routerOptions[pathname]?.title;
document.title = routeTitle ? `${routeTitle} - ${defaultTitle}` : defaultTitle;
};
// for supporting old links with search params
const redirectSearchToHashParams = () => {
const { search, href } = window.location;
if (search) {
const query = qs.parse(search, { ignoreQueryPrefix: true });
Object.entries(query).forEach(([key, value]) => searchParams.set(key, value as string));
setSearchParams(searchParams);
window.location.search = "";
}
const newHref = href.replace(/\/\?#\//, "/#/");
if (newHref !== href) window.location.replace(newHref);
};
useEffect(setDocumentTitle, [pathname]);
useEffect(redirectSearchToHashParams, []);
return <section className="vm-container">
<Header controlsComponent={ControlsAnomalyLayout}/>
<div
className={classNames({
"vm-container-body": true,
"vm-container-body_mobile": isMobile,
"vm-container-body_app": appModeEnable
})}
>
<Outlet/>
</div>
{!appModeEnable && <Footer/>}
</section>;
};
export default AnomalyLayout;

View file

@@ -0,0 +1,38 @@
import React, { FC } from "preact/compat";
import classNames from "classnames";
import TenantsConfiguration
from "../../components/Configurators/GlobalSettings/TenantsConfiguration/TenantsConfiguration";
import StepConfigurator from "../../components/Configurators/StepConfigurator/StepConfigurator";
import { TimeSelector } from "../../components/Configurators/TimeRangeSettings/TimeSelector/TimeSelector";
import CardinalityDatePicker from "../../components/Configurators/CardinalityDatePicker/CardinalityDatePicker";
import { ExecutionControls } from "../../components/Configurators/TimeRangeSettings/ExecutionControls/ExecutionControls";
import GlobalSettings from "../../components/Configurators/GlobalSettings/GlobalSettings";
import ShortcutKeys from "../../components/Main/ShortcutKeys/ShortcutKeys";
import { ControlsProps } from "../Header/HeaderControls/HeaderControls";
const ControlsAnomalyLayout: FC<ControlsProps> = ({
displaySidebar,
isMobile,
headerSetup,
accountIds
}) => {
return (
<div
className={classNames({
"vm-header-controls": true,
"vm-header-controls_mobile": isMobile,
})}
>
{headerSetup?.tenant && <TenantsConfiguration accountIds={accountIds || []}/>}
{headerSetup?.stepControl && <StepConfigurator/>}
{headerSetup?.timeSelector && <TimeSelector/>}
{headerSetup?.cardinalityDatePicker && <CardinalityDatePicker/>}
{headerSetup?.executionControls && <ExecutionControls/>}
<GlobalSettings/>
{!displaySidebar && <ShortcutKeys/>}
</div>
);
};
export default ControlsAnomalyLayout;

View file

@@ -2,7 +2,7 @@ import React, { FC, useMemo } from "preact/compat";
import { useNavigate } from "react-router-dom";
import router from "../../router";
import { getAppModeEnable, getAppModeParams } from "../../utils/app-mode";
import { LogoIcon, LogoLogsIcon } from "../../components/Main/Icons";
import { LogoAnomalyIcon, LogoIcon, LogoLogsIcon } from "../../components/Main/Icons";
import { getCssVariable } from "../../utils/theme";
import "./style.scss";
import classNames from "classnames";
@@ -13,13 +13,26 @@ import HeaderControls, { ControlsProps } from "./HeaderControls/HeaderControls";
import useDeviceDetect from "../../hooks/useDeviceDetect";
import useWindowSize from "../../hooks/useWindowSize";
import { ComponentType } from "react";
import { AppType } from "../../types/appType";
export interface HeaderProps {
controlsComponent: ComponentType<ControlsProps>
}
const { REACT_APP_TYPE } = process.env;
const isCustomApp = REACT_APP_TYPE === AppType.logs || REACT_APP_TYPE === AppType.anomaly;
const Logo = () => {
switch (REACT_APP_TYPE) {
case AppType.logs:
return <LogoLogsIcon/>;
case AppType.anomaly:
return <LogoAnomalyIcon/>;
default:
return <LogoIcon/>;
}
};
const Header: FC<HeaderProps> = ({ controlsComponent }) => {
const { REACT_APP_LOGS } = process.env;
const { isMobile } = useDeviceDetect();
const windowSize = useWindowSize();
@@ -70,12 +83,12 @@ const Header: FC<HeaderProps> = ({ controlsComponent }) => {
<div
className={classNames({
"vm-header-logo": true,
"vm-header-logo_logs": REACT_APP_LOGS
"vm-header-logo_logs": isCustomApp
})}
onClick={onClickLogo}
style={{ color }}
>
{REACT_APP_LOGS ? <LogoLogsIcon/> : <LogoIcon/>}
{<Logo/>}
</div>
)}
<HeaderNav
@@ -89,12 +102,12 @@ const Header: FC<HeaderProps> = ({ controlsComponent }) => {
className={classNames({
"vm-header-logo": true,
"vm-header-logo_mobile": true,
"vm-header-logo_logs": REACT_APP_LOGS
"vm-header-logo_logs": isCustomApp
})}
onClick={onClickLogo}
style={{ color }}
>
{REACT_APP_LOGS ? <LogoLogsIcon/> : <LogoIcon/>}
{<Logo/>}
</div>
)}
<HeaderControls

View file

@@ -8,7 +8,8 @@ import "./style.scss";
import NavItem from "./NavItem";
import NavSubItem from "./NavSubItem";
import classNames from "classnames";
import { defaultNavigation, logsNavigation } from "../../../constants/navigation";
import { anomalyNavigation, defaultNavigation, logsNavigation } from "../../../constants/navigation";
import { AppType } from "../../../types/appType";
interface HeaderNavProps {
color: string
@@ -17,21 +18,29 @@ interface HeaderNavProps {
}
const HeaderNav: FC<HeaderNavProps> = ({ color, background, direction }) => {
const { REACT_APP_LOGS } = process.env;
const appModeEnable = getAppModeEnable();
const { dashboardsSettings } = useDashboardsState();
const { pathname } = useLocation();
const [activeMenu, setActiveMenu] = useState(pathname);
const menu = useMemo(() => REACT_APP_LOGS ? logsNavigation : ([
...defaultNavigation,
{
label: routerOptions[router.dashboards].title,
value: router.dashboards,
hide: appModeEnable || !dashboardsSettings.length,
const menu = useMemo(() => {
switch (process.env.REACT_APP_TYPE) {
case AppType.logs:
return logsNavigation;
case AppType.anomaly:
return anomalyNavigation;
default:
return ([
...defaultNavigation,
{
label: routerOptions[router.dashboards].title,
value: router.dashboards,
hide: appModeEnable || !dashboardsSettings.length,
}
].filter(r => !r.hide));
}
].filter(r => !r.hide)), [appModeEnable, dashboardsSettings]);
}, [appModeEnable, dashboardsSettings]);
useEffect(() => {
setActiveMenu(pathname);

View file

@@ -8,17 +8,20 @@ import MenuBurger from "../../../components/Main/MenuBurger/MenuBurger";
import useDeviceDetect from "../../../hooks/useDeviceDetect";
import "./style.scss";
import useBoolean from "../../../hooks/useBoolean";
import { AppType } from "../../../types/appType";
interface SidebarHeaderProps {
background: string
color: string
}
const { REACT_APP_TYPE } = process.env;
const isLogsApp = REACT_APP_TYPE === AppType.logs;
const SidebarHeader: FC<SidebarHeaderProps> = ({
background,
color,
}) => {
const { REACT_APP_LOGS } = process.env;
const { pathname } = useLocation();
const { isMobile } = useDeviceDetect();
@@ -61,7 +64,7 @@ const SidebarHeader: FC<SidebarHeaderProps> = ({
/>
</div>
<div className="vm-header-sidebar-menu-settings">
{!isMobile && !REACT_APP_LOGS && <ShortcutKeys showTitle={true}/>}
{!isMobile && !isLogsApp && <ShortcutKeys showTitle={true}/>}
</div>
</div>
</div>;

View file

@@ -1,7 +1,7 @@
import Header from "../Header/Header";
import React, { FC, useEffect } from "preact/compat";
import { Outlet, useLocation } from "react-router-dom";
import "./style.scss";
import "../MainLayout/style.scss";
import { getAppModeEnable } from "../../utils/app-mode";
import classNames from "classnames";
import Footer from "../Footer/Footer";

View file

@@ -1,27 +0,0 @@
@use "src/styles/variables" as *;
.vm-container {
display: flex;
flex-direction: column;
min-height: calc(($vh * 100) - var(--scrollbar-height));
&-body {
flex-grow: 1;
min-height: 100%;
padding: $padding-medium;
background-color: $color-background-body;
&_mobile {
padding: $padding-small 0 0;
}
@media (max-width: 768px) {
padding: $padding-small 0 0;
}
&_app {
padding: $padding-small 0;
background-color: transparent;
}
}
}

View file

@@ -6,13 +6,12 @@ import "./style.scss";
import { getAppModeEnable } from "../../utils/app-mode";
import classNames from "classnames";
import Footer from "../Footer/Footer";
import router, { routerOptions } from "../../router";
import { routerOptions } from "../../router";
import { useFetchDashboards } from "../../pages/PredefinedPanels/hooks/useFetchDashboards";
import useDeviceDetect from "../../hooks/useDeviceDetect";
import ControlsMainLayout from "./ControlsMainLayout";
const MainLayout: FC = () => {
const { REACT_APP_LOGS } = process.env;
const appModeEnable = getAppModeEnable();
const { isMobile } = useDeviceDetect();
const { pathname } = useLocation();
@@ -22,7 +21,7 @@ const MainLayout: FC = () => {
const setDocumentTitle = () => {
const defaultTitle = "vmui";
const routeTitle = REACT_APP_LOGS ? routerOptions[router.logs]?.title : routerOptions[pathname]?.title;
const routeTitle = routerOptions[pathname]?.title;
document.title = routeTitle ? `${routeTitle} - ${defaultTitle}` : defaultTitle;
};

View file

@@ -112,7 +112,7 @@ const CardinalityConfigurator: FC<CardinalityTotalsProps> = ({ isPrometheus, isC
{isCluster &&
<div className="vm-cardinality-configurator-bottom-helpful">
<Hyperlink
href="https://docs.victoriametrics.com/#cardinality-explorer-statistic-inaccurancy"
href="https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cardinality-explorer-statistic-inaccuracy"
withIcon={true}
>
<WikiIcon/>

View file

@@ -0,0 +1,72 @@
import React, { FC } from "react";
import GraphView from "../../../components/Views/GraphView/GraphView";
import GraphTips from "../../../components/Chart/GraphTips/GraphTips";
import GraphSettings from "../../../components/Configurators/GraphSettings/GraphSettings";
import { AxisRange } from "../../../state/graph/reducer";
import { useTimeDispatch, useTimeState } from "../../../state/time/TimeStateContext";
import { useGraphDispatch, useGraphState } from "../../../state/graph/GraphStateContext";
import useDeviceDetect from "../../../hooks/useDeviceDetect";
import { useQueryState } from "../../../state/query/QueryStateContext";
import { MetricResult } from "../../../api/types";
import { createPortal } from "preact/compat";
type Props = {
isHistogram: boolean;
graphData: MetricResult[];
controlsRef: React.RefObject<HTMLDivElement>;
anomalyView?: boolean;
}
const GraphTab: FC<Props> = ({ isHistogram, graphData, controlsRef, anomalyView }) => {
const { isMobile } = useDeviceDetect();
const { customStep, yaxis } = useGraphState();
const { period } = useTimeState();
const { query } = useQueryState();
const timeDispatch = useTimeDispatch();
const graphDispatch = useGraphDispatch();
const setYaxisLimits = (limits: AxisRange) => {
graphDispatch({ type: "SET_YAXIS_LIMITS", payload: limits });
};
const toggleEnableLimits = () => {
graphDispatch({ type: "TOGGLE_ENABLE_YAXIS_LIMITS" });
};
const setPeriod = ({ from, to }: {from: Date, to: Date}) => {
timeDispatch({ type: "SET_PERIOD", payload: { from, to } });
};
const controls = (
<div className="vm-custom-panel-body-header__graph-controls">
<GraphTips/>
<GraphSettings
yaxis={yaxis}
setYaxisLimits={setYaxisLimits}
toggleEnableLimits={toggleEnableLimits}
/>
</div>
);
return (
<>
{controlsRef.current && createPortal(controls, controlsRef.current)}
<GraphView
data={graphData}
period={period}
customStep={customStep}
query={query}
yaxis={yaxis}
setYaxisLimits={setYaxisLimits}
setPeriod={setPeriod}
height={isMobile ? window.innerHeight * 0.5 : 500}
isHistogram={isHistogram}
anomalyView={anomalyView}
/>
</>
);
};
export default GraphTab;

View file

@@ -0,0 +1,47 @@
import React, { FC } from "react";
import { InstantMetricResult } from "../../../api/types";
import { createPortal, useMemo, useState } from "preact/compat";
import TableView from "../../../components/Views/TableView/TableView";
import TableSettings from "../../../components/Table/TableSettings/TableSettings";
import { getColumns } from "../../../hooks/useSortedCategories";
import { useCustomPanelDispatch, useCustomPanelState } from "../../../state/customPanel/CustomPanelStateContext";
type Props = {
liveData: InstantMetricResult[];
controlsRef: React.RefObject<HTMLDivElement>;
}
const TableTab: FC<Props> = ({ liveData, controlsRef }) => {
const { tableCompact } = useCustomPanelState();
const customPanelDispatch = useCustomPanelDispatch();
const [displayColumns, setDisplayColumns] = useState<string[]>();
const columns = useMemo(() => getColumns(liveData || []).map(c => c.key), [liveData]);
const toggleTableCompact = () => {
customPanelDispatch({ type: "TOGGLE_TABLE_COMPACT" });
};
const controls = (
<TableSettings
columns={columns}
defaultColumns={displayColumns}
onChangeColumns={setDisplayColumns}
tableCompact={tableCompact}
toggleTableCompact={toggleTableCompact}
/>
);
return (
<>
{controlsRef.current && createPortal(controls, controlsRef.current)}
<TableView
data={liveData}
displayColumns={displayColumns}
/>
</>
);
};
export default TableTab;

View file

@@ -0,0 +1,45 @@
import React, { FC, RefObject } from "react";
import GraphTab from "./GraphTab";
import JsonView from "../../../components/Views/JsonView/JsonView";
import TableTab from "./TableTab";
import { InstantMetricResult, MetricResult } from "../../../api/types";
import { DisplayType } from "../../../types";
type Props = {
graphData?: MetricResult[];
liveData?: InstantMetricResult[];
isHistogram: boolean;
displayType: DisplayType;
controlsRef: RefObject<HTMLDivElement>;
}
const CustomPanelTabs: FC<Props> = ({
graphData,
liveData,
isHistogram,
displayType,
controlsRef
}) => {
if (displayType === DisplayType.code && liveData) {
return <JsonView data={liveData} />;
}
if (displayType === DisplayType.table && liveData) {
return <TableTab
liveData={liveData}
controlsRef={controlsRef}
/>;
}
if (displayType === DisplayType.chart && graphData) {
return <GraphTab
graphData={graphData}
isHistogram={isHistogram}
controlsRef={controlsRef}
/>;
}
return null;
};
export default CustomPanelTabs;

View file

@@ -0,0 +1,43 @@
import { useCustomPanelState } from "../../../state/customPanel/CustomPanelStateContext";
import TracingsView from "../../../components/TraceQuery/TracingsView";
import React, { FC, useEffect, useState } from "preact/compat";
import Trace from "../../../components/TraceQuery/Trace";
import { DisplayType } from "../../../types";
type Props = {
traces?: Trace[];
displayType: DisplayType;
}
const CustomPanelTraces: FC<Props> = ({ traces, displayType }) => {
const { isTracingEnabled } = useCustomPanelState();
const [tracesState, setTracesState] = useState<Trace[]>([]);
const handleTraceDelete = (trace: Trace) => {
const updatedTraces = tracesState.filter((data) => data.idValue !== trace.idValue);
setTracesState([...updatedTraces]);
};
useEffect(() => {
if (traces) {
setTracesState([...tracesState, ...traces]);
}
}, [traces]);
useEffect(() => {
setTracesState([]);
}, [displayType]);
return <>
{isTracingEnabled && (
<div className="vm-custom-panel__trace">
<TracingsView
traces={tracesState}
onDeleteClick={handleTraceDelete}
/>
</div>
)}
</>;
};
export default CustomPanelTraces;

View file

@@ -2,8 +2,7 @@ import React, { FC } from "preact/compat";
import { useCustomPanelDispatch, useCustomPanelState } from "../../state/customPanel/CustomPanelStateContext";
import { ChartIcon, CodeIcon, TableIcon } from "../../components/Main/Icons";
import Tabs from "../../components/Main/Tabs/Tabs";
export type DisplayType = "table" | "chart" | "code";
import { DisplayType } from "../../types";
type DisplayTab = {
value: DisplayType
@@ -13,9 +12,9 @@ type DisplayTab = {
}
export const displayTypeTabs: DisplayTab[] = [
{ value: "chart", icon: <ChartIcon/>, label: "Graph", prometheusCode: 0 },
{ value: "code", icon: <CodeIcon/>, label: "JSON", prometheusCode: 3 },
{ value: "table", icon: <TableIcon/>, label: "Table", prometheusCode: 1 }
{ value: DisplayType.chart, icon: <ChartIcon/>, label: "Graph", prometheusCode: 0 },
{ value: DisplayType.code, icon: <CodeIcon/>, label: "JSON", prometheusCode: 3 },
{ value: DisplayType.table, icon: <TableIcon/>, label: "Table", prometheusCode: 1 }
];
export const DisplayTypeSwitch: FC = () => {

Some files were not shown because too many files have changed in this diff.