Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

Aliaksandr Valialkin 2022-06-09 13:33:07 +03:00
commit 7fc62feddc
No known key found for this signature in database
GPG key ID: A72BEC6CD3D0DED1
130 changed files with 4265 additions and 2164 deletions

View file

@ -17,10 +17,10 @@ VictoriaMetrics is available in [binary releases](https://github.com/VictoriaMet
and [source code](https://github.com/VictoriaMetrics/VictoriaMetrics).
Just download VictoriaMetrics and follow [these instructions](https://docs.victoriametrics.com/Quick-Start.html).
Cluster version of VictoriaMetrics is available [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html).
The cluster version of VictoriaMetrics is available [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html).
Learn more about [key concepts](https://docs.victoriametrics.com/keyConcepts.html) of VictoriaMetrics and follow
[QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for better experience.
Learn more about [key concepts](https://docs.victoriametrics.com/keyConcepts.html) of VictoriaMetrics and follow the
[QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for a better experience.
[Contact us](mailto:info@victoriametrics.com) if you need enterprise support for VictoriaMetrics.
See [features available in enterprise package](https://victoriametrics.com/products/enterprise/).
@ -32,8 +32,8 @@ from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/rele
VictoriaMetrics has the following prominent features:
* It can be used as long-term storage for Prometheus. See [these docs](#prometheus-setup) for details.
* It can be used as drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
* It can be used as a drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as a drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
* It features easy setup and operation:
* VictoriaMetrics consists of a single [small executable](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d) without external dependencies.
* All the configuration is done via explicit command-line flags with reasonable defaults.
@ -243,7 +243,9 @@ Prometheus doesn't drop data during VictoriaMetrics restart. See [this article](
## vmui
VictoriaMetrics provides a UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`.
The UI allows exploring query results via graphs and tables. Graphs support scrolling and zooming:
The UI allows exploring query results via graphs and tables. It also provides support for [cardinality explorer](#cardinality-explorer).
Graphs in vmui support scrolling and zooming:
* Drag the graph to the left / right in order to move the displayed time range into the past / future.
* Hold `Ctrl` (or `Cmd` on MacOS) and scroll up / down in order to zoom in / out the graph.
@ -261,6 +263,23 @@ VMUI allows investigating correlations between two queries on the same graph. Ju
See the [example VMUI at VictoriaMetrics playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/?g0.expr=100%20*%20sum(rate(process_cpu_seconds_total))%20by%20(job)&g0.range_input=1d).
## Cardinality explorer
VictoriaMetrics provides the ability to explore time series cardinality on the `cardinality` tab in [vmui](#vmui) in the following ways:
- To identify metric names with the highest number of series.
- To identify label=value pairs with the highest number of series.
- To identify labels with the highest number of unique values.
By default, cardinality explorer analyzes time series for the current date. A different day can be selected at the top right corner.
By default, all the time series for the selected date are analyzed. It is possible to narrow down the analysis to series
matching the specified [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors).
Cardinality explorer is built on top of [/api/v1/status/tsdb](#tsdb-stats).
See [cardinality explorer playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/cardinality).
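For programmatic access to the same data, the underlying endpoint can be queried directly. Below is a minimal Go sketch, assuming the Prometheus-compatible response shape for `/api/v1/status/tsdb` (entries with `name` and `value` fields under `seriesCountByMetricName`); adjust the struct if the actual payload differs, and note the host is a placeholder.
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// tsdbStatusResponse models only the fields this sketch needs; the shape is
// assumed from the Prometheus-compatible /api/v1/status/tsdb response.
type tsdbStatusResponse struct {
	Data struct {
		SeriesCountByMetricName []struct {
			Name  string `json:"name"`
			Value int    `json:"value"`
		} `json:"seriesCountByMetricName"`
	} `json:"data"`
}

func main() {
	// topN limits the number of returned entries; the date defaults to today.
	resp, err := http.Get("http://victoriametrics:8428/api/v1/status/tsdb?topN=5")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var r tsdbStatusResponse
	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
		log.Fatal(err)
	}
	for _, e := range r.Data.SeriesCountByMetricName {
		fmt.Printf("%s: %d series\n", e.Name, e.Value)
	}
}
```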
## How to apply new config to VictoriaMetrics
VictoriaMetrics is configured via command-line flags, so it must be restarted in order to apply new command-line flags:
@ -824,6 +843,11 @@ Each JSON line contains samples for a single time series. An example output:
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```bash
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
Optional `max_rows_per_line` arg may be added to the request for limiting the maximum number of rows exported per JSON line.
Optional `reduce_mem_usage=1` arg may be added to the request for reducing memory usage when exporting a big number of time series.
@ -863,6 +887,11 @@ for metrics to export.
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```bash
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
The exported CSV data can be imported to VictoriaMetrics via [/api/v1/import/csv](#how-to-import-csv-data).
@ -885,6 +914,11 @@ wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```bash
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
The exported data can be imported to VictoriaMetrics via [/api/v1/import/native](#how-to-import-data-in-native-format).
The native export format may change in an incompatible way between VictoriaMetrics releases, so the data exported from the release X
@ -1079,8 +1113,13 @@ VictoriaMetrics exports [Prometheus-compatible federation data](https://promethe
at `http://<victoriametrics-addr>:8428/federate?match[]=<timeseries_selector_for_federation>`.
Optional `start` and `end` args may be added to the request in order to scrape the last point for each selected time series on the `[start ... end]` interval.
`start` and `end` may contain either unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values. By default, the last point
on the interval `[now - max_lookback ... now]` is scraped for each time series. The default value for `max_lookback` is `5m` (5 minutes), but it can be overridden.
`start` and `end` may contain either unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```bash
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543486'
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
By default, the last point on the interval `[now - max_lookback ... now]` is scraped for each time series. The default value for `max_lookback` is `5m` (5 minutes), but it can be overridden with the `max_lookback` query arg.
For instance, `/federate?match[]=up&max_lookback=1h` would return the last points on the `[now - 1h ... now]` interval. This may be useful for time series federation
with scrape intervals exceeding `5m`.
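For completeness, here is a small Go sketch of the same request built with `net/url`, using only the `/federate` endpoint and the `match[]` / `max_lookback` args described above; the target host is a placeholder.
```go
package main

import (
	"io"
	"log"
	"net/http"
	"net/url"
	"os"
)

func main() {
	params := url.Values{}
	params.Add("match[]", `up`)
	// Widen the lookback window for series scraped less often than 5m.
	params.Add("max_lookback", "1h")
	u := "http://victoriametrics:8428/federate?" + params.Encode()

	resp, err := http.Get(u)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	// The response is in Prometheus text exposition format.
	if _, err := io.Copy(os.Stdout, resp.Body); err != nil {
		log.Fatal(err)
	}
}
```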
@ -1187,9 +1226,18 @@ values and timestamps. These are sorted and compressed raw time series values. A
index files for searching for specific series in the values and timestamps files.
`Parts` are periodically merged into bigger parts. The resulting `part` is constructed
under `<-storageDataPath>/data/{small,big}/YYYY_MM/tmp` subdirectory. When the resulting `part` is complete, it is atomically moved from the `tmp`
under `<-storageDataPath>/data/{small,big}/YYYY_MM/tmp` subdirectory.
When the resulting `part` is complete, it is atomically moved from the `tmp`
to its own subdirectory, while the source parts are atomically removed. The end result is that the source
parts are substituted by a single resulting bigger `part` in the `<-storageDataPath>/data/{small,big}/YYYY_MM/` directory.
VictoriaMetrics doesn't merge parts if their summary size exceeds free disk space.
This prevents potential out-of-disk-space errors during merge.
The number of parts may significantly increase over time under free disk space shortage.
This increases overhead during data querying, since VictoriaMetrics needs to read data from
a bigger number of parts for each request. That's why it is recommended to have at least 20%
of free disk space under the directory pointed to by the `-storageDataPath` command-line flag.
Information about the merging process is available in the [single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176) Grafana dashboards.
See more details in [monitoring docs](#monitoring).
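The merge guard described above can be summarized with a short illustrative sketch; this is a simplification for clarity, not the actual VictoriaMetrics code:
```go
package main

import "fmt"

// canMergeParts is an illustrative sketch (not the actual VictoriaMetrics
// code) of the guard described above: a merge is skipped when the summary
// size of the source parts exceeds the currently available disk space.
func canMergeParts(partSizes []uint64, freeDiskSpace uint64) bool {
	var summarySize uint64
	for _, size := range partSizes {
		summarySize += size
	}
	// The resulting part must be written before the source parts are removed,
	// so the merge needs at least the summary size of free space.
	return summarySize <= freeDiskSpace
}

func main() {
	fmt.Println(canMergeParts([]uint64{1 << 30, 2 << 30}, 10<<30)) // true: enough free space
	fmt.Println(canMergeParts([]uint64{8 << 30, 4 << 30}, 10<<30)) // false: merge is skipped
}
```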
@ -1259,7 +1307,7 @@ The downsampling can be evaluated for free by downloading and using enterprise b
## Multi-tenancy
Single-node VictoriaMetrics doesn't support multi-tenancy. Use [cluster version](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#multitenancy) instead.
Single-node VictoriaMetrics doesn't support multi-tenancy. Use the [cluster version](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#multitenancy) instead.
## Scalability and cluster version
@ -1267,7 +1315,7 @@ Though single-node VictoriaMetrics cannot scale to multiple nodes, it is optimiz
This means that a single-node VictoriaMetrics may scale vertically and substitute a moderately sized cluster built with competing solutions
such as Thanos, Uber M3, InfluxDB or TimescaleDB. See [vertical scalability benchmarks](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae).
So try single-node VictoriaMetrics at first and then [switch to cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster) if you still need
So try single-node VictoriaMetrics at first and then [switch to the cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster) if you still need
horizontally scalable long-term remote storage for really large Prometheus deployments.
[Contact us](mailto:info@victoriametrics.com) for enterprise support.
@ -1342,7 +1390,7 @@ The most interesting metrics are:
aka [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).
* `increase(vm_new_timeseries_created_total[1h])` - time series [churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) during the previous hour.
* `sum(vm_rows{type=~"storage/.*"})` - total number of `(timestamp, value)` data points in the database.
* `sum(rate(vm_rows_inserted_total[5m]))` - ingestion rate, i.e. how many samples are inserted int the database per second.
* `sum(rate(vm_rows_inserted_total[5m]))` - ingestion rate, i.e. how many samples are inserted in the database per second.
* `vm_free_disk_space_bytes` - free space left at `-storageDataPath`.
* `sum(vm_data_size_bytes)` - the total size of data on disk.
* `increase(vm_slow_row_inserts_total[5m])` - the number of slow inserts during the last 5 minutes.
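Any of these metrics can be spot-checked without Prometheus by scraping the `/metrics` endpoint directly (the endpoint is listed in the help output elsewhere in this commit); a small Go sketch, with the host as a placeholder:
```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://victoriametrics:8428/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print only the free-disk-space gauge from the exposition output.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "vm_free_disk_space_bytes") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```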
@ -1365,6 +1413,8 @@ VictoriaMetrics returns TSDB stats at `/api/v1/status/tsdb` page in the way simi
* `match[]=SELECTOR` where `SELECTOR` is an arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) for series to take into account during stats calculation. By default all the series are taken into account.
* `extra_label=LABEL=VALUE`. See [these docs](#prometheus-querying-api-enhancements) for more details.
VictoriaMetrics provides a UI on top of `/api/v1/status/tsdb` - see [cardinality explorer docs](#cardinality-explorer).
## Query tracing
VictoriaMetrics supports query tracing, which can be used for determining bottlenecks during query processing.
@ -1375,7 +1425,7 @@ In this case VictoriaMetrics puts query trace into `trace` field in the output J
For example, the following command:
```bash
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq -r '.trace'
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq '.trace'
```
would return the following trace:
@ -1502,7 +1552,7 @@ See also more advanced [cardinality limiter in vmagent](https://docs.victoriamet
It may be needed in order to suppress the default gap-filling algorithm used by VictoriaMetrics - by default it assumes
each time series is continuous rather than discrete, so it fills gaps between real samples at regular intervals.
* Metrics and labels leading to [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) or [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) can be determined at `/api/v1/status/tsdb` page. See [these docs](#tsdb-stats) for details.
* Metrics and labels leading to [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) or [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) can be determined via [cardinality explorer](#cardinality-explorer) and via [/api/v1/status/tsdb](#tsdb-stats) endpoint.
* New time series can be logged if `-logNewSeries` command-line flag is passed to VictoriaMetrics.

View file

@ -103,7 +103,8 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
fmt.Fprintf(w, "Useful endpoints:</br>")
httpserver.WriteAPIHelp(w, [][2]string{
{"vmui", "Web UI"},
{"targets", "discovered targets list"},
{"targets", "status for discovered active targets"},
{"service-discovery", "labels before and after relabeling for discovered targets"},
{"api/v1/targets", "advanced information about discovered targets in JSON format"},
{"config", "-promscrape.config contents"},
{"metrics", "available service metrics"},

View file

@ -167,7 +167,8 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
fmt.Fprintf(w, "See docs at <a href='https://docs.victoriametrics.com/vmagent.html'>https://docs.victoriametrics.com/vmagent.html</a></br>")
fmt.Fprintf(w, "Useful endpoints:</br>")
httpserver.WriteAPIHelp(w, [][2]string{
{"targets", "discovered targets list"},
{"targets", "status for discovered active targets"},
{"service-discovery", "labels before and after relabeling for discovered targets"},
{"api/v1/targets", "advanced information about discovered targets in JSON format"},
{"config", "-promscrape.config contents"},
{"metrics", "available service metrics"},
@ -178,6 +179,11 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
}
path := strings.Replace(r.URL.Path, "//", "/", -1)
if strings.HasPrefix(path, "datadog/") {
// Trim the trailing slash from paths starting with /datadog/ in order to support legacy DataDog agent.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2670
path = strings.TrimSuffix(path, "/")
}
switch path {
case "/api/v1/write":
prometheusWriteRequests.Inc()
@ -262,7 +268,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/datadog/intake/":
case "/datadog/intake":
datadogIntakeRequests.Inc()
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, `{}`)
@ -271,6 +277,10 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
promscrapeTargetsRequests.Inc()
promscrape.WriteHumanReadableTargetsStatus(w, r)
return true
case "/service-discovery":
promscrapeServiceDiscoveryRequests.Inc()
promscrape.WriteServiceDiscovery(w, r)
return true
case "/target_response":
promscrapeTargetResponseRequests.Inc()
if err := promscrape.WriteTargetResponse(w, r); err != nil {
@ -356,6 +366,11 @@ func processMultitenantRequest(w http.ResponseWriter, r *http.Request, path stri
httpserver.Errorf(w, r, "cannot obtain auth token: %s", err)
return true
}
if strings.HasPrefix(p.Suffix, "datadog/") {
// Trim the trailing slash from paths starting with /datadog/ in order to support legacy DataDog agent.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2670
p.Suffix = strings.TrimSuffix(p.Suffix, "/")
}
switch p.Suffix {
case "prometheus/", "prometheus", "prometheus/api/v1/write":
prometheusWriteRequests.Inc()
@ -439,7 +454,7 @@ func processMultitenantRequest(w http.ResponseWriter, r *http.Request, path stri
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "datadog/intake/":
case "datadog/intake":
datadogIntakeRequests.Inc()
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, `{}`)
@ -476,10 +491,11 @@ var (
datadogValidateRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
datadogCheckRunRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)
datadogIntakeRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/intake/", protocol="datadog"}`)
datadogIntakeRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/intake", protocol="datadog"}`)
promscrapeTargetsRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/targets"}`)
promscrapeAPIV1TargetsRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/targets"}`)
promscrapeTargetsRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/targets"}`)
promscrapeServiceDiscoveryRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/service-discovery"}`)
promscrapeAPIV1TargetsRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/targets"}`)
promscrapeTargetResponseRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/target_response"}`)
promscrapeTargetResponseErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/target_response"}`)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@ -88,7 +88,7 @@ run-vmalert-sd: vmalert
-configCheckInterval=10s
replay-vmalert: vmalert
./bin/vmalert -rule=app/vmalert/config/testdata/rules-replay-good.rules \
./bin/vmalert -rule=app/vmalert/config/testdata/rules/rules-replay-good.rules \
-datasource.url=http://localhost:8428 \
-remoteWrite.url=http://localhost:8428 \
-external.label=cluster=east-1 \

View file

@ -101,6 +101,10 @@ name: <string>
# How often rules in the group are evaluated.
[ interval: <duration> | default = -evaluationInterval flag ]
# Limit the number of alerts an alerting rule can produce and the number
# of series a recording rule can produce. 0 means no limit.
[ limit: <int> | default = 0 ]
# How many rules execute at once within a group. Increasing concurrency may
# speed up the evaluation round.
[ concurrency: <integer> | default = 1 ]
@ -535,6 +539,7 @@ See full description for these flags in `./vmalert --help`.
* Graphite engine isn't supported yet;
* `query` template function is disabled for performance reasons (might be changed in future);
* the group's `limit` param has no effect during replay (might be changed in the future);
## Monitoring

View file

@ -240,7 +240,7 @@ const resolvedRetention = 15 * time.Minute
// Exec executes AlertingRule expression via the given Querier.
// Based on the Querier results AlertingRule maintains notifier.Alerts
func (ar *AlertingRule) Exec(ctx context.Context, ts time.Time) ([]prompbmarshal.TimeSeries, error) {
func (ar *AlertingRule) Exec(ctx context.Context, ts time.Time, limit int) ([]prompbmarshal.TimeSeries, error) {
start := time.Now()
qMetrics, err := ar.q.Query(ctx, ar.Expr, ts)
ar.mu.Lock()
@ -307,7 +307,7 @@ func (ar *AlertingRule) Exec(ctx context.Context, ts time.Time) ([]prompbmarshal
a.ActiveAt = ts
ar.alerts[h] = a
}
var numActivePending int
for h, a := range ar.alerts {
// if the alert wasn't updated in this iteration,
// it means it is already resolved
@ -324,12 +324,17 @@ func (ar *AlertingRule) Exec(ctx context.Context, ts time.Time) ([]prompbmarshal
}
continue
}
numActivePending++
if a.State == notifier.StatePending && ts.Sub(a.ActiveAt) >= ar.For {
a.State = notifier.StateFiring
a.Start = ts
alertsFired.Inc()
}
}
if limit > 0 && numActivePending > limit {
ar.alerts = map[uint64]*notifier.Alert{}
return nil, fmt.Errorf("exec exceeded limit of %d with %d alerts", limit, numActivePending)
}
return ar.toTimeSeries(ts.Unix()), nil
}

View file

@ -304,7 +304,7 @@ func TestAlertingRule_Exec(t *testing.T) {
for _, step := range tc.steps {
fq.reset()
fq.add(step...)
if _, err := tc.rule.Exec(context.TODO(), time.Now()); err != nil {
if _, err := tc.rule.Exec(context.TODO(), time.Now(), 0); err != nil {
t.Fatalf("unexpected err: %s", err)
}
// artificial delay between applying steps
@ -624,14 +624,14 @@ func TestAlertingRule_Exec_Negative(t *testing.T) {
// successful attempt
fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "bar"))
_, err := ar.Exec(context.TODO(), time.Now())
_, err := ar.Exec(context.TODO(), time.Now(), 0)
if err != nil {
t.Fatal(err)
}
// label `job` will collide with rule extra label and will make both time series equal
fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "baz"))
_, err = ar.Exec(context.TODO(), time.Now())
_, err = ar.Exec(context.TODO(), time.Now(), 0)
if !errors.Is(err, errDuplicate) {
t.Fatalf("expected to have %s error; got %s", errDuplicate, err)
}
@ -640,7 +640,7 @@ func TestAlertingRule_Exec_Negative(t *testing.T) {
expErr := "connection reset by peer"
fq.setErr(errors.New(expErr))
_, err = ar.Exec(context.TODO(), time.Now())
_, err = ar.Exec(context.TODO(), time.Now(), 0)
if err == nil {
t.Fatalf("expected to get err; got nil")
}
@ -649,6 +649,50 @@ func TestAlertingRule_Exec_Negative(t *testing.T) {
}
}
func TestAlertingRuleLimit(t *testing.T) {
fq := &fakeQuerier{}
ar := newTestAlertingRule("test", 0)
ar.Labels = map[string]string{"job": "test"}
ar.q = fq
ar.For = time.Minute
testCases := []struct {
limit int
err string
tssNum int
}{
{
limit: 0,
tssNum: 4,
},
{
limit: -1,
tssNum: 4,
},
{
limit: 1,
err: "exec exceeded limit of 1 with 2 alerts",
tssNum: 0,
},
{
limit: 4,
tssNum: 4,
},
}
var (
err error
timestamp = time.Now()
)
fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "bar"))
fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "bar", "job"))
for _, testCase := range testCases {
_, err = ar.Exec(context.TODO(), timestamp, testCase.limit)
if err != nil && !strings.EqualFold(err.Error(), testCase.err) {
t.Fatal(err)
}
}
fq.reset()
}
func TestAlertingRule_Template(t *testing.T) {
testCases := []struct {
rule *AlertingRule
@ -761,7 +805,7 @@ func TestAlertingRule_Template(t *testing.T) {
tc.rule.GroupID = fakeGroup.ID()
tc.rule.q = fq
fq.add(tc.metrics...)
if _, err := tc.rule.Exec(context.TODO(), time.Now()); err != nil {
if _, err := tc.rule.Exec(context.TODO(), time.Now(), 0); err != nil {
t.Fatalf("unexpected err: %s", err)
}
for hash, expAlert := range tc.expAlerts {

View file

@ -27,6 +27,7 @@ type Group struct {
File string
Name string `yaml:"name"`
Interval *promutils.Duration `yaml:"interval,omitempty"`
Limit int `yaml:"limit,omitempty"`
Rules []Rule `yaml:"rules"`
Concurrency int `yaml:"concurrency"`
// ExtraFilterLabels is a list of label filters applied to every rule

View file

@ -489,6 +489,22 @@ rules:
name: TestGroup
params:
nocache: ["0"]
rules:
- alert: foo
expr: sum by(job) (up == 1)
`)
})
t.Run("`limit` change", func(t *testing.T) {
f(t, `
name: TestGroup
limit: 5
rules:
- alert: foo
expr: sum by(job) (up == 1)
`, `
name: TestGroup
limit: 10
rules:
- alert: foo
expr: sum by(job) (up == 1)

View file

@ -2,6 +2,7 @@ groups:
- name: ReplayGroup
interval: 1m
concurrency: 1
limit: 1000
rules:
- record: type:vm_cache_entries:rate5m
expr: sum(rate(vm_cache_entries[5m])) by (type)

View file

@ -2,6 +2,7 @@ groups:
- name: TestGroup
interval: 2s
concurrency: 2
limit: 1000
params:
denyPartialResponse: ["true"]
extra_label: ["env=dev"]

View file

@ -10,6 +10,8 @@ import (
"sync"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier"
@ -18,7 +20,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/metrics"
)
// Group is an entity for grouping rules
@ -29,6 +30,7 @@ type Group struct {
Rules []Rule
Type datasource.Type
Interval time.Duration
Limit int
Concurrency int
Checksum string
LastEvaluation time.Time
@ -90,6 +92,7 @@ func newGroup(cfg config.Group, qb datasource.QuerierBuilder, defaultInterval ti
Name: cfg.Name,
File: cfg.File,
Interval: cfg.Interval.Duration(),
Limit: cfg.Limit,
Concurrency: cfg.Concurrency,
Checksum: cfg.Checksum,
Params: cfg.Params,
@ -215,6 +218,7 @@ func (g *Group) updateWith(newGroup *Group) error {
g.Concurrency = newGroup.Concurrency
g.Params = newGroup.Params
g.Labels = newGroup.Labels
g.Limit = newGroup.Limit
g.Checksum = newGroup.Checksum
g.Rules = newRules
return nil
@ -282,7 +286,7 @@ func (g *Group) start(ctx context.Context, nts func() []notifier.Notifier, rw *r
}
resolveDuration := getResolveDuration(g.Interval, *resendDelay, *maxResolveDuration)
errs := e.execConcurrently(ctx, g.Rules, ts, g.Concurrency, resolveDuration)
errs := e.execConcurrently(ctx, g.Rules, ts, g.Concurrency, resolveDuration, g.Limit)
for err := range errs {
if err != nil {
logger.Errorf("group %q: %s", g.Name, err)
@ -360,12 +364,12 @@ type executor struct {
previouslySentSeriesToRW map[uint64]map[string][]prompbmarshal.Label
}
func (e *executor) execConcurrently(ctx context.Context, rules []Rule, ts time.Time, concurrency int, resolveDuration time.Duration) chan error {
func (e *executor) execConcurrently(ctx context.Context, rules []Rule, ts time.Time, concurrency int, resolveDuration time.Duration, limit int) chan error {
res := make(chan error, len(rules))
if concurrency == 1 {
// fast path
for _, rule := range rules {
res <- e.exec(ctx, rule, ts, resolveDuration)
res <- e.exec(ctx, rule, ts, resolveDuration, limit)
}
close(res)
return res
@ -378,7 +382,7 @@ func (e *executor) execConcurrently(ctx context.Context, rules []Rule, ts time.T
sem <- struct{}{}
wg.Add(1)
go func(r Rule) {
res <- e.exec(ctx, r, ts, resolveDuration)
res <- e.exec(ctx, r, ts, resolveDuration, limit)
<-sem
wg.Done()
}(rule)
@ -399,10 +403,10 @@ var (
remoteWriteTotal = metrics.NewCounter(`vmalert_remotewrite_total`)
)
func (e *executor) exec(ctx context.Context, rule Rule, ts time.Time, resolveDuration time.Duration) error {
func (e *executor) exec(ctx context.Context, rule Rule, ts time.Time, resolveDuration time.Duration, limit int) error {
execTotal.Inc()
tss, err := rule.Exec(ctx, ts)
tss, err := rule.Exec(ctx, ts, limit)
if err != nil {
execErrors.Inc()
return fmt.Errorf("rule %q: failed to execute: %w", rule, err)

View file

@ -124,7 +124,7 @@ func (rr *RecordingRule) ExecRange(ctx context.Context, start, end time.Time) ([
}
// Exec executes RecordingRule expression via the given Querier.
func (rr *RecordingRule) Exec(ctx context.Context, ts time.Time) ([]prompbmarshal.TimeSeries, error) {
func (rr *RecordingRule) Exec(ctx context.Context, ts time.Time, limit int) ([]prompbmarshal.TimeSeries, error) {
qMetrics, err := rr.q.Query(ctx, rr.Expr, ts)
rr.mu.Lock()
defer rr.mu.Unlock()
@ -137,6 +137,11 @@ func (rr *RecordingRule) Exec(ctx context.Context, ts time.Time) ([]prompbmarsha
return nil, fmt.Errorf("failed to execute query %q: %w", rr.Expr, err)
}
numSeries := len(qMetrics)
if limit > 0 && numSeries > limit {
return nil, fmt.Errorf("exec exceeded limit of %d with %d series", limit, numSeries)
}
duplicates := make(map[string]struct{}, len(qMetrics))
var tss []prompbmarshal.TimeSeries
for _, r := range qMetrics {

View file

@ -11,7 +11,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
)
func TestRecoridngRule_Exec(t *testing.T) {
func TestRecordingRule_Exec(t *testing.T) {
timestamp := time.Now()
testCases := []struct {
rule *RecordingRule
@ -77,7 +77,7 @@ func TestRecoridngRule_Exec(t *testing.T) {
fq := &fakeQuerier{}
fq.add(tc.metrics...)
tc.rule.q = fq
tss, err := tc.rule.Exec(context.TODO(), time.Now())
tss, err := tc.rule.Exec(context.TODO(), time.Now(), 0)
if err != nil {
t.Fatalf("unexpected Exec err: %s", err)
}
@ -88,7 +88,7 @@ func TestRecoridngRule_Exec(t *testing.T) {
}
}
func TestRecoridngRule_ExecRange(t *testing.T) {
func TestRecordingRule_ExecRange(t *testing.T) {
timestamp := time.Now()
testCases := []struct {
rule *RecordingRule
@ -169,7 +169,48 @@ func TestRecoridngRule_ExecRange(t *testing.T) {
}
}
func TestRecoridngRule_ExecNegative(t *testing.T) {
func TestRecordingRuleLimit(t *testing.T) {
timestamp := time.Now()
testCases := []struct {
limit int
err string
}{
{
limit: 0,
},
{
limit: -1,
},
{
limit: 1,
err: "exec exceeded limit of 1 with 3 series",
},
{
limit: 2,
err: "exec exceeded limit of 2 with 3 series",
},
}
testMetrics := []datasource.Metric{
metricWithValuesAndLabels(t, []float64{1}, "__name__", "foo", "job", "foo"),
metricWithValuesAndLabels(t, []float64{2, 3}, "__name__", "bar", "job", "bar"),
metricWithValuesAndLabels(t, []float64{4, 5, 6}, "__name__", "baz", "job", "baz"),
}
rule := &RecordingRule{Name: "job:foo", Labels: map[string]string{
"source": "test_limit",
}}
var err error
for _, testCase := range testCases {
fq := &fakeQuerier{}
fq.add(testMetrics...)
rule.q = fq
_, err = rule.Exec(context.TODO(), timestamp, testCase.limit)
if err != nil && !strings.EqualFold(err.Error(), testCase.err) {
t.Fatal(err)
}
}
}
func TestRecordingRule_ExecNegative(t *testing.T) {
rr := &RecordingRule{Name: "job:foo", Labels: map[string]string{
"job": "test",
}}
@ -178,7 +219,7 @@ func TestRecoridngRule_ExecNegative(t *testing.T) {
expErr := "connection reset by peer"
fq.setErr(errors.New(expErr))
rr.q = fq
_, err := rr.Exec(context.TODO(), time.Now())
_, err := rr.Exec(context.TODO(), time.Now(), 0)
if err == nil {
t.Fatalf("expected to get err; got nil")
}
@ -193,7 +234,7 @@ func TestRecoridngRule_ExecNegative(t *testing.T) {
fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "foo"))
fq.add(metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "bar"))
_, err = rr.Exec(context.TODO(), time.Now())
_, err = rr.Exec(context.TODO(), time.Now(), 0)
if err == nil {
t.Fatalf("expected to get err; got nil")
}

View file

@ -236,6 +236,9 @@ func (c *Client) send(ctx context.Context, data []byte) error {
if err != nil {
return fmt.Errorf("failed to create new HTTP request: %w", err)
}
req.Header.Set("Content-Encoding", "snappy")
if c.authCfg != nil {
if auth := c.authCfg.GetAuthHeader(); auth != "" {
req.Header.Set("Authorization", auth)

View file

@ -80,6 +80,12 @@ func (rw *rwServer) handler(w http.ResponseWriter, r *http.Request) {
rw.err(w, fmt.Errorf("bad method %q", r.Method))
return
}
h := r.Header.Get("Content-Encoding")
if h != "snappy" {
rw.err(w, fmt.Errorf("header read error: Content-Encoding is not snappy (%q)", h))
return
}
data, err := ioutil.ReadAll(r.Body)
if err != nil {
rw.err(w, fmt.Errorf("body read err: %w", err))

View file

@ -7,12 +7,13 @@ import (
"strings"
"time"
"github.com/dmitryk-dk/pb/v3"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/dmitryk-dk/pb/v3"
)
var (
@ -87,6 +88,10 @@ func (g *Group) replay(start, end time.Time, rw *remotewrite.Client) int {
"\nrequests to make: \t%d"+
"\nmax range per request: \t%v\n",
g.Name, g.Interval, iterations, step)
if g.Limit > 0 {
fmt.Printf("\nPlease note, `limit: %d` param has no effect during replay.\n",
g.Limit)
}
for _, rule := range g.Rules {
fmt.Printf("> Rule %q (ID: %d)\n", rule, rule.ID())
var bar *pb.ProgressBar

View file

@ -15,9 +15,10 @@ type Rule interface {
// ID returns unique ID that may be used for
// identifying this Rule among others.
ID() uint64
// Exec executes the rule with given context at the given timestamp
Exec(ctx context.Context, ts time.Time) ([]prompbmarshal.TimeSeries, error)
// ExecRange executes the rule on the given time range
// Exec executes the rule with the given context at the given timestamp and limit.
// It returns an error if the number of resulting time series exceeds the limit.
Exec(ctx context.Context, ts time.Time, limit int) ([]prompbmarshal.TimeSeries, error)
// ExecRange executes the rule on the given time range.
ExecRange(ctx context.Context, start, end time.Time) ([]prompbmarshal.TimeSeries, error)
// UpdateWith performs modification of current Rule
// with fields of the given Rule.

View file

@ -3,7 +3,7 @@
<html lang="en">
<head>
<title>vmalert{% if title != "" %} - {%s title %}{% endif %}</title>
<link href="static/css/bootstrap.min.css" rel="stylesheet" crossorigin="anonymous">
<link href="static/css/bootstrap.min.css" rel="stylesheet" />
<style>
body{
min-height: 75rem;

View file

@ -35,7 +35,7 @@ func StreamHeader(qw422016 *qt422016.Writer, title string, pages []NavItem) {
}
//line app/vmalert/tpl/header.qtpl:5
qw422016.N().S(`</title>
<link href="static/css/bootstrap.min.css" rel="stylesheet" crossorigin="anonymous">
<link href="static/css/bootstrap.min.css" rel="stylesheet" />
<style>
body{
min-height: 75rem;

View file

@ -1,6 +1,7 @@
package vminsert
import (
"embed"
"flag"
"fmt"
"net/http"
@ -55,6 +56,11 @@ var (
opentsdbhttpServer *opentsdbhttpserver.Server
)
//go:embed static
var staticFiles embed.FS
var staticServer = http.FileServer(http.FS(staticFiles))
// Init initializes vminsert.
func Init() {
relabel.Init()
@ -101,6 +107,20 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
defer requestDuration.UpdateDuration(startTime)
path := strings.Replace(r.URL.Path, "//", "/", -1)
if strings.HasPrefix(path, "/static") {
staticServer.ServeHTTP(w, r)
return true
}
if strings.HasPrefix(path, "/prometheus/static") {
r.URL.Path = strings.TrimPrefix(path, "/prometheus")
staticServer.ServeHTTP(w, r)
return true
}
if strings.HasPrefix(path, "/datadog/") {
// Trim the trailing slash from paths starting with /datadog/ in order to support legacy DataDog agent.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2670
path = strings.TrimSuffix(path, "/")
}
switch path {
case "/prometheus/api/v1/write", "/api/v1/write":
prometheusWriteRequests.Inc()
@ -187,7 +207,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
w.WriteHeader(202)
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/datadog/intake/":
case "/datadog/intake":
datadogIntakeRequests.Inc()
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, `{}`)
@ -196,6 +216,10 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
promscrapeTargetsRequests.Inc()
promscrape.WriteHumanReadableTargetsStatus(w, r)
return true
case "/prometheus/service-discovery", "/service-discovery":
promscrapeServiceDiscoveryRequests.Inc()
promscrape.WriteServiceDiscovery(w, r)
return true
case "/prometheus/api/v1/targets", "/api/v1/targets":
promscrapeAPIV1TargetsRequests.Inc()
w.Header().Set("Content-Type", "application/json")
@ -294,10 +318,11 @@ var (
datadogValidateRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
datadogCheckRunRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)
datadogIntakeRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/intake/", protocol="datadog"}`)
datadogIntakeRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/intake", protocol="datadog"}`)
promscrapeTargetsRequests = metrics.NewCounter(`vm_http_requests_total{path="/targets"}`)
promscrapeAPIV1TargetsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/targets"}`)
promscrapeTargetsRequests = metrics.NewCounter(`vm_http_requests_total{path="/targets"}`)
promscrapeServiceDiscoveryRequests = metrics.NewCounter(`vm_http_requests_total{path="/service-discovery"}`)
promscrapeAPIV1TargetsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/targets"}`)
promscrapeTargetResponseRequests = metrics.NewCounter(`vm_http_requests_total{path="/target_response"}`)
promscrapeTargetResponseErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/target_response"}`)

View file

@ -22,9 +22,6 @@ import (
// See https://graphite-api.readthedocs.io/en/latest/api.html#metrics-find
func MetricsFindHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
format := r.FormValue("format")
if format == "" {
format = "treejson"
@ -119,9 +116,6 @@ func deduplicatePaths(paths []string, delimiter string) []string {
// See https://graphite-api.readthedocs.io/en/latest/api.html#metrics-expand
func MetricsExpandHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
queries := r.Form["query"]
if len(queries) == 0 {
return fmt.Errorf("missing `query` arg")
@ -202,9 +196,6 @@ func MetricsExpandHandler(startTime time.Time, w http.ResponseWriter, r *http.Re
// See https://graphite-api.readthedocs.io/en/latest/api.html#metrics-index-json
func MetricsIndexHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
jsonp := r.FormValue("jsonp")
metricNames, err := netstorage.GetLabelValues(nil, "__name__", deadline)
if err != nil {

View file

@ -24,9 +24,6 @@ import (
// See https://graphite.readthedocs.io/en/stable/tags.html#removing-series-from-the-tagdb
func TagsDelSeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
paths := r.Form["path"]
totalDeleted := 0
var row graphiteparser.Row
@ -86,9 +83,8 @@ func TagsTagMultiSeriesHandler(startTime time.Time, w http.ResponseWriter, r *ht
}
func registerMetrics(startTime time.Time, w http.ResponseWriter, r *http.Request, isJSONResponse bool) error {
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
deadline := searchutils.GetDeadlineForQuery(r, startTime)
_ = deadline // TODO: use the deadline as in the cluster branch
paths := r.Form["path"]
var row graphiteparser.Row
var labels []prompb.Label
@ -163,9 +159,6 @@ var (
// See https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support
func TagsAutoCompleteValuesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
limit, err := getInt(r, "limit")
if err != nil {
return err
@ -252,9 +245,6 @@ var tagsAutoCompleteValuesDuration = metrics.NewSummary(`vm_request_duration_sec
// See https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support
func TagsAutoCompleteTagsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
limit, err := getInt(r, "limit")
if err != nil {
return err
@ -334,9 +324,6 @@ var tagsAutoCompleteTagsDuration = metrics.NewSummary(`vm_request_duration_secon
// See https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
func TagsFindSeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
limit, err := getInt(r, "limit")
if err != nil {
return err
@ -405,9 +392,6 @@ var tagsFindSeriesDuration = metrics.NewSummary(`vm_request_duration_seconds{pat
// See https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
func TagValuesHandler(startTime time.Time, tagName string, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
limit, err := getInt(r, "limit")
if err != nil {
return err
@ -436,9 +420,6 @@ var tagValuesDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/t
// See https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
func TagsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
limit, err := getInt(r, "limit")
if err != nil {
return err

View file

@ -86,20 +86,14 @@ var (
//go:embed vmui
var vmuiFiles embed.FS
//go:embed static
var staticFiles embed.FS
var (
vmuiFileServer = http.FileServer(http.FS(vmuiFiles))
staticServer = http.FileServer(http.FS(staticFiles))
)
var vmuiFileServer = http.FileServer(http.FS(vmuiFiles))
// RequestHandler handles remote read API requests
func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
startTime := time.Now()
defer requestDuration.UpdateDuration(startTime)
tracerEnabled := searchutils.GetBool(r, "trace")
qt := querytracer.New(tracerEnabled)
qt := querytracer.New(tracerEnabled, r.URL.Path)
// Limit the number of concurrent queries.
select {
@ -187,11 +181,6 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
vmuiFileServer.ServeHTTP(w, r)
return true
}
if strings.HasPrefix(path, "/static") {
staticServer.ServeHTTP(w, r)
return true
}
if strings.HasPrefix(path, "/api/v1/label/") {
s := path[len("/api/v1/label/"):]
if strings.HasSuffix(s, "/values") {
@ -279,6 +268,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
case "/api/v1/status/tsdb":
statusTSDBRequests.Inc()
httpserver.EnableCORS(w, r)
if err := prometheus.TSDBStatusHandler(startTime, w, r); err != nil {
statusTSDBErrors.Inc()
sendPrometheusError(w, r, err)
@ -291,6 +281,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
case "/api/v1/status/top_queries":
topQueriesRequests.Inc()
httpserver.EnableCORS(w, r)
if err := prometheus.QueryStatsHandler(startTime, w, r); err != nil {
topQueriesErrors.Inc()
sendPrometheusError(w, r, fmt.Errorf("cannot query status endpoint: %w", err))

View file

@ -195,7 +195,7 @@ var resultPool sync.Pool
//
// rss becomes unusable after the call to RunParallel.
func (rss *Results) RunParallel(qt *querytracer.Tracer, f func(rs *Result, workerID uint) error) error {
qt = qt.NewChild()
qt = qt.NewChild("parallel process of fetched data")
defer rss.mustClose()
// Spin up local workers.
@ -257,7 +257,7 @@ func (rss *Results) RunParallel(qt *querytracer.Tracer, f func(rs *Result, worke
close(workCh)
}
workChsWG.Wait()
qt.Donef("parallel process of fetched data: series=%d, samples=%d", seriesProcessedTotal, rowsProcessedTotal)
qt.Donef("series=%d, samples=%d", seriesProcessedTotal, rowsProcessedTotal)
return firstErr
}
@ -640,8 +640,8 @@ func (sbh *sortBlocksHeap) Pop() interface{} {
// DeleteSeries deletes time series matching the given tagFilterss.
func DeleteSeries(qt *querytracer.Tracer, sq *storage.SearchQuery, deadline searchutils.Deadline) (int, error) {
qt = qt.NewChild()
defer qt.Donef("delete series: %s", sq)
qt = qt.NewChild("delete series: %s", sq)
defer qt.Done()
tr := storage.TimeRange{
MinTimestamp: sq.MinTimestamp,
MaxTimestamp: sq.MaxTimestamp,
@ -655,8 +655,8 @@ func DeleteSeries(qt *querytracer.Tracer, sq *storage.SearchQuery, deadline sear
// GetLabelsOnTimeRange returns labels for the given tr until the given deadline.
func GetLabelsOnTimeRange(qt *querytracer.Tracer, tr storage.TimeRange, deadline searchutils.Deadline) ([]string, error) {
qt = qt.NewChild()
defer qt.Donef("get labels on timeRange=%s", &tr)
qt = qt.NewChild("get labels on timeRange=%s", &tr)
defer qt.Done()
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
@ -687,8 +687,8 @@ func GetLabelsOnTimeRange(qt *querytracer.Tracer, tr storage.TimeRange, deadline
// GetGraphiteTags returns Graphite tags until the given deadline.
func GetGraphiteTags(qt *querytracer.Tracer, filter string, limit int, deadline searchutils.Deadline) ([]string, error) {
qt = qt.NewChild()
defer qt.Donef("get graphite tags: filter=%s, limit=%d", filter, limit)
qt = qt.NewChild("get graphite tags: filter=%s, limit=%d", filter, limit)
defer qt.Done()
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
@ -734,8 +734,8 @@ func hasString(a []string, s string) bool {
// GetLabels returns labels until the given deadline.
func GetLabels(qt *querytracer.Tracer, deadline searchutils.Deadline) ([]string, error) {
qt = qt.NewChild()
defer qt.Donef("get labels")
qt = qt.NewChild("get labels")
defer qt.Done()
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
@ -788,8 +788,8 @@ func mergeStrings(a, b []string) []string {
// GetLabelValuesOnTimeRange returns label values for the given labelName on the given tr
// until the given deadline.
func GetLabelValuesOnTimeRange(qt *querytracer.Tracer, labelName string, tr storage.TimeRange, deadline searchutils.Deadline) ([]string, error) {
qt = qt.NewChild()
defer qt.Donef("get values for label %s on a timeRange %s", labelName, &tr)
qt = qt.NewChild("get values for label %s on a timeRange %s", labelName, &tr)
defer qt.Done()
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
@ -818,8 +818,8 @@ func GetLabelValuesOnTimeRange(qt *querytracer.Tracer, labelName string, tr stor
// GetGraphiteTagValues returns tag values for the given tagName until the given deadline.
func GetGraphiteTagValues(qt *querytracer.Tracer, tagName, filter string, limit int, deadline searchutils.Deadline) ([]string, error) {
qt = qt.NewChild()
defer qt.Donef("get graphite tag values for tagName=%s, filter=%s, limit=%d", tagName, filter, limit)
qt = qt.NewChild("get graphite tag values for tagName=%s, filter=%s, limit=%d", tagName, filter, limit)
defer qt.Done()
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
@ -845,8 +845,8 @@ func GetGraphiteTagValues(qt *querytracer.Tracer, tagName, filter string, limit
// GetLabelValues returns label values for the given labelName
// until the given deadline.
func GetLabelValues(qt *querytracer.Tracer, labelName string, deadline searchutils.Deadline) ([]string, error) {
qt = qt.NewChild()
defer qt.Donef("get values for label %s", labelName)
qt = qt.NewChild("get values for label %s", labelName)
defer qt.Done()
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
@ -877,8 +877,8 @@ func GetLabelValues(qt *querytracer.Tracer, labelName string, deadline searchuti
//
// It can be used for implementing https://graphite-api.readthedocs.io/en/latest/api.html#metrics-find
func GetTagValueSuffixes(qt *querytracer.Tracer, tr storage.TimeRange, tagKey, tagValuePrefix string, delimiter byte, deadline searchutils.Deadline) ([]string, error) {
qt = qt.NewChild()
defer qt.Donef("get tag value suffixes for tagKey=%s, tagValuePrefix=%s, timeRange=%s", tagKey, tagValuePrefix, &tr)
qt = qt.NewChild("get tag value suffixes for tagKey=%s, tagValuePrefix=%s, timeRange=%s", tagKey, tagValuePrefix, &tr)
defer qt.Done()
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
@ -897,8 +897,8 @@ func GetTagValueSuffixes(qt *querytracer.Tracer, tr storage.TimeRange, tagKey, t
// GetLabelEntries returns all the label entries until the given deadline.
func GetLabelEntries(qt *querytracer.Tracer, deadline searchutils.Deadline) ([]storage.TagEntry, error) {
qt = qt.NewChild()
defer qt.Donef("get label entries")
qt = qt.NewChild("get label entries")
defer qt.Done()
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
@ -931,8 +931,8 @@ func GetLabelEntries(qt *querytracer.Tracer, deadline searchutils.Deadline) ([]s
// GetTSDBStatusForDate returns tsdb status according to https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats
func GetTSDBStatusForDate(qt *querytracer.Tracer, deadline searchutils.Deadline, date uint64, topN, maxMetrics int) (*storage.TSDBStatus, error) {
qt = qt.NewChild()
defer qt.Donef("get tsdb stats for date=%d, topN=%d", date, topN)
qt = qt.NewChild("get tsdb stats for date=%d, topN=%d", date, topN)
defer qt.Done()
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
@ -947,8 +947,8 @@ func GetTSDBStatusForDate(qt *querytracer.Tracer, deadline searchutils.Deadline,
//
// It accepts arbitrary filters on time series in sq.
func GetTSDBStatusWithFilters(qt *querytracer.Tracer, deadline searchutils.Deadline, sq *storage.SearchQuery, topN int) (*storage.TSDBStatus, error) {
qt = qt.NewChild()
defer qt.Donef("get tsdb stats: %s, topN=%d", sq, topN)
qt = qt.NewChild("get tsdb stats: %s, topN=%d", sq, topN)
defer qt.Done()
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
@ -970,8 +970,8 @@ func GetTSDBStatusWithFilters(qt *querytracer.Tracer, deadline searchutils.Deadl
// GetSeriesCount returns the number of unique series.
func GetSeriesCount(qt *querytracer.Tracer, deadline searchutils.Deadline) (uint64, error) {
qt = qt.NewChild()
defer qt.Donef("get series count")
qt = qt.NewChild("get series count")
defer qt.Done()
if deadline.Exceeded() {
return 0, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
@ -1004,8 +1004,8 @@ var ssPool sync.Pool
// It is the responsibility of f to call b.UnmarshalData before reading timestamps and values from the block.
// It is the responsibility of f to filter blocks according to the given tr.
func ExportBlocks(qt *querytracer.Tracer, sq *storage.SearchQuery, deadline searchutils.Deadline, f func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error) error {
qt = qt.NewChild()
defer qt.Donef("export blocks: %s", sq)
qt = qt.NewChild("export blocks: %s", sq)
defer qt.Done()
if deadline.Exceeded() {
return fmt.Errorf("timeout exceeded before starting data export: %s", deadline.String())
}
@ -1116,8 +1116,8 @@ var exportWorkPool = &sync.Pool{
// SearchMetricNames returns all the metric names matching sq until the given deadline.
func SearchMetricNames(qt *querytracer.Tracer, sq *storage.SearchQuery, deadline searchutils.Deadline) ([]storage.MetricName, error) {
qt = qt.NewChild()
defer qt.Donef("fetch metric names: %s", sq)
qt = qt.NewChild("fetch metric names: %s", sq)
defer qt.Done()
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting to search metric names: %s", deadline.String())
}
@ -1146,8 +1146,8 @@ func SearchMetricNames(qt *querytracer.Tracer, sq *storage.SearchQuery, deadline
//
// Results.RunParallel or Results.Cancel must be called on the returned Results.
func ProcessSearchQuery(qt *querytracer.Tracer, sq *storage.SearchQuery, fetchData bool, deadline searchutils.Deadline) (*Results, error) {
qt = qt.NewChild()
defer qt.Donef("fetch matching series: %s, fetchData=%v", sq, fetchData)
qt = qt.NewChild("fetch matching series: %s, fetchData=%v", sq, fetchData)
defer qt.Done()
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
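Taken together, these hunks show the querytracer API change: the span message now goes to `NewChild` up front and the span is closed with `Done` (or `Donef` when details are known only at completion). A short sketch of the new idiom, with signatures inferred from this diff:
```go
package main

import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
)

// fetchSeries demonstrates the updated idiom seen in the hunks above: the
// message is passed to NewChild when the span is opened, and the span is
// closed with Done (or Donef when extra details are known at completion).
func fetchSeries(qt *querytracer.Tracer, filter string) {
	qt = qt.NewChild("fetch matching series: filter=%s", filter)
	defer qt.Done()
	// ... traced work ...
}

func main() {
	// New takes the enabled flag and the root span message, as seen in this diff.
	qt := querytracer.New(true, "/api/v1/query")
	fetchSeries(qt, `up`)
	qt.Done()
}
```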

View file

@ -6,7 +6,7 @@
LabelValuesResponse generates response for /api/v1/label/<labelName>/values .
See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values
{% func LabelValuesResponse(labelValues []string, qt *querytracer.Tracer, qtDone func()) %}
{% func LabelValuesResponse(labelValues []string, qt *querytracer.Tracer) %}
{
"status":"success",
"data":[
@ -17,7 +17,7 @@ See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-va
]
{% code
qt.Printf("generate response for %d label values", len(labelValues))
qtDone()
qt.Done()
%}
{%= dumpQueryTrace(qt) %}
}

View file

@ -25,7 +25,7 @@ var (
)
//line app/vmselect/prometheus/label_values_response.qtpl:9
func StreamLabelValuesResponse(qw422016 *qt422016.Writer, labelValues []string, qt *querytracer.Tracer, qtDone func()) {
func StreamLabelValuesResponse(qw422016 *qt422016.Writer, labelValues []string, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/label_values_response.qtpl:9
qw422016.N().S(`{"status":"success","data":[`)
//line app/vmselect/prometheus/label_values_response.qtpl:13
@ -44,7 +44,7 @@ func StreamLabelValuesResponse(qw422016 *qt422016.Writer, labelValues []string,
qw422016.N().S(`]`)
//line app/vmselect/prometheus/label_values_response.qtpl:19
qt.Printf("generate response for %d label values", len(labelValues))
qtDone()
qt.Done()
//line app/vmselect/prometheus/label_values_response.qtpl:22
streamdumpQueryTrace(qw422016, qt)
@ -54,22 +54,22 @@ func StreamLabelValuesResponse(qw422016 *qt422016.Writer, labelValues []string,
}
//line app/vmselect/prometheus/label_values_response.qtpl:24
func WriteLabelValuesResponse(qq422016 qtio422016.Writer, labelValues []string, qt *querytracer.Tracer, qtDone func()) {
func WriteLabelValuesResponse(qq422016 qtio422016.Writer, labelValues []string, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/label_values_response.qtpl:24
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/label_values_response.qtpl:24
StreamLabelValuesResponse(qw422016, labelValues, qt, qtDone)
StreamLabelValuesResponse(qw422016, labelValues, qt)
//line app/vmselect/prometheus/label_values_response.qtpl:24
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/label_values_response.qtpl:24
}
//line app/vmselect/prometheus/label_values_response.qtpl:24
func LabelValuesResponse(labelValues []string, qt *querytracer.Tracer, qtDone func()) string {
func LabelValuesResponse(labelValues []string, qt *querytracer.Tracer) string {
//line app/vmselect/prometheus/label_values_response.qtpl:24
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/label_values_response.qtpl:24
WriteLabelValuesResponse(qb422016, labelValues, qt, qtDone)
WriteLabelValuesResponse(qb422016, labelValues, qt)
//line app/vmselect/prometheus/label_values_response.qtpl:24
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/label_values_response.qtpl:24

View file

@ -6,7 +6,7 @@
LabelsResponse generates response for /api/v1/labels .
See https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names
{% func LabelsResponse(labels []string, qt *querytracer.Tracer, qtDone func()) %}
{% func LabelsResponse(labels []string, qt *querytracer.Tracer) %}
{
"status":"success",
"data":[
@ -17,7 +17,7 @@ See https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-nam
]
{% code
qt.Printf("generate response for %d labels", len(labels))
qtDone()
qt.Done()
%}
{%= dumpQueryTrace(qt) %}
}

View file

@ -25,7 +25,7 @@ var (
)
//line app/vmselect/prometheus/labels_response.qtpl:9
func StreamLabelsResponse(qw422016 *qt422016.Writer, labels []string, qt *querytracer.Tracer, qtDone func()) {
func StreamLabelsResponse(qw422016 *qt422016.Writer, labels []string, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/labels_response.qtpl:9
qw422016.N().S(`{"status":"success","data":[`)
//line app/vmselect/prometheus/labels_response.qtpl:13
@ -44,7 +44,7 @@ func StreamLabelsResponse(qw422016 *qt422016.Writer, labels []string, qt *queryt
qw422016.N().S(`]`)
//line app/vmselect/prometheus/labels_response.qtpl:19
qt.Printf("generate response for %d labels", len(labels))
qtDone()
qt.Done()
//line app/vmselect/prometheus/labels_response.qtpl:22
streamdumpQueryTrace(qw422016, qt)
@ -54,22 +54,22 @@ func StreamLabelsResponse(qw422016 *qt422016.Writer, labels []string, qt *queryt
}
//line app/vmselect/prometheus/labels_response.qtpl:24
func WriteLabelsResponse(qq422016 qtio422016.Writer, labels []string, qt *querytracer.Tracer, qtDone func()) {
func WriteLabelsResponse(qq422016 qtio422016.Writer, labels []string, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/labels_response.qtpl:24
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/labels_response.qtpl:24
StreamLabelsResponse(qw422016, labels, qt, qtDone)
StreamLabelsResponse(qw422016, labels, qt)
//line app/vmselect/prometheus/labels_response.qtpl:24
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/labels_response.qtpl:24
}
//line app/vmselect/prometheus/labels_response.qtpl:24
func LabelsResponse(labels []string, qt *querytracer.Tracer, qtDone func()) string {
func LabelsResponse(labels []string, qt *querytracer.Tracer) string {
//line app/vmselect/prometheus/labels_response.qtpl:24
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/labels_response.qtpl:24
WriteLabelsResponse(qb422016, labels, qt, qtDone)
WriteLabelsResponse(qb422016, labels, qt)
//line app/vmselect/prometheus/labels_response.qtpl:24
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/labels_response.qtpl:24

View file

@ -45,10 +45,10 @@ var (
"points with timestamps closer than -search.latencyOffset to the current time. The adjustment is needed because such points may contain incomplete data")
maxUniqueTimeseries = flag.Int("search.maxUniqueTimeseries", 300e3, "The maximum number of unique time series, which can be selected during /api/v1/query and /api/v1/query_range queries. This option allows limiting memory usage")
maxFederateSeries = flag.Int("search.maxFederateSeries", 300e3, "The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage")
maxExportSeries = flag.Int("search.maxExportSeries", 1e6, "The maximum number of time series, which can be returned from /api/v1/export* APIs. This option allows limiting memory usage")
maxTSDBStatusSeries = flag.Int("search.maxTSDBStatusSeries", 1e6, "The maximum number of time series, which can be processed during the call to /api/v1/status/tsdb. This option allows limiting memory usage")
maxSeriesLimit = flag.Int("search.maxSeries", 10e3, "The maximum number of time series, which can be returned from /api/v1/series. This option allows limiting memory usage")
maxFederateSeries = flag.Int("search.maxFederateSeries", 1e6, "The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage")
maxExportSeries = flag.Int("search.maxExportSeries", 10e6, "The maximum number of time series, which can be returned from /api/v1/export* APIs. This option allows limiting memory usage")
maxTSDBStatusSeries = flag.Int("search.maxTSDBStatusSeries", 10e6, "The maximum number of time series, which can be processed during the call to /api/v1/status/tsdb. This option allows limiting memory usage")
maxSeriesLimit = flag.Int("search.maxSeries", 100e3, "The maximum number of time series, which can be returned from /api/v1/series. This option allows limiting memory usage")
)
// Default step used if not set.
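The flags hunk above raises the default memory-guard limits roughly 3-10x: `-search.maxFederateSeries` 300e3 to 1e6, `-search.maxExportSeries` 1e6 to 10e6, `-search.maxTSDBStatusSeries` 1e6 to 10e6, and `-search.maxSeries` 10e3 to 100e3. A small sketch of the pattern, assuming nothing beyond the standard `flag` package: each limit is an ordinary integer flag whose value is later threaded into the search query (compare `storage.NewSearchQuery(start, end, tagFilterss, *maxSeriesLimit)` in the SeriesHandler hunk below):

```go
package main

import (
	"flag"
	"fmt"
)

// An untyped constant like 100e3 is an exact integer value,
// so it is a valid default for an int flag.
var maxSeriesLimit = flag.Int("search.maxSeries", 100e3,
	"The maximum number of time series, which can be returned from /api/v1/series")

func main() {
	flag.Parse() // override at startup with: -search.maxSeries=200000
	fmt.Printf("series limit: %d\n", *maxSeriesLimit)
}
```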
@ -59,9 +59,7 @@ func FederateHandler(startTime time.Time, w http.ResponseWriter, r *http.Request
defer federateDuration.UpdateDuration(startTime)
ct := startTime.UnixNano() / 1e6
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse request form values: %w", err)
}
deadline := searchutils.GetDeadlineForQuery(r, startTime)
lookbackDelta, err := getMaxLookback(r)
if err != nil {
return err
@ -77,7 +75,6 @@ func FederateHandler(startTime time.Time, w http.ResponseWriter, r *http.Request
if err != nil {
return err
}
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if start >= end {
start = end - defaultStep
}
@ -119,9 +116,6 @@ var federateDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/fe
func ExportCSVHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer exportCSVDuration.UpdateDuration(startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse request form values: %w", err)
}
format := r.FormValue("format")
if len(format) == 0 {
return fmt.Errorf("missing `format` arg; see https://docs.victoriametrics.com/#how-to-export-csv-data")
@ -213,9 +207,6 @@ var exportCSVDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/a
func ExportNativeHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer exportNativeDuration.UpdateDuration(startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse request form values: %w", err)
}
ep, err := getExportParams(r, startTime)
if err != nil {
return err
@ -278,9 +269,6 @@ var bbPool bytesutil.ByteBufferPool
func ExportHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer exportDuration.UpdateDuration(startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse request form values: %w", err)
}
ep, err := getExportParams(r, startTime)
if err != nil {
return err
@ -361,7 +349,7 @@ func exportHandler(qt *querytracer.Tracer, w http.ResponseWriter, ep *exportPara
if err != nil {
return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
}
qtChild := qt.NewChild()
qtChild := qt.NewChild("background export format=%s", format)
go func() {
err := rss.RunParallel(qtChild, func(rs *netstorage.Result, workerID uint) error {
if err := bw.Error(); err != nil {
@ -376,12 +364,12 @@ func exportHandler(qt *querytracer.Tracer, w http.ResponseWriter, ep *exportPara
exportBlockPool.Put(xb)
return nil
})
qtChild.Donef("background export format=%s", format)
qtChild.Done()
close(resultsCh)
doneCh <- err
}()
} else {
qtChild := qt.NewChild()
qtChild := qt.NewChild("background export format=%s", format)
go func() {
err := netstorage.ExportBlocks(qtChild, sq, ep.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
if err := bw.Error(); err != nil {
@ -400,7 +388,7 @@ func exportHandler(qt *querytracer.Tracer, w http.ResponseWriter, ep *exportPara
exportBlockPool.Put(xb)
return nil
})
qtChild.Donef("background export format=%s", format)
qtChild.Done()
close(resultsCh)
doneCh <- err
}()
@ -443,9 +431,6 @@ func DeleteHandler(startTime time.Time, r *http.Request) error {
defer deleteDuration.UpdateDuration(startTime)
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse request form values: %w", err)
}
if r.FormValue("start") != "" || r.FormValue("end") != "" {
return fmt.Errorf("start and end aren't supported. Remove these args from the query in order to delete all the matching metrics")
}
@ -474,9 +459,6 @@ func LabelValuesHandler(qt *querytracer.Tracer, startTime time.Time, labelName s
defer labelValuesDuration.UpdateDuration(startTime)
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
etfs, err := searchutils.GetExtraTagFilters(r)
if err != nil {
return err
@ -535,10 +517,7 @@ func LabelValuesHandler(qt *querytracer.Tracer, startTime time.Time, labelName s
w.Header().Set("Content-Type", "application/json")
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
qtDone := func() {
qt.Donef("/api/v1/labels")
}
WriteLabelValuesResponse(bw, labelValues, qt, qtDone)
WriteLabelValuesResponse(bw, labelValues, qt)
if err := bw.Flush(); err != nil {
return fmt.Errorf("canot flush label values to remote client: %w", err)
}
@ -649,9 +628,6 @@ func TSDBStatusHandler(startTime time.Time, w http.ResponseWriter, r *http.Reque
defer tsdbStatusDuration.UpdateDuration(startTime)
deadline := searchutils.GetDeadlineForStatusRequest(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
etfs, err := searchutils.GetExtraTagFilters(r)
if err != nil {
return err
@ -732,9 +708,6 @@ func LabelsHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseW
defer labelsDuration.UpdateDuration(startTime)
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
etfs, err := searchutils.GetExtraTagFilters(r)
if err != nil {
return err
@ -791,10 +764,7 @@ func LabelsHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseW
w.Header().Set("Content-Type", "application/json")
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
qtDone := func() {
qt.Donef("/api/v1/labels")
}
WriteLabelsResponse(bw, labels, qt, qtDone)
WriteLabelsResponse(bw, labels, qt)
if err := bw.Flush(); err != nil {
return fmt.Errorf("cannot send labels response to remote client: %w", err)
}
@ -886,10 +856,8 @@ var seriesCountDuration = metrics.NewSummary(`vm_request_duration_seconds{path="
func SeriesHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer seriesDuration.UpdateDuration(startTime)
deadline := searchutils.GetDeadlineForQuery(r, startTime)
ct := startTime.UnixNano() / 1e6
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
end, err := searchutils.GetTime(r, "end", ct)
if err != nil {
return err
@ -903,7 +871,6 @@ func SeriesHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseW
if err != nil {
return err
}
deadline := searchutils.GetDeadlineForQuery(r, startTime)
tagFilterss, err := getTagFilterssFromRequest(r)
if err != nil {
@ -914,7 +881,7 @@ func SeriesHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseW
}
sq := storage.NewSearchQuery(start, end, tagFilterss, *maxSeriesLimit)
qtDone := func() {
qt.Donef("/api/v1/series: start=%d, end=%d", start, end)
qt.Donef("start=%d, end=%d", start, end)
}
if end-start > 24*3600*1000 {
// It is cheaper to call SearchMetricNames on time ranges exceeding a day.
@ -986,6 +953,7 @@ func QueryHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWr
defer queryDuration.UpdateDuration(startTime)
ct := startTime.UnixNano() / 1e6
deadline := searchutils.GetDeadlineForQuery(r, startTime)
mayCache := !searchutils.GetBool(r, "nocache")
query := r.FormValue("query")
if len(query) == 0 {
@ -1006,7 +974,6 @@ func QueryHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWr
if step <= 0 {
step = defaultStep
}
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if len(query) > maxQueryLen.N {
return fmt.Errorf("too long query; got %d bytes; mustn't exceed `-search.maxQueryLen=%d` bytes", len(query), maxQueryLen.N)
@ -1101,7 +1068,7 @@ func QueryHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWr
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
qtDone := func() {
qt.Donef("/api/v1/query: query=%s, time=%d: series=%d", query, start, len(result))
qt.Donef("query=%s, time=%d: series=%d", query, start, len(result))
}
WriteQueryResponse(bw, result, qt, qtDone)
if err := bw.Flush(); err != nil {
@ -1199,7 +1166,7 @@ func queryRangeHandler(qt *querytracer.Tracer, startTime time.Time, w http.Respo
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
qtDone := func() {
qt.Donef("/api/v1/query_range: start=%d, end=%d, step=%d, query=%q: series=%d", start, end, step, query, len(result))
qt.Donef("start=%d, end=%d, step=%d, query=%q: series=%d", start, end, step, query, len(result))
}
WriteQueryRangeResponse(bw, result, qt, qtDone)
if err := bw.Flush(); err != nil {
@ -1349,9 +1316,6 @@ func getLatencyOffsetMilliseconds() int64 {
func QueryStatsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer queryStatsDuration.UpdateDuration(startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
topN := 20
topNStr := r.FormValue("topN")
if len(topNStr) > 0 {

View file

@ -6,9 +6,11 @@ TSDBStatusResponse generates response for /api/v1/status/tsdb .
{
"status":"success",
"data":{
"totalSeries": {%dul= status.TotalSeries %},
"totalLabelValuePairs": {%dul= status.TotalLabelValuePairs %},
"seriesCountByMetricName":{%= tsdbStatusEntries(status.SeriesCountByMetricName) %},
"labelValueCountByLabelName":{%= tsdbStatusEntries(status.LabelValueCountByLabelName) %},
"seriesCountByLabelValuePair":{%= tsdbStatusEntries(status.SeriesCountByLabelValuePair) %}
"seriesCountByLabelValuePair":{%= tsdbStatusEntries(status.SeriesCountByLabelValuePair) %},
"labelValueCountByLabelName":{%= tsdbStatusEntries(status.LabelValueCountByLabelName) %}
}
}
{% endfunc %}
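The template now emits two scalar totals ahead of the top-entry tables and swaps the order of the last two tables. Below is a hedged Go sketch of the response shape the template implies; the field types follow the writer calls in the generated code (`DUL` writes a uint64, each entry exposes `Name` and `Count`), but the authoritative structs live in `lib/storage` and may differ in detail:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Inferred from the template; json tags mirror the keys it emits.
type TopHeapEntry struct {
	Name  string `json:"name"`
	Count uint64 `json:"value"`
}

type TSDBStatus struct {
	TotalSeries                 uint64         `json:"totalSeries"`
	TotalLabelValuePairs        uint64         `json:"totalLabelValuePairs"`
	SeriesCountByMetricName     []TopHeapEntry `json:"seriesCountByMetricName"`
	SeriesCountByLabelValuePair []TopHeapEntry `json:"seriesCountByLabelValuePair"`
	LabelValueCountByLabelName  []TopHeapEntry `json:"labelValueCountByLabelName"`
}

type response struct {
	Status string     `json:"status"`
	Data   TSDBStatus `json:"data"`
}

func main() {
	// Note: nil slices encode as null here, whereas the template always emits arrays.
	r := response{Status: "success", Data: TSDBStatus{
		TotalSeries:             123,
		TotalLabelValuePairs:    456,
		SeriesCountByMetricName: []TopHeapEntry{{Name: "node_cpu_seconds_total", Count: 42}},
	}}
	b, _ := json.MarshalIndent(r, "", "  ")
	fmt.Println(string(b))
}
```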

View file

@ -25,99 +25,107 @@ var (
//line app/vmselect/prometheus/tsdb_status_response.qtpl:5
func StreamTSDBStatusResponse(qw422016 *qt422016.Writer, status *storage.TSDBStatus) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:5
qw422016.N().S(`{"status":"success","data":{"seriesCountByMetricName":`)
qw422016.N().S(`{"status":"success","data":{"totalSeries":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
qw422016.N().DUL(status.TotalSeries)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
qw422016.N().S(`,"totalLabelValuePairs":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:10
qw422016.N().DUL(status.TotalLabelValuePairs)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:10
qw422016.N().S(`,"seriesCountByMetricName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:11
streamtsdbStatusEntries(qw422016, status.SeriesCountByMetricName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9
qw422016.N().S(`,"labelValueCountByLabelName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:10
streamtsdbStatusEntries(qw422016, status.LabelValueCountByLabelName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:10
//line app/vmselect/prometheus/tsdb_status_response.qtpl:11
qw422016.N().S(`,"seriesCountByLabelValuePair":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:11
//line app/vmselect/prometheus/tsdb_status_response.qtpl:12
streamtsdbStatusEntries(qw422016, status.SeriesCountByLabelValuePair)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:11
//line app/vmselect/prometheus/tsdb_status_response.qtpl:12
qw422016.N().S(`,"labelValueCountByLabelName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
streamtsdbStatusEntries(qw422016, status.LabelValueCountByLabelName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
qw422016.N().S(`}}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
func WriteTSDBStatusResponse(qq422016 qtio422016.Writer, status *storage.TSDBStatus) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
StreamTSDBStatusResponse(qw422016, status)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
func TSDBStatusResponse(status *storage.TSDBStatus) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
WriteTSDBStatusResponse(qb422016, status)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:14
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
func streamtsdbStatusEntries(qw422016 *qt422016.Writer, a []storage.TopHeapEntry) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
qw422016.N().S(`[`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
//line app/vmselect/prometheus/tsdb_status_response.qtpl:20
for i, e := range a {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18
//line app/vmselect/prometheus/tsdb_status_response.qtpl:20
qw422016.N().S(`{"name":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:20
//line app/vmselect/prometheus/tsdb_status_response.qtpl:22
qw422016.N().Q(e.Name)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:20
//line app/vmselect/prometheus/tsdb_status_response.qtpl:22
qw422016.N().S(`,"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:25
if i+1 < len(a) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:25
qw422016.N().S(`,`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
//line app/vmselect/prometheus/tsdb_status_response.qtpl:25
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:24
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
qw422016.N().S(`]`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
func writetsdbStatusEntries(qq422016 qtio422016.Writer, a []storage.TopHeapEntry) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
streamtsdbStatusEntries(qw422016, a)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
func tsdbStatusEntries(a []storage.TopHeapEntry) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
writetsdbStatusEntries(qb422016, a)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28
}

View file

@ -193,22 +193,23 @@ func getTimestamps(start, end, step int64) []int64 {
}
func evalExpr(qt *querytracer.Tracer, ec *EvalConfig, e metricsql.Expr) ([]*timeseries, error) {
qt = qt.NewChild()
if qt.Enabled() {
query := e.AppendString(nil)
mayCache := ec.mayCache()
qt = qt.NewChild("eval: query=%s, timeRange=[%d..%d], step=%d, mayCache=%v", query, ec.Start, ec.End, ec.Step, mayCache)
}
rv, err := evalExprInternal(qt, ec, e)
if err != nil {
return nil, err
}
if qt.Enabled() {
query := e.AppendString(nil)
seriesCount := len(rv)
pointsPerSeries := 0
if len(rv) > 0 {
pointsPerSeries = len(rv[0].Timestamps)
}
pointsCount := seriesCount * pointsPerSeries
mayCache := ec.mayCache()
qt.Donef("eval: query=%s, timeRange=[%d..%d], step=%d, mayCache=%v: series=%d, points=%d, pointsPerSeries=%d",
query, ec.Start, ec.End, ec.Step, mayCache, seriesCount, pointsCount, pointsPerSeries)
qt.Donef("series=%d, points=%d, pointsPerSeries=%d", seriesCount, pointsCount, pointsPerSeries)
}
return rv, nil
}
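Same API migration as above, with one extra wrinkle: serializing the query via `e.AppendString(nil)` allocates, so both the label construction and the final stats message stay behind `qt.Enabled()` guards, and the information is now split between `NewChild` (inputs, known at the start) and `Donef` (results, known only at the end). A hedged sketch with a stub tracer:

```go
package main

import "fmt"

// Stub tracer mirroring the calls used above.
type Tracer struct{ enabled bool }

func (t *Tracer) Enabled() bool { return t != nil && t.enabled }

func (t *Tracer) NewChild(format string, args ...interface{}) *Tracer {
	if !t.Enabled() {
		return t
	}
	fmt.Printf("start: "+format+"\n", args...)
	return t
}

func (t *Tracer) Donef(format string, args ...interface{}) {
	if !t.Enabled() {
		return
	}
	fmt.Printf("done: "+format+"\n", args...)
}

func evalExpr(qt *Tracer, query string) int {
	if qt.Enabled() {
		// Expression serialization only happens when tracing is on.
		qt = qt.NewChild("eval: query=%s", query)
	}
	series := 42 // stand-in for the real evaluation
	if qt.Enabled() {
		qt.Donef("series=%d", series)
	}
	return series
}

func main() {
	evalExpr(&Tracer{enabled: true}, `rate(http_requests_total[5m])`)
}
```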
@ -234,9 +235,9 @@ func evalExprInternal(qt *querytracer.Tracer, ec *EvalConfig, e metricsql.Expr)
if fe, ok := e.(*metricsql.FuncExpr); ok {
nrf := getRollupFunc(fe.Name)
if nrf == nil {
qtChild := qt.NewChild()
qtChild := qt.NewChild("transform %s()", fe.Name)
rv, err := evalTransformFunc(qtChild, ec, fe)
qtChild.Donef("transform %s(): series=%d", fe.Name, len(rv))
qtChild.Donef("series=%d", len(rv))
return rv, err
}
args, re, err := evalRollupFuncArgs(qt, ec, fe)
@ -254,15 +255,15 @@ func evalExprInternal(qt *querytracer.Tracer, ec *EvalConfig, e metricsql.Expr)
return rv, nil
}
if ae, ok := e.(*metricsql.AggrFuncExpr); ok {
qtChild := qt.NewChild()
qtChild := qt.NewChild("aggregate %s()", ae.Name)
rv, err := evalAggrFunc(qtChild, ec, ae)
qtChild.Donef("aggregate %s(): series=%d", ae.Name, len(rv))
qtChild.Donef("series=%d", len(rv))
return rv, err
}
if be, ok := e.(*metricsql.BinaryOpExpr); ok {
qtChild := qt.NewChild()
qtChild := qt.NewChild("binary op %q", be.Op)
rv, err := evalBinaryOp(qtChild, ec, be)
qtChild.Donef("binary op %q: series=%d", be.Op, len(rv))
qtChild.Donef("series=%d", len(rv))
return rv, err
}
if ne, ok := e.(*metricsql.NumberExpr); ok {
@ -724,8 +725,8 @@ func aggregateAbsentOverTime(ec *EvalConfig, expr metricsql.Expr, tss []*timeser
func evalRollupFuncWithSubquery(qt *querytracer.Tracer, ec *EvalConfig, funcName string, rf rollupFunc, expr metricsql.Expr, re *metricsql.RollupExpr) ([]*timeseries, error) {
// TODO: determine whether to use rollupResultCacheV here.
qt = qt.NewChild()
defer qt.Donef("subquery")
qt = qt.NewChild("subquery")
defer qt.Done()
step := re.Step.Duration(ec.Step)
if step == 0 {
step = ec.Step
@ -855,9 +856,9 @@ func evalRollupFuncWithMetricExpr(qt *querytracer.Tracer, ec *EvalConfig, funcNa
expr metricsql.Expr, me *metricsql.MetricExpr, iafc *incrementalAggrFuncContext, windowExpr *metricsql.DurationExpr) ([]*timeseries, error) {
var rollupMemorySize int64
window := windowExpr.Duration(ec.Step)
qt = qt.NewChild()
qt = qt.NewChild("rollup %s(): timeRange=[%d..%d], step=%d, window=%d", funcName, ec.Start, ec.End, ec.Step, window)
defer func() {
qt.Donef("rollup %s(): timeRange=[%d..%d], step=%d, window=%d, neededMemoryBytes=%d", funcName, ec.Start, ec.End, ec.Step, window, rollupMemorySize)
qt.Donef("neededMemoryBytes=%d", rollupMemorySize)
}()
if me.IsEmpty() {
return evalNumber(ec, nan), nil
@ -972,8 +973,8 @@ func getRollupMemoryLimiter() *memoryLimiter {
func evalRollupWithIncrementalAggregate(qt *querytracer.Tracer, funcName string, keepMetricNames bool,
iafc *incrementalAggrFuncContext, rss *netstorage.Results, rcs []*rollupConfig,
preFunc func(values []float64, timestamps []int64), sharedTimestamps []int64) ([]*timeseries, error) {
qt = qt.NewChild()
defer qt.Donef("rollup %s() with incremental aggregation %s() over %d series", funcName, iafc.ae.Name, rss.Len())
qt = qt.NewChild("rollup %s() with incremental aggregation %s() over %d series", funcName, iafc.ae.Name, rss.Len())
defer qt.Done()
err := rss.RunParallel(qt, func(rs *netstorage.Result, workerID uint) error {
rs.Values, rs.Timestamps = dropStaleNaNs(funcName, rs.Values, rs.Timestamps)
preFunc(rs.Values, rs.Timestamps)
@ -1007,8 +1008,8 @@ func evalRollupWithIncrementalAggregate(qt *querytracer.Tracer, funcName string,
func evalRollupNoIncrementalAggregate(qt *querytracer.Tracer, funcName string, keepMetricNames bool, rss *netstorage.Results, rcs []*rollupConfig,
preFunc func(values []float64, timestamps []int64), sharedTimestamps []int64) ([]*timeseries, error) {
qt = qt.NewChild()
defer qt.Donef("rollup %s() over %d series", funcName, rss.Len())
qt = qt.NewChild("rollup %s() over %d series", funcName, rss.Len())
defer qt.Done()
tss := make([]*timeseries, 0, rss.Len()*len(rcs))
var tssLock sync.Mutex
err := rss.RunParallel(qt, func(rs *netstorage.Result, workerID uint) error {

View file

@ -200,10 +200,10 @@ func ResetRollupResultCache() {
}
func (rrc *rollupResultCache) Get(qt *querytracer.Tracer, ec *EvalConfig, expr metricsql.Expr, window int64) (tss []*timeseries, newStart int64) {
qt = qt.NewChild()
if qt.Enabled() {
query := expr.AppendString(nil)
defer qt.Donef("rollup cache get: query=%s, timeRange=[%d..%d], step=%d, window=%d", query, ec.Start, ec.End, ec.Step, window)
qt = qt.NewChild("rollup cache get: query=%s, timeRange=[%d..%d], step=%d, window=%d", query, ec.Start, ec.End, ec.Step, window)
defer qt.Done()
}
if !ec.mayCache() {
qt.Printf("do not fetch series from cache, since it is disabled in the current context")
@ -296,10 +296,10 @@ func (rrc *rollupResultCache) Get(qt *querytracer.Tracer, ec *EvalConfig, expr m
var resultBufPool bytesutil.ByteBufferPool
func (rrc *rollupResultCache) Put(qt *querytracer.Tracer, ec *EvalConfig, expr metricsql.Expr, window int64, tss []*timeseries) {
qt = qt.NewChild()
if qt.Enabled() {
query := expr.AppendString(nil)
defer qt.Donef("rollup cache put: query=%s, timeRange=[%d..%d], step=%d, window=%d, series=%d", query, ec.Start, ec.End, ec.Step, window, len(tss))
qt = qt.NewChild("rollup cache put: query=%s, timeRange=[%d..%d], step=%d, window=%d, series=%d", query, ec.Start, ec.End, ec.Step, window, len(tss))
defer qt.Done()
}
if len(tss) == 0 || !ec.mayCache() {
qt.Printf("do not store series to cache, since it is disabled in the current context")

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@ -1,12 +1,12 @@
{
"files": {
"main.css": "./static/css/main.d8362c27.css",
"main.js": "./static/js/main.a35e61a3.js",
"main.js": "./static/js/main.105dbc4f.js",
"static/js/27.939f971b.chunk.js": "./static/js/27.939f971b.chunk.js",
"index.html": "./index.html"
},
"entrypoints": [
"static/css/main.d8362c27.css",
"static/js/main.a35e61a3.js"
"static/js/main.105dbc4f.js"
]
}

Binary file not shown.


View file

@ -1 +1 @@
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script src="./dashboards/index.js" type="module"></script><script defer="defer" src="./static/js/main.a35e61a3.js"></script><link href="./static/css/main.d8362c27.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script src="./dashboards/index.js" type="module"></script><script defer="defer" src="./static/js/main.105dbc4f.js"></script><link href="./static/css/main.d8362c27.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@ -17808,6 +17808,16 @@
"js-yaml": "bin/js-yaml.js"
}
},
"node_modules/svgo/node_modules/nth-check": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz",
"integrity": "sha512-WeBOdju8SnzPN5vTUJYxYUxLeXpCaVP5i5e0LF8fg7WORF2Wd7wFX/pk0tYZk7s8T+J7VLy0Da6J1+wCT0AtHg==",
"dev": true,
"peer": true,
"dependencies": {
"boolbase": "~1.0.0"
}
},
"node_modules/symbol-tree": {
"version": "3.2.4",
"resolved": "https://registry.npmjs.org/symbol-tree/-/symbol-tree-3.2.4.tgz",
@ -32692,7 +32702,7 @@
"boolbase": "^1.0.0",
"css-what": "^3.2.1",
"domutils": "^1.7.0",
"nth-check": "^2.0.1"
"nth-check": "^1.0.2"
}
},
"css-what": {
@ -32743,6 +32753,16 @@
"argparse": "^1.0.7",
"esprima": "^4.0.0"
}
},
"nth-check": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz",
"integrity": "sha512-WeBOdju8SnzPN5vTUJYxYUxLeXpCaVP5i5e0LF8fg7WORF2Wd7wFX/pk0tYZk7s8T+J7VLy0Da6J1+wCT0AtHg==",
"dev": true,
"peer": true,
"requires": {
"boolbase": "~1.0.0"
}
}
}
},

Binary file not shown.


View file

@ -4,6 +4,7 @@ import {SnackbarProvider} from "./contexts/Snackbar";
import {StateProvider} from "./state/common/StateContext";
import {AuthStateProvider} from "./state/auth/AuthStateContext";
import {GraphStateProvider} from "./state/graph/GraphStateContext";
import {CardinalityStateProvider} from "./state/cardinality/CardinalityStateContext";
import THEME from "./theme/theme";
import { ThemeProvider, StyledEngineProvider } from "@mui/material/styles";
import CssBaseline from "@mui/material/CssBaseline";
@ -14,6 +15,7 @@ import router from "./router/index";
import CustomPanel from "./components/CustomPanel/CustomPanel";
import HomeLayout from "./components/Home/HomeLayout";
import DashboardsLayout from "./components/PredefinedPanels/DashboardsLayout";
import CardinalityPanel from "./components/CardinalityPanel/CardinalityPanel";
const App: FC = () => {
@ -27,14 +29,17 @@ const App: FC = () => {
<StateProvider> {/* Serialized into query string, common app settings */}
<AuthStateProvider> {/* Auth related info - optionally persisted to Local Storage */}
<GraphStateProvider> {/* Graph settings */}
<SnackbarProvider> {/* Display various snackbars */}
<Routes>
<Route path={"/"} element={<HomeLayout/>}>
<Route path={router.home} element={<CustomPanel/>}/>
<Route path={router.dashboards} element={<DashboardsLayout/>}/>
</Route>
</Routes>
</SnackbarProvider>
<CardinalityStateProvider> {/* Cardinality settings */}
<SnackbarProvider> {/* Display various snackbars */}
<Routes>
<Route path={"/"} element={<HomeLayout/>}>
<Route path={router.home} element={<CustomPanel/>}/>
<Route path={router.dashboards} element={<DashboardsLayout/>}/>
<Route path={router.cardinality} element={<CardinalityPanel/>} />
</Route>
</Routes>
</SnackbarProvider>
</CardinalityStateProvider>
</GraphStateProvider>
</AuthStateProvider>
</StateProvider>

View file

@ -0,0 +1,12 @@
export interface CardinalityRequestsParams {
topN: number,
extraLabel: string | null,
match: string | null,
date: string | null,
}
export const getCardinalityInfo = (server: string, requestsParam: CardinalityRequestsParams) => {
const match = requestsParam.match ? `&match[]=${requestsParam.match}` : "";
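// Note: extraLabel is declared in CardinalityRequestsParams but is not yet
// appended to the URL here; only topN, date and match[] are serialized.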
return `${server}/api/v1/status/tsdb?topN=${requestsParam.topN}&date=${requestsParam.date}${match}`;
};

View file

@ -0,0 +1,41 @@
import React, {FC, useEffect, useRef, useState} from "preact/compat";
import uPlot, {Options as uPlotOptions} from "uplot";
import useResize from "../../hooks/useResize";
import {BarChartProps} from "./types";
const BarChart: FC<BarChartProps> = ({
data,
container,
configs}) => {
const uPlotRef = useRef<HTMLDivElement>(null);
const [isPanning, setPanning] = useState(false);
const [uPlotInst, setUPlotInst] = useState<uPlot>();
const layoutSize = useResize(container);
const options: uPlotOptions = {
...configs,
width: layoutSize.width || 400,
};
const updateChart = (): void => {
if (!uPlotInst) return;
uPlotInst.setData(data);
if (!isPanning) uPlotInst.redraw();
};
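// Re-create the uPlot instance whenever the container mounts or its measured
// size changes; the effect's cleanup destroys the previous instance.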
useEffect(() => {
if (!uPlotRef.current) return;
const u = new uPlot(options, data, uPlotRef.current);
setUPlotInst(u);
return u.destroy;
}, [uPlotRef.current, layoutSize]);
useEffect(() => updateChart(), [data]);
return <div style={{pointerEvents: isPanning ? "none" : "auto", height: "100%"}}>
<div ref={uPlotRef}/>
</div>;
};
export default BarChart;

View file

@ -0,0 +1,51 @@
import {seriesBarsPlugin} from "../../utils/uplot/plugin";
import {barDisp, getBarSeries} from "../../utils/uplot/series";
import {Fill, Stroke} from "../../utils/uplot/types";
import {PaddingSide, Series} from "uplot";
const stroke: Stroke = {
unit: 3,
values: (u: { data: number[][]; }) => u.data[1].map((_: number, idx) =>
idx !== 0 ? "#33BB55" : "#F79420"
),
};
const fill: Fill = {
unit: 3,
values: (u: { data: number[][]; }) => u.data[1].map((_: number, idx) =>
idx !== 0 ? "#33BB55" : "#F79420"
),
};
export const barOptions = {
height: 500,
width: 500,
padding: [null, 0, null, 0] as [top: PaddingSide, right: PaddingSide, bottom: PaddingSide, left: PaddingSide],
axes: [{ show: false }],
series: [
{
label: "",
value: (u: uPlot, v: string) => v
},
{
label: " ",
width: 0,
fill: "",
values: (u: uPlot, seriesIdx: number) => {
const idxs = u.legend.idxs || [];
if (u.data === null || idxs.length === 0)
return {"Name": null, "Value": null,};
const dataIdx = idxs[seriesIdx] || 0;
const build = u.data[0][dataIdx];
const duration = u.data[seriesIdx][dataIdx];
return {"Name": build, "Value": duration};
}
},
] as Series[],
plugins: [seriesBarsPlugin(getBarSeries([1], 0, 1, 0, barDisp(stroke, fill)))],
};

View file

@ -0,0 +1,7 @@
import {AlignedData as uPlotData, Options as uPlotOptions} from "uplot";
export interface BarChartProps {
data: uPlotData;
container: HTMLDivElement | null,
configs: uPlotOptions,
}

View file

@ -0,0 +1,27 @@
import React from "preact/compat";
import { styled } from "@mui/material/styles";
import LinearProgressWithLabel, {linearProgressClasses, LinearProgressProps} from "@mui/material/LinearProgress";
import {Box, Typography} from "@mui/material";
export const BorderLinearProgress = styled(LinearProgressWithLabel)(({ theme }) => ({
height: 20,
borderRadius: 5,
[`&.${linearProgressClasses.colorPrimary}`]: {
backgroundColor: theme.palette.grey[theme.palette.mode === "light" ? 200 : 800],
},
[`& .${linearProgressClasses.bar}`]: {
borderRadius: 5,
backgroundColor: theme.palette.mode === "light" ? "#1a90ff" : "#308fe8",
},
}));
export const BorderLinearProgressWithLabel = (props: LinearProgressProps & { value: number }) => (
<Box sx={{ display: "flex", alignItems: "center" }}>
<Box sx={{ width: "100%", mr: 1 }}>
<BorderLinearProgress variant="determinate" {...props} />
</Box>
<Box sx={{ minWidth: 35 }}>
<Typography variant="body2" color="text.secondary">{`${props.value.toFixed(2)}%`}</Typography>
</Box>
</Box>
);

View file

@ -0,0 +1,78 @@
import React, {ChangeEvent, FC} from "react";
import Box from "@mui/material/Box";
import QueryEditor from "../../CustomPanel/Configurator/Query/QueryEditor";
import Tooltip from "@mui/material/Tooltip";
import IconButton from "@mui/material/IconButton";
import PlayCircleOutlineIcon from "@mui/icons-material/PlayCircleOutline";
import {useFetchQueryOptions} from "../../../hooks/useFetchQueryOptions";
import {useAppDispatch, useAppState} from "../../../state/common/StateContext";
import FormControlLabel from "@mui/material/FormControlLabel";
import BasicSwitch from "../../../theme/switch";
import {saveToStorage} from "../../../utils/storage";
import TextField from "@mui/material/TextField";
import {ErrorTypes} from "../../../types";
export interface CardinalityConfiguratorProps {
onSetHistory: (step: number, index: number) => void;
onSetQuery: (query: string, index: number) => void;
onRunQuery: () => void;
onTopNChange: (e: ChangeEvent<HTMLTextAreaElement|HTMLInputElement>) => void;
query: string;
topN: number;
error?: ErrorTypes | string;
}
const CardinalityConfigurator: FC<CardinalityConfiguratorProps> = ({
topN,
error,
query,
onSetHistory,
onRunQuery,
onSetQuery,
onTopNChange }) => {
const dispatch = useAppDispatch();
const {queryControls: {autocomplete}} = useAppState();
const {queryOptions} = useFetchQueryOptions();
const onChangeAutocomplete = () => {
dispatch({type: "TOGGLE_AUTOCOMPLETE"});
saveToStorage("AUTOCOMPLETE", !autocomplete);
};
return <Box boxShadow="rgba(99, 99, 99, 0.2) 0px 2px 8px 0px;" p={4} pb={2} mb={2}>
<Box>
<Box display="grid" gridTemplateColumns="1fr auto auto" gap="4px" width="100%" mb={0}>
<QueryEditor
query={query} index={0} autocomplete={autocomplete} queryOptions={queryOptions}
error={error} setHistoryIndex={onSetHistory} runQuery={onRunQuery} setQuery={onSetQuery}
label={"Arbitrary time series selector"}
/>
<Tooltip title="Execute Query">
<IconButton onClick={onRunQuery} sx={{height: "49px", width: "49px"}}>
<PlayCircleOutlineIcon/>
</IconButton>
</Tooltip>
</Box>
</Box>
<Box display="flex" alignItems="center" mt={3} mr={"53px"}>
<Box>
<FormControlLabel label="Enable autocomplete"
control={<BasicSwitch checked={autocomplete} onChange={onChangeAutocomplete}/>}
/>
</Box>
<Box ml={2}>
<TextField
label="Number of top entries"
type="number"
size="small"
variant="outlined"
value={topN}
error={topN < 1}
helperText={topN < 1 ? "Number must be greater than zero" : " "}
onChange={onTopNChange}/>
</Box>
</Box>
</Box>;
};
export default CardinalityConfigurator;

View file

@ -0,0 +1,157 @@
import React, {ChangeEvent, FC, useState} from "react";
import {SyntheticEvent} from "react";
import {Typography, Grid, Alert, Box, Tabs, Tab, Tooltip} from "@mui/material";
import TableChartIcon from "@mui/icons-material/TableChart";
import ShowChartIcon from "@mui/icons-material/ShowChart";
import {useFetchQuery} from "../../hooks/useCardinalityFetch";
import EnhancedTable from "../Table/Table";
import {TSDBStatus, TopHeapEntry, DefaultState, Tabs as TabsType, Containers} from "./types";
import {
defaultHeadCells,
headCellsWithProgress,
SPINNER_TITLE,
spinnerContainerStyles
} from "./consts";
import {defaultProperties, progressCount, queryUpdater, tableTitles} from "./helpers";
import {Data} from "../Table/types";
import BarChart from "../BarChart/BarChart";
import CardinalityConfigurator from "./CardinalityConfigurator/CardinalityConfigurator";
import {barOptions} from "../BarChart/consts";
import Spinner from "../common/Spinner";
import TabPanel from "../TabPanel/TabPanel";
import {useCardinalityDispatch, useCardinalityState} from "../../state/cardinality/CardinalityStateContext";
import {tableCells} from "./TableCells/TableCells";
const CardinalityPanel: FC = () => {
const cardinalityDispatch = useCardinalityDispatch();
const {topN, match, date} = useCardinalityState();
const configError = "";
const [query, setQuery] = useState(match || "");
const [queryHistoryIndex, setQueryHistoryIndex] = useState(0);
const [queryHistory, setQueryHistory] = useState<string[]>([]);
const onRunQuery = () => {
setQueryHistory(prev => [...prev, query]);
setQueryHistoryIndex(prev => prev + 1);
cardinalityDispatch({type: "SET_MATCH", payload: query});
cardinalityDispatch({type: "RUN_QUERY"});
};
const onSetQuery = (query: string) => {
setQuery(query);
};
const onSetHistory = (step: number) => {
const newIndexHistory = queryHistoryIndex + step;
if (newIndexHistory < 0 || newIndexHistory >= queryHistory.length) return;
setQueryHistoryIndex(newIndexHistory);
setQuery(queryHistory[newIndexHistory]);
};
const onTopNChange = (e: ChangeEvent<HTMLTextAreaElement|HTMLInputElement>) => {
cardinalityDispatch({type: "SET_TOP_N", payload: +e.target.value});
};
const {isLoading, tsdbStatus, error} = useFetchQuery();
const defaultProps = defaultProperties(tsdbStatus);
const [stateTabs, setTab] = useState(defaultProps.defaultState);
const handleTabChange = (e: SyntheticEvent, newValue: number) => {
// eslint-disable-next-line @typescript-eslint/ban-ts-comment
// @ts-ignore
setTab({...stateTabs, [e.target.id]: newValue});
};
const handleFilterClick = (key: string) => (e: SyntheticEvent) => {
const name = e.currentTarget.id;
const query = queryUpdater[key](name);
setQuery(query);
setQueryHistory(prev => [...prev, query]);
setQueryHistoryIndex(prev => prev + 1);
cardinalityDispatch({type: "SET_MATCH", payload: query});
cardinalityDispatch({type: "RUN_QUERY"});
};
return (
<>
{isLoading && <Spinner
isLoading={isLoading}
height={"800px"}
containerStyles={spinnerContainerStyles("100%")}
title={<Alert color="error" severity="error" sx={{whiteSpace: "pre-wrap", mt: 2}}>
{SPINNER_TITLE}
</Alert>}
/>}
<CardinalityConfigurator error={configError} query={query} onRunQuery={onRunQuery} onSetQuery={onSetQuery}
onSetHistory={onSetHistory} onTopNChange={onTopNChange} topN={topN} />
{error && <Alert color="error" severity="error" sx={{whiteSpace: "pre-wrap", mt: 2}}>{error}</Alert>}
{<Box m={2}>
Analyzed <b>{tsdbStatus.totalSeries}</b> series and <b>{tsdbStatus.totalLabelValuePairs}</b> label=value pairs
at <b>{date}</b> {match && <span>for series selector <b>{match}</b></span>}. Showing top {topN} entries per table.
</Box>}
{Object.keys(tsdbStatus).map((key ) => {
if (key == "totalSeries" || key == "totalLabelValuePairs") return null;
const tableTitle = tableTitles[key];
const rows = tsdbStatus[key as keyof TSDBStatus] as unknown as Data[];
rows.forEach((row) => {
progressCount(tsdbStatus.totalSeries, key, row);
row.actions = "0";
});
const headerCells = (key == "seriesCountByMetricName" || key == "seriesCountByLabelValuePair") ? headCellsWithProgress : defaultHeadCells;
return (
<>
<Grid container spacing={2} sx={{px: 2}}>
<Grid item xs={12} md={12} lg={12} key={key}>
<Typography gutterBottom variant="h5" component="h5">
{tableTitle}
</Typography>
<Box sx={{ borderBottom: 1, borderColor: "divider" }}>
<Tabs
value={stateTabs[key as keyof DefaultState]}
onChange={handleTabChange} aria-label="basic tabs example">
{defaultProps.tabs[key as keyof TabsType].map((title: string, i: number) =>
<Tab
key={title}
label={title}
aria-controls={`tabpanel-${i}`}
id={key}
iconPosition={"start"}
icon={ i === 0 ? <TableChartIcon /> : <ShowChartIcon /> } />
)}
</Tabs>
</Box>
{defaultProps.tabs[key as keyof TabsType].map((_,idx) =>
<div
ref={defaultProps.containerRefs[key as keyof Containers<HTMLDivElement>]}
style={{width: "100%", paddingRight: idx !== 0 ? "40px" : 0 }} key={`${key}-${idx}`}>
<TabPanel value={stateTabs[key as keyof DefaultState]} index={idx}>
{stateTabs[key as keyof DefaultState] === 0 ? <EnhancedTable
rows={rows}
headerCells={headerCells}
defaultSortColumn={"value"}
tableCells={(row) => tableCells(row,date,handleFilterClick(key))}
/>: <BarChart
data={[
// eslint-disable-next-line @typescript-eslint/ban-ts-comment
// @ts-ignore
rows.map((v) => v.name),
rows.map((v) => v.value),
rows.map((_, i) => i % 12 == 0 ? 1 : i % 10 == 0 ? 2 : 0),
]}
container={defaultProps.containerRefs[key as keyof Containers<HTMLDivElement>]?.current}
configs={barOptions}
/>}
</TabPanel>
</div>
)}
</Grid>
</Grid>
</>
);
})}
</>
);
};
export default CardinalityPanel;

View file

@ -0,0 +1,50 @@
import {TableCell, ButtonGroup} from "@mui/material";
import {Data} from "../../Table/types";
import {BorderLinearProgressWithLabel} from "../../BorderLineProgress/BorderLinearProgress";
import React from "preact/compat";
import IconButton from "@mui/material/IconButton";
import PlayCircleOutlineIcon from "@mui/icons-material/PlayCircleOutline";
import Tooltip from "@mui/material/Tooltip";
import {SyntheticEvent} from "react";
import dayjs from "dayjs";
export const tableCells = (
row: Data,
date: string | null,
onFilterClick: (e: SyntheticEvent) => void) => {
const pathname = window.location.pathname;
const withday = dayjs(date).add(1, "day").toDate();
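// Note: withday (date plus one day) is computed here but not referenced by
// the cells rendered below in this version.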
return Object.keys(row).map((key, idx) => {
if (idx === 0) {
return (<TableCell component="th" scope="row" key={key}>
{row[key as keyof Data]}
</TableCell>);
}
if (key === "progressValue") {
return (
<TableCell key={key}>
<BorderLinearProgressWithLabel
variant="determinate"
value={row[key as keyof Data] as number}
/>
</TableCell>
);
}
if (key === "actions") {
const title = `Filter by ${row.name}`;
return (<TableCell key={key}>
<ButtonGroup variant="contained">
<Tooltip title={title}>
<IconButton
id={row.name}
onClick={onFilterClick}
sx={{height: "20px", width: "20px"}}>
<PlayCircleOutlineIcon/>
</IconButton>
</Tooltip>
</ButtonGroup>
</TableCell>);
}
return (<TableCell key={key}>{row[key as keyof Data]}</TableCell>);
});
};

View file

@ -0,0 +1,44 @@
import {HeadCell} from "../Table/types";
export const headCellsWithProgress = [
{
disablePadding: false,
id: "name",
label: "Name",
numeric: false,
},
{
disablePadding: false,
id: "value",
label: "Value",
numeric: false,
},
{
disablePadding: false,
id: "percentage",
label: "Percent of series",
numeric: false,
},
{
disablePadding: false,
id: "action",
label: "Action",
numeric: false,
}
] as HeadCell[];
export const defaultHeadCells = headCellsWithProgress.filter((head) => head.id!=="percentage");
export const spinnerContainerStyles = (height: string) => {
return {
width: "100%",
maxWidth: "100%",
position: "absolute",
height: height ?? "50%",
background: "rgba(255, 255, 255, 0.7)",
pointerEvents: "none",
zIndex: 1000,
};
};
export const SPINNER_TITLE = "Please wait while cardinality stats are calculated. This may take some time if the db contains a large number of time series";

View file

@ -0,0 +1,59 @@
import {Containers, DefaultState, QueryUpdater, Tabs, TSDBStatus} from "./types";
import {Data} from "../Table/types";
import {useRef} from "preact/compat";
export const tableTitles: {[key: string]: string} = {
"seriesCountByMetricName": "Metric names with the highest number of series",
"seriesCountByLabelValuePair": "Label=value pairs with the highest number of series",
"labelValueCountByLabelName": "Labels with the highest number of unique values",
};
export const queryUpdater: QueryUpdater = {
labelValueCountByLabelName: (query: string): string => `{${query}!=""}`,
seriesCountByLabelValuePair: (query: string): string => {
const a = query.split("=");
const label = a[0];
const value = a.slice(1).join("=");
return getSeriesSelector(label, value);
},
seriesCountByMetricName: (query: string): string => {
return getSeriesSelector("__name__", query);
},
};
const getSeriesSelector = (label: string, value: string): string => {
return "{" + label + "=" + JSON.stringify(value) + "}";
};
export const progressCount = (totalSeries: number, key: string, row: Data): Data => {
if (key === "seriesCountByMetricName" || key === "seriesCountByLabelValuePair") {
row.progressValue = row.value / totalSeries * 100;
return row;
}
return row;
};
export const defaultProperties = (tsdbStatus: TSDBStatus) => {
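// useRef is called inside the reduce below; that only satisfies the Rules of
// Hooks because defaultProperties runs unconditionally on every render and
// tsdbStatus always exposes the same keys in the same order.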
return Object.keys(tsdbStatus).reduce((acc, key) => {
if (key === "totalSeries" || key === "totalLabelValuePairs") return acc;
return {
...acc,
tabs:{
...acc.tabs,
[key]: ["table", "graph"],
},
containerRefs: {
...acc.containerRefs,
[key]: useRef<HTMLDivElement>(null),
},
defaultState: {
...acc.defaultState,
[key]: 0,
},
};
}, {
tabs:{} as Tabs,
containerRefs: {} as Containers<HTMLDivElement>,
defaultState: {} as DefaultState,
});
};

View file

@ -0,0 +1,40 @@
import {MutableRef} from "preact/hooks";
export interface TSDBStatus {
labelValueCountByLabelName: TopHeapEntry[];
seriesCountByLabelValuePair: TopHeapEntry[];
seriesCountByMetricName: TopHeapEntry[];
totalSeries: number;
totalLabelValuePairs: number;
}
export interface TopHeapEntry {
name: string;
count: number;
}
export type TypographyFunctions = {
[key: string]: (value: number) => string,
}
export type QueryUpdater = {
[key: string]: (query: string) => string,
}
export interface Tabs {
labelValueCountByLabelName: string[];
seriesCountByLabelValuePair: string[];
seriesCountByMetricName: string[];
}
export interface Containers<T> {
labelValueCountByLabelName: MutableRef<T>;
seriesCountByLabelValuePair: MutableRef<T>;
seriesCountByMetricName: MutableRef<T>;
}
export interface DefaultState {
labelValueCountByLabelName: number;
seriesCountByLabelValuePair: number;
seriesCountByMetricName: number;
}

View file

@ -72,8 +72,10 @@ const QueryConfigurator: FC<QueryConfiguratorProps> = ({error, queryOptions}) =>
{query.map((q, i) =>
<Box key={i} display="grid" gridTemplateColumns="1fr auto auto" gap="4px" width="100%"
mb={i === query.length - 1 ? 0 : 2.5}>
<QueryEditor query={query[i]} index={i} autocomplete={autocomplete} queryOptions={queryOptions}
error={error} setHistoryIndex={setHistoryIndex} runQuery={onRunQuery} setQuery={onSetQuery}/>
<QueryEditor
query={query[i]} index={i} autocomplete={autocomplete} queryOptions={queryOptions}
error={error} setHistoryIndex={setHistoryIndex} runQuery={onRunQuery} setQuery={onSetQuery}
label={`Query ${i + 1}`}/>
{i === 0 && <Tooltip title="Execute Query">
<IconButton onClick={onRunQuery} sx={{height: "49px", width: "49px"}}>
<PlayCircleOutlineIcon/>
@ -97,4 +99,4 @@ const QueryConfigurator: FC<QueryConfiguratorProps> = ({error, queryOptions}) =>
</Box>;
};
export default QueryConfigurator;
export default QueryConfigurator;

View file

@ -18,6 +18,7 @@ export interface QueryEditorProps {
autocomplete: boolean;
error?: ErrorTypes | string;
queryOptions: string[];
label: string;
}
const QueryEditor: FC<QueryEditorProps> = ({
@ -28,7 +29,8 @@ const QueryEditor: FC<QueryEditorProps> = ({
runQuery,
autocomplete,
error,
queryOptions
queryOptions,
label,
}) => {
const [focusField, setFocusField] = useState(false);
@ -99,8 +101,9 @@ const QueryEditor: FC<QueryEditorProps> = ({
<TextField
defaultValue={query}
fullWidth
label={`Query ${index + 1}`}
label={label}
multiline
focused={!!query}
error={!!error}
onFocus={() => setFocusField(true)}
onBlur={(e) => {

View file

@ -12,6 +12,7 @@ import GraphSettings from "./Configurator/Graph/GraphSettings";
import {useGraphDispatch, useGraphState} from "../../state/graph/GraphStateContext";
import {AxisRange} from "../../state/graph/reducer";
import Spinner from "../common/Spinner";
import {useFetchQueryOptions} from "../../hooks/useFetchQueryOptions";
const CustomPanel: FC = () => {
@ -33,7 +34,8 @@ const CustomPanel: FC = () => {
dispatch({type: "SET_PERIOD", payload: {from, to}});
};
const {isLoading, liveData, graphData, error, queryOptions} = useFetchQuery({
const {queryOptions} = useFetchQueryOptions();
const {isLoading, liveData, graphData, error} = useFetchQuery({
visible: true,
customStep
});

View file

@ -1,4 +1,4 @@
import React, {FC, useState} from "preact/compat";
import React, {FC, useMemo, useState} from "preact/compat";
import AppBar from "@mui/material/AppBar";
import Box from "@mui/material/Box";
import Link from "@mui/material/Link";
@ -12,7 +12,10 @@ import GlobalSettings from "../CustomPanel/Configurator/Settings/GlobalSettings"
import {Link as RouterLink, useLocation, useNavigate} from "react-router-dom";
import Tabs from "@mui/material/Tabs";
import Tab from "@mui/material/Tab";
import router from "../../router/index";
import router, {RouterOptions, routerOptions} from "../../router/index";
import DatePicker from "../Main/DatePicker/DatePicker";
import {useCardinalityState, useCardinalityDispatch} from "../../state/cardinality/CardinalityStateContext";
import {useEffect} from "react";
const classes = {
logo: {
@ -54,11 +57,18 @@ const classes = {
const Header: FC = () => {
const {date} = useCardinalityState();
const cardinalityDispatch = useCardinalityDispatch();
const {search, pathname} = useLocation();
const navigate = useNavigate();
const [activeMenu, setActiveMenu] = useState(pathname);
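// Derive which header controls (time selector, date picker, execution
// controls, global settings) the current route wants; routerOptions maps
// each path to its header configuration.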
const headerSetup = useMemo(() => {
return ((routerOptions[pathname] || {}) as RouterOptions).header || {};
}, [pathname]);
const onClickLogo = () => {
navigateHandler(router.home);
setQueryStringWithoutPageReload("");
@ -69,6 +79,10 @@ const Header: FC = () => {
navigate({pathname, search: search});
};
useEffect(() => {
setActiveMenu(pathname);
}, [pathname]);
return <AppBar position="static" sx={{px: 1, boxShadow: "none"}}>
<Toolbar>
<Box display="grid" alignItems="center" justifyContent="center">
@ -89,12 +103,23 @@ const Header: FC = () => {
onChange={(e, val) => setActiveMenu(val)}>
<Tab label="Custom panel" value={router.home} component={RouterLink} to={`${router.home}${search}`}/>
<Tab label="Dashboards" value={router.dashboards} component={RouterLink} to={`${router.dashboards}${search}`}/>
<Tab
label="Cardinality"
value={router.cardinality}
component={RouterLink}
to={`${router.cardinality}${search}`}/>
</Tabs>
</Box>
<Box display="grid" gridTemplateColumns="repeat(3, auto)" gap={1} alignItems="center" ml="auto" mr={0}>
<TimeSelector/>
<ExecutionControls/>
<GlobalSettings/>
{headerSetup?.timeSelector && <TimeSelector/>}
{headerSetup?.datePicker && (
<DatePicker
date={date}
onChange={(val) => cardinalityDispatch({type: "SET_DATE", payload: val})}
/>
)}
{headerSetup?.executionControls && <ExecutionControls/>}
{headerSetup?.globalSettings && <GlobalSettings/>}
</Box>
</Toolbar>
</AppBar>;

View file

@ -0,0 +1,67 @@
import React, {FC} from "react";
import TextField from "@mui/material/TextField";
import {useState} from "preact/compat";
import StaticDatePicker from "@mui/lab/StaticDatePicker";
import Box from "@mui/material/Box";
import Button from "@mui/material/Button";
import Tooltip from "@mui/material/Tooltip";
import dayjs from "dayjs";
import Popper from "@mui/material/Popper";
import ClickAwayListener from "@mui/material/ClickAwayListener";
import Paper from "@mui/material/Paper";
import EventIcon from "@mui/icons-material/Event";
const formatDate = "YYYY-MM-DD";
interface DatePickerProps {
date: string | null,
onChange: (val: string | null) => void
}
const DatePicker: FC<DatePickerProps> = ({date, onChange}) => {
const dateFormatted = date ? dayjs(date).format(formatDate) : null;
const [anchorEl, setAnchorEl] = useState<HTMLButtonElement | null>(null);
const open = Boolean(anchorEl);
return <>
<Tooltip title="Date control">
<Button variant="contained" color="primary"
sx={{
color: "white",
border: "1px solid rgba(0, 0, 0, 0.2)",
boxShadow: "none"
}}
startIcon={<EventIcon/>}
onClick={(e) => setAnchorEl(e.currentTarget)}>
{dateFormatted}
</Button>
</Tooltip>
<Popper
open={open}
anchorEl={anchorEl}
placement="bottom-end"
modifiers={[{name: "offset", options: {offset: [0, 6]}}]}>
<ClickAwayListener onClickAway={() => setAnchorEl(null)}>
<Paper elevation={3}>
<Box>
<StaticDatePicker
displayStaticWrapperAs="desktop"
inputFormat={formatDate}
mask="____-__-__"
value={date}
onChange={(newDate) => {
onChange(newDate ? dayjs(newDate).format(formatDate) : null);
setAnchorEl(null);
}}
renderInput={(params) => <TextField {...params}/>}
/>
</Box>
</Paper>
</ClickAwayListener>
</Popper>
</>;
};
export default DatePicker;
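
A minimal usage sketch for this component, assuming a hypothetical parent that keeps the selected day in local state (the real consumer is the `Header` above, which routes the value through `cardinalityDispatch`):

```tsx
import React, {FC} from "preact/compat";
import {useState} from "preact/compat";
import DatePicker from "./DatePicker"; // hypothetical path

const DayFilter: FC = () => {
  // null means "no day picked"; DatePicker renders the formatted value on its button
  const [day, setDay] = useState<string | null>("2022-06-09");
  return <DatePicker date={day} onChange={setDay}/>;
};

export default DayFilter;
```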

View file

@ -0,0 +1,26 @@
import {ReactNode} from "react";
import {Box} from "@mui/material";
import React from "preact/compat";
interface TabPanelProps {
children?: ReactNode;
index: number;
value: number;
}
const TabPanel = (props: TabPanelProps) => {
const { children, value, index, ...other } = props;
return (
<div
role="tabpanel"
hidden={value !== index}
id={`simple-tabpanel-${index}`}
aria-labelledby={`simple-tab-${index}`}
{...other}
>
{value === index && (<Box sx={{ p: 3 }}>{children}</Box>)}
</div>
);
};
export default TabPanel;
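
A short sketch of how such a panel is typically paired with MUI `Tabs`; the component and label names below are illustrative, not taken from the diff:

```tsx
import React from "preact/compat";
import {useState} from "preact/compat";
import Tabs from "@mui/material/Tabs";
import Tab from "@mui/material/Tab";
import TabPanel from "./TabPanel"; // hypothetical path

const CardinalityTabs = () => {
  const [tab, setTab] = useState(0);
  return <>
    <Tabs value={tab} onChange={(e, val) => setTab(val)}>
      <Tab label="Metric names"/>
      <Tab label="Label=value pairs"/>
    </Tabs>
    {/* only the panel whose index matches the current value renders its children */}
    <TabPanel value={tab} index={0}>metric names table</TabPanel>
    <TabPanel value={tab} index={1}>label=value pairs table</TabPanel>
  </>;
};

export default CardinalityTabs;
```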

View file

@ -0,0 +1,137 @@
import {Box, Paper, Table, TableBody, TableCell, TableContainer, TablePagination, TableRow,} from "@mui/material";
import React, {FC, useState} from "preact/compat";
import {ChangeEvent, MouseEvent, SyntheticEvent} from "react";
import {Data, Order, TableProps,} from "./types";
import {EnhancedTableHead} from "./TableHead";
import {getComparator, stableSort} from "./helpers";
const EnhancedTable: FC<TableProps> = ({
rows,
headerCells,
defaultSortColumn,
isPagingEnabled,
tableCells}) => {
const [order, setOrder] = useState<Order>("desc");
const [orderBy, setOrderBy] = useState<keyof Data>(defaultSortColumn);
const [selected, setSelected] = useState<readonly string[]>([]);
const [page, setPage] = useState(0);
const [rowsPerPage, setRowsPerPage] = useState(5);
const handleRequestSort = (
event: MouseEvent<unknown>,
property: keyof Data,
) => {
const isAsc = orderBy === property && order === "asc";
setOrder(isAsc ? "desc" : "asc");
setOrderBy(property);
};
const handleSelectAllClick = (event: ChangeEvent<HTMLInputElement>) => {
if (event.target.checked) {
const newSelecteds = rows.map((n) => n.name) as string[];
setSelected(newSelecteds);
return;
}
setSelected([]);
};
const handleClick = (event: SyntheticEvent, name: string) => {
const selectedIndex = selected.indexOf(name);
let newSelected: readonly string[] = [];
if (selectedIndex === -1) {
newSelected = newSelected.concat(selected, name);
} else if (selectedIndex === 0) {
newSelected = newSelected.concat(selected.slice(1));
} else if (selectedIndex === selected.length - 1) {
newSelected = newSelected.concat(selected.slice(0, -1));
} else if (selectedIndex > 0) {
newSelected = newSelected.concat(
selected.slice(0, selectedIndex),
selected.slice(selectedIndex + 1),
);
}
setSelected(newSelected);
};
const handleChangePage = (event: unknown, newPage: number) => {
setPage(newPage);
};
const handleChangeRowsPerPage = (event: ChangeEvent<HTMLInputElement>) => {
setRowsPerPage(parseInt(event.target.value, 10));
setPage(0);
};
const isSelected = (name: string) => selected.indexOf(name) !== -1;
// Avoid a layout jump when reaching the last page with empty rows.
const emptyRows =
page > 0 ? Math.max(0, (1 + page) * rowsPerPage - rows.length) : 0;
const sortedData = isPagingEnabled ? stableSort(rows, getComparator(order, orderBy))
.slice(page * rowsPerPage, page * rowsPerPage + rowsPerPage) : stableSort(rows, getComparator(order, orderBy));
return (
<Box sx={{width: "100%"}}>
<Paper sx={{width: "100%", mb: 2}}>
<TableContainer>
<Table
size={"small"}
sx={{minWidth: 750}}
aria-labelledby="tableTitle"
>
<EnhancedTableHead
numSelected={selected.length}
order={order}
orderBy={orderBy}
onSelectAllClick={handleSelectAllClick}
onRequestSort={handleRequestSort}
rowCount={rows.length}
headerCells={headerCells}/>
<TableBody>
{/* if you don't need to support IE11, you can replace the `stableSort` call with:
rows.slice().sort(getComparator(order, orderBy)) */}
{sortedData
.map((row) => {
const isItemSelected = isSelected(row.name);
return (
<TableRow
hover
onClick={(event) => handleClick(event, row.name)}
role="checkbox"
aria-checked={isItemSelected}
tabIndex={-1}
key={row.name}
selected={isItemSelected}
>
{tableCells(row)}
</TableRow>
);
})}
{emptyRows > 0 && (
<TableRow>
<TableCell colSpan={6}/>
</TableRow>
)}
</TableBody>
</Table>
</TableContainer>
{isPagingEnabled ? <TablePagination
rowsPerPageOptions={[5, 10, 25]}
component="div"
count={rows.length}
rowsPerPage={rowsPerPage}
page={page}
onPageChange={handleChangePage}
onRowsPerPageChange={handleChangeRowsPerPage}
/> : null}
</Paper>
</Box>
);
};
export default EnhancedTable;
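
A sketch of feeding this table, under the assumption that rows follow the `Data` shape from `./types`; the metric names, counts and file paths below are made up:

```tsx
import React from "preact/compat";
import TableCell from "@mui/material/TableCell";
import EnhancedTable from "./Table"; // hypothetical path
import {Data, HeadCell} from "./types";

const rows: Data[] = [
  {name: "node_cpu_seconds_total", value: 1200, progressValue: 60, actions: ""},
  {name: "node_memory_Active_bytes", value: 800, progressValue: 40, actions: ""},
];

const headerCells: HeadCell[] = [
  {id: "name", label: "Metric name", disablePadding: false, numeric: false},
  {id: "value", label: "Number of series", disablePadding: false, numeric: true},
];

const MetricNamesTable = () => (
  <EnhancedTable
    rows={rows}
    headerCells={headerCells}
    defaultSortColumn={"value"}
    isPagingEnabled
    // tableCells maps a row to the cells of one <TableRow>
    tableCells={(row) => [
      <TableCell key="name">{row.name}</TableCell>,
      <TableCell key="value" align="right">{row.value}</TableCell>,
    ]}
  />
);

export default MetricNamesTable;
```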

View file

@ -0,0 +1,41 @@
import {MouseEvent} from "react";
import {Box, TableCell, TableHead, TableRow, TableSortLabel} from "@mui/material";
import {visuallyHidden} from "@mui/utils";
import React from "preact/compat";
import {Data, EnhancedHeaderTableProps} from "./types";
export function EnhancedTableHead(props: EnhancedHeaderTableProps) {
const { order, orderBy, onRequestSort, headerCells } = props;
const createSortHandler =
(property: keyof Data) => (event: MouseEvent<unknown>) => {
onRequestSort(event, property);
};
return (
<TableHead>
<TableRow>
{headerCells.map((headCell) => (
<TableCell
key={headCell.id}
align={headCell.numeric ? "right" : "left"}
sortDirection={orderBy === headCell.id ? order : false}
>
<TableSortLabel
active={orderBy === headCell.id}
direction={orderBy === headCell.id ? order : "asc"}
onClick={createSortHandler(headCell.id as keyof Data)}
>
{headCell.label}
{orderBy === headCell.id ? (
<Box component="span" sx={visuallyHidden}>
{order === "desc" ? "sorted descending" : "sorted ascending"}
</Box>
) : null}
</TableSortLabel>
</TableCell>
))}
</TableRow>
</TableHead>
);
}

View file

@ -0,0 +1,37 @@
import {Order} from "./types";
export function descendingComparator<T>(a: T, b: T, orderBy: keyof T) {
if (b[orderBy] < a[orderBy]) {
return -1;
}
if (b[orderBy] > a[orderBy]) {
return 1;
}
return 0;
}
export function getComparator<Key extends keyof any>(
order: Order,
orderBy: Key,
): (
a: { [key in Key]: number | string },
b: { [key in Key]: number | string },
) => number {
return order === "desc"
? (a, b) => descendingComparator(a, b, orderBy)
: (a, b) => -descendingComparator(a, b, orderBy);
}
// This method is created for cross-browser compatibility; if you don't
// need to support IE11, you can use Array.prototype.sort() directly
export function stableSort<T>(array: readonly T[], comparator: (a: T, b: T) => number) {
const stabilizedThis = array.map((el, index) => [el, index] as [T, number]);
stabilizedThis.sort((a, b) => {
const order = comparator(a[0], b[0]);
if (order !== 0) {
return order;
}
return a[1] - b[1];
});
return stabilizedThis.map((el) => el[0]);
}
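
For illustration, a tiny made-up input showing why the stable variant matters: rows that compare equal keep their original relative order, which `Array.prototype.sort` does not guarantee in IE11:

```ts
import {getComparator, stableSort} from "./helpers"; // hypothetical path

const rows = [
  {name: "b", value: 10},
  {name: "a", value: 10},
  {name: "c", value: 5},
];

// descending by value; "b" stays ahead of "a" because they tie on value
const sorted = stableSort(rows, getComparator("desc", "value"));
console.log(sorted.map((r) => r.name)); // ["b", "a", "c"]
```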

View file

@ -0,0 +1,36 @@
import {ChangeEvent, MouseEvent, ReactNode} from "react";
export type Order = "asc" | "desc";
export interface HeadCell {
disablePadding: boolean;
id: string;
label: string | ReactNode;
numeric: boolean;
}
export interface EnhancedHeaderTableProps {
numSelected: number;
onRequestSort: (event: MouseEvent<unknown>, property: keyof Data) => void;
onSelectAllClick: (event: ChangeEvent<HTMLInputElement>) => void;
order: Order;
orderBy: string;
rowCount: number;
headerCells: HeadCell[];
}
export interface TableProps {
rows: Data[];
headerCells: HeadCell[],
defaultSortColumn: keyof Data,
tableCells: (row: Data) => ReactNode[],
isPagingEnabled?: boolean,
}
export interface Data {
name: string;
value: number;
progressValue: number;
actions: string;
}

View file

@ -1,4 +1,5 @@
import React, {FC} from "preact/compat";
import {ReactNode} from "react";
import Fade from "@mui/material/Fade";
import Box from "@mui/material/Box";
import CircularProgress from "@mui/material/CircularProgress";
@ -6,25 +7,30 @@ import CircularProgress from "@mui/material/CircularProgress";
interface SpinnerProps {
isLoading: boolean;
height?: string;
containerStyles?: Record<string, string | number>;
title?: string | ReactNode,
}
const Spinner: FC<SpinnerProps> = ({isLoading, height}) => {
export const defaultContainerStyles: Record<string, string | number> = {
width: "100%",
maxWidth: "calc(100vw - 64px)",
height: "50%",
position: "absolute",
background: "rgba(255, 255, 255, 0.7)",
pointerEvents: "none",
zIndex: 2,
};
const Spinner: FC<SpinnerProps> = ({isLoading, containerStyles, title}) => {
const styles = containerStyles ?? defaultContainerStyles;
return <Fade in={isLoading} style={{
transitionDelay: isLoading ? "300ms" : "0ms",
}}>
<Box alignItems="center" justifyContent="center" flexDirection="column" display="flex"
style={{
width: "100%",
maxWidth: "calc(100vw - 64px)",
position: "absolute",
height: height ?? "50%",
background: "rgba(255, 255, 255, 0.7)",
pointerEvents: "none",
zIndex: 2,
}}>
<Box alignItems="center" justifyContent="center" flexDirection="column" display="flex" style={styles}>
<CircularProgress/>
{title}
</Box>
</Fade>;
};
export default Spinner;
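
A sketch of the new API, assuming a caller that reuses the exported defaults but stretches the overlay and adds a caption; the prop values are illustrative:

```tsx
import React, {FC} from "preact/compat";
import Spinner, {defaultContainerStyles} from "./Spinner"; // hypothetical path

const LoadingOverlay: FC<{loading: boolean}> = ({loading}) => (
  <Spinner
    isLoading={loading}
    containerStyles={{...defaultContainerStyles, height: "100%"}}
    title="Fetching cardinality stats..."
  />
);

export default LoadingOverlay;
```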

View file

@ -0,0 +1,71 @@
import {ErrorTypes} from "../types";
import {useAppState} from "../state/common/StateContext";
import {useEffect, useState} from "preact/compat";
import {CardinalityRequestsParams, getCardinalityInfo} from "../api/tsdb";
import {getAppModeEnable, getAppModeParams} from "../utils/app-mode";
import {TSDBStatus} from "../components/CardinalityPanel/types";
import {useCardinalityState} from "../state/cardinality/CardinalityStateContext";
const appModeEnable = getAppModeEnable();
const {serverURL: appServerUrl} = getAppModeParams();
const defaultTSDBStatus = {
totalSeries: 0,
totalLabelValuePairs: 0,
seriesCountByMetricName: [],
seriesCountByLabelValuePair: [],
labelValueCountByLabelName: [],
};
export const useFetchQuery = (): {
fetchUrl?: string[],
isLoading: boolean,
error?: ErrorTypes | string
tsdbStatus: TSDBStatus,
} => {
const {topN, extraLabel, match, date, runQuery} = useCardinalityState();
const {serverUrl} = useAppState();
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState<ErrorTypes | string>();
const [tsdbStatus, setTSDBStatus] = useState<TSDBStatus>(defaultTSDBStatus);
useEffect(() => {
if (error) {
setTSDBStatus(defaultTSDBStatus);
setIsLoading(false);
}
}, [error]);
const fetchCardinalityInfo = async (requestParams: CardinalityRequestsParams) => {
const server = appModeEnable ? appServerUrl : serverUrl;
if (!server) return;
setError("");
setIsLoading(true);
setTSDBStatus(defaultTSDBStatus);
const url = getCardinalityInfo(server, requestParams);
try {
const response = await fetch(url);
const resp = await response.json();
if (response.ok) {
const {data} = resp;
setTSDBStatus({ ...data });
setIsLoading(false);
} else {
setError(resp.error);
setTSDBStatus(defaultTSDBStatus);
setIsLoading(false);
}
} catch (e) {
setIsLoading(false);
if (e instanceof Error) setError(`${e.name}: ${e.message}`);
}
};
useEffect(() => {
fetchCardinalityInfo({topN, extraLabel, match, date});
}, [serverUrl, runQuery, date]);
return {isLoading, tsdbStatus, error};
};
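
A sketch of consuming this hook: the fetch effect re-fires when `serverUrl`, `runQuery` or `date` change, so a component only needs to read the returned fields. The component name, file path and markup below are illustrative:

```tsx
import React, {FC} from "preact/compat";
import {useFetchQuery} from "./useCardinalityFetch"; // hypothetical path

const CardinalitySummary: FC = () => {
  const {isLoading, tsdbStatus, error} = useFetchQuery();
  if (error) return <div>{error}</div>;
  if (isLoading) return <div>loading...</div>;
  return <div>
    total series: {tsdbStatus.totalSeries},
    label=value pairs: {tsdbStatus.totalLabelValuePairs}
  </div>;
};

export default CardinalitySummary;
```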

View file

@ -1,5 +1,5 @@
import {useEffect, useMemo, useCallback, useState} from "preact/compat";
import {getQueryOptions, getQueryRangeUrl, getQueryUrl} from "../api/query-range";
import {getQueryRangeUrl, getQueryUrl} from "../api/query-range";
import {useAppState} from "../state/common/StateContext";
import {InstantMetricResult, MetricBase, MetricResult} from "../api/types";
import {isValidHttpUrl} from "../utils/url";
@ -27,11 +27,9 @@ export const useFetchQuery = ({predefinedQuery, visible, display, customStep}: F
graphData?: MetricResult[],
liveData?: InstantMetricResult[],
error?: ErrorTypes | string,
queryOptions: string[],
} => {
const {query, displayType, serverUrl, time: {period}, queryControls: {nocache}} = useAppState();
const [queryOptions, setQueryOptions] = useState([]);
const [isLoading, setIsLoading] = useState(false);
const [graphData, setGraphData] = useState<MetricResult[]>();
const [liveData, setLiveData] = useState<InstantMetricResult[]>();
@ -78,22 +76,6 @@ export const useFetchQuery = ({predefinedQuery, visible, display, customStep}: F
const throttledFetchData = useCallback(throttle(fetchData, 1000), []);
const fetchOptions = async () => {
const server = appModeEnable ? appServerUrl : serverUrl;
if (!server) return;
const url = getQueryOptions(server);
try {
const response = await fetch(url);
const resp = await response.json();
if (response.ok) {
setQueryOptions(resp.data);
}
} catch (e) {
if (e instanceof Error) setError(`${e.name}: ${e.message}`);
}
};
const fetchUrl = useMemo(() => {
const server = appModeEnable ? appServerUrl : serverUrl;
const expr = predefinedQuery ?? query;
@ -117,10 +99,6 @@ export const useFetchQuery = ({predefinedQuery, visible, display, customStep}: F
const prevFetchUrl = usePrevious(fetchUrl);
useEffect(() => {
fetchOptions();
}, [serverUrl]);
useEffect(() => {
if (!visible || (fetchUrl && prevFetchUrl && arrayEquals(fetchUrl, prevFetchUrl))) return;
throttledFetchData(fetchUrl, fetchQueue, (display || displayType));
@ -133,5 +111,5 @@ export const useFetchQuery = ({predefinedQuery, visible, display, customStep}: F
setFetchQueue(fetchQueue.filter(f => !f.signal.aborted));
}, [fetchQueue]);
return { fetchUrl, isLoading, graphData, liveData, error, queryOptions: queryOptions };
return { fetchUrl, isLoading, graphData, liveData, error };
};

View file

@ -0,0 +1,37 @@
import {useEffect, useState} from "preact/compat";
import {getQueryOptions} from "../api/query-range";
import {useAppState} from "../state/common/StateContext";
import {getAppModeEnable, getAppModeParams} from "../utils/app-mode";
const appModeEnable = getAppModeEnable();
const {serverURL: appServerUrl} = getAppModeParams();
export const useFetchQueryOptions = (): {
queryOptions: string[],
} => {
const {serverUrl} = useAppState();
const [queryOptions, setQueryOptions] = useState([]);
const fetchOptions = async () => {
const server = appModeEnable ? appServerUrl : serverUrl;
if (!server) return;
const url = getQueryOptions(server);
try {
const response = await fetch(url);
const resp = await response.json();
if (response.ok) {
setQueryOptions(resp.data);
}
} catch (e) {
console.error(e);
}
};
useEffect(() => {
fetchOptions();
}, [serverUrl]);
return { queryOptions };
};
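
One plausible consumer, sketched here with MUI `Autocomplete` (this wiring is an assumption, not shown in the diff): the fetched metric names become suggestions for the query input.

```tsx
import React, {FC} from "preact/compat";
import Autocomplete from "@mui/material/Autocomplete";
import TextField from "@mui/material/TextField";
import {useFetchQueryOptions} from "./useFetchQueryOptions"; // hypothetical path

const QueryField: FC = () => {
  const {queryOptions} = useFetchQueryOptions();
  return <Autocomplete
    freeSolo
    options={queryOptions}
    renderInput={(params) => <TextField {...params} label="Query"/>}
  />;
};

export default QueryField;
```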

View file

@ -1,4 +1,35 @@
export default {
const router = {
home: "/",
dashboards: "/dashboards"
dashboards: "/dashboards",
cardinality: "/cardinality",
};
export interface RouterOptions {
header: {
timeSelector?: boolean,
executionControls?: boolean,
globalSettings?: boolean,
datePicker?: boolean
}
}
const routerOptionsDefault = {
header: {
timeSelector: true,
executionControls: true,
globalSettings: true,
}
};
export const routerOptions: {[key: string]: RouterOptions} = {
[router.home]: routerOptionsDefault,
[router.dashboards]: routerOptionsDefault,
[router.cardinality]: {
header: {
datePicker: true,
globalSettings: true,
}
}
};
export default router;

View file

@ -0,0 +1,35 @@
import React, {createContext, FC, useContext, useEffect, useMemo, useReducer} from "preact/compat";
import {Action, CardinalityState, initialState, reducer} from "./reducer";
import {Dispatch} from "react";
import {useLocation} from "react-router-dom";
import {setQueryStringValue} from "../../utils/query-string";
import router from "../../router";
type CardinalityStateContextType = { state: CardinalityState, dispatch: Dispatch<Action> };
export const CardinalityStateContext = createContext<CardinalityStateContextType>({} as CardinalityStateContextType);
export const useCardinalityState = (): CardinalityState => useContext(CardinalityStateContext).state;
export const useCardinalityDispatch = (): Dispatch<Action> => useContext(CardinalityStateContext).dispatch;
export const CardinalityStateProvider: FC = ({children}) => {
const location = useLocation();
const [state, dispatch] = useReducer(reducer, initialState);
useEffect(() => {
if (location.pathname !== router.cardinality) return;
setQueryStringValue(state as unknown as Record<string, unknown>);
}, [state, location]);
const contextValue = useMemo(() => {
return { state, dispatch };
}, [state, dispatch]);
return <CardinalityStateContext.Provider value={contextValue}>
{children}
</CardinalityStateContext.Provider>;
};
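
A sketch of where the provider is expected to sit, assuming the usual composition: inside the router context (it calls `useLocation`) and above any component that calls the cardinality hooks. `CardinalityPanel` is a placeholder name, and `BrowserRouter` stands in for whatever router the app actually mounts:

```tsx
import React, {FC} from "preact/compat";
import {BrowserRouter} from "react-router-dom";
import {CardinalityStateProvider} from "./CardinalityStateContext";

const CardinalityPanel: FC = () => null; // placeholder for the real panel

const App: FC = () => (
  <BrowserRouter>
    <CardinalityStateProvider>
      <CardinalityPanel/>
    </CardinalityStateProvider>
  </BrowserRouter>
);

export default App;
```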

View file

@ -0,0 +1,57 @@
import dayjs from "dayjs";
import {getQueryStringValue} from "../../utils/query-string";
export interface CardinalityState {
runQuery: number,
topN: number
date: string | null
match: string | null
extraLabel: string | null
}
export type Action =
| { type: "SET_TOP_N", payload: number }
| { type: "SET_DATE", payload: string | null }
| { type: "SET_MATCH", payload: string | null }
| { type: "SET_EXTRA_LABEL", payload: string | null }
| { type: "RUN_QUERY" }
export const initialState: CardinalityState = {
runQuery: 0,
topN: getQueryStringValue("topN", 10) as number,
date: getQueryStringValue("date", dayjs(new Date()).format("YYYY-MM-DD")) as string,
match: (getQueryStringValue("match", []) as string[]).join("&"),
extraLabel: getQueryStringValue("extra_label", "") as string,
};
export function reducer(state: CardinalityState, action: Action): CardinalityState {
switch (action.type) {
case "SET_TOP_N":
return {
...state,
topN: action.payload
};
case "SET_DATE":
return {
...state,
date: action.payload
};
case "SET_MATCH":
return {
...state,
match: action.payload
};
case "SET_EXTRA_LABEL":
return {
...state,
extraLabel: action.payload
};
case "RUN_QUERY":
return {
...state,
runQuery: state.runQuery + 1
};
default:
throw new Error();
}
}
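
The reducer is a pure function, so its contract can be shown without React. A made-up sequence: set a day, then bump `runQuery` so effects keyed on it re-fire:

```ts
import {initialState, reducer} from "./reducer"; // hypothetical path

let state = reducer(initialState, {type: "SET_DATE", payload: "2022-06-01"});
state = reducer(state, {type: "RUN_QUERY"});

console.log(state.date);     // "2022-06-01"
console.log(state.runQuery); // initialState.runQuery + 1
```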

View file

@ -3,6 +3,7 @@ import {Action, AppState, initialState, reducer} from "./reducer";
import {getQueryStringValue, setQueryStringValue} from "../../utils/query-string";
import {Dispatch} from "react";
import {useLocation} from "react-router-dom";
import router from "../../router";
type StateContextType = { state: AppState, dispatch: Dispatch<Action> };
@ -23,6 +24,7 @@ export const StateProvider: FC = ({children}) => {
const [state, dispatch] = useReducer(reducer, initialPrepopulatedState);
useEffect(() => {
if (location.pathname === router.cardinality) return;
setQueryStringValue(state as unknown as Record<string, unknown>);
}, [state, location]);

View file

@ -13,6 +13,12 @@ const graphStateToUrlParams = {
const stateToUrlParams = {
[router.home]: graphStateToUrlParams,
[router.dashboards]: graphStateToUrlParams,
[router.cardinality]: {
"topN": "topN",
"date": "date",
"match": "match[]",
"extraLabel": "extra_label"
}
};
// TODO: need a function to detect types.

View file

@ -0,0 +1,440 @@
/* eslint-disable */
import uPlot from "uplot";
export const seriesBarsPlugin = (opts) => {
let pxRatio;
let font;
let { ignore = [] } = opts;
let radius = opts.radius ?? 0;
function setPxRatio() {
pxRatio = devicePixelRatio;
font = Math.round(10 * pxRatio) + "px Arial";
}
setPxRatio();
window.addEventListener("dppxchange", setPxRatio);
const ori = opts.ori;
const dir = opts.dir;
const stacked = opts.stacked;
const groupWidth = 0.9;
const groupDistr = SPACE_BETWEEN;
const barWidth = 1;
const barDistr = SPACE_BETWEEN;
function distrTwo(groupCount, barCount, _groupWidth = groupWidth) {
let out = Array.from({length: barCount}, () => ({
offs: Array(groupCount).fill(0),
size: Array(groupCount).fill(0),
}));
distr(groupCount, _groupWidth, groupDistr, null, (groupIdx, groupOffPct, groupDimPct) => {
distr(barCount, barWidth, barDistr, null, (barIdx, barOffPct, barDimPct) => {
out[barIdx].offs[groupIdx] = groupOffPct + (groupDimPct * barOffPct);
out[barIdx].size[groupIdx] = groupDimPct * barDimPct;
});
});
return out;
}
function distrOne(groupCount, barCount) {
let out = Array.from({length: barCount}, () => ({
offs: Array(groupCount).fill(0),
size: Array(groupCount).fill(0),
}));
distr(groupCount, groupWidth, groupDistr, null, (groupIdx, groupOffPct, groupDimPct) => {
distr(barCount, barWidth, barDistr, null, (barIdx, barOffPct, barDimPct) => {
out[barIdx].offs[groupIdx] = groupOffPct;
out[barIdx].size[groupIdx] = groupDimPct;
});
});
return out;
}
let barsPctLayout;
let barsColors;
let barsBuilder = uPlot.paths.bars({
radius,
disp: {
x0: {
unit: 2,
values: (u, seriesIdx, idx0, idx1) => barsPctLayout[seriesIdx].offs,
},
size: {
unit: 2,
values: (u, seriesIdx, idx0, idx1) => barsPctLayout[seriesIdx].size,
},
...opts.disp,
},
each: (u, seriesIdx, dataIdx, lft, top, wid, hgt) => {
// we get back raw canvas coords (including axes & padding); translate to the plotting area origin
lft -= u.bbox.left;
top -= u.bbox.top;
qt.add({x: lft, y: top, w: wid, h: hgt, sidx: seriesIdx, didx: dataIdx});
},
});
function drawPoints(u, sidx, i0, i1) {
u.ctx.save();
u.ctx.font = font;
u.ctx.fillStyle = "black";
uPlot.orient(u, sidx, (
series,
dataX,
dataY,
scaleX,
scaleY,
valToPosX,
valToPosY,
xOff,
yOff,
xDim,
yDim, moveTo, lineTo, rect) => {
const _dir = dir * (ori === 0 ? 1 : -1);
const wid = Math.round(barsPctLayout[sidx].size[0] * xDim);
barsPctLayout[sidx].offs.forEach((offs, ix) => {
if (dataY[ix] !== null) {
let x0 = xDim * offs;
let lft = Math.round(xOff + (_dir === 1 ? x0 : xDim - x0 - wid));
let barWid = Math.round(wid);
let yPos = valToPosY(dataY[ix], scaleY, yDim, yOff);
let x = ori === 0 ? Math.round(lft + barWid/2) : Math.round(yPos);
let y = ori === 0 ? Math.round(yPos) : Math.round(lft + barWid/2);
u.ctx.textAlign = ori === 0 ? "center" : dataY[ix] >= 0 ? "left" : "right";
u.ctx.textBaseline = ori === 1 ? "middle" : dataY[ix] >= 0 ? "bottom" : "top";
u.ctx.fillText(dataY[ix], x, y);
}
});
});
u.ctx.restore();
}
function range(u, dataMin, dataMax) {
let [min, max] = uPlot.rangeNum(0, dataMax, 0.05, true);
return [0, max];
}
let qt;
return {
hooks: {
drawClear: u => {
qt = qt || new Quadtree(0, 0, u.bbox.width, u.bbox.height);
qt.clear();
// force-clear the path cache to cause drawBars() to rebuild new quadtree
u.series.forEach(s => {
s._paths = null;
});
if (stacked)
barsPctLayout = [null].concat(distrOne(u.data.length - 1 - ignore.length, u.data[0].length));
else if (u.series.length === 2)
barsPctLayout = [null].concat(distrOne(u.data[0].length, 1));
else
barsPctLayout = [null].concat(distrTwo(u.data[0].length, u.data.length - 1 - ignore.length, u.data[0].length === 1 ? 1 : groupWidth));
// TODO: only do on setData, not every redraw
if (opts.disp?.fill != null) {
barsColors = [null];
for (let i = 1; i < u.data.length; i++) {
barsColors.push({
fill: opts.disp.fill.values(u, i),
stroke: opts.disp.stroke.values(u, i),
});
}
}
},
},
opts: (u, opts) => {
const yScaleOpts = {
range,
ori: ori === 0 ? 1 : 0,
};
// hovered
let hRect;
uPlot.assign(opts, {
select: {show: false},
cursor: {
x: false,
y: false,
dataIdx: (u, seriesIdx) => {
if (seriesIdx === 1) {
hRect = null;
let cx = u.cursor.left * pxRatio;
let cy = u.cursor.top * pxRatio;
qt.get(cx, cy, 1, 1, o => {
if (pointWithin(cx, cy, o.x, o.y, o.x + o.w, o.y + o.h))
hRect = o;
});
}
return hRect && seriesIdx === hRect.sidx ? hRect.didx : null;
},
points: {
// fill: "rgba(255,255,255, 0.3)",
bbox: (u, seriesIdx) => {
let isHovered = hRect && seriesIdx === hRect.sidx;
return {
left: isHovered ? hRect.x / pxRatio : -10,
top: isHovered ? hRect.y / pxRatio : -10,
width: isHovered ? hRect.w / pxRatio : 0,
height: isHovered ? hRect.h / pxRatio : 0,
};
}
}
},
scales: {
x: {
time: false,
distr: 2,
ori,
dir,
// auto: true,
range: (u, min, max) => {
min = 0;
max = Math.max(1, u.data[0].length - 1);
let pctOffset = 0;
distr(u.data[0].length, groupWidth, groupDistr, 0, (di, lftPct, widPct) => {
pctOffset = lftPct + widPct / 2;
});
let rn = max - min;
if (pctOffset === 0.5)
min -= rn;
else {
let upScale = 1 / (1 - pctOffset * 2);
let offset = (upScale * rn - rn) / 2;
min -= offset;
max += offset;
}
return [min, max];
}
},
rend: yScaleOpts,
size: yScaleOpts,
mem: yScaleOpts,
inter: yScaleOpts,
toggle: yScaleOpts,
}
});
if (ori === 1) {
opts.padding = [0, null, 0, null];
}
uPlot.assign(opts.axes[0], {
splits: (u, axisIdx) => {
const _dir = dir * (ori === 0 ? 1 : -1);
let splits = u._data[0].slice();
return _dir === 1 ? splits : splits.reverse();
},
values: u => u.data[0],
gap: 15,
size: ori === 0 ? 40 : 150,
labelSize: 20,
grid: {show: false},
ticks: {show: false},
side: ori === 0 ? 2 : 3,
});
opts.series.forEach((s, i) => {
if (i > 0 && !ignore.includes(i)) {
uPlot.assign(s, {
// pxAlign: false,
// stroke: "rgba(255,0,0,0.5)",
paths: barsBuilder,
points: {
show: drawPoints
}
});
}
});
}
};
};
const roundDec = (val, dec) => {
return Math.round(val * (dec = 10**dec)) / dec;
}
const SPACE_BETWEEN = 1;
const SPACE_AROUND = 2;
const SPACE_EVENLY = 3;
const coord = (i, offs, iwid, gap) => roundDec(offs + i * (iwid + gap), 6);
const distr = (numItems, sizeFactor, justify, onlyIdx, each) => {
let space = 1 - sizeFactor;
let gap = (
justify === SPACE_BETWEEN ? space / (numItems - 1) :
justify === SPACE_AROUND ? space / (numItems ) :
justify === SPACE_EVENLY ? space / (numItems + 1) : 0
);
if (isNaN(gap) || gap === Infinity)
gap = 0;
let offs = (
justify === SPACE_BETWEEN ? 0 :
justify === SPACE_AROUND ? gap / 2 :
justify === SPACE_EVENLY ? gap : 0
);
let iwid = sizeFactor / numItems;
let _iwid = roundDec(iwid, 6);
if (onlyIdx == null) {
for (let i = 0; i < numItems; i++)
each(i, coord(i, offs, iwid, gap), _iwid);
}
else
each(onlyIdx, coord(onlyIdx, offs, iwid, gap), _iwid);
}
const pointWithin = (px, py, rlft, rtop, rrgt, rbtm) => {
return px >= rlft && px <= rrgt && py >= rtop && py <= rbtm;
}
const MAX_OBJECTS = 10;
const MAX_LEVELS = 4;
function Quadtree(x, y, w, h, l) {
let t = this;
t.x = x;
t.y = y;
t.w = w;
t.h = h;
t.l = l || 0;
t.o = [];
t.q = null;
}
const proto = {
split: function() {
let t = this,
x = t.x,
y = t.y,
w = t.w / 2,
h = t.h / 2,
l = t.l + 1;
t.q = [
// top right
new Quadtree(x + w, y, w, h, l),
// top left
new Quadtree(x, y, w, h, l),
// bottom left
new Quadtree(x, y + h, w, h, l),
// bottom right
new Quadtree(x + w, y + h, w, h, l),
];
},
// invokes callback with index of each overlapping quad
quads: function(x, y, w, h, cb) {
let t = this,
q = t.q,
hzMid = t.x + t.w / 2,
vtMid = t.y + t.h / 2,
startIsNorth = y < vtMid,
startIsWest = x < hzMid,
endIsEast = x + w > hzMid,
endIsSouth = y + h > vtMid;
// top-right quad
startIsNorth && endIsEast && cb(q[0]);
// top-left quad
startIsWest && startIsNorth && cb(q[1]);
// bottom-left quad
startIsWest && endIsSouth && cb(q[2]);
// bottom-right quad
endIsEast && endIsSouth && cb(q[3]);
},
add: function(o) {
let t = this;
if (t.q != null) {
t.quads(o.x, o.y, o.w, o.h, q => {
q.add(o);
});
}
else {
let os = t.o;
os.push(o);
if (os.length > MAX_OBJECTS && t.l < MAX_LEVELS) {
t.split();
for (let i = 0; i < os.length; i++) {
let oi = os[i];
t.quads(oi.x, oi.y, oi.w, oi.h, q => {
q.add(oi);
});
}
t.o.length = 0;
}
}
},
get: function(x, y, w, h, cb) {
let t = this;
let os = t.o;
for (let i = 0; i < os.length; i++)
cb(os[i]);
if (t.q != null) {
t.quads(x, y, w, h, q => {
q.get(x, y, w, h, cb);
});
}
},
clear: function() {
this.o.length = 0;
this.q = null;
},
};
Object.assign(Quadtree.prototype, proto);
global.Quadtree = Quadtree;
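
A minimal wiring sketch, adapted from the uPlot grouped-bars demo this plugin derives from; the sizes, labels and colors are invented, the import path is hypothetical, and `data[0]` holds ordinal labels because the plugin forces `distr: 2` on the x scale:

```ts
import uPlot from "uplot";
import {seriesBarsPlugin} from "./seriesBarsPlugin"; // hypothetical path

const data = [
  ["metric_a", "metric_b", "metric_c"], // ordinal x labels
  [120, 80, 40],                        // bar heights
] as unknown as uPlot.AlignedData;

const opts: uPlot.Options = {
  width: 600,
  height: 300,
  axes: [{}, {}], // the plugin overrides splits/values/size on axes[0]
  series: [
    {},
    {label: "series count", fill: "rgba(87, 148, 242, 0.5)"},
  ],
  plugins: [
    seriesBarsPlugin({ori: 0, dir: 1, radius: 0.1, stacked: false, disp: {}}),
  ],
};

new uPlot(opts, data, document.body);
```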

View file

@ -1,7 +1,7 @@
import {MetricResult} from "../../api/types";
import {Series} from "uplot";
import {getNameForMetric} from "../metric";
import {LegendItem} from "./types";
import {BarSeriesItem, Disp, Fill, LegendItem, Stroke} from "./types";
import {getColorLine, getDashLine} from "./helpers";
import {HideSeriesArgs} from "./types";
@ -50,3 +50,25 @@ export const getHideSeries = ({hideSeries, legend, metaKey, series}: HideSeriesA
export const includesHideSeries = (label: string, group: string | number, hideSeries: string[]): boolean => {
return hideSeries.includes(`${group}.${label}`);
};
export const getBarSeries = (
which: number[],
ori: number,
dir: number,
radius: number,
disp: Disp): BarSeriesItem => {
return {
which: which,
ori: ori,
dir: dir,
radius: radius,
disp: disp,
};
};
export const barDisp = (stroke: Stroke, fill: Fill): Disp => {
return {
stroke: stroke,
fill: fill
};
};

View file

@ -39,3 +39,26 @@ export interface LegendItem {
checked: boolean;
freeFormFields: {[key: string]: string};
}
export interface BarSeriesItem {
which: number[],
ori: number,
dir: number,
radius: number,
disp: Disp
}
export interface Disp {
stroke: Stroke,
fill: Fill,
}
export interface Stroke {
unit: number,
values: (u: { data: number[][]; }) => string[],
}
export interface Fill {
unit: number,
values: (u: { data: number[][]; }) => string[],
}

View file

@ -1,35 +1,78 @@
{
"__inputs": [
{
"name": "DS_VICTORIAMETRICS",
"label": "VictoriaMetrics",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__elements": [],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "8.5.1"
},
{
"type": "panel",
"id": "graph",
"name": "Graph (old)",
"version": ""
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"datasource": {
"type": "datasource",
"uid": "grafana"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"description": "Overview for enterprise cluster VictoriaMetrics v1.56.0 or higher",
"editable": true,
"gnetId": null,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 13,
"iteration": 1617980754279,
"id": null,
"iteration": 1654632993170,
"links": [],
"liveNow": false,
"panels": [
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$ds",
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"description": "How many datapoints are inserted into storage per second by accountID and projectID",
"fieldConfig": {
"defaults": {
"custom": {},
"links": []
},
"overrides": []
@ -61,7 +104,7 @@
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.3.4",
"pluginVersion": "8.5.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
@ -71,16 +114,18 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(increase(vm_tenant_inserted_rows_total{job=~\"$job\", instance=~\"$instance\",accountID=~\"$accountID\", projectID=~\"$projectID\"}[1m])/60) by (accountID,projectID) ",
"datasource": {
"type": "prometheus",
"uid": "${DS_VICTORIAMETRICS}"
},
"expr": "sum(increase(vm_tenant_inserted_rows_total{job=~\"$job\", instance=~\"$instance\",accountID=~\"$account\", projectID=~\"$project\"}[1m])/60) by (accountID,projectID) ",
"interval": "",
"legendFormat": "inserted rows: {{accountID}}:{{projectID}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Datapoints ingestion rate ($instance)",
"tooltip": {
"shared": true,
@ -89,33 +134,26 @@
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
"align": false
}
},
{
@ -123,11 +161,13 @@
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$ds",
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"description": "Request rate accepted by vmselect nodes per tenant",
"fieldConfig": {
"defaults": {
"custom": {},
"links": []
},
"overrides": []
@ -162,7 +202,7 @@
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.3.4",
"pluginVersion": "8.5.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
@ -172,18 +212,22 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(vm_tenant_select_requests_total{job=~\"$job\", instance=~\"$instance.*\",accountID=~\"$accountID\", projectID=~\"$projectID\"}[$__rate_interval])) by (accountID,projectID) ",
"datasource": {
"type": "prometheus",
"uid": "${DS_VICTORIAMETRICS}"
},
"editorMode": "code",
"expr": "sum(rate(vm_tenant_select_requests_total{job=~\"$job\", instance=~\"$instance.*\",accountID=~\"$account\", projectID=~\"$project\"}[$__rate_interval])) by (accountID,projectID) ",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "tenant: {{accountID}}{{projectID}}",
"legendFormat": "query rate tenant: {{accountID}}:{{projectID}}",
"range": true,
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Query rate ($instance)",
"tooltip": {
"shared": true,
@ -192,33 +236,26 @@
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
"align": false
}
},
{
@ -226,11 +263,13 @@
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$ds",
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"description": "Shows the number of active time series with new data points inserted during the last hour. High value may result in ingestion slowdown. \n\nSee following link for details:",
"fieldConfig": {
"defaults": {
"custom": {},
"links": []
},
"overrides": []
@ -246,6 +285,7 @@
"hiddenSeries": false,
"id": 6,
"legend": {
"alignAsTable": true,
"avg": false,
"current": false,
"max": false,
@ -268,7 +308,7 @@
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.3.4",
"pluginVersion": "8.5.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
@ -278,7 +318,11 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(vm_tenant_active_timeseries{job=~\"$job\", instance=~\"$instance.*\",accountID=~\"$accountID\",projectID=~\"$projectID\"}) by(accountID,projectID)",
"datasource": {
"type": "prometheus",
"uid": "${DS_VICTORIAMETRICS}"
},
"expr": "sum(vm_tenant_active_timeseries{job=~\"$job\", instance=~\"$instance.*\",accountID=~\"$account\",projectID=~\"$project\"}) by(accountID,projectID)",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@ -287,9 +331,7 @@
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Active time series ($instance)",
"tooltip": {
"shared": true,
@ -298,33 +340,26 @@
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
"align": false
}
},
{
@ -332,11 +367,13 @@
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$ds",
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"description": "Shows how many of new time-series are created every second. High churn rate tightly connected with database performance and may result in unexpected OOM's or slow queries. It is recommended to always keep an eye on this metric to avoid unexpected cardinality \"explosions\".\n\nGood references to read:\n* https://www.robustperception.io/cardinality-is-key\n* https://www.robustperception.io/using-tsdb-analyze-to-investigate-churn-and-cardinality",
"fieldConfig": {
"defaults": {
"custom": {},
"links": []
},
"overrides": []
@ -352,10 +389,12 @@
"hiddenSeries": false,
"id": 8,
"legend": {
"alignAsTable": true,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": false,
"show": true,
"total": false,
"values": false
@ -367,7 +406,7 @@
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.3.4",
"pluginVersion": "8.5.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
@ -377,16 +416,18 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(increase(vm_tenant_timeseries_created_total{job=~\"$job\", instance=~\"$instance\",accountID=~\"$accountID\", projectID=~\"$projectID\"}[1m])/60) by(accountID,projectID)",
"datasource": {
"type": "prometheus",
"uid": "${DS_VICTORIAMETRICS}"
},
"expr": "sum(increase(vm_tenant_timeseries_created_total{job=~\"$job\", instance=~\"$instance\",accountID=~\"$account\", projectID=~\"$project\"}[1m])/60) by(accountID,projectID)",
"interval": "",
"legendFormat": "churn rate tenant: {{accountID}}:{{projectID}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Churn rate ($instance)",
"tooltip": {
"shared": true,
@ -395,33 +436,25 @@
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
"align": false
}
},
{
@ -429,11 +462,13 @@
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$ds",
"datasource": {
"type": "prometheus",
"uid": "$ds"
},
"description": "Shows amount of on-disk space occupied by data points.",
"fieldConfig": {
"defaults": {
"custom": {},
"links": []
},
"overrides": []
@ -449,6 +484,7 @@
"hiddenSeries": false,
"id": 10,
"legend": {
"alignAsTable": true,
"avg": false,
"current": false,
"max": false,
@ -465,7 +501,7 @@
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.3.4",
"pluginVersion": "8.5.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
@ -475,18 +511,22 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(vm_tenant_used_tenant_bytes{job=~\"$job_storage\", instance=~\"$instance\",accountID=~\"$accountID\",projectID=~\"$projectID\"}) by(accountID,projectID)",
"datasource": {
"type": "prometheus",
"uid": "${DS_VICTORIAMETRICS}"
},
"editorMode": "code",
"expr": "sum(vm_tenant_used_tenant_bytes{job=~\"$job\", instance=~\"$instance\",accountID=~\"$account\",projectID=~\"$project\"}) by(accountID,projectID)",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{accountID}}:{{projectID}}",
"legendFormat": "disk usage tenant {{accountID}}:{{projectID}}",
"range": true,
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Disk space usage (datapoints) ($instance)",
"tooltip": {
"shared": true,
@ -495,37 +535,30 @@
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
"align": false
}
}
],
"schemaVersion": 26,
"schemaVersion": 36,
"style": "dark",
"tags": [
"VictoriaMetrics",
@ -536,13 +569,11 @@
{
"current": {
"selected": false,
"text": "gw",
"value": "gw"
"text": "VictoriaMetrics",
"value": "VictoriaMetrics"
},
"error": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "ds",
"options": [],
@ -555,108 +586,100 @@
},
{
"allValue": ".*",
"current": {
"selected": false,
"text": "All",
"value": "$__all"
"current": {},
"datasource": {
"uid": "$ds"
},
"datasource": "$ds",
"definition": "label_values(vm_app_version{version=~\"^vm(insert|select|storage).*\"}, job)",
"error": null,
"hide": 0,
"includeAll": true,
"label": null,
"multi": true,
"name": "job",
"options": [],
"query": "label_values(vm_app_version{version=~\"^vm(insert|select|storage).*\"}, job)",
"query": {
"query": "label_values(vm_app_version{version=~\"^vm(insert|select|storage).*\"}, job)",
"refId": "VictoriaMetrics-job-Variable-Query"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": ".*",
"current": {
"selected": false,
"text": "All",
"value": "$__all"
"current": {},
"datasource": {
"uid": "$ds"
},
"datasource": "$ds",
"definition": "label_values(vm_app_version{job=~\"$job\"}, instance)",
"error": null,
"hide": 0,
"includeAll": true,
"label": null,
"multi": false,
"name": "instance",
"options": [],
"query": "label_values(vm_app_version{job=~\"$job\"}, instance)",
"query": {
"query": "label_values(vm_app_version{job=~\"$job\"}, instance)",
"refId": "VictoriaMetrics-instance-Variable-Query"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": ".*",
"current": {
"selected": false,
"text": "All",
"value": "$__all"
"current": {},
"datasource": {
"uid": "$ds"
},
"datasource": "$ds",
"definition": "label_values(vm_tenant_active_timeseries{job=~\"$job\"},accountID)",
"error": null,
"hide": 0,
"includeAll": true,
"label": null,
"multi": false,
"name": "accountID",
"name": "account",
"options": [],
"query": "label_values(vm_tenant_active_timeseries{job=~\"$job\"},accountID)",
"query": {
"query": "label_values(vm_tenant_active_timeseries{job=~\"$job\"},accountID)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": ".*",
"current": {
"selected": false,
"text": "All",
"value": "$__all"
"current": {},
"datasource": {
"uid": "$ds"
},
"datasource": "$ds",
"definition": "label_values(vm_tenant_active_timeseries{accountID=~\"$accountID\"},projectID)",
"error": null,
"hide": 0,
"includeAll": true,
"label": null,
"multi": false,
"name": "projectID",
"name": "project",
"options": [],
"query": "label_values(vm_tenant_active_timeseries{accountID=~\"$accountID\"},projectID)",
"query": {
"query": "label_values(vm_tenant_active_timeseries{accountID=~\"$accountID\"},projectID)",
"refId": "VictoriaMetrics-projectID-Variable-Query"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
@ -669,7 +692,8 @@
},
"timepicker": {},
"timezone": "",
"title": "VictoriaMetrics cluster per tenant Copy",
"title": "VictoriaMetrics Cluster Per Tenant Statistic",
"uid": "IZFqd3lMz",
"version": 1
}
"version": 7,
"weekStart": ""
}

View file

@ -8,6 +8,7 @@ See also [case studies](https://docs.victoriametrics.com/CaseStudies.html).
## Third-party articles and slides about VictoriaMetrics
* [Optimizing the Storage of Large Volumes of Metrics for a Long Time in VictoriaMetrics](https://percona.community/blog/2022/06/02/long-time-keeping-metrics-victoriametrics/)
* [Announcing Asserts](https://www.asserts.ai/blog/announcing-asserts/)
* [Choosing a Time Series Database for High Cardinality Aggregations](https://abiosgaming.com/press/high-cardinality-aggregations/)
* [Scaling to trillions of metric data points](https://engineering.razorpay.com/scaling-to-trillions-of-metric-data-points-f569a5b654f2)

View file

@ -15,19 +15,27 @@ The following tip changes can be tested by building VictoriaMetrics components f
## tip
* FEATURE: adds service discovery visualisation tab for `/targets` page. It simplifies service discovery debugging. See [this PR](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2675).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): Allows using kubeconfig file within `kubernetes_sd_configs`. It may be useful for kubernetes cluster monitoring by `vmagent` outside kubernetes cluster. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1464).
**Update notes:** this release introduces backwards-incompatible changes to the communication protocol between `vmselect` and `vmstorage` nodes in the cluster version of VictoriaMetrics because of the added [query tracing](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#query-tracing), so `vmselect` and `vmstorage` nodes may log communication errors during the upgrade. These errors should stop after all the `vmselect` and `vmstorage` nodes are updated to the new release. It is safe to downgrade to previous releases.
* FEATURE: support query tracing, which allows determining bottlenecks during query processing. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#query-tracing) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1403).
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): add `cardinality` tab, which can help identify the source of [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) and [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) issues. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2233) and [these docs](https://docs.victoriametrics.com/#cardinality-explorer).
* FEATURE: allow overriding default limits for in-memory cache `indexdb/tagFilters` via flag `-storage.cacheSizeIndexDBTagFilters`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2663).
* FEATURE: add support of `lowercase` and `uppercase` relabeling actions in the same way as [Prometheus 2.36.0 does](https://github.com/prometheus/prometheus/releases/tag/v2.36.0). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2664).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): remove dependency on Internet access in `http://vmagent:8429/targets` page. Previously the page layout was broken without Internet access. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2594).
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): remove dependency on Internet access in [web API pages](https://docs.victoriametrics.com/vmalert.html#web). Previously the functionality and the layout of these pages were broken without Internet access. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2594).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): expose `/api/v1/status/config` endpoint in the same way as Prometheus does. See [these docs](https://prometheus.io/docs/prometheus/latest/querying/api/#config).
* FEATURE: add ability to change the `indexdb` rotation timezone offset via `-retentionTimezoneOffset` command-line flag. Previously it was performed at 4am UTC time. This could lead to performance degradation in the middle of the day when VictoriaMetrics runs in time zones located too far from UTC. Thanks to @cnych for [the pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2574).
* FEATURE: limit the number of background merge threads on systems with a big number of CPU cores by default. This increases the max size of parts, which can be created during background merge when the `-storageDataPath` directory has limited free disk space. This may improve on-disk data compression efficiency and query performance. The limits can be tuned if needed with `-smallMergeConcurrency` and `-bigMergeConcurrency` command-line flags. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2673).
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): support `limit` param per-group for limiting the number of produced samples per rule. Thanks to @Howie59 for [the implementation](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2676).
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): remove dependency on Internet access at [web API pages](https://docs.victoriametrics.com/vmalert.html#web). Previously the functionality and the layout of these pages were broken without Internet access. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2594).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): implement the `http://vmagent:8429/service-discovery` page in the same way as Prometheus does. This page shows the original labels for all the discovered targets alongside the resulting labels after the relabeling. This simplifies service discovery debugging.
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): remove dependency on Internet access at `http://vmagent:8429/targets` page. Previously the page layout was broken without Internet access. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2594).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add support for `kubeconfig_file` option at [kubernetes_sd_configs](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config). It may be useful for Kubernetes monitoring by `vmagent` outside Kubernetes cluster. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1464).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): expose `/api/v1/status/config` endpoint in the same way as Prometheus does. See [these docs](https://prometheus.io/docs/prometheus/latest/querying/api/#config).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add `-promscrape.suppressScrapeErrorsDelay` command-line flag, which can be used for delaying and aggregating the logging of per-target scrape errors. This may reduce the amounts of logs when `vmagent` scrapes many unreliable targets. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2575). Thanks to @jelmd for [the initial implementation](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2576).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add `-promscrape.cluster.name` command-line flag, which allows proper data de-duplication when the same target is scraped from multiple [vmagent clusters](https://docs.victoriametrics.com/vmagent.html#scraping-big-number-of-targets). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2679).
* BUGFIX: support for data ingestion in [DataDog format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-datadog-agent) from legacy clients / agents. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2670). Thanks to @elProxy for the fix.
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent.html): do not expose `vm_promscrape_service_discovery_duration_seconds_bucket` metric for unused service discovery types. This reduces the number of metrics exported at `http://vmagent:8429/metrics`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2671).
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly apply `alert_relabel_configs` relabeling rules to `-notifier.config` according to [these docs](https://docs.victoriametrics.com/vmalert.html#notifier-configuration-file). Thanks to @spectvtor for [the bugfix](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2633).
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly add `Content-Encoding: snappy` request header when `vmalert` sends [evaluated recording rules' data](https://docs.victoriametrics.com/vmalert.html#recording-rules) to `-remoteWrite.url`. This header is needed by some remote storage systems in order to properly decode the snappy-encoded request body. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2685). Thanks to @manji-0 for the fix.
* BUGFIX: deny [background merge](https://valyala.medium.com/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282) when the storage enters read-only mode, e.g. when free disk space becomes lower than `-storage.minFreeDiskSpaceBytes`. Background merge needs additional disk space, so it could result in `no space left on device` errors. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2603).
## [v1.77.2](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.77.2)

View file

@ -8,18 +8,18 @@ sort: 2
VictoriaMetrics is a fast, cost-effective and scalable time series database. It can be used as a long-term remote storage for Prometheus.
It is recommended using [single-node version](https://github.com/VictoriaMetrics/VictoriaMetrics) instead of cluster version
for ingestion rates lower than a million of data points per second.
Single-node version [scales perfectly](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae)
It is recommended to use the [single-node version](https://github.com/VictoriaMetrics/VictoriaMetrics) instead of the cluster version
for ingestion rates lower than a million data points per second.
The single-node version [scales perfectly](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae)
with the number of CPU cores, RAM and available storage space.
Single-node version is easier to configure and operate comparing to cluster version, so think twice before sticking to cluster version.
The single-node version is easier to configure and operate compared to the cluster version, so think twice before choosing the cluster version.
Join [our Slack](https://slack.victoriametrics.com/) or [contact us](mailto:info@victoriametrics.com) with consulting and support questions.
## Prominent features
- Supports all the features of [single-node version](https://github.com/VictoriaMetrics/VictoriaMetrics).
- Performance and capacity scales horizontally. See [these docs for details](#cluster-resizing-and-scalability).
- Supports all the features of the [single-node version](https://github.com/VictoriaMetrics/VictoriaMetrics).
- Performance and capacity scale horizontally. See [these docs for details](#cluster-resizing-and-scalability).
- Supports multiple independent namespaces for time series data (aka multi-tenancy). See [these docs for details](#multitenancy).
- Supports replication. See [these docs for details](#replication-and-data-safety).
@ -33,8 +33,8 @@ VictoriaMetrics cluster consists of the following services:
Each service may scale independently and may run on the most suitable hardware.
`vmstorage` nodes don't know about each other, don't communicate with each other and don't share any data.
This is [shared nothing architecture](https://en.wikipedia.org/wiki/Shared-nothing_architecture).
It increases cluster availability, simplifies cluster maintenance and cluster scaling.
This is a [shared nothing architecture](https://en.wikipedia.org/wiki/Shared-nothing_architecture).
It increases cluster availability and simplifies cluster maintenance as well as cluster scaling.
![Naive cluster scheme](assets/images/Naive_cluster_scheme.png)
@ -60,10 +60,10 @@ when different tenants have different amounts of data and different query load.
## Binaries
Compiled binaries for cluster version are available in the `assets` section of [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
See archives containing `cluster` word.
Compiled binaries for the cluster version are available in the `assets` section of the [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
Also see archives containing the word `cluster`.
Docker images for cluster version are available here:
Docker images for the cluster version are available here:
- `vminsert` - <https://hub.docker.com/r/victoriametrics/vminsert/tags>
- `vmselect` - <https://hub.docker.com/r/victoriametrics/vmselect/tags>
@ -71,20 +71,20 @@ Docker images for cluster version are available here:
## Building from sources
Source code for cluster version is available at [cluster branch](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
The source code for the cluster version is available in the [cluster branch](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
### Production builds
There is no need in installing Go on a host system since binaries are built
There is no need to install Go on a host system since binaries are built
inside [the official docker container for Go](https://hub.docker.com/_/golang).
This makes reproducible builds.
This allows reproducible builds.
So [install docker](https://docs.docker.com/install/) and run the following command:
```
make vminsert-prod vmselect-prod vmstorage-prod
```
Production binaries are built into statically linked binaries. They are put into `bin` folder with `-prod` suffixes:
Production binaries are built into statically linked binaries. They are put into the `bin` folder with `-prod` suffixes:
```
$ make vminsert-prod vmselect-prod vmstorage-prod
@ -154,7 +154,7 @@ It is possible manually setting up a toy cluster on a single host. In this case e
### Environment variables
Each flag values can be set thru environment variables by following these rules:
Each flag value can be set through environment variables by following these rules:
- The `-envflag.enable` flag must be set
- Each `.` in flag names must be substituted by `_` (for example `-insert.maxQueueDuration <duration>` will translate to `insert_maxQueueDuration=<duration>`)
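For example, a minimal sketch following these rules (paths and values are illustrative):

```bash
# Equivalent to passing -insert.maxQueueDuration=1m on the command line.
export insert_maxQueueDuration=1m
/path/to/vminsert -envflag.enable -storageNode=<vmstorage-addr>
```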

View file

@ -282,15 +282,15 @@ The main reason for high churn rate is a metric label with frequently changed va
* A label derived from the current time such as `timestamp`, `minute` or `hour`.
* A `hash` or `uuid` label, which changes frequently.
The solution against high churn rate is to identify and eliminate labels with frequently changed values. The [/api/v1/status/tsdb](https://docs.victoriametrics.com/#tsdb-stats) page can help determining these labels.
The solution against high churn rate is to identify and eliminate labels with frequently changed values. [Cardinality explorer](https://docs.victoriametrics.com/#cardinality-explorer) can help in identifying these labels.
## What is high cardinality?
High cardinality usually means a high number of [active time series](#what-is-an-active-time-series). High cardinality may lead to high memory usage and/or to a high percentage of [slow inserts](#what-is-a-slow-insert). The source of high cardinality is usually a label with a large number of unique values, which presents a big share of the ingested time series. The solution is to identify and remove the source of high cardinality with the help of [/api/v1/status/tsdb](https://docs.victoriametrics.com/#tsdb-stats).
High cardinality usually means a high number of [active time series](#what-is-an-active-time-series). High cardinality may lead to high memory usage and/or to a high percentage of [slow inserts](#what-is-a-slow-insert). The source of high cardinality is usually a label with a large number of unique values, which accounts for a big share of the ingested time series. The solution is to identify and remove the source of high cardinality with the help of [cardinality explorer](https://docs.victoriametrics.com/#cardinality-explorer).
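For example, a quick sketch for spotting the biggest contributors via the underlying `/api/v1/status/tsdb` endpoint (the `jq` path assumes the Prometheus-compatible response layout):

```bash
curl -s http://localhost:8428/api/v1/status/tsdb | jq '.data.seriesCountByLabelValuePair[:3]'
```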
## What is a slow insert?
VictoriaMetrics maintains in-memory cache for mapping of [active time series](#what-is-an-active-time-series) into internal series ids. The cache size depends on the available memory for VictoriaMetrics in the host system. If the information about all the active time series doesn't fit the cache, then VictoriaMetrics needs to read and unpack the information from disk on every incoming sample for time series missing in the cache. This operation is much slower than the cache lookup, so such an insert is named a `slow insert`. A high percentage of slow inserts on the [official dashboard for VictoriaMetrics](https://docs.victoriametrics.com/#monitoring) indicates a memory shortage for the current number of [active time series](#what-is-an-active-time-series). Such a condition usually leads to a significant slowdown for data ingestion and to significantly increased disk IO and CPU usage. The solution is to add more memory or to reduce the number of [active time series](#what-is-an-active-time-series). The `/api/v1/status/tsdb` page can be helpful for locating the source of high number of active time seriess see [these docs](https://docs.victoriametrics.com/#tsdb-stats).
VictoriaMetrics maintains an in-memory cache for mapping [active time series](#what-is-an-active-time-series) into internal series ids. The cache size depends on the available memory for VictoriaMetrics in the host system. If the information about all the active time series doesn't fit the cache, then VictoriaMetrics needs to read and unpack the information from disk on every incoming sample for time series missing in the cache. This operation is much slower than the cache lookup, so such an insert is named a `slow insert`. A high percentage of slow inserts on the [official dashboard for VictoriaMetrics](https://docs.victoriametrics.com/#monitoring) indicates a memory shortage for the current number of [active time series](#what-is-an-active-time-series). Such a condition usually leads to a significant slowdown for data ingestion and to significantly increased disk IO and CPU usage. The solution is to add more memory or to reduce the number of [active time series](#what-is-an-active-time-series). [Cardinality explorer](https://docs.victoriametrics.com/#cardinality-explorer) can be helpful for locating the source of a high number of active time series.
## How to optimize MetricsQL query?

View file

@ -17,10 +17,10 @@ VictoriaMetrics is available in [binary releases](https://github.com/VictoriaMet
and [source code](https://github.com/VictoriaMetrics/VictoriaMetrics).
Just download VictoriaMetrics and follow [these instructions](https://docs.victoriametrics.com/Quick-Start.html).
Cluster version of VictoriaMetrics is available [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html).
The cluster version of VictoriaMetrics is available [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html).
Learn more about [key concepts](https://docs.victoriametrics.com/keyConcepts.html) of VictoriaMetrics and follow
[QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for better experience.
Learn more about [key concepts](https://docs.victoriametrics.com/keyConcepts.html) of VictoriaMetrics and follow the
[QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for a better experience.
[Contact us](mailto:info@victoriametrics.com) if you need enterprise support for VictoriaMetrics.
See [features available in enterprise package](https://victoriametrics.com/products/enterprise/).
@ -32,8 +32,8 @@ from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/rele
VictoriaMetrics has the following prominent features:
* It can be used as long-term storage for Prometheus. See [these docs](#prometheus-setup) for details.
* It can be used as drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
* It can be used as a drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as a drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
* It features easy setup and operation:
* VictoriaMetrics consists of a single [small executable](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d) without external dependencies.
* All the configuration is done via explicit command-line flags with reasonable defaults.
@ -243,7 +243,9 @@ Prometheus doesn't drop data during VictoriaMetrics restart. See [this article](
## vmui
VictoriaMetrics provides UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`.
The UI allows exploring query results via graphs and tables. Graphs support scrolling and zooming:
The UI allows exploring query results via graphs and tables. It also provides support for [cardinality explorer](#cardinality-explorer).
Graphs in vmui support scrolling and zooming:
* Drag the graph to the left / right in order to move the displayed time range into the past / future.
* Hold `Ctrl` (or `Cmd` on MacOS) and scroll up / down in order to zoom in / out the graph.
@ -261,6 +263,23 @@ VMUI allows investigating correlations between two queries on the same graph. Ju
See the [example VMUI at VictoriaMetrics playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/?g0.expr=100%20*%20sum(rate(process_cpu_seconds_total))%20by%20(job)&g0.range_input=1d).
## Cardinality explorer
VictoriaMetrics provides the ability to explore time series cardinality on the `cardinality` tab in [vmui](#vmui) in the following ways:
- To identify metric names with the highest number of series.
- To identify label=value pairs with the highest number of series.
- To identify labels with the highest number of unique values.
By default, cardinality explorer analyzes time series for the current date. A different day can be selected in the top right corner.
By default, all the time series for the selected date are analyzed. It is possible to narrow down the analysis to series
matching the specified [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors).
Cardinality explorer is built on top of [/api/v1/status/tsdb](#tsdb-stats).
See [cardinality explorer playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/cardinality).
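The same data can also be fetched directly from the underlying endpoint; a sketch (assuming the `topN` and `date` query args supported by the TSDB stats endpoint):

```bash
# Top 3 metric names by series count for the given date.
curl http://localhost:8428/api/v1/status/tsdb -d 'topN=3' -d 'date=2022-06-01'
```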
## How to apply new config to VictoriaMetrics
VictoriaMetrics is configured via command-line flags, so it must be restarted for new command-line flags to take effect:
@ -824,6 +843,11 @@ Each JSON line contains samples for a single time series. An example output:
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```bash
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543586'
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
Optional `max_rows_per_line` arg may be added to the request for limiting the maximum number of rows exported per each JSON line.
Optional `reduce_mem_usage=1` arg may be added to the request for reducing memory usage when exporting big number of time series.
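For example, a sketch combining both args:

```bash
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'max_rows_per_line=1000' -d 'reduce_mem_usage=1'
```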
@ -863,6 +887,11 @@ for metrics to export.
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```bash
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543586'
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
The exported CSV data can be imported to VictoriaMetrics via [/api/v1/import/csv](#how-to-import-csv-data).
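For example, a sketch exporting metric name, value and a unix-seconds timestamp per row (assuming the `__name__`, `__value__` and `__timestamp__:unix_s` column specifiers from the CSV export format):

```bash
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=__name__,__value__,__timestamp__:unix_s' -d 'match[]=<timeseries_selector_for_export>'
```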
@ -885,6 +914,11 @@ wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```bash
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543586'
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
The exported data can be imported to VictoriaMetrics via [/api/v1/import/native](#how-to-import-data-in-native-format).
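A minimal round-trip sketch (addresses are illustrative):

```bash
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' > exported_data.bin
curl -X POST http://<another-victoriametrics-addr>:8428/api/v1/import/native -T exported_data.bin
```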
The native export format may change in incompatible way between VictoriaMetrics releases, so the data exported from the release X
@ -1079,8 +1113,13 @@ VictoriaMetrics exports [Prometheus-compatible federation data](https://promethe
at `http://<victoriametrics-addr>:8428/federate?match[]=<timeseries_selector_for_federation>`.
Optional `start` and `end` args may be added to the request in order to scrape the last point for each selected time series on the `[start ... end]` interval.
`start` and `end` may contain either unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values. By default, the last point
on the interval `[now - max_lookback ... now]` is scraped for each time series. The default value for `max_lookback` is `5m` (5 minutes), but it can be overridden.
`start` and `end` may contain either unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```bash
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543586'
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
By default, the last point on the interval `[now - max_lookback ... now]` is scraped for each time series. The default value for `max_lookback` is `5m` (5 minutes), but it can be overridden with the `max_lookback` query arg.
For instance, `/federate?match[]=up&max_lookback=1h` would return last points on the `[now - 1h ... now]` interval. This may be useful for time series federation
with scrape intervals exceeding `5m`.
@ -1187,9 +1226,18 @@ values and timestamps. These are sorted and compressed raw time series values. A
index files for searching for specific series in the values and timestamps files.
`Parts` are periodically merged into the bigger parts. The resulting `part` is constructed
under `<-storageDataPath>/data/{small,big}/YYYY_MM/tmp` subdirectory. When the resulting `part` is complete, it is atomically moved from the `tmp`
under `<-storageDataPath>/data/{small,big}/YYYY_MM/tmp` subdirectory.
When the resulting `part` is complete, it is atomically moved from the `tmp`
to its own subdirectory, while the source parts are atomically removed. The end result is that the source
parts are substituted by a single resulting bigger `part` in the `<-storageDataPath>/data/{small,big}/YYYY_MM/` directory.
VictoriaMetrics doesn't merge parts if their summary size exceeds free disk space.
This prevents potential out-of-disk-space errors during merge.
The number of parts may significantly increase over time under free disk space shortage.
This increases overhead during data querying, since VictoriaMetrics needs to read data from
a bigger number of parts per request. That's why it is recommended to have at least 20%
of free disk space under the directory pointed to by the `-storageDataPath` command-line flag.
Information about the merging process is available in the [single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176) Grafana dashboards.
See more details in [monitoring docs](#monitoring).
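As a rough illustration of the 20% guideline, a hypothetical MetricsQL alerting expression (the metrics combination and the 0.2 threshold are assumptions, not an official recommendation):

```
vm_free_disk_space_bytes / (vm_free_disk_space_bytes + scalar(sum(vm_data_size_bytes))) < 0.2
```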
@ -1259,7 +1307,7 @@ The downsampling can be evaluated for free by downloading and using enterprise b
## Multi-tenancy
Single-node VictoriaMetrics doesn't support multi-tenancy. Use [cluster version](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#multitenancy) instead.
Single-node VictoriaMetrics doesn't support multi-tenancy. Use the [cluster version](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#multitenancy) instead.
## Scalability and cluster version
@ -1267,7 +1315,7 @@ Though single-node VictoriaMetrics cannot scale to multiple nodes, it is optimiz
This means that a single-node VictoriaMetrics may scale vertically and substitute a moderately sized cluster built with competing solutions
such as Thanos, Uber M3, InfluxDB or TimescaleDB. See [vertical scalability benchmarks](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae).
So try single-node VictoriaMetrics at first and then [switch to cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster) if you still need
So try single-node VictoriaMetrics at first and then [switch to the cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster) if you still need
horizontally scalable long-term remote storage for really large Prometheus deployments.
[Contact us](mailto:info@victoriametrics.com) for enterprise support.
@ -1342,7 +1390,7 @@ The most interesting metrics are:
aka [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).
* `increase(vm_new_timeseries_created_total[1h])` - time series [churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) during the previous hour.
* `sum(vm_rows{type=~"storage/.*"})` - total number of `(timestamp, value)` data points in the database.
* `sum(rate(vm_rows_inserted_total[5m]))` - ingestion rate, i.e. how many samples are inserted int the database per second.
* `sum(rate(vm_rows_inserted_total[5m]))` - ingestion rate, i.e. how many samples are inserted into the database per second.
* `vm_free_disk_space_bytes` - free space left at `-storageDataPath`.
* `sum(vm_data_size_bytes)` - the total size of data on disk.
* `increase(vm_slow_row_inserts_total[5m])` - the number of slow inserts during the last 5 minutes.
@ -1365,6 +1413,8 @@ VictoriaMetrics returns TSDB stats at `/api/v1/status/tsdb` page in the way simi
* `match[]=SELECTOR` where `SELECTOR` is an arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) for series to take into account during stats calculation. By default all the series are taken into account.
* `extra_label=LABEL=VALUE`. See [these docs](#prometheus-querying-api-enhancements) for more details.
VictoriaMetrics provides a UI on top of `/api/v1/status/tsdb` - see [cardinality explorer docs](#cardinality-explorer).
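For example, a sketch limiting the stats to a series selector (the selector and label values are illustrative):

```bash
curl http://localhost:8428/api/v1/status/tsdb -d 'match[]={job="node_exporter"}' -d 'extra_label=env=prod'
```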
## Query tracing
VictoriaMetrics supports query tracing, which can be used for determining bottlenecks during query processing.
@ -1375,7 +1425,7 @@ In this case VictoriaMetrics puts query trace into `trace` field in the output J
For example, the following command:
```bash
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq -r '.trace'
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq '.trace'
```
would return the following trace:
@ -1502,7 +1552,7 @@ See also more advanced [cardinality limiter in vmagent](https://docs.victoriamet
It may be needed in order to suppress the default gap filling algorithm used by VictoriaMetrics - by default it assumes
each time series is continuous instead of discrete, so it fills gaps between real samples at regular intervals.
* Metrics and labels leading to [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) or [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) can be determined at `/api/v1/status/tsdb` page. See [these docs](#tsdb-stats) for details.
* Metrics and labels leading to [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) or [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) can be determined via [cardinality explorer](#cardinality-explorer) and via [/api/v1/status/tsdb](#tsdb-stats) endpoint.
* New time series can be logged if `-logNewSeries` command-line flag is passed to VictoriaMetrics.

View file

@ -21,10 +21,10 @@ VictoriaMetrics is available in [binary releases](https://github.com/VictoriaMet
and [source code](https://github.com/VictoriaMetrics/VictoriaMetrics).
Just download VictoriaMetrics and follow [these instructions](https://docs.victoriametrics.com/Quick-Start.html).
Cluster version of VictoriaMetrics is available [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html).
The cluster version of VictoriaMetrics is available [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html).
Learn more about [key concepts](https://docs.victoriametrics.com/keyConcepts.html) of VictoriaMetrics and follow
[QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for better experience.
Learn more about [key concepts](https://docs.victoriametrics.com/keyConcepts.html) of VictoriaMetrics and follow the
[QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for a better experience.
[Contact us](mailto:info@victoriametrics.com) if you need enterprise support for VictoriaMetrics.
See [features available in enterprise package](https://victoriametrics.com/products/enterprise/).
@ -36,8 +36,8 @@ from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/rele
VictoriaMetrics has the following prominent features:
* It can be used as long-term storage for Prometheus. See [these docs](#prometheus-setup) for details.
* It can be used as drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
* It can be used as a drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as a drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
* It features easy setup and operation:
* VictoriaMetrics consists of a single [small executable](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d) without external dependencies.
* All the configuration is done via explicit command-line flags with reasonable defaults.
@ -247,7 +247,9 @@ Prometheus doesn't drop data during VictoriaMetrics restart. See [this article](
## vmui
VictoriaMetrics provides UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`.
The UI allows exploring query results via graphs and tables. Graphs support scrolling and zooming:
The UI allows exploring query results via graphs and tables. It also provides support for [cardinality explorer](#cardinality-explorer).
Graphs in vmui support scrolling and zooming:
* Drag the graph to the left / right in order to move the displayed time range into the past / future.
* Hold `Ctrl` (or `Cmd` on MacOS) and scroll up / down in order to zoom in / out the graph.
@ -265,6 +267,23 @@ VMUI allows investigating correlations between two queries on the same graph. Ju
See the [example VMUI at VictoriaMetrics playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/?g0.expr=100%20*%20sum(rate(process_cpu_seconds_total))%20by%20(job)&g0.range_input=1d).
## Cardinality explorer
VictoriaMetrics provides the ability to explore time series cardinality on the `cardinality` tab in [vmui](#vmui) in the following ways:
- To identify metric names with the highest number of series.
- To identify label=value pairs with the highest number of series.
- To identify labels with the highest number of unique values.
By default, cardinality explorer analyzes time series for the current date. A different day can be selected in the top right corner.
By default, all the time series for the selected date are analyzed. It is possible to narrow down the analysis to series
matching the specified [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors).
Cardinality explorer is built on top of [/api/v1/status/tsdb](#tsdb-stats).
See [cardinality explorer playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/cardinality).
## How to apply new config to VictoriaMetrics
VictoriaMetrics is configured via command-line flags, so it must be restarted for new command-line flags to take effect:
@ -828,6 +847,11 @@ Each JSON line contains samples for a single time series. An example output:
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```bash
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543586'
curl http://<victoriametrics-addr>:8428/api/v1/export -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
Optional `max_rows_per_line` arg may be added to the request for limiting the maximum number of rows exported per each JSON line.
Optional `reduce_mem_usage=1` arg may be added to the request for reducing memory usage when exporting big number of time series.
@ -867,6 +891,11 @@ for metrics to export.
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```bash
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543586'
curl http://<victoriametrics-addr>:8428/api/v1/export/csv -d 'format=<format>' -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
The exported CSV data can be imported to VictoriaMetrics via [/api/v1/import/csv](#how-to-import-csv-data).
@ -889,6 +918,11 @@ wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```bash
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543586'
curl http://<victoriametrics-addr>:8428/api/v1/export/native -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
The exported data can be imported to VictoriaMetrics via [/api/v1/import/native](#how-to-import-data-in-native-format).
The native export format may change in incompatible way between VictoriaMetrics releases, so the data exported from the release X
@ -1083,8 +1117,13 @@ VictoriaMetrics exports [Prometheus-compatible federation data](https://promethe
at `http://<victoriametrics-addr>:8428/federate?match[]=<timeseries_selector_for_federation>`.
Optional `start` and `end` args may be added to the request in order to scrape the last point for each selected time series on the `[start ... end]` interval.
`start` and `end` may contain either unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values. By default, the last point
on the interval `[now - max_lookback ... now]` is scraped for each time series. The default value for `max_lookback` is `5m` (5 minutes), but it can be overridden.
`start` and `end` may contain either unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
For example:
```bash
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=1654543486' -d 'end=1654543586'
curl http://<victoriametrics-addr>:8428/federate -d 'match[]=<timeseries_selector_for_export>' -d 'start=2022-06-06T19:25:48+00:00' -d 'end=2022-06-06T19:29:07+00:00'
```
By default, the last point on the interval `[now - max_lookback ... now]` is scraped for each time series. The default value for `max_lookback` is `5m` (5 minutes), but it can be overridden with the `max_lookback` query arg.
For instance, `/federate?match[]=up&max_lookback=1h` would return last points on the `[now - 1h ... now]` interval. This may be useful for time series federation
with scrape intervals exceeding `5m`.
@ -1191,9 +1230,18 @@ values and timestamps. These are sorted and compressed raw time series values. A
index files for searching for specific series in the values and timestamps files.
`Parts` are periodically merged into the bigger parts. The resulting `part` is constructed
under `<-storageDataPath>/data/{small,big}/YYYY_MM/tmp` subdirectory. When the resulting `part` is complete, it is atomically moved from the `tmp`
under `<-storageDataPath>/data/{small,big}/YYYY_MM/tmp` subdirectory.
When the resulting `part` is complete, it is atomically moved from the `tmp`
to its own subdirectory, while the source parts are atomically removed. The end result is that the source
parts are substituted by a single resulting bigger `part` in the `<-storageDataPath>/data/{small,big}/YYYY_MM/` directory.
VictoriaMetrics doesn't merge parts if their summary size exceeds free disk space.
This prevents potential out-of-disk-space errors during merge.
The number of parts may significantly increase over time under free disk space shortage.
This increases overhead during data querying, since VictoriaMetrics needs to read data from
a bigger number of parts per request. That's why it is recommended to have at least 20%
of free disk space under the directory pointed to by the `-storageDataPath` command-line flag.
Information about the merging process is available in the [single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176) Grafana dashboards.
See more details in [monitoring docs](#monitoring).
@ -1263,7 +1311,7 @@ The downsampling can be evaluated for free by downloading and using enterprise b
## Multi-tenancy
Single-node VictoriaMetrics doesn't support multi-tenancy. Use [cluster version](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#multitenancy) instead.
Single-node VictoriaMetrics doesn't support multi-tenancy. Use the [cluster version](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#multitenancy) instead.
## Scalability and cluster version
@ -1271,7 +1319,7 @@ Though single-node VictoriaMetrics cannot scale to multiple nodes, it is optimiz
This means that a single-node VictoriaMetrics may scale vertically and substitute a moderately sized cluster built with competing solutions
such as Thanos, Uber M3, InfluxDB or TimescaleDB. See [vertical scalability benchmarks](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae).
So try single-node VictoriaMetrics at first and then [switch to cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster) if you still need
So try single-node VictoriaMetrics at first and then [switch to the cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster) if you still need
horizontally scalable long-term remote storage for really large Prometheus deployments.
[Contact us](mailto:info@victoriametrics.com) for enterprise support.
@ -1346,7 +1394,7 @@ The most interesting metrics are:
aka [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).
* `increase(vm_new_timeseries_created_total[1h])` - time series [churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) during the previous hour.
* `sum(vm_rows{type=~"storage/.*"})` - total number of `(timestamp, value)` data points in the database.
* `sum(rate(vm_rows_inserted_total[5m]))` - ingestion rate, i.e. how many samples are inserted int the database per second.
* `sum(rate(vm_rows_inserted_total[5m]))` - ingestion rate, i.e. how many samples are inserted into the database per second.
* `vm_free_disk_space_bytes` - free space left at `-storageDataPath`.
* `sum(vm_data_size_bytes)` - the total size of data on disk.
* `increase(vm_slow_row_inserts_total[5m])` - the number of slow inserts during the last 5 minutes.
@ -1369,6 +1417,8 @@ VictoriaMetrics returns TSDB stats at `/api/v1/status/tsdb` page in the way simi
* `match[]=SELECTOR` where `SELECTOR` is an arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) for series to take into account during stats calculation. By default all the series are taken into account.
* `extra_label=LABEL=VALUE`. See [these docs](#prometheus-querying-api-enhancements) for more details.
VictoriaMetrics provides a UI on top of `/api/v1/status/tsdb` - see [cardinality explorer docs](#cardinality-explorer).
## Query tracing
VictoriaMetrics supports query tracing, which can be used for determining bottlenecks during query processing.
@ -1379,7 +1429,7 @@ In this case VictoriaMetrics puts query trace into `trace` field in the output J
For example, the following command:
```bash
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq -r '.trace'
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq '.trace'
```
would return the following trace:
@ -1506,7 +1556,7 @@ See also more advanced [cardinality limiter in vmagent](https://docs.victoriamet
It may be needed in order to suppress the default gap filling algorithm used by VictoriaMetrics - by default it assumes
each time series is continuous instead of discrete, so it fills gaps between real samples at regular intervals.
* Metrics and labels leading to [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) or [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) can be determined at `/api/v1/status/tsdb` page. See [these docs](#tsdb-stats) for details.
* Metrics and labels leading to [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) or [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) can be determined via [cardinality explorer](#cardinality-explorer) and via [/api/v1/status/tsdb](#tsdb-stats) endpoint.
* New time series can be logged if `-logNewSeries` command-line flag is passed to VictoriaMetrics.

View file

@ -105,6 +105,10 @@ name: <string>
# How often rules in the group are evaluated.
[ interval: <duration> | default = -evaluationInterval flag ]
# Limit the number of alerts an alerting rule can produce
# and the number of series a recording rule can produce. 0 means no limit.
[ limit: <int> | default = 0 ]
# How many rules within a group execute concurrently. Increasing concurrency
# may speed up the group's evaluation round.
[ concurrency: <integer> | default = 1 ]
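For instance, a sketch of a group using both params (names and values are illustrative):

```yaml
groups:
  - name: example-group
    interval: 30s
    limit: 1000      # each rule may produce at most 1000 alerts / series
    concurrency: 2   # evaluate up to 2 rules in the group at once
    rules:
      - record: job:up:avg
        expr: avg(up) by (job)
```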
@ -539,6 +543,7 @@ See full description for these flags in `./vmalert --help`.
* Graphite engine isn't supported yet;
* `query` template function is disabled for performance reasons (might be changed in future);
* the group-level `limit` param has no effect during replay (might be changed in the future);
## Monitoring

14
go.mod
View file

@ -11,7 +11,7 @@ require (
github.com/VictoriaMetrics/fasthttp v1.1.0
github.com/VictoriaMetrics/metrics v1.18.1
github.com/VictoriaMetrics/metricsql v0.43.0
github.com/aws/aws-sdk-go v1.44.24
github.com/aws/aws-sdk-go v1.44.27
github.com/cespare/xxhash/v2 v2.1.2
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
@ -23,7 +23,7 @@ require (
github.com/go-kit/kit v0.12.0
github.com/golang/snappy v0.0.4
github.com/influxdata/influxdb v1.9.7
github.com/klauspost/compress v1.15.5
github.com/klauspost/compress v1.15.6
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/oklog/ulid v1.3.1
@ -35,10 +35,10 @@ require (
github.com/valyala/fasttemplate v1.2.1
github.com/valyala/gozstd v1.17.0
github.com/valyala/quicktemplate v1.7.0
golang.org/x/net v0.0.0-20220526153639-5463443f8c37
golang.org/x/net v0.0.0-20220531201128-c960675eff93
golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a
google.golang.org/api v0.81.0
google.golang.org/api v0.82.0
gopkg.in/yaml.v2 v2.4.0
)
@ -71,12 +71,12 @@ require (
go.opencensus.io v0.23.0 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/goleak v1.1.11-0.20210813005559-691160354723 // indirect
golang.org/x/sync v0.0.0-20220513210516-0976fa681c29 // indirect
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20220527130721-00d5c0f3be58 // indirect
google.golang.org/grpc v1.46.2 // indirect
google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8 // indirect
google.golang.org/grpc v1.47.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
)

26
go.sum
View file

@ -142,8 +142,8 @@ github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQ
github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.44.24 h1:3nOkwJBJLiGBmJKWp3z0utyXuBkxyGkRRwWjrTItJaY=
github.com/aws/aws-sdk-go v1.44.24/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/aws/aws-sdk-go v1.44.27 h1:8CMspeZSrewnbvAwgl8qo5R7orDLwQnTGBf/OKPiHxI=
github.com/aws/aws-sdk-go v1.44.27/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@ -566,8 +566,8 @@ github.com/klauspost/compress v1.4.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0
github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.13.4/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
github.com/klauspost/compress v1.13.5/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.15.5 h1:qyCLMz2JCrKADihKOh9FxnW3houKeNsp2h5OEz0QSEA=
github.com/klauspost/compress v1.15.5/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU=
github.com/klauspost/compress v1.15.6 h1:6D9PcO8QWu0JyaQ2zUMmu16T1T+zjjEpP91guRsvDfY=
github.com/klauspost/compress v1.15.6/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU=
github.com/klauspost/cpuid v0.0.0-20170728055534-ae7887de9fa5/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/crc32 v0.0.0-20161016154125-cb6bfca970f6/go.mod h1:+ZoRqAPRLkC4NPOvfYeR5KNOrY6TD+/sAC3HXPZgDYg=
github.com/klauspost/pgzip v1.0.2-0.20170402124221-0bf5dcad4ada/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
@ -992,9 +992,9 @@ golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su
golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220412020605-290c469a71a5/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220520000938-2e3eb7b945c2/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220526153639-5463443f8c37 h1:lUkvobShwKsOesNfWWlCS5q7fnbG1MEliIzwu886fn8=
golang.org/x/net v0.0.0-20220526153639-5463443f8c37/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220531201128-c960675eff93 h1:MYimHLfoXEpOhqd/zgoA/uoXzHB86AEky4LAx5ij9xA=
golang.org/x/net v0.0.0-20220531201128-c960675eff93/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@ -1028,8 +1028,9 @@ golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220513210516-0976fa681c29 h1:w8s32wxx3sY+OjLlv9qltkLU5yvJzxjjgiHWLjdIcw4=
golang.org/x/sync v0.0.0-20220513210516-0976fa681c29/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f h1:Ax0t5p6N38Ga0dThY21weqDEyz2oklo4IvDkpigvkD8=
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@ -1267,8 +1268,8 @@ google.golang.org/api v0.74.0/go.mod h1:ZpfMZOVRMywNyvJFeqL9HRWBgAuRfSjJFpe9QtRR
google.golang.org/api v0.75.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA=
google.golang.org/api v0.78.0/go.mod h1:1Sg78yoMLOhlQTeF+ARBoytAcH1NNyyl390YMy6rKmw=
google.golang.org/api v0.80.0/go.mod h1:xY3nI94gbvBrE0J6NHXhxOmW97HG7Khjkku6AFB3Hyg=
google.golang.org/api v0.81.0 h1:o8WF5AvfidafWbFjsRyupxyEQJNUWxLZJCK5NXrxZZ8=
google.golang.org/api v0.81.0/go.mod h1:FA6Mb/bZxj706H2j+j2d6mHEEaHBmbbWnkfvmorOCko=
google.golang.org/api v0.82.0 h1:h6EGeZuzhoKSS7BUznzkW+2wHZ+4Ubd6rsVvvh3dRkw=
google.golang.org/api v0.82.0/go.mod h1:Ld58BeTlL9DIYr2M2ajvoSqmGLei0BMn+kVBmkam1os=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@ -1356,10 +1357,10 @@ google.golang.org/genproto v0.0.0-20220421151946-72621c1f0bd3/go.mod h1:8w6bsBMX
google.golang.org/genproto v0.0.0-20220429170224-98d788798c3e/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220505152158-f39f71e6c8f3/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220518221133-4f43b3371335/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220519153652-3a47de7e79bd/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220523171625-347a074981d8/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220527130721-00d5c0f3be58 h1:a221mAAEAzq4Lz6ZWRkcS8ptb2mxoxYSt4N68aRyQHM=
google.golang.org/genproto v0.0.0-20220527130721-00d5c0f3be58/go.mod h1:yKyY4AMRwFiC8yMMNaMi+RkCnjZJt9LoWuvhXjMs+To=
google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8 h1:qRu95HZ148xXw+XeZ3dvqe85PxH4X8+jIo0iRPKcEnM=
google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8/go.mod h1:yKyY4AMRwFiC8yMMNaMi+RkCnjZJt9LoWuvhXjMs+To=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
@ -1394,8 +1395,9 @@ google.golang.org/grpc v1.40.1/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9K
google.golang.org/grpc v1.44.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/grpc v1.45.0/go.mod h1:lN7owxKUQEqMfSyQikvvk5tf/6zMPsrK+ONuO11+0rQ=
google.golang.org/grpc v1.46.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc v1.46.2 h1:u+MLGgVf7vRdjEYZ8wDFhAVNmhkbJ5hmrA1LMWK1CAQ=
google.golang.org/grpc v1.46.2/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc v1.47.0 h1:9n77onPX5F3qfFCqjy9dhn8PbNQsIKeVU04J9G7umt8=
google.golang.org/grpc v1.47.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=

View file

@ -40,7 +40,6 @@ func Init() {
initTimezone()
go logLimiterCleaner()
logAllFlags()
}
func initTimezone() {
@ -79,7 +78,7 @@ func validateLoggerFormat() {
switch *loggerFormat {
case "default", "json":
default:
// We cannot use logger.Pancif here, since the logger isn't initialized yet.
// We cannot use logger.Panicf here, since the logger isn't initialized yet.
panic(fmt.Errorf("FATAL: unsupported `-loggerFormat` value: %q; supported values are: default, json", *loggerFormat))
}
}

View file

@ -22,16 +22,19 @@ func newAPIConfig(sdc *SDConfig, baseDir string, swcFunc ScrapeWorkConstructorFu
}
apiServer := sdc.APIServer
if len(sdc.KubeConfig) > 0 {
fmt.Println("building")
kc, err := buildConfig(sdc)
if err != nil {
return nil, fmt.Errorf("cannot build kube config: %w", err)
if len(sdc.KubeConfigFile) > 0 {
if len(apiServer) > 0 {
return nil, fmt.Errorf("`api_server: %q` and `kubeconfig_file: %q` options cannot be set simultaneously", apiServer, sdc.KubeConfigFile)
}
ac, err = promauth.NewConfig(".", nil, kc.basicAuth, kc.token, kc.tokenFile, nil, kc.tlsConfig)
kc, err := newKubeConfig(sdc.KubeConfigFile)
if err != nil {
return nil, fmt.Errorf("cannot initialize service account auth: %w; probably, `kubernetes_sd_config->api_server` is missing in Prometheus configs?", err)
return nil, fmt.Errorf("cannot build kube config from the specified `kubeconfig_file` config option: %w", err)
}
acNew, err := promauth.NewConfig(".", nil, kc.basicAuth, kc.token, kc.tokenFile, nil, kc.tlsConfig)
if err != nil {
return nil, fmt.Errorf("cannot initialize auth config from `kubeconfig_file: %q`: %w", sdc.KubeConfigFile, err)
}
ac = acNew
apiServer = kc.server
sdc.ProxyURL = kc.proxyURL
}

View file

@ -66,21 +66,20 @@ type AuthInfo struct {
}
func (au *AuthInfo) validate() error {
errContext := "field: %s is not supported currently, open an issue with feature request for it"
if au.Exec != nil {
return fmt.Errorf(errContext, "exec")
return unsupportedFieldError("exec")
}
if len(au.ImpersonateUID) > 0 {
return fmt.Errorf(errContext, "act-as-uid")
return unsupportedFieldError("act-as-uid")
}
if len(au.Impersonate) > 0 {
return fmt.Errorf(errContext, "act-as")
return unsupportedFieldError("act-as")
}
if len(au.ImpersonateGroups) > 0 {
return fmt.Errorf(errContext, "act-as-groups")
return unsupportedFieldError("act-as-groups")
}
if len(au.ImpersonateUserExtra) > 0 {
return fmt.Errorf(errContext, "act-as-user-extra")
return unsupportedFieldError("act-as-user-extra")
}
if len(au.Password) > 0 && len(au.Username) == 0 {
return fmt.Errorf("username cannot be empty, if password defined")
@ -88,6 +87,11 @@ func (au *AuthInfo) validate() error {
return nil
}
func unsupportedFieldError(fieldName string) error {
return fmt.Errorf("field %q is not supported yet; if you feel it is needed please open a feature request "+
"at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/new", fieldName)
}
// ExecConfig contains information about os.command, that returns auth token for kubernetes cluster connection
type ExecConfig struct {
// Command to execute.
@ -150,94 +154,95 @@ type kubeConfig struct {
proxyURL *proxy.URL
}
func buildConfig(sdc *SDConfig) (*kubeConfig, error) {
data, err := fs.ReadFileOrHTTP(sdc.KubeConfig)
func newKubeConfig(kubeConfigFile string) (*kubeConfig, error) {
data, err := fs.ReadFileOrHTTP(kubeConfigFile)
if err != nil {
return nil, fmt.Errorf("cannot read kubeConfig from %q: %w", sdc.KubeConfig, err)
return nil, fmt.Errorf("cannot read %q: %w", kubeConfigFile, err)
}
var config Config
if err = yaml.Unmarshal(data, &config); err != nil {
return nil, fmt.Errorf("cannot parse %q: %w", sdc.KubeConfig, err)
var cfg Config
if err = yaml.Unmarshal(data, &cfg); err != nil {
return nil, fmt.Errorf("cannot parse %q: %w", kubeConfigFile, err)
}
kc, err := cfg.buildKubeConfig()
if err != nil {
return nil, fmt.Errorf("cannot build kubeConfig from %q: %w", kubeConfigFile, err)
}
return kc, nil
}
func (cfg *Config) buildKubeConfig() (*kubeConfig, error) {
authInfos := make(map[string]*AuthInfo)
for _, obj := range config.AuthInfos {
for _, obj := range cfg.AuthInfos {
authInfos[obj.Name] = obj.AuthInfo
}
clusterInfos := make(map[string]*Cluster)
for _, obj := range config.Clusters {
for _, obj := range cfg.Clusters {
clusterInfos[obj.Name] = obj.Cluster
}
contexts := make(map[string]*Context)
for _, obj := range config.Contexts {
for _, obj := range cfg.Contexts {
contexts[obj.Name] = obj.Context
}
contextName := config.CurrentContext
contextName := cfg.CurrentContext
configContext := contexts[contextName]
if configContext == nil {
return nil, fmt.Errorf("context %q does not exist", contextName)
return nil, fmt.Errorf("missing context %q", contextName)
}
clusterInfoName := configContext.Cluster
configClusterInfo := clusterInfos[clusterInfoName]
if configClusterInfo == nil {
return nil, fmt.Errorf("cluster %q does not exist", clusterInfoName)
return nil, fmt.Errorf("missing cluster config %q at context %q", clusterInfoName, contextName)
}
if len(configClusterInfo.Server) == 0 {
return nil, fmt.Errorf("kubernetes server address cannot be empty, define it for context: %s", contextName)
server := configClusterInfo.Server
if len(server) == 0 {
return nil, fmt.Errorf("missing kubernetes server address for config %q at context %q", clusterInfoName, contextName)
}
authInfoName := configContext.AuthInfo
configAuthInfo := authInfos[authInfoName]
if authInfoName != "" && configAuthInfo == nil {
return nil, fmt.Errorf("auth info %q does not exist", authInfoName)
return nil, fmt.Errorf("missing auth config %q", authInfoName)
}
var tlsConfig *promauth.TLSConfig
var basicAuth *promauth.BasicAuthConfig
var token, tokenFile string
isHTTPS := strings.HasPrefix(configClusterInfo.Server, "https://")
if isHTTPS {
tlsConfig = &promauth.TLSConfig{
CAFile: configClusterInfo.CertificateAuthority,
ServerName: configClusterInfo.TLSServerName,
InsecureSkipVerify: configClusterInfo.InsecureSkipTLSVerify,
}
}
if len(configClusterInfo.CertificateAuthorityData) > 0 && isHTTPS {
tlsConfig.CA, err = base64.StdEncoding.DecodeString(configClusterInfo.CertificateAuthorityData)
if err != nil {
return nil, fmt.Errorf("cannot base64-decode configClusterInfo.CertificateAuthorityData %q: %w", configClusterInfo.CertificateAuthorityData, err)
}
}
if configAuthInfo != nil {
if err := configAuthInfo.validate(); err != nil {
return nil, fmt.Errorf("invalid user auth configuration for context: %s, err: %w", contextName, err)
return nil, fmt.Errorf("invalid auth config %q: %w", authInfoName, err)
}
if isHTTPS {
if strings.HasPrefix(configClusterInfo.Server, "https://") {
tlsConfig = &promauth.TLSConfig{
CAFile: configClusterInfo.CertificateAuthority,
ServerName: configClusterInfo.TLSServerName,
InsecureSkipVerify: configClusterInfo.InsecureSkipTLSVerify,
}
if len(configClusterInfo.CertificateAuthorityData) > 0 {
ca, err := base64.StdEncoding.DecodeString(configClusterInfo.CertificateAuthorityData)
if err != nil {
return nil, fmt.Errorf("cannot base64-decode certificate-authority-data from config %q at context %q: %w", clusterInfoName, contextName, err)
}
tlsConfig.CA = ca
}
tlsConfig.CertFile = configAuthInfo.ClientCertificate
tlsConfig.KeyFile = configAuthInfo.ClientKey
if len(configAuthInfo.ClientCertificateData) > 0 {
tlsConfig.Cert, err = base64.StdEncoding.DecodeString(configAuthInfo.ClientCertificateData)
cert, err := base64.StdEncoding.DecodeString(configAuthInfo.ClientCertificateData)
if err != nil {
return nil, fmt.Errorf("cannot base64-decode configAuthInfo.ClientCertificateData %q: %w", configClusterInfo.CertificateAuthorityData, err)
return nil, fmt.Errorf("cannot base64-decode client-certificate-data from %q: %w", authInfoName, err)
}
tlsConfig.Cert = cert
}
if len(configAuthInfo.ClientKeyData) > 0 {
tlsConfig.Key, err = base64.StdEncoding.DecodeString(configAuthInfo.ClientKeyData)
key, err := base64.StdEncoding.DecodeString(configAuthInfo.ClientKeyData)
if err != nil {
return nil, fmt.Errorf("cannot base64-decode configAuthInfo.ClientKeyData %q: %w", configClusterInfo.CertificateAuthorityData, err)
return nil, fmt.Errorf("cannot base64-decode client-key-data from %q: %w", authInfoName, err)
}
tlsConfig.Key = key
}
}
if len(configAuthInfo.Username) > 0 || len(configAuthInfo.Password) > 0 {
basicAuth = &promauth.BasicAuthConfig{
Username: configAuthInfo.Username,
@ -247,15 +252,13 @@ func buildConfig(sdc *SDConfig) (*kubeConfig, error) {
token = configAuthInfo.Token
tokenFile = configAuthInfo.TokenFile
}
kc := kubeConfig{
kc := &kubeConfig{
basicAuth: basicAuth,
server: configClusterInfo.Server,
server: server,
token: token,
tokenFile: tokenFile,
tlsConfig: tlsConfig,
proxyURL: configClusterInfo.ProxyURL,
}
return &kc, nil
return kc, nil
}

View file

@ -11,26 +11,22 @@ func TestParseKubeConfigSuccess(t *testing.T) {
type testCase struct {
name string
sdc *SDConfig
kubeConfigFile string
expectedConfig *kubeConfig
}
var cases = []testCase{
{
name: "token",
sdc: &SDConfig{
KubeConfig: "testdata/good_kubeconfig/with_token.yaml",
},
name: "token",
kubeConfigFile: "testdata/good_kubeconfig/with_token.yaml",
expectedConfig: &kubeConfig{
server: "http://some-server:8080",
token: "abc",
},
},
{
name: "cert",
sdc: &SDConfig{
KubeConfig: "testdata/good_kubeconfig/with_tls.yaml",
},
name: "cert",
kubeConfigFile: "testdata/good_kubeconfig/with_tls.yaml",
expectedConfig: &kubeConfig{
server: "https://localhost:6443",
tlsConfig: &promauth.TLSConfig{
@ -41,10 +37,8 @@ func TestParseKubeConfigSuccess(t *testing.T) {
},
},
{
name: "basic",
sdc: &SDConfig{
KubeConfig: "testdata/good_kubeconfig/with_basic.yaml",
},
name: "basic",
kubeConfigFile: "testdata/good_kubeconfig/with_basic.yaml",
expectedConfig: &kubeConfig{
server: "http://some-server:8080",
basicAuth: &promauth.BasicAuthConfig{
@ -56,7 +50,7 @@ func TestParseKubeConfigSuccess(t *testing.T) {
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
ac, err := buildConfig(tc.sdc)
ac, err := newKubeConfig(tc.kubeConfigFile)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
@ -68,14 +62,11 @@ func TestParseKubeConfigSuccess(t *testing.T) {
}
func TestParseKubeConfigFail(t *testing.T) {
f := func(name, kubeConfigPath string) {
f := func(name, kubeConfigFile string) {
t.Helper()
t.Run(name, func(t *testing.T) {
sdc := &SDConfig{
KubeConfig: kubeConfigPath,
}
if _, err := buildConfig(sdc); err == nil {
t.Fatalf("unexpected result for config file: %s, must return error", kubeConfigPath)
if _, err := newKubeConfig(kubeConfigFile); err == nil {
t.Fatalf("unexpected result for config file: %s, must return error", kubeConfigFile)
}
})
}

View file

@ -22,8 +22,9 @@ type SDConfig struct {
// Use role() function for accessing the Role field
Role string `yaml:"role"`
// if defined any cluster connection information from HTTPClientConfig will be ignored
KubeConfig string `yaml:"kubeconfig_file"`
// The file path to the kubeconfig.
// If defined, any cluster connection information from HTTPClientConfig is ignored.
KubeConfigFile string `yaml:"kubeconfig_file"`
HTTPClientConfig promauth.HTTPClientConfig `yaml:",inline"`
ProxyURL *proxy.URL `yaml:"proxy_url,omitempty"`
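A hypothetical scrape config sketch using this option (job name and path are illustrative); note that `api_server` must not be set together with `kubeconfig_file`:

```yaml
scrape_configs:
  - job_name: k8s-pods
    kubernetes_sd_configs:
      - role: pod
        kubeconfig_file: /path/to/kubeconfig.yaml
```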

View file

@ -256,7 +256,11 @@ func (scfg *scrapeConfig) run(globalStopCh <-chan struct{}) {
sws := scfg.getScrapeWork(cfg, swsPrev)
sg.update(sws)
swsPrev = sws
scfg.discoveryDuration.UpdateDuration(startTime)
if sg.scrapersStarted.Get() > 0 {
// update the duration only if at least one scraper has started;
// otherwise this SD is considered inactive
scfg.discoveryDuration.UpdateDuration(startTime)
}
}
updateScrapeWork(cfg)
atomic.AddInt32(&PendingScrapeConfigs, -1)

View file

@ -500,7 +500,7 @@ func (sw *scrapeWork) scrapeInternal(scrapeTimestamp, realTimestamp int64) error
// This should reduce memory usage when scraping targets which return big responses.
leveledbytebufferpool.Put(body)
}
tsmGlobal.Update(sw, sw.ScrapeGroup, up == 1, realTimestamp, int64(duration*1000), samplesScraped, err)
tsmGlobal.Update(sw, up == 1, realTimestamp, int64(duration*1000), samplesScraped, err)
return err
}
@ -603,7 +603,7 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
sw.storeLastScrape(sbr.body)
}
sw.finalizeLastScrape()
tsmGlobal.Update(sw, sw.ScrapeGroup, up == 1, realTimestamp, int64(duration*1000), samplesScraped, err)
tsmGlobal.Update(sw, up == 1, realTimestamp, int64(duration*1000), samplesScraped, err)
// Do not track active series in streaming mode, since this may require too much memory
// when the target exports a large number of metrics.
return err

View file

@ -1,82 +0,0 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
) %}
{% func ServiceDiscovery(jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool, droppedKeyStatuses []droppedKeyStatus) %}
<div class="row mt-4">
<div class="col-12">
{% for i, js := range jts %}
{% if showOnlyUnhealthy && js.upCount == js.targetsTotal %}{% continue %}{% endif %}
<h4>
<span class="me-2">{%s js.job %}{% space %}({%d js.upCount %}/{%d js.targetsTotal %}{% space %}up)</span>
<button type="button" class="btn btn-primary btn-sm me-1"
onclick="document.querySelector('.table-discovery-{%d i %}').style.display='none'">collapse
</button>
<button type="button" class="btn btn-secondary btn-sm me-1"
onclick="document.querySelector('.table-discovery-{%d i %}').style.display='block'">expand
</button>
</h4>
<div id="table-discovery-{%d i %}" class="table-responsive table-discovery-{%d i %}">
<table class="table table-striped table-hover table-bordered table-sm">
<thead>
<tr>
<th scope="col" style="width: 50%">Discovered Labels</th>
<th scope="col" style="width: 50%">Target Labels</th>
</tr>
</thead>
<tbody class="list-{%d i %}">
{% for _, ts := range js.targetsStatus %}
{% if showOnlyUnhealthy && ts.up %}{% continue %}{% endif %}
<tr {% if !ts.up %}{%space%}class="alert alert-danger" role="alert" {% endif %}>
<td class="labels">
{%= formatLabel(ts.sw.Config.OriginalLabels) %}
</td>
<td class="labels">
{%= formatLabel(promrelabel.FinalizeLabels(nil, ts.sw.Config.Labels)) %}
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
{% endfor %}
</div>
</div>
{% for i,jobName := range emptyJobs %}
<div>
<h4>
<a>{%s jobName %} (0/0 up)</a>
<button type="button" class="btn btn-primary btn-sm me-1"
onclick="document.querySelector('.table-empty-{%d i %}').style.display='none'">collapse
</button>
<button type="button" class="btn btn-secondary btn-sm me-1"
onclick="document.querySelector('.table-empty-{%d i %}').style.display='block'">expand
</button>
</h4>
<table id="table-empty-{%d i %}" class="table table-striped table-hover table-bordered table-sm table-empty-{%d i %}">
<thead>
<tr>
<th scope="col" style="width: 50%">Discovered Labels</th>
<th scope="col" style="width: 50%">Target Labels</th>
</tr>
</thead>
<tbody class="list-{%d i %}">
{% for _, status := range droppedKeyStatuses %}
{% for _, label := range status.originalLabels %}
{% if label.Value == jobName %}
<tr>
<td class="labels">
{%= formatLabel(status.originalLabels) %}
</td>
<td class="labels">
<span class="badge bg-danger">DROPPED</span>
</td>
</tr>
{% endif %}
{% endfor %}
{% endfor %}
</tbody>
</table>
</div>
{% endfor %}
{% endfunc %}

View file

@ -1,279 +0,0 @@
// Code generated by qtc from "service_discovery.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line lib/promscrape/service_discovery.qtpl:1
package promscrape
//line lib/promscrape/service_discovery.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
)
//line lib/promscrape/service_discovery.qtpl:5
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line lib/promscrape/service_discovery.qtpl:5
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line lib/promscrape/service_discovery.qtpl:5
func StreamServiceDiscovery(qw422016 *qt422016.Writer, jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool, droppedKeyStatuses []droppedKeyStatus) {
//line lib/promscrape/service_discovery.qtpl:5
qw422016.N().S(`
<div class="row mt-4">
<div class="col-12">
`)
//line lib/promscrape/service_discovery.qtpl:8
for i, js := range jts {
//line lib/promscrape/service_discovery.qtpl:8
qw422016.N().S(`
`)
//line lib/promscrape/service_discovery.qtpl:9
if showOnlyUnhealthy && js.upCount == js.targetsTotal {
//line lib/promscrape/service_discovery.qtpl:9
continue
//line lib/promscrape/service_discovery.qtpl:9
}
//line lib/promscrape/service_discovery.qtpl:9
qw422016.N().S(`
<h4>
<span class="me-2">`)
//line lib/promscrape/service_discovery.qtpl:11
qw422016.E().S(js.job)
//line lib/promscrape/service_discovery.qtpl:11
qw422016.N().S(` `)
//line lib/promscrape/service_discovery.qtpl:11
qw422016.N().S(`(`)
//line lib/promscrape/service_discovery.qtpl:11
qw422016.N().D(js.upCount)
//line lib/promscrape/service_discovery.qtpl:11
qw422016.N().S(`/`)
//line lib/promscrape/service_discovery.qtpl:11
qw422016.N().D(js.targetsTotal)
//line lib/promscrape/service_discovery.qtpl:11
qw422016.N().S(` `)
//line lib/promscrape/service_discovery.qtpl:11
qw422016.N().S(`up)</span>
<button type="button" class="btn btn-primary btn-sm me-1"
onclick="document.querySelector('.table-discovery-`)
//line lib/promscrape/service_discovery.qtpl:13
qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:13
qw422016.N().S(`').style.display='none'">collapse
</button>
<button type="button" class="btn btn-secondary btn-sm me-1"
onclick="document.querySelector('.table-discovery-`)
//line lib/promscrape/service_discovery.qtpl:16
qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:16
qw422016.N().S(`').style.display='block'">expand
</button>
</h4>
<div id="table-discovery-`)
//line lib/promscrape/service_discovery.qtpl:19
qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:19
qw422016.N().S(`" class="table-responsive table-discovery-`)
//line lib/promscrape/service_discovery.qtpl:19
qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:19
qw422016.N().S(`">
<table class="table table-striped table-hover table-bordered table-sm">
<thead>
<tr>
<th scope="col" style="width: 50%">Discovered Labels</th>
<th scope="col" style="width: 50%">Target Labels</th>
</tr>
</thead>
<tbody class="list-`)
//line lib/promscrape/service_discovery.qtpl:27
qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:27
qw422016.N().S(`">
`)
//line lib/promscrape/service_discovery.qtpl:28
for _, ts := range js.targetsStatus {
//line lib/promscrape/service_discovery.qtpl:28
qw422016.N().S(`
`)
//line lib/promscrape/service_discovery.qtpl:29
if showOnlyUnhealthy && ts.up {
//line lib/promscrape/service_discovery.qtpl:29
continue
//line lib/promscrape/service_discovery.qtpl:29
}
//line lib/promscrape/service_discovery.qtpl:29
qw422016.N().S(`
<tr `)
//line lib/promscrape/service_discovery.qtpl:30
if !ts.up {
//line lib/promscrape/service_discovery.qtpl:30
qw422016.N().S(` `)
//line lib/promscrape/service_discovery.qtpl:30
qw422016.N().S(`class="alert alert-danger" role="alert" `)
//line lib/promscrape/service_discovery.qtpl:30
}
//line lib/promscrape/service_discovery.qtpl:30
qw422016.N().S(`>
<td class="labels">
`)
//line lib/promscrape/service_discovery.qtpl:32
streamformatLabel(qw422016, ts.sw.Config.OriginalLabels)
//line lib/promscrape/service_discovery.qtpl:32
qw422016.N().S(`
</td>
<td class="labels">
`)
//line lib/promscrape/service_discovery.qtpl:35
streamformatLabel(qw422016, promrelabel.FinalizeLabels(nil, ts.sw.Config.Labels))
//line lib/promscrape/service_discovery.qtpl:35
qw422016.N().S(`
</td>
</tr>
`)
//line lib/promscrape/service_discovery.qtpl:38
}
//line lib/promscrape/service_discovery.qtpl:38
qw422016.N().S(`
</tbody>
</table>
</div>
`)
//line lib/promscrape/service_discovery.qtpl:42
}
//line lib/promscrape/service_discovery.qtpl:42
qw422016.N().S(`
</div>
</div>
`)
//line lib/promscrape/service_discovery.qtpl:45
for i, jobName := range emptyJobs {
//line lib/promscrape/service_discovery.qtpl:45
qw422016.N().S(`
<div>
<h4>
<a>`)
//line lib/promscrape/service_discovery.qtpl:48
qw422016.E().S(jobName)
//line lib/promscrape/service_discovery.qtpl:48
qw422016.N().S(` (0/0 up)</a>
<button type="button" class="btn btn-primary btn-sm me-1"
onclick="document.querySelector('.table-empty-`)
//line lib/promscrape/service_discovery.qtpl:50
qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:50
qw422016.N().S(`').style.display='none'">collapse
</button>
<button type="button" class="btn btn-secondary btn-sm me-1"
onclick="document.querySelector('.table-empty-`)
//line lib/promscrape/service_discovery.qtpl:53
qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:53
qw422016.N().S(`').style.display='block'">expand
</button>
</h4>
<table id="table-empty-`)
//line lib/promscrape/service_discovery.qtpl:56
qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:56
qw422016.N().S(`" class="table table-striped table-hover table-bordered table-sm table-empty-`)
//line lib/promscrape/service_discovery.qtpl:56
qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:56
qw422016.N().S(`">
<thead>
<tr>
<th scope="col" style="width: 50%">Discovered Labels</th>
<th scope="col" style="width: 50%">Target Labels</th>
</tr>
</thead>
<tbody class="list-`)
//line lib/promscrape/service_discovery.qtpl:63
qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:63
qw422016.N().S(`">
`)
//line lib/promscrape/service_discovery.qtpl:64
for _, status := range droppedKeyStatuses {
//line lib/promscrape/service_discovery.qtpl:64
qw422016.N().S(`
`)
//line lib/promscrape/service_discovery.qtpl:65
for _, label := range status.originalLabels {
//line lib/promscrape/service_discovery.qtpl:65
qw422016.N().S(`
`)
//line lib/promscrape/service_discovery.qtpl:66
if label.Value == jobName {
//line lib/promscrape/service_discovery.qtpl:66
qw422016.N().S(`
<tr>
<td class="labels">
`)
//line lib/promscrape/service_discovery.qtpl:69
streamformatLabel(qw422016, status.originalLabels)
//line lib/promscrape/service_discovery.qtpl:69
qw422016.N().S(`
</td>
<td class="labels">
<span class="badge bg-danger">DROPPED</span>
</td>
</tr>
`)
//line lib/promscrape/service_discovery.qtpl:75
}
//line lib/promscrape/service_discovery.qtpl:75
qw422016.N().S(`
`)
//line lib/promscrape/service_discovery.qtpl:76
}
//line lib/promscrape/service_discovery.qtpl:76
qw422016.N().S(`
`)
//line lib/promscrape/service_discovery.qtpl:77
}
//line lib/promscrape/service_discovery.qtpl:77
qw422016.N().S(`
</tbody>
</table>
</div>
`)
//line lib/promscrape/service_discovery.qtpl:81
}
//line lib/promscrape/service_discovery.qtpl:81
qw422016.N().S(`
`)
//line lib/promscrape/service_discovery.qtpl:82
}
//line lib/promscrape/service_discovery.qtpl:82
func WriteServiceDiscovery(qq422016 qtio422016.Writer, jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool, droppedKeyStatuses []droppedKeyStatus) {
//line lib/promscrape/service_discovery.qtpl:82
qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/service_discovery.qtpl:82
StreamServiceDiscovery(qw422016, jts, emptyJobs, showOnlyUnhealthy, droppedKeyStatuses)
//line lib/promscrape/service_discovery.qtpl:82
qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/service_discovery.qtpl:82
}
//line lib/promscrape/service_discovery.qtpl:82
func ServiceDiscovery(jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool, droppedKeyStatuses []droppedKeyStatus) string {
//line lib/promscrape/service_discovery.qtpl:82
qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/service_discovery.qtpl:82
WriteServiceDiscovery(qb422016, jts, emptyJobs, showOnlyUnhealthy, droppedKeyStatuses)
//line lib/promscrape/service_discovery.qtpl:82
qs422016 := string(qb422016.B)
//line lib/promscrape/service_discovery.qtpl:82
qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/service_discovery.qtpl:82
return qs422016
//line lib/promscrape/service_discovery.qtpl:82
}

Some files were not shown because too many files have changed in this diff.