Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

Aliaksandr Valialkin 2022-06-14 12:26:32 +03:00
commit f7b52b64a3
57 changed files with 1473 additions and 1528 deletions

README.md

@@ -33,7 +33,7 @@ VictoriaMetrics has the following prominent features:
* It can be used as long-term storage for Prometheus. See [these docs](#prometheus-setup) for details.
* It can be used as a drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as a drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
* It features easy setup and operation:
  * VictoriaMetrics consists of a single [small executable](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d) without external dependencies.
  * All the configuration is done via explicit command-line flags with reasonable defaults.
@@ -275,6 +275,8 @@ By default cardinality explorer analyzes time series for the current date. It pr
By default all the time series for the selected date are analyzed. It is possible to narrow down the analysis to series
matching the specified [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors).
Cardinality explorer takes into account [deleted time series](#how-to-delete-time-series), because they stay in the inverted index for up to [-retentionPeriod](#retention). This means that the deleted time series take RAM, CPU, disk IO and disk space for the inverted index in the same way as other time series.
Cardinality explorer is built on top of [/api/v1/status/tsdb](#tsdb-stats).
See [cardinality explorer playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/cardinality).
@@ -350,13 +352,17 @@ echo '
The imported data can be read via [export API](https://docs.victoriametrics.com/#how-to-export-data-in-json-line-format):
<div class="with-copy" markdown="1">
```bash
curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
```
</div>
This command should return the following output if everything is OK:
```json
{"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}
```
@@ -458,13 +464,17 @@ VictoriaMetrics sets the current time if the timestamp is omitted.
An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go.
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
<div class="with-copy" markdown="1">
```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
</div>
The `/api/v1/export` endpoint should return the following response:
```json
{"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277406000]}
```
@@ -607,6 +617,8 @@ For example, the following query would return data for the last 30 minutes: `/ap
VictoriaMetrics accepts `round_digits` query arg for `/api/v1/query` and `/api/v1/query_range` handlers. It can be used for rounding response values to the given number of digits after the decimal point. For example, `/api/v1/query?query=avg_over_time(temperature[1h])&round_digits=2` would round response values to up to two digits after the decimal point.
VictoriaMetrics accepts `limit` query arg for `/api/v1/labels` and `/api/v1/label/<labelName>/values` handlers for limiting the number of returned entries. For example, the query to `/api/v1/labels?limit=5` returns a sample of up to 5 unique labels, while ignoring the rest of labels. If the provided `limit` value exceeds the corresponding `-search.maxTagKeys` / `-search.maxTagValues` command-line flag values, then limits specified in the command-line flags are used.
By default, VictoriaMetrics returns time series for the last 5 minutes from `/api/v1/series`, while the Prometheus API defaults to all time. Use `start` and `end` to select a different time range.
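A couple of hypothetical requests illustrating these query args (the limit, selector and timestamps below are made-up examples):
```bash
# Return up to 5 label names (the limit value is an example):
curl 'http://localhost:8428/api/v1/labels?limit=5'

# Select series on an explicit time range instead of the default last 5 minutes:
curl -G 'http://localhost:8428/api/v1/series' -d 'match[]=up' \
  -d 'start=2022-06-14T00:00:00Z' -d 'end=2022-06-14T12:00:00Z'
```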
Additionally, VictoriaMetrics provides the following handlers:
@@ -615,7 +627,6 @@ Additionally, VictoriaMetrics provides the following handlers:
* `/api/v1/series/count` - returns the total number of time series in the database. Some notes:
  * the handler scans all the inverted index, so it can be slow if the database contains tens of millions of time series;
  * the handler may count [deleted time series](#how-to-delete-time-series) additionally to normal time series due to internal implementation restrictions;
* `/api/v1/labels/count` - returns a list of `label: values_count` entries. It can be used for determining labels with the maximum number of values.
* `/api/v1/status/active_queries` - returns a list of currently running queries.
* `/api/v1/status/top_queries` - returns the following query lists:
  * the most frequently executed queries - `topByCount`
@@ -742,8 +753,8 @@ More details may be found [here](https://github.com/VictoriaMetrics/VictoriaMetr
## Setting up service
Read [instructions](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/43) on how to set up VictoriaMetrics
as a service for your OS. A [snap package](https://snapcraft.io/victoriametrics) is available for Ubuntu.
## How to work with snapshots
@@ -836,7 +847,7 @@ for metrics to export. Use `{__name__!=""}` selector for fetching all the time s
The response would contain all the data for the selected time series in [JSON streaming format](https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON).
Each JSON line contains samples for a single time series. An example output:
```json
{"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}
{"metric":{"__name__":"up","job":"prometheus","instance":"localhost:9090"},"values":[1,1,1],"timestamps":[1549891461511,1549891476511,1549891491511]}
```
@@ -856,10 +867,14 @@ In this case the output may contain multiple lines with samples for the same tim
Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth during exporting big amounts
of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:
<div class="with-copy" markdown="1">
```bash
curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
```
</div>
The maximum duration for each request to `/api/v1/export` is limited by the `-search.maxExportDuration` command-line flag.
Exported data can be imported via POST'ing it to [/api/v1/import](#how-to-import-data-in-json-line-format).
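For example, the gzipped export produced above can be fed back via `/api/v1/import` (a sketch; `data.jsonl.gz` refers to the file from the export example above):
```bash
# Re-import the gzipped export; Content-Encoding tells the server the payload is gzipped:
curl -X POST -H 'Content-Encoding: gzip' http://localhost:8428/api/v1/import -T data.jsonl.gz
```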
@@ -1032,7 +1047,7 @@ curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
The following response should be returned:
```json
{"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]}
{"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]}
{"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]}
@@ -1050,29 +1065,41 @@ VictoriaMetrics accepts data in [Prometheus exposition format](https://github.co
and in [OpenMetrics format](https://github.com/OpenObservability/OpenMetrics/blob/master/specification/OpenMetrics.md)
via `/api/v1/import/prometheus` path. For example, the following line imports a single line in Prometheus exposition format into VictoriaMetrics:
<div class="with-copy" markdown="1">
```bash
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
```
</div>
The following command may be used for verifying the imported data:
<div class="with-copy" markdown="1">
```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
```
</div>
It should return something like the following:
```json
{"metric":{"__name__":"foo","bar":"baz"},"values":[123],"timestamps":[1594370496905]}
```
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data:
<div class="with-copy" markdown="1">
```bash
# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
```
</div>
Extra labels may be added to all the imported metrics by passing `extra_label=name=value` query args.
For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics.
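A hypothetical end-to-end illustration of the `extra_label` query arg (the label name and value below are made up):
```bash
# Import a sample and attach the env="dev" label to it:
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus?extra_label=env=dev'

# The exported series should now carry the extra label:
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__="foo"}'
```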
@@ -1242,16 +1269,17 @@ Information about merging process is available in [single-node VictoriaMetrics](
and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176) Grafana dashboards.
See more details in [monitoring docs](#monitoring).
The `merge` process improves the compression rate and keeps the number of `parts` on disk relatively low.
Benefits of doing the merge process are the following:
* it improves query performance, since a lower number of `parts` is inspected with each query
* it reduces the number of data files, since each `part` contains a fixed number of files
* various background maintenance tasks such as [de-duplication](#deduplication), [downsampling](#downsampling)
  and [freeing up disk space for the deleted time series](#how-to-delete-time-series) are performed during the merge
Newly added `parts` either appear in the storage or fail to appear.
Storage never contains partially created parts. The same applies to the merge process — `parts` are either fully
merged into a new `part` or fail to merge. MergeTree doesn't contain partially merged `parts`.
`Part` contents in MergeTree never change. Parts are immutable. They may only be deleted after the merge
to a bigger `part` or when the `part` contents go outside the configured `-retentionPeriod`.
@@ -1331,8 +1359,13 @@ Additionally, alerting can be set up with the following tools:
## Security
General security recommendations:
- All the VictoriaMetrics components must run in protected private networks without direct access from untrusted networks such as the Internet. The exception is [vmauth](https://docs.victoriametrics.com/vmauth.html) and [vmgateway](https://docs.victoriametrics.com/vmgateway.html).
- All the requests from untrusted networks to VictoriaMetrics components must go through an auth proxy such as vmauth or vmgateway. The proxy must be set up with proper authentication and authorization.
- Prefer using lists of allowed API endpoints, while disallowing access to other endpoints, when configuring an auth proxy in front of VictoriaMetrics components.
VictoriaMetrics provides the following security-related command-line flags:
* `-tls`, `-tlsCertFile` and `-tlsKeyFile` for switching from HTTP to HTTPS.
* `-httpAuth.username` and `-httpAuth.password` for protecting all the HTTP endpoints
@@ -1342,20 +1375,17 @@ Consider setting the following command-line flags:
* `-forceMergeAuthKey` for protecting the `/internal/force_merge` endpoint. See [force merge docs](#forced-merge).
* `-search.resetCacheAuthKey` for protecting the `/internal/resetRollupResultCache` endpoint. See [backfilling](#backfilling) for more details.
* `-configAuthKey` for protecting the `/config` endpoint, since it may contain sensitive information such as passwords.
* `-pprofAuthKey` for protecting `/debug/pprof/*` endpoints, which can be used for [profiling](#profiling).
* `-denyQueryTracing` for disallowing [query tracing](#query-tracing).
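For illustration, a single-node instance could combine several of these flags as follows (a sketch with made-up paths and credentials):
```bash
/path/to/victoria-metrics-prod \
  -tls -tlsCertFile=/path/to/cert.pem -tlsKeyFile=/path/to/key.pem \
  -httpAuth.username=admin -httpAuth.password=secret \
  -denyQueryTracing
```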
Explicitly set internal network interface for TCP and UDP ports for data ingestion with Graphite and OpenTSDB formats.
For example, substitute `-graphiteListenAddr=:2003` with `-graphiteListenAddr=<internal_iface_ip>:2003`. This protects from unexpected requests from untrusted network interfaces.
Prefer authorizing all the incoming requests from untrusted networks with [vmauth](https://docs.victoriametrics.com/vmauth.html)
or similar auth proxy.
## Tuning
* VictoriaMetrics requires no tuning, since it uses reasonable defaults for command-line flags,
  which are automatically adjusted for the available CPU and RAM resources.
* The Operating System requires no tuning, since VictoriaMetrics is optimized for default OS settings.
  The only option is increasing the limit on [the number of open files in the OS](https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a).
  This recommendation is not specific to VictoriaMetrics; it applies to any service which handles many HTTP connections and stores data on disk.
* VictoriaMetrics is a write-heavy application and its performance depends on disk performance. So be careful with other
@@ -1372,19 +1402,23 @@ mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
## Monitoring
VictoriaMetrics exports internal metrics in Prometheus exposition format at the `/metrics` page.
These metrics can be scraped via [vmagent](https://docs.victoriametrics.com/vmagent.html) or Prometheus.
Alternatively, single-node VictoriaMetrics can self-scrape the metrics when the `-selfScrapeInterval` command-line flag is
set to a duration greater than 0. For example, `-selfScrapeInterval=10s` would enable self-scraping of the `/metrics` page
with 10 seconds interval.
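As a quick sanity check, the exported metrics can be inspected directly (assuming a local single-node instance on the default port):
```bash
# Show the first few internal metrics exported by a local instance:
curl -s http://localhost:8428/metrics | head -n 20
```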
Official Grafana dashboards are available for [single-node](https://grafana.com/dashboards/10229)
and [clustered](https://grafana.com/grafana/dashboards/11176) VictoriaMetrics.
See an [alternative dashboard for clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11831)
created by the community.
Graphs on the dashboards contain useful hints - hover the `i` icon in the top left corner of each graph to read it.
We recommend setting up [alerts](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts.yml)
via [vmalert](https://docs.victoriametrics.com/vmalert.html) or via Prometheus.
The most interesting health metrics are the following:
* `vm_cache_entries{type="storage/hour_metric_ids"}` - the number of time series with new data points during the last hour,
  aka [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).
@@ -1400,16 +1434,14 @@ The most interesting metrics are:
If this number remains high during extended periods of time, then it is likely more RAM is needed for optimal handling
of the current number of [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).
VictoriaMetrics exposes currently running queries and their execution times at the `/api/v1/status/active_queries` page.
See the example of alerting rules for VM components [here](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts.yml).
## TSDB stats
VictoriaMetrics returns TSDB stats at the `/api/v1/status/tsdb` page in a way similar to Prometheus - see [these Prometheus docs](https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats). VictoriaMetrics accepts the following optional query args at the `/api/v1/status/tsdb` page:
* `topN=N` where `N` is the number of top entries to return in the response. By default the top 10 entries are returned.
* `date=YYYY-MM-DD` where `YYYY-MM-DD` is the date for collecting the stats. By default the stats are collected for the current day. Pass `date=1970-01-01` in order to collect global stats across all the days.
* `match[]=SELECTOR` where `SELECTOR` is an arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) for series to take into account during stats calculation. By default all the series are taken into account.
* `extra_label=LABEL=VALUE`. See [these docs](#prometheus-querying-api-enhancements) for more details.
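For example, a hypothetical request combining these query args (the date and selector values are made up):
```bash
# Top 5 cardinality entries for the given date, limited to a specific job:
curl -G 'http://localhost:8428/api/v1/status/tsdb' \
  -d 'topN=5' -d 'date=2022-06-14' -d 'match[]={job="node_exporter"}'
```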
@@ -1787,9 +1819,10 @@ Files included in each folder:
### We kindly ask
* Please don't use any other font instead of the suggested one.
* To keep enough clear space around the logo.
* Do not change spacing, alignment, or relative locations of the design elements.
* Do not change the proportions for any of the design elements or the design itself.
  You may resize as needed but must retain all proportions.
## List of command-line flags


@@ -273,6 +273,11 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprintf(w, `{}`)
		return true
	case "/datadog/api/v1/metadata":
		datadogMetadataRequests.Inc()
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprintf(w, `{}`)
		return true
	case "/targets":
		promscrapeTargetsRequests.Inc()
		promscrape.WriteHumanReadableTargetsStatus(w, r)
@@ -459,6 +464,11 @@ func processMultitenantRequest(w http.ResponseWriter, r *http.Request, path stri
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprintf(w, `{}`)
		return true
	case "datadog/api/v1/metadata":
		datadogMetadataRequests.Inc()
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprintf(w, `{}`)
		return true
	default:
		httpserver.Errorf(w, r, "unsupported multitenant path suffix: %q", p.Suffix)
		return true
@@ -492,6 +502,7 @@ var (
	datadogValidateRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
	datadogCheckRunRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)
	datadogIntakeRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/intake", protocol="datadog"}`)
	datadogMetadataRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/metadata", protocol="datadog"}`)
	promscrapeTargetsRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/targets"}`)
	promscrapeServiceDiscoveryRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/service-discovery"}`)


@@ -237,7 +237,12 @@ func (c *Client) send(ctx context.Context, data []byte) error {
		return fmt.Errorf("failed to create new HTTP request: %w", err)
	}

	// RFC standard compliant headers
	req.Header.Set("Content-Encoding", "snappy")
	req.Header.Set("Content-Type", "application/x-protobuf")

	// Prometheus compliant headers
	req.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")

	if c.authCfg != nil {
		if auth := c.authCfg.GetAuthHeader(); auth != "" {


@@ -86,6 +86,16 @@ func (rw *rwServer) handler(w http.ResponseWriter, r *http.Request) {
		rw.err(w, fmt.Errorf("header read error: Content-Encoding is not snappy (%q)", h))
	}
	h = r.Header.Get("Content-Type")
	if h != "application/x-protobuf" {
		rw.err(w, fmt.Errorf("header read error: Content-Type is not x-protobuf (%q)", h))
	}
	h = r.Header.Get("X-Prometheus-Remote-Write-Version")
	if h != "0.1.0" {
		rw.err(w, fmt.Errorf("header read error: X-Prometheus-Remote-Write-Version is not 0.1.0 (%q)", h))
	}
	data, err := ioutil.ReadAll(r.Body)
	if err != nil {
		rw.err(w, fmt.Errorf("body read err: %w", err))


@@ -212,6 +212,11 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprintf(w, `{}`)
		return true
	case "/datadog/api/v1/metadata":
		datadogMetadataRequests.Inc()
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprintf(w, `{}`)
		return true
	case "/prometheus/targets", "/targets":
		promscrapeTargetsRequests.Inc()
		promscrape.WriteHumanReadableTargetsStatus(w, r)
@@ -319,6 +324,7 @@ var (
	datadogValidateRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
	datadogCheckRunRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)
	datadogIntakeRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/intake", protocol="datadog"}`)
	datadogMetadataRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/metadata", protocol="datadog"}`)
	promscrapeTargetsRequests = metrics.NewCounter(`vm_http_requests_total{path="/targets"}`)
	promscrapeServiceDiscoveryRequests = metrics.NewCounter(`vm_http_requests_total{path="/service-discovery"}`)


@@ -197,7 +197,8 @@ func MetricsExpandHandler(startTime time.Time, w http.ResponseWriter, r *http.Re
func MetricsIndexHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
	deadline := searchutils.GetDeadlineForQuery(r, startTime)
	jsonp := r.FormValue("jsonp")
	sq := storage.NewSearchQuery(0, 0, nil, 0)
	metricNames, err := netstorage.GetLabelValues(nil, "__name__", sq, 0, deadline)
	if err != nil {
		return fmt.Errorf(`cannot obtain metric names: %w`, err)
	}


@@ -5,7 +5,6 @@ import (
	"net/http"
	"regexp"
	"sort"
	"strconv"
	"strings"
	"time"
@@ -159,7 +158,7 @@ var (
// See https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support
func TagsAutoCompleteValuesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
	deadline := searchutils.GetDeadlineForQuery(r, startTime)
	limit, err := searchutils.GetInt(r, "limit")
	if err != nil {
		return err
	}
@@ -245,7 +244,7 @@ var tagsAutoCompleteValuesDuration = metrics.NewSummary(`vm_request_duration_sec
// See https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support
func TagsAutoCompleteTagsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
	deadline := searchutils.GetDeadlineForQuery(r, startTime)
	limit, err := searchutils.GetInt(r, "limit")
	if err != nil {
		return err
	}
@@ -324,7 +323,7 @@ var tagsAutoCompleteTagsDuration = metrics.NewSummary(`vm_request_duration_secon
// See https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
func TagsFindSeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
	deadline := searchutils.GetDeadlineForQuery(r, startTime)
	limit, err := searchutils.GetInt(r, "limit")
	if err != nil {
		return err
	}
@@ -392,7 +391,7 @@ var tagsFindSeriesDuration = metrics.NewSummary(`vm_request_duration_seconds{pat
// See https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
func TagValuesHandler(startTime time.Time, tagName string, w http.ResponseWriter, r *http.Request) error {
	deadline := searchutils.GetDeadlineForQuery(r, startTime)
	limit, err := searchutils.GetInt(r, "limit")
	if err != nil {
		return err
	}
@@ -420,7 +419,7 @@ var tagValuesDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/t
// See https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
func TagsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
	deadline := searchutils.GetDeadlineForQuery(r, startTime)
	limit, err := searchutils.GetInt(r, "limit")
	if err != nil {
		return err
	}
@@ -443,18 +442,6 @@ func TagsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) er
var tagsDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/tags"}`)

func getInt(r *http.Request, argName string) (int, error) {
	argValue := r.FormValue(argName)
	if len(argValue) == 0 {
		return 0, nil
	}
	n, err := strconv.Atoi(argValue)
	if err != nil {
		return 0, fmt.Errorf("cannot parse %q=%q: %w", argName, argValue, err)
	}
	return n, nil
}

func getSearchQueryForExprs(startTime time.Time, etfs [][]storage.TagFilter, exprs []string, maxMetrics int) (*storage.SearchQuery, error) {
	tfs, err := exprsToTagFilters(exprs)
	if err != nil {


@@ -257,19 +257,10 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
			return true
		}
		return true
	case "/api/v1/labels/count":
		labelsCountRequests.Inc()
		httpserver.EnableCORS(w, r)
		if err := prometheus.LabelsCountHandler(startTime, w, r); err != nil {
			labelsCountErrors.Inc()
			sendPrometheusError(w, r, err)
			return true
		}
		return true
	case "/api/v1/status/tsdb":
		statusTSDBRequests.Inc()
		httpserver.EnableCORS(w, r)
		if err := prometheus.TSDBStatusHandler(qt, startTime, w, r); err != nil {
			statusTSDBErrors.Inc()
			sendPrometheusError(w, r, err)
			return true
@@ -502,9 +493,6 @@ var (
	labelsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/labels"}`)
	labelsErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/labels"}`)

	labelsCountRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/labels/count"}`)
	labelsCountErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/labels/count"}`)

	statusTSDBRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/status/tsdb"}`)
	statusTSDBErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/status/tsdb"}`)


@@ -653,30 +653,35 @@ func DeleteSeries(qt *querytracer.Tracer, sq *storage.SearchQuery, deadline sear
	return vmstorage.DeleteMetrics(tfss)
}

// GetLabelNames returns label names matching the given sq until the given deadline.
func GetLabelNames(qt *querytracer.Tracer, sq *storage.SearchQuery, maxLabelNames int, deadline searchutils.Deadline) ([]string, error) {
	qt = qt.NewChild("get labels: %s", sq)
	defer qt.Done()
	if deadline.Exceeded() {
		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
	}
	if maxLabelNames > *maxTagKeysPerSearch || maxLabelNames <= 0 {
		maxLabelNames = *maxTagKeysPerSearch
	}
	tr := storage.TimeRange{
		MinTimestamp: sq.MinTimestamp,
		MaxTimestamp: sq.MaxTimestamp,
	}
	tfss, err := setupTfss(tr, sq.TagFilterss, sq.MaxMetrics, deadline)
	if err != nil {
		return nil, err
	}
	labels, err := vmstorage.SearchLabelNamesWithFiltersOnTimeRange(qt, tfss, tr, maxLabelNames, sq.MaxMetrics, deadline.Deadline())
	if err != nil {
		return nil, fmt.Errorf("error during labels search on time range: %w", err)
	}

	// Merge labels obtained from Prometheus storage.
	promLabels, err := promdb.GetLabelNamesOnTimeRange(tr, deadline)
	if err != nil {
		return nil, fmt.Errorf("cannot obtain labels from Prometheus storage: %w", err)
	}
	qt.Printf("get %d label names from Prometheus storage", len(promLabels))
	labels = mergeStrings(labels, promLabels)
// Sort labels like Prometheus does // Sort labels like Prometheus does
@@ -692,7 +697,8 @@ func GetGraphiteTags(qt *querytracer.Tracer, filter string, limit int, deadline
	if deadline.Exceeded() {
		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
	}
	sq := storage.NewSearchQuery(0, 0, nil, 0)
	labels, err := GetLabelNames(qt, sq, 0, deadline)
	if err != nil {
		return nil, err
	}
@@ -732,36 +738,41 @@ func hasString(a []string, s string) bool {
	return false
}

// GetLabelValues returns label values matching the given labelName and sq until the given deadline.
func GetLabelValues(qt *querytracer.Tracer, labelName string, sq *storage.SearchQuery, maxLabelValues int, deadline searchutils.Deadline) ([]string, error) {
	qt = qt.NewChild("get values for label %s: %s", labelName, sq)
	defer qt.Done()
	if deadline.Exceeded() {
		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
	}
	if maxLabelValues > *maxTagValuesPerSearch || maxLabelValues <= 0 {
		maxLabelValues = *maxTagValuesPerSearch
	}
	tr := storage.TimeRange{
		MinTimestamp: sq.MinTimestamp,
		MaxTimestamp: sq.MaxTimestamp,
	}
	tfss, err := setupTfss(tr, sq.TagFilterss, sq.MaxMetrics, deadline)
	if err != nil {
		return nil, err
	}
	labelValues, err := vmstorage.SearchLabelValuesWithFiltersOnTimeRange(qt, labelName, tfss, tr, maxLabelValues, sq.MaxMetrics, deadline.Deadline())
	if err != nil {
		return nil, fmt.Errorf("error during label values search on time range: %w", err)
	}

	// Merge label values obtained from Prometheus storage.
	promLabelValues, err := promdb.GetLabelValuesOnTimeRange(labelName, tr, deadline)
	if err != nil {
		return nil, fmt.Errorf("cannot obtain label values on time range for %q from Prometheus storage: %w", labelName, err)
	}
	qt.Printf("get %d label values from Prometheus storage", len(promLabelValues))
	labelValues = mergeStrings(labelValues, promLabelValues)

	// Sort labelValues like Prometheus does
	sort.Strings(labelValues)
	qt.Printf("sort %d label values", len(labelValues))
	return labelValues, nil
}
func mergeStrings(a, b []string) []string { func mergeStrings(a, b []string) []string {
@@ -785,37 +796,6 @@ func mergeStrings(a, b []string) []string {
	return result
}
// GetLabelValuesOnTimeRange returns label values for the given labelName on the given tr
// until the given deadline.
func GetLabelValuesOnTimeRange(qt *querytracer.Tracer, labelName string, tr storage.TimeRange, deadline searchutils.Deadline) ([]string, error) {
	qt = qt.NewChild("get values for label %s on a timeRange %s", labelName, &tr)
	defer qt.Done()
	if deadline.Exceeded() {
		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
	}
	if labelName == "__name__" {
		labelName = ""
	}
	// Search for tag values
	labelValues, err := vmstorage.SearchTagValuesOnTimeRange([]byte(labelName), tr, *maxTagValuesPerSearch, deadline.Deadline())
	qt.Printf("get %d label values", len(labelValues))
	if err != nil {
		return nil, fmt.Errorf("error during label values search on time range for labelName=%q: %w", labelName, err)
	}
	// Merge label values obtained from Prometheus storage.
	promLabelValues, err := promdb.GetLabelValuesOnTimeRange(labelName, tr, deadline)
	if err != nil {
		return nil, fmt.Errorf("cannot obtain label values on time range for %q from Prometheus storage: %w", labelName, err)
	}
	labelValues = mergeStrings(labelValues, promLabelValues)
	// Sort labelValues like Prometheus does
	sort.Strings(labelValues)
	qt.Printf("sort %d label values", len(labelValues))
	return labelValues, nil
}
// GetGraphiteTagValues returns tag values for the given tagName until the given deadline.
func GetGraphiteTagValues(qt *querytracer.Tracer, tagName, filter string, limit int, deadline searchutils.Deadline) ([]string, error) {
	qt = qt.NewChild("get graphite tag values for tagName=%s, filter=%s, limit=%d", tagName, filter, limit)
@@ -826,7 +806,8 @@ func GetGraphiteTagValues(qt *querytracer.Tracer, tagName, filter string, limit
	if tagName == "name" {
		tagName = ""
	}
	sq := storage.NewSearchQuery(0, 0, nil, 0)
	tagValues, err := GetLabelValues(qt, tagName, sq, 0, deadline)
	if err != nil {
		return nil, err
	}
@@ -842,37 +823,6 @@ func GetGraphiteTagValues(qt *querytracer.Tracer, tagName, filter string, limit
	return tagValues, nil
}
// GetLabelValues returns label values for the given labelName
// until the given deadline.
func GetLabelValues(qt *querytracer.Tracer, labelName string, deadline searchutils.Deadline) ([]string, error) {
	qt = qt.NewChild("get values for label %s", labelName)
	defer qt.Done()
	if deadline.Exceeded() {
		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
	}
	if labelName == "__name__" {
		labelName = ""
	}
	// Search for tag values
	labelValues, err := vmstorage.SearchTagValues([]byte(labelName), *maxTagValuesPerSearch, deadline.Deadline())
	qt.Printf("get %d label values", len(labelValues))
	if err != nil {
		return nil, fmt.Errorf("error during label values search for labelName=%q: %w", labelName, err)
	}
	// Merge label values obtained from Prometheus storage.
	promLabelValues, err := promdb.GetLabelValues(labelName, deadline)
	if err != nil {
		return nil, fmt.Errorf("cannot obtain label values for %q from Prometheus storage: %w", labelName, err)
	}
	labelValues = mergeStrings(labelValues, promLabelValues)
	// Sort labelValues like Prometheus does
	sort.Strings(labelValues)
	qt.Printf("sort %d label values", len(labelValues))
	return labelValues, nil
}
// GetTagValueSuffixes returns tag value suffixes for the given tagKey and the given tagValuePrefix.
//
// It can be used for implementing https://graphite-api.readthedocs.io/en/latest/api.html#metrics-find
@@ -895,40 +845,6 @@ func GetTagValueSuffixes(qt *querytracer.Tracer, tr storage.TimeRange, tagKey, t
	return suffixes, nil
}
// GetLabelEntries returns all the label entries until the given deadline.
func GetLabelEntries(qt *querytracer.Tracer, deadline searchutils.Deadline) ([]storage.TagEntry, error) {
	qt = qt.NewChild("get label entries")
	defer qt.Done()
	if deadline.Exceeded() {
		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
	}
	labelEntries, err := vmstorage.SearchTagEntries(*maxTagKeysPerSearch, *maxTagValuesPerSearch, deadline.Deadline())
	if err != nil {
		return nil, fmt.Errorf("error during label entries request: %w", err)
	}
	qt.Printf("get %d label entries", len(labelEntries))

	// Substitute "" with "__name__"
	for i := range labelEntries {
		e := &labelEntries[i]
		if e.Key == "" {
			e.Key = "__name__"
		}
	}

	// Sort labelEntries by the number of label values in each entry.
	sort.Slice(labelEntries, func(i, j int) bool {
		a, b := labelEntries[i].Values, labelEntries[j].Values
		if len(a) != len(b) {
			return len(a) > len(b)
		}
		return labelEntries[i].Key > labelEntries[j].Key
	})
	qt.Printf("sort %d label entries", len(labelEntries))
	return labelEntries, nil
}
// GetTSDBStatusForDate returns tsdb status according to https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats
func GetTSDBStatusForDate(qt *querytracer.Tracer, deadline searchutils.Deadline, date uint64, topN, maxMetrics int) (*storage.TSDBStatus, error) {
	qt = qt.NewChild("get tsdb stats for date=%d, topN=%d", date, topN)
@@ -936,7 +852,7 @@ func GetTSDBStatusForDate(qt *querytracer.Tracer, deadline searchutils.Deadline,
	if deadline.Exceeded() {
		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
	}
	status, err := vmstorage.GetTSDBStatusForDate(qt, date, topN, maxMetrics, deadline.Deadline())
	if err != nil {
		return nil, fmt.Errorf("error during tsdb status request: %w", err)
	}
@@ -961,7 +877,7 @@ func GetTSDBStatusWithFilters(qt *querytracer.Tracer, deadline searchutils.Deadl
		return nil, err
	}
	date := uint64(tr.MinTimestamp) / (3600 * 24 * 1000)
	status, err := vmstorage.GetTSDBStatusWithFiltersForDate(qt, tfss, date, topN, sq.MaxMetrics, deadline.Deadline())
	if err != nil {
		return nil, fmt.Errorf("error during tsdb status with filters request: %w", err)
	}


@@ -1,17 +0,0 @@
{% import "github.com/VictoriaMetrics/VictoriaMetrics/lib/storage" %}

{% stripspace %}
LabelsCountResponse generates response for /api/v1/labels/count .
{% func LabelsCountResponse(labelEntries []storage.TagEntry) %}
{
	"status":"success",
	"data":{
		{% for i, e := range labelEntries %}
			{%q= e.Key %}:{%d= len(e.Values) %}
			{% if i+1 < len(labelEntries) %},{% endif %}
		{% endfor %}
	}
}
{% endfunc %}
{% endstripspace %}


@@ -1,74 +0,0 @@
// Code generated by qtc from "labels_count_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.

//line app/vmselect/prometheus/labels_count_response.qtpl:1
package prometheus

//line app/vmselect/prometheus/labels_count_response.qtpl:1
import "github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"

// LabelsCountResponse generates response for /api/v1/labels/count .

//line app/vmselect/prometheus/labels_count_response.qtpl:5
import (
	qtio422016 "io"

	qt422016 "github.com/valyala/quicktemplate"
)

//line app/vmselect/prometheus/labels_count_response.qtpl:5
var (
	_ = qtio422016.Copy
	_ = qt422016.AcquireByteBuffer
)

//line app/vmselect/prometheus/labels_count_response.qtpl:5
func StreamLabelsCountResponse(qw422016 *qt422016.Writer, labelEntries []storage.TagEntry) {
//line app/vmselect/prometheus/labels_count_response.qtpl:5
	qw422016.N().S(`{"status":"success","data":{`)
//line app/vmselect/prometheus/labels_count_response.qtpl:9
	for i, e := range labelEntries {
//line app/vmselect/prometheus/labels_count_response.qtpl:10
		qw422016.N().Q(e.Key)
//line app/vmselect/prometheus/labels_count_response.qtpl:10
		qw422016.N().S(`:`)
//line app/vmselect/prometheus/labels_count_response.qtpl:10
		qw422016.N().D(len(e.Values))
//line app/vmselect/prometheus/labels_count_response.qtpl:11
		if i+1 < len(labelEntries) {
//line app/vmselect/prometheus/labels_count_response.qtpl:11
			qw422016.N().S(`,`)
//line app/vmselect/prometheus/labels_count_response.qtpl:11
		}
//line app/vmselect/prometheus/labels_count_response.qtpl:12
	}
//line app/vmselect/prometheus/labels_count_response.qtpl:12
	qw422016.N().S(`}}`)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
}

//line app/vmselect/prometheus/labels_count_response.qtpl:15
func WriteLabelsCountResponse(qq422016 qtio422016.Writer, labelEntries []storage.TagEntry) {
//line app/vmselect/prometheus/labels_count_response.qtpl:15
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
	StreamLabelsCountResponse(qw422016, labelEntries)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
	qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
}

//line app/vmselect/prometheus/labels_count_response.qtpl:15
func LabelsCountResponse(labelEntries []storage.TagEntry) string {
//line app/vmselect/prometheus/labels_count_response.qtpl:15
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/labels_count_response.qtpl:15
	WriteLabelsCountResponse(qb422016, labelEntries)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
	qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/labels_count_response.qtpl:15
	return qs422016
//line app/vmselect/prometheus/labels_count_response.qtpl:15
}


@ -5,7 +5,6 @@ import (
"fmt" "fmt"
"math" "math"
"net/http" "net/http"
"sort"
"strconv" "strconv"
"strings" "strings"
"sync" "sync"
@ -22,7 +21,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime" "github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver" "github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer" "github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage" "github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics" "github.com/VictoriaMetrics/metrics"
@ -58,8 +56,10 @@ const defaultStep = 5 * 60 * 1000
func FederateHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error { func FederateHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer federateDuration.UpdateDuration(startTime) defer federateDuration.UpdateDuration(startTime)
ct := startTime.UnixNano() / 1e6 cp, err := getCommonParams(r, startTime, true)
deadline := searchutils.GetDeadlineForQuery(r, startTime) if err != nil {
return err
}
lookbackDelta, err := getMaxLookback(r) lookbackDelta, err := getMaxLookback(r)
if err != nil { if err != nil {
return err return err
@ -67,23 +67,11 @@ func FederateHandler(startTime time.Time, w http.ResponseWriter, r *http.Request
if lookbackDelta <= 0 { if lookbackDelta <= 0 {
lookbackDelta = defaultStep lookbackDelta = defaultStep
} }
start, err := searchutils.GetTime(r, "start", ct-lookbackDelta) if cp.IsDefaultTimeRange() {
if err != nil { cp.start = cp.end - lookbackDelta
return err
} }
end, err := searchutils.GetTime(r, "end", ct) sq := storage.NewSearchQuery(cp.start, cp.end, cp.filterss, *maxFederateSeries)
if err != nil { rss, err := netstorage.ProcessSearchQuery(nil, sq, true, cp.deadline)
return err
}
if start >= end {
start = end - defaultStep
}
tagFilterss, err := getTagFilterssFromRequest(r)
if err != nil {
return err
}
sq := storage.NewSearchQuery(start, end, tagFilterss, *maxFederateSeries)
rss, err := netstorage.ProcessSearchQuery(nil, sq, true, deadline)
if err != nil { if err != nil {
return fmt.Errorf("cannot fetch data for %q: %w", sq, err) return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
} }
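The refactored FederateHandler no longer parses start/end itself; it only overrides the range when the client sent none. A standalone sketch of that fallback (the helper name is illustrative; the 1000ms tolerance mirrors commonParams.IsDefaultTimeRange defined later in this diff):

```go
// federateRange returns the effective /federate time range: if the client
// supplied no explicit range (start==0 and end within 1s of now), fall back
// to the last lookbackDelta milliseconds.
func federateRange(start, end, now, lookbackDelta int64) (int64, int64) {
	if start == 0 && now-end < 1000 { // the IsDefaultTimeRange check
		start = end - lookbackDelta
	}
	return start, end
}
```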
@ -116,18 +104,19 @@ var federateDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/fe
func ExportCSVHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error { func ExportCSVHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer exportCSVDuration.UpdateDuration(startTime) defer exportCSVDuration.UpdateDuration(startTime)
cp, err := getExportParams(r, startTime)
if err != nil {
return err
}
format := r.FormValue("format") format := r.FormValue("format")
if len(format) == 0 { if len(format) == 0 {
return fmt.Errorf("missing `format` arg; see https://docs.victoriametrics.com/#how-to-export-csv-data") return fmt.Errorf("missing `format` arg; see https://docs.victoriametrics.com/#how-to-export-csv-data")
} }
fieldNames := strings.Split(format, ",") fieldNames := strings.Split(format, ",")
reduceMemUsage := searchutils.GetBool(r, "reduce_mem_usage") reduceMemUsage := searchutils.GetBool(r, "reduce_mem_usage")
ep, err := getExportParams(r, startTime)
if err != nil {
return err
}
sq := storage.NewSearchQuery(ep.start, ep.end, ep.filterss, *maxExportSeries) sq := storage.NewSearchQuery(cp.start, cp.end, cp.filterss, *maxExportSeries)
w.Header().Set("Content-Type", "text/csv; charset=utf-8") w.Header().Set("Content-Type", "text/csv; charset=utf-8")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
@ -143,7 +132,7 @@ func ExportCSVHandler(startTime time.Time, w http.ResponseWriter, r *http.Reques
} }
doneCh := make(chan error, 1) doneCh := make(chan error, 1)
if !reduceMemUsage { if !reduceMemUsage {
rss, err := netstorage.ProcessSearchQuery(nil, sq, true, ep.deadline) rss, err := netstorage.ProcessSearchQuery(nil, sq, true, cp.deadline)
if err != nil { if err != nil {
return fmt.Errorf("cannot fetch data for %q: %w", sq, err) return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
} }
@ -166,7 +155,7 @@ func ExportCSVHandler(startTime time.Time, w http.ResponseWriter, r *http.Reques
}() }()
} else { } else {
go func() { go func() {
err := netstorage.ExportBlocks(nil, sq, ep.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error { err := netstorage.ExportBlocks(nil, sq, cp.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
if err := bw.Error(); err != nil { if err := bw.Error(); err != nil {
return err return err
} }
@ -207,24 +196,24 @@ var exportCSVDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/a
func ExportNativeHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error { func ExportNativeHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer exportNativeDuration.UpdateDuration(startTime) defer exportNativeDuration.UpdateDuration(startTime)
ep, err := getExportParams(r, startTime) cp, err := getExportParams(r, startTime)
if err != nil { if err != nil {
return err return err
} }
sq := storage.NewSearchQuery(ep.start, ep.end, ep.filterss, *maxExportSeries) sq := storage.NewSearchQuery(cp.start, cp.end, cp.filterss, *maxExportSeries)
w.Header().Set("Content-Type", "VictoriaMetrics/native") w.Header().Set("Content-Type", "VictoriaMetrics/native")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
// Marshal tr // Marshal tr
trBuf := make([]byte, 0, 16) trBuf := make([]byte, 0, 16)
trBuf = encoding.MarshalInt64(trBuf, ep.start) trBuf = encoding.MarshalInt64(trBuf, cp.start)
trBuf = encoding.MarshalInt64(trBuf, ep.end) trBuf = encoding.MarshalInt64(trBuf, cp.end)
_, _ = bw.Write(trBuf) _, _ = bw.Write(trBuf)
// Marshal native blocks. // Marshal native blocks.
err = netstorage.ExportBlocks(nil, sq, ep.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error { err = netstorage.ExportBlocks(nil, sq, cp.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
if err := bw.Error(); err != nil { if err := bw.Error(); err != nil {
return err return err
} }
@ -269,22 +258,22 @@ var bbPool bytesutil.ByteBufferPool
func ExportHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error { func ExportHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer exportDuration.UpdateDuration(startTime) defer exportDuration.UpdateDuration(startTime)
ep, err := getExportParams(r, startTime) cp, err := getExportParams(r, startTime)
if err != nil { if err != nil {
return err return err
} }
format := r.FormValue("format") format := r.FormValue("format")
maxRowsPerLine := int(fastfloat.ParseInt64BestEffort(r.FormValue("max_rows_per_line"))) maxRowsPerLine := int(fastfloat.ParseInt64BestEffort(r.FormValue("max_rows_per_line")))
reduceMemUsage := searchutils.GetBool(r, "reduce_mem_usage") reduceMemUsage := searchutils.GetBool(r, "reduce_mem_usage")
if err := exportHandler(nil, w, ep, format, maxRowsPerLine, reduceMemUsage); err != nil { if err := exportHandler(nil, w, cp, format, maxRowsPerLine, reduceMemUsage); err != nil {
return fmt.Errorf("error when exporting data on the time range (start=%d, end=%d): %w", ep.start, ep.end, err) return fmt.Errorf("error when exporting data on the time range (start=%d, end=%d): %w", cp.start, cp.end, err)
} }
return nil return nil
} }
var exportDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/export"}`) var exportDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/export"}`)
func exportHandler(qt *querytracer.Tracer, w http.ResponseWriter, ep *exportParams, format string, maxRowsPerLine int, reduceMemUsage bool) error { func exportHandler(qt *querytracer.Tracer, w http.ResponseWriter, cp *commonParams, format string, maxRowsPerLine int, reduceMemUsage bool) error {
writeResponseFunc := WriteExportStdResponse writeResponseFunc := WriteExportStdResponse
writeLineFunc := func(xb *exportBlock, resultsCh chan<- *quicktemplate.ByteBuffer) { writeLineFunc := func(xb *exportBlock, resultsCh chan<- *quicktemplate.ByteBuffer) {
bb := quicktemplate.AcquireByteBuffer() bb := quicktemplate.AcquireByteBuffer()
@ -337,7 +326,7 @@ func exportHandler(qt *querytracer.Tracer, w http.ResponseWriter, ep *exportPara
} }
} }
sq := storage.NewSearchQuery(ep.start, ep.end, ep.filterss, *maxExportSeries) sq := storage.NewSearchQuery(cp.start, cp.end, cp.filterss, *maxExportSeries)
w.Header().Set("Content-Type", contentType) w.Header().Set("Content-Type", contentType)
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
@ -345,7 +334,7 @@ func exportHandler(qt *querytracer.Tracer, w http.ResponseWriter, ep *exportPara
resultsCh := make(chan *quicktemplate.ByteBuffer, cgroup.AvailableCPUs()) resultsCh := make(chan *quicktemplate.ByteBuffer, cgroup.AvailableCPUs())
doneCh := make(chan error, 1) doneCh := make(chan error, 1)
if !reduceMemUsage { if !reduceMemUsage {
rss, err := netstorage.ProcessSearchQuery(qt, sq, true, ep.deadline) rss, err := netstorage.ProcessSearchQuery(qt, sq, true, cp.deadline)
if err != nil { if err != nil {
return fmt.Errorf("cannot fetch data for %q: %w", sq, err) return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
} }
@ -371,7 +360,7 @@ func exportHandler(qt *querytracer.Tracer, w http.ResponseWriter, ep *exportPara
} else { } else {
qtChild := qt.NewChild("background export format=%s", format) qtChild := qt.NewChild("background export format=%s", format)
go func() { go func() {
err := netstorage.ExportBlocks(qtChild, sq, ep.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error { err := netstorage.ExportBlocks(qtChild, sq, cp.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
if err := bw.Error(); err != nil { if err := bw.Error(); err != nil {
return err return err
} }
@ -430,17 +419,15 @@ var exportBlockPool = &sync.Pool{
func DeleteHandler(startTime time.Time, r *http.Request) error { func DeleteHandler(startTime time.Time, r *http.Request) error {
defer deleteDuration.UpdateDuration(startTime) defer deleteDuration.UpdateDuration(startTime)
deadline := searchutils.GetDeadlineForQuery(r, startTime) cp, err := getCommonParams(r, startTime, true)
if r.FormValue("start") != "" || r.FormValue("end") != "" {
return fmt.Errorf("start and end aren't supported. Remove these args from the query in order to delete all the matching metrics")
}
tagFilterss, err := getTagFilterssFromRequest(r)
if err != nil { if err != nil {
return err return err
} }
ct := startTime.UnixNano() / 1e6 if !cp.IsDefaultTimeRange() {
sq := storage.NewSearchQuery(0, ct, tagFilterss, 0) return fmt.Errorf("start=%d and end=%d args aren't supported. Remove these args from the query in order to delete all the matching metrics", cp.start, cp.end)
deletedCount, err := netstorage.DeleteSeries(nil, sq, deadline) }
sq := storage.NewSearchQuery(cp.start, cp.end, cp.filterss, 0)
deletedCount, err := netstorage.DeleteSeries(nil, sq, cp.deadline)
if err != nil { if err != nil {
return fmt.Errorf("cannot delete time series: %w", err) return fmt.Errorf("cannot delete time series: %w", err)
} }
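Worth noting: VictoriaMetrics deletes whole series rather than individual samples, so DeleteHandler now rejects any explicit time range through the same default-range check instead of inspecting the raw form values. A minimal sketch of the guard (assumes `fmt`; the helper name is illustrative):

```go
// checkDeleteRange mirrors the guard above: an explicit start/end is an error.
func checkDeleteRange(start, end, now int64) error {
	if start != 0 || now-end >= 1000 { // i.e. !IsDefaultTimeRange
		return fmt.Errorf("start=%d and end=%d args aren't supported; remove them to delete all matching metrics", start, end)
	}
	return nil
}
```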
@ -458,60 +445,18 @@ var deleteDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/
func LabelValuesHandler(qt *querytracer.Tracer, startTime time.Time, labelName string, w http.ResponseWriter, r *http.Request) error { func LabelValuesHandler(qt *querytracer.Tracer, startTime time.Time, labelName string, w http.ResponseWriter, r *http.Request) error {
defer labelValuesDuration.UpdateDuration(startTime) defer labelValuesDuration.UpdateDuration(startTime)
deadline := searchutils.GetDeadlineForQuery(r, startTime) cp, err := getCommonParams(r, startTime, false)
etfs, err := searchutils.GetExtraTagFilters(r)
if err != nil { if err != nil {
return err return err
} }
matches := getMatchesFromRequest(r) limit, err := searchutils.GetInt(r, "limit")
var labelValues []string if err != nil {
if len(matches) == 0 && len(etfs) == 0 { return err
if len(r.Form["start"]) == 0 && len(r.Form["end"]) == 0 { }
var err error sq := storage.NewSearchQuery(cp.start, cp.end, cp.filterss, *maxUniqueTimeseries)
labelValues, err = netstorage.GetLabelValues(qt, labelName, deadline) labelValues, err := netstorage.GetLabelValues(qt, labelName, sq, limit, cp.deadline)
if err != nil { if err != nil {
return fmt.Errorf(`cannot obtain label values for %q: %w`, labelName, err) return fmt.Errorf("cannot obtain values for label %q: %w", labelName, err)
}
} else {
ct := startTime.UnixNano() / 1e6
end, err := searchutils.GetTime(r, "end", ct)
if err != nil {
return err
}
start, err := searchutils.GetTime(r, "start", end-defaultStep)
if err != nil {
return err
}
tr := storage.TimeRange{
MinTimestamp: start,
MaxTimestamp: end,
}
labelValues, err = netstorage.GetLabelValuesOnTimeRange(qt, labelName, tr, deadline)
if err != nil {
return fmt.Errorf(`cannot obtain label values on time range for %q: %w`, labelName, err)
}
}
} else {
// Extended functionality that allows filtering by label filters and time range
// i.e. /api/v1/label/foo/values?match[]=foobar{baz="abc"}&start=...&end=...
// is equivalent to `label_values(foobar{baz="abc"}, foo)` call on the selected
// time range in Grafana templating.
if len(matches) == 0 {
matches = []string{fmt.Sprintf("{%s!=''}", labelName)}
}
ct := startTime.UnixNano() / 1e6
end, err := searchutils.GetTime(r, "end", ct)
if err != nil {
return err
}
start, err := searchutils.GetTime(r, "start", end-defaultStep)
if err != nil {
return err
}
labelValues, err = labelValuesWithMatches(qt, labelName, matches, etfs, start, end, deadline)
if err != nil {
return fmt.Errorf("cannot obtain label values for %q, match[]=%q, start=%d, end=%d: %w", labelName, matches, start, end, err)
}
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json")
@ -524,99 +469,8 @@ func LabelValuesHandler(qt *querytracer.Tracer, startTime time.Time, labelName s
return nil return nil
} }
func labelValuesWithMatches(qt *querytracer.Tracer, labelName string, matches []string, etfs [][]storage.TagFilter,
start, end int64, deadline searchutils.Deadline) ([]string, error) {
tagFilterss, err := getTagFilterssFromMatches(matches)
if err != nil {
return nil, err
}
// Add `labelName!=''` tag filter in order to filter out series without the labelName.
// There is no need in adding `__name__!=''` filter, since all the time series should
// already have non-empty name.
if labelName != "__name__" {
key := []byte(labelName)
for i, tfs := range tagFilterss {
tagFilterss[i] = append(tfs, storage.TagFilter{
Key: key,
IsNegative: true,
})
}
}
if start >= end {
end = start + defaultStep
}
tagFilterss = searchutils.JoinTagFilterss(tagFilterss, etfs)
if len(tagFilterss) == 0 {
logger.Panicf("BUG: tagFilterss must be non-empty")
}
sq := storage.NewSearchQuery(start, end, tagFilterss, *maxSeriesLimit)
m := make(map[string]struct{})
if end-start > 24*3600*1000 {
// It is cheaper to call SearchMetricNames on time ranges exceeding a day.
mns, err := netstorage.SearchMetricNames(qt, sq, deadline)
if err != nil {
return nil, fmt.Errorf("cannot fetch time series for %q: %w", sq, err)
}
for _, mn := range mns {
labelValue := mn.GetTagValue(labelName)
if len(labelValue) == 0 {
continue
}
m[string(labelValue)] = struct{}{}
}
} else {
rss, err := netstorage.ProcessSearchQuery(qt, sq, false, deadline)
if err != nil {
return nil, fmt.Errorf("cannot fetch data for %q: %w", sq, err)
}
var mLock sync.Mutex
err = rss.RunParallel(qt, func(rs *netstorage.Result, workerID uint) error {
labelValue := rs.MetricName.GetTagValue(labelName)
if len(labelValue) == 0 {
return nil
}
mLock.Lock()
m[string(labelValue)] = struct{}{}
mLock.Unlock()
return nil
})
if err != nil {
return nil, fmt.Errorf("cannot fetch label values from storage: %w", err)
}
}
labelValues := make([]string, 0, len(m))
for labelValue := range m {
labelValues = append(labelValues, labelValue)
}
sort.Strings(labelValues)
qt.Printf("sort %d label values", len(labelValues))
return labelValues, nil
}
var labelValuesDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/label/{}/values"}`) var labelValuesDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/label/{}/values"}`)
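The deleted labelValuesWithMatches helper above collected values into a map, sorted them, and traced the count; that logic now lives behind netstorage.GetLabelValues, with the new optional ?limit=N arg applied to the sorted result (the limit placement is an assumption based on the handler code). A standalone sketch of the equivalent post-processing:

```go
package main

import (
	"fmt"
	"sort"
)

// dedupSortLimit reproduces the shape of the removed helper's post-processing:
// de-duplicate label values, sort them, then apply an optional limit.
func dedupSortLimit(values []string, limit int) []string {
	m := make(map[string]struct{}, len(values))
	for _, v := range values {
		m[v] = struct{}{}
	}
	out := make([]string, 0, len(m))
	for v := range m {
		out = append(out, v)
	}
	sort.Strings(out)
	if limit > 0 && limit < len(out) {
		out = out[:limit]
	}
	return out
}

func main() {
	fmt.Println(dedupSortLimit([]string{"b", "a", "b"}, 10)) // [a b]
}
```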
// LabelsCountHandler processes /api/v1/labels/count request.
func LabelsCountHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer labelsCountDuration.UpdateDuration(startTime)
deadline := searchutils.GetDeadlineForStatusRequest(r, startTime)
labelEntries, err := netstorage.GetLabelEntries(nil, deadline)
if err != nil {
return fmt.Errorf(`cannot obtain label entries: %w`, err)
}
w.Header().Set("Content-Type", "application/json")
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
WriteLabelsCountResponse(bw, labelEntries)
if err := bw.Flush(); err != nil {
return fmt.Errorf("cannot send labels count response to remote client: %w", err)
}
return nil
}
var labelsCountDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/labels/count"}`)
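With /api/v1/labels/count removed, the closest replacement is the labelValueCountByLabelName section of /api/v1/status/tsdb (see the template further down in this diff). A sketch of the relevant response shape for clients migrating off the old endpoint (field names taken from the template; the struct names are illustrative):

```go
// topHeapEntry matches the {"name":...,"value":...} objects emitted by
// tsdbStatusEntries in the template below.
type topHeapEntry struct {
	Name  string `json:"name"`
	Value int    `json:"value"`
}

// tsdbStatusData holds the part of the /api/v1/status/tsdb payload that
// supersedes /api/v1/labels/count.
type tsdbStatusData struct {
	LabelValueCountByLabelName []topHeapEntry `json:"labelValueCountByLabelName"`
}
```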
const secsPerDay = 3600 * 24 const secsPerDay = 3600 * 24
// TSDBStatusHandler processes /api/v1/status/tsdb request. // TSDBStatusHandler processes /api/v1/status/tsdb request.
@ -624,15 +478,14 @@ const secsPerDay = 3600 * 24
// See https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats // See https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats
// //
// It can accept `match[]` filters in order to narrow down the search. // It can accept `match[]` filters in order to narrow down the search.
func TSDBStatusHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error { func TSDBStatusHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer tsdbStatusDuration.UpdateDuration(startTime) defer tsdbStatusDuration.UpdateDuration(startTime)
deadline := searchutils.GetDeadlineForStatusRequest(r, startTime) cp, err := getCommonParams(r, startTime, false)
etfs, err := searchutils.GetExtraTagFilters(r)
if err != nil { if err != nil {
return err return err
} }
matches := getMatchesFromRequest(r) cp.deadline = searchutils.GetDeadlineForStatusRequest(r, startTime)
date := fasttime.UnixDate() date := fasttime.UnixDate()
dateStr := r.FormValue("date") dateStr := r.FormValue("date")
@ -659,13 +512,13 @@ func TSDBStatusHandler(startTime time.Time, w http.ResponseWriter, r *http.Reque
topN = n topN = n
} }
var status *storage.TSDBStatus var status *storage.TSDBStatus
if len(matches) == 0 && len(etfs) == 0 { if len(cp.filterss) == 0 {
status, err = netstorage.GetTSDBStatusForDate(nil, deadline, date, topN, *maxTSDBStatusSeries) status, err = netstorage.GetTSDBStatusForDate(qt, cp.deadline, date, topN, *maxTSDBStatusSeries)
if err != nil { if err != nil {
return fmt.Errorf(`cannot obtain tsdb status for date=%d, topN=%d: %w`, date, topN, err) return fmt.Errorf(`cannot obtain tsdb status for date=%d, topN=%d: %w`, date, topN, err)
} }
} else { } else {
status, err = tsdbStatusWithMatches(matches, etfs, date, topN, *maxTSDBStatusSeries, deadline) status, err = tsdbStatusWithMatches(qt, cp.filterss, date, topN, *maxTSDBStatusSeries, cp.deadline)
if err != nil { if err != nil {
return fmt.Errorf("cannot obtain tsdb status with matches for date=%d, topN=%d: %w", date, topN, err) return fmt.Errorf("cannot obtain tsdb status with matches for date=%d, topN=%d: %w", date, topN, err)
} }
@ -673,26 +526,18 @@ func TSDBStatusHandler(startTime time.Time, w http.ResponseWriter, r *http.Reque
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
WriteTSDBStatusResponse(bw, status) WriteTSDBStatusResponse(bw, status, qt)
if err := bw.Flush(); err != nil { if err := bw.Flush(); err != nil {
return fmt.Errorf("cannot send tsdb status response to remote client: %w", err) return fmt.Errorf("cannot send tsdb status response to remote client: %w", err)
} }
return nil return nil
} }
func tsdbStatusWithMatches(matches []string, etfs [][]storage.TagFilter, date uint64, topN, maxMetrics int, deadline searchutils.Deadline) (*storage.TSDBStatus, error) { func tsdbStatusWithMatches(qt *querytracer.Tracer, filterss [][]storage.TagFilter, date uint64, topN, maxMetrics int, deadline searchutils.Deadline) (*storage.TSDBStatus, error) {
tagFilterss, err := getTagFilterssFromMatches(matches)
if err != nil {
return nil, err
}
tagFilterss = searchutils.JoinTagFilterss(tagFilterss, etfs)
if len(tagFilterss) == 0 {
logger.Panicf("BUG: tagFilterss must be non-empty")
}
start := int64(date*secsPerDay) * 1000 start := int64(date*secsPerDay) * 1000
end := int64(date*secsPerDay+secsPerDay) * 1000 end := int64(date*secsPerDay+secsPerDay) * 1000
sq := storage.NewSearchQuery(start, end, tagFilterss, maxMetrics) sq := storage.NewSearchQuery(start, end, filterss, maxMetrics)
status, err := netstorage.GetTSDBStatusWithFilters(nil, deadline, sq, topN) status, err := netstorage.GetTSDBStatusWithFilters(qt, deadline, sq, topN)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -707,58 +552,18 @@ var tsdbStatusDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/
func LabelsHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, r *http.Request) error { func LabelsHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer labelsDuration.UpdateDuration(startTime) defer labelsDuration.UpdateDuration(startTime)
deadline := searchutils.GetDeadlineForQuery(r, startTime) cp, err := getCommonParams(r, startTime, false)
etfs, err := searchutils.GetExtraTagFilters(r)
if err != nil { if err != nil {
return err return err
} }
matches := getMatchesFromRequest(r) limit, err := searchutils.GetInt(r, "limit")
var labels []string if err != nil {
if len(matches) == 0 && len(etfs) == 0 { return err
if len(r.Form["start"]) == 0 && len(r.Form["end"]) == 0 { }
var err error sq := storage.NewSearchQuery(cp.start, cp.end, cp.filterss, *maxUniqueTimeseries)
labels, err = netstorage.GetLabels(qt, deadline) labels, err := netstorage.GetLabelNames(qt, sq, limit, cp.deadline)
if err != nil { if err != nil {
return fmt.Errorf("cannot obtain labels: %w", err) return fmt.Errorf("cannot obtain labels: %w", err)
}
} else {
ct := startTime.UnixNano() / 1e6
end, err := searchutils.GetTime(r, "end", ct)
if err != nil {
return err
}
start, err := searchutils.GetTime(r, "start", end-defaultStep)
if err != nil {
return err
}
tr := storage.TimeRange{
MinTimestamp: start,
MaxTimestamp: end,
}
labels, err = netstorage.GetLabelsOnTimeRange(qt, tr, deadline)
if err != nil {
return fmt.Errorf("cannot obtain labels on time range: %w", err)
}
}
} else {
// Extended functionality that allows filtering by label filters and time range
// i.e. /api/v1/labels?match[]=foobar{baz="abc"}&start=...&end=...
if len(matches) == 0 {
matches = []string{"{__name__!=''}"}
}
ct := startTime.UnixNano() / 1e6
end, err := searchutils.GetTime(r, "end", ct)
if err != nil {
return err
}
start, err := searchutils.GetTime(r, "start", end-defaultStep)
if err != nil {
return err
}
labels, err = labelsWithMatches(qt, matches, etfs, start, end, deadline)
if err != nil {
return fmt.Errorf("cannot obtain labels for match[]=%q, start=%d, end=%d: %w", matches, start, end, err)
}
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json")
@ -771,62 +576,6 @@ func LabelsHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseW
return nil return nil
} }
func labelsWithMatches(qt *querytracer.Tracer, matches []string, etfs [][]storage.TagFilter, start, end int64, deadline searchutils.Deadline) ([]string, error) {
tagFilterss, err := getTagFilterssFromMatches(matches)
if err != nil {
return nil, err
}
if start >= end {
end = start + defaultStep
}
tagFilterss = searchutils.JoinTagFilterss(tagFilterss, etfs)
if len(tagFilterss) == 0 {
logger.Panicf("BUG: tagFilterss must be non-empty")
}
sq := storage.NewSearchQuery(start, end, tagFilterss, *maxSeriesLimit)
m := make(map[string]struct{})
if end-start > 24*3600*1000 {
// It is cheaper to call SearchMetricNames on time ranges exceeding a day.
mns, err := netstorage.SearchMetricNames(qt, sq, deadline)
if err != nil {
return nil, fmt.Errorf("cannot fetch time series for %q: %w", sq, err)
}
for _, mn := range mns {
for _, tag := range mn.Tags {
m[string(tag.Key)] = struct{}{}
}
}
if len(mns) > 0 {
m["__name__"] = struct{}{}
}
} else {
rss, err := netstorage.ProcessSearchQuery(qt, sq, false, deadline)
if err != nil {
return nil, fmt.Errorf("cannot fetch data for %q: %w", sq, err)
}
var mLock sync.Mutex
err = rss.RunParallel(qt, func(rs *netstorage.Result, workerID uint) error {
mLock.Lock()
for _, tag := range rs.MetricName.Tags {
m[string(tag.Key)] = struct{}{}
}
m["__name__"] = struct{}{}
mLock.Unlock()
return nil
})
if err != nil {
return nil, fmt.Errorf("cannot fetch labels from storage: %w", err)
}
}
labels := make([]string, 0, len(m))
for label := range m {
labels = append(labels, label)
}
sort.Strings(labels)
qt.Printf("sort %d labels", len(labels))
return labels, nil
}
var labelsDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/labels"}`) var labelsDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/labels"}`)
// SeriesCountHandler processes /api/v1/series/count request. // SeriesCountHandler processes /api/v1/series/count request.
@ -856,9 +605,7 @@ var seriesCountDuration = metrics.NewSummary(`vm_request_duration_seconds{path="
func SeriesHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, r *http.Request) error { func SeriesHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, r *http.Request) error {
defer seriesDuration.UpdateDuration(startTime) defer seriesDuration.UpdateDuration(startTime)
deadline := searchutils.GetDeadlineForQuery(r, startTime) cp, err := getCommonParams(r, startTime, true)
ct := startTime.UnixNano() / 1e6
end, err := searchutils.GetTime(r, "end", ct)
if err != nil { if err != nil {
return err return err
} }
@ -867,25 +614,16 @@ func SeriesHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseW
// which can take a lot of time for big storages. // which can take a lot of time for big storages.
// It is better setting start as end-defaultStep by default. // It is better setting start as end-defaultStep by default.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/91 // See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/91
start, err := searchutils.GetTime(r, "start", end-defaultStep) if cp.start == 0 {
if err != nil { cp.start = cp.end - defaultStep
return err
} }
sq := storage.NewSearchQuery(cp.start, cp.end, cp.filterss, *maxSeriesLimit)
tagFilterss, err := getTagFilterssFromRequest(r)
if err != nil {
return err
}
if start >= end {
end = start + defaultStep
}
sq := storage.NewSearchQuery(start, end, tagFilterss, *maxSeriesLimit)
qtDone := func() { qtDone := func() {
qt.Donef("start=%d, end=%d", start, end) qt.Donef("start=%d, end=%d", cp.start, cp.end)
} }
if end-start > 24*3600*1000 { if cp.end-cp.start > 24*3600*1000 {
// It is cheaper to call SearchMetricNames on time ranges exceeding a day. // It is cheaper to call SearchMetricNames on time ranges exceeding a day.
mns, err := netstorage.SearchMetricNames(qt, sq, deadline) mns, err := netstorage.SearchMetricNames(qt, sq, cp.deadline)
if err != nil { if err != nil {
return fmt.Errorf("cannot fetch time series for %q: %w", sq, err) return fmt.Errorf("cannot fetch time series for %q: %w", sq, err)
} }
@ -909,7 +647,7 @@ func SeriesHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseW
seriesDuration.UpdateDuration(startTime) seriesDuration.UpdateDuration(startTime)
return nil return nil
} }
rss, err := netstorage.ProcessSearchQuery(qt, sq, false, deadline) rss, err := netstorage.ProcessSearchQuery(qt, sq, false, cp.deadline)
if err != nil { if err != nil {
return fmt.Errorf("cannot fetch data for %q: %w", sq, err) return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
} }
@ -1000,13 +738,13 @@ func QueryHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWr
} }
filterss := searchutils.JoinTagFilterss(tagFilterss, etfs) filterss := searchutils.JoinTagFilterss(tagFilterss, etfs)
ep := &exportParams{ cp := &commonParams{
deadline: deadline, deadline: deadline,
start: start, start: start,
end: end, end: end,
filterss: filterss, filterss: filterss,
} }
if err := exportHandler(qt, w, ep, "promapi", 0, false); err != nil { if err := exportHandler(qt, w, cp, "promapi", 0, false); err != nil {
return fmt.Errorf("error when exporting data for query=%q on the time range (start=%d, end=%d): %w", childQuery, start, end, err) return fmt.Errorf("error when exporting data for query=%q on the time range (start=%d, end=%d): %w", childQuery, start, end, err)
} }
queryDuration.UpdateDuration(startTime) queryDuration.UpdateDuration(startTime)
@ -1268,30 +1006,6 @@ func getTagFilterssFromMatches(matches []string) ([][]storage.TagFilter, error)
return tagFilterss, nil return tagFilterss, nil
} }
func getTagFilterssFromRequest(r *http.Request) ([][]storage.TagFilter, error) {
matches := getMatchesFromRequest(r)
if len(matches) == 0 {
return nil, fmt.Errorf("missing `match[]` query arg")
}
tagFilterss, err := getTagFilterssFromMatches(matches)
if err != nil {
return nil, err
}
etfs, err := searchutils.GetExtraTagFilters(r)
if err != nil {
return nil, err
}
tagFilterss = searchutils.JoinTagFilterss(tagFilterss, etfs)
return tagFilterss, nil
}
func getMatchesFromRequest(r *http.Request) []string {
matches := r.Form["match[]"]
// This is needed for backwards compatibility
matches = append(matches, r.Form["match"]...)
return matches
}
func getRoundDigits(r *http.Request) int { func getRoundDigits(r *http.Request) int {
s := r.FormValue("round_digits") s := r.FormValue("round_digits")
if len(s) == 0 { if len(s) == 0 {
@ -1342,19 +1056,50 @@ func QueryStatsHandler(startTime time.Time, w http.ResponseWriter, r *http.Reque
var queryStatsDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/status/top_queries"}`) var queryStatsDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/status/top_queries"}`)
// exportParams contains common parameters for all /api/v1/export* handlers // commonParams contains common parameters for all /api/v1/* handlers
// //
// deadline, start, end, match[], extra_label, extra_filters // timeout, start, end, match[], extra_label, extra_filters[]
type exportParams struct { type commonParams struct {
deadline searchutils.Deadline deadline searchutils.Deadline
start int64 start int64
end int64 end int64
filterss [][]storage.TagFilter currentTimestamp int64
filterss [][]storage.TagFilter
} }
// getExportParams obtains common params from r, which are used for /api/v1/export* handlers func (cp *commonParams) IsDefaultTimeRange() bool {
func getExportParams(r *http.Request, startTime time.Time) (*exportParams, error) { return cp.start == 0 && cp.currentTimestamp-cp.end < 1000
deadline := searchutils.GetDeadlineForExport(r, startTime) }
// getExportParams obtains common params from r, which are used in /api/v1/export* handlers
//
// - timeout
// - start
// - end
// - match[]
// - extra_label
// - extra_filters[]
//
func getExportParams(r *http.Request, startTime time.Time) (*commonParams, error) {
cp, err := getCommonParams(r, startTime, true)
if err != nil {
return nil, err
}
cp.deadline = searchutils.GetDeadlineForExport(r, startTime)
return cp, nil
}
// getCommonParams obtains common params from r, which are used in /api/v1/* handlers:
//
// - timeout
// - start
// - end
// - match[]
// - extra_label
// - extra_filters[]
//
func getCommonParams(r *http.Request, startTime time.Time, requireNonEmptyMatch bool) (*commonParams, error) {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
start, err := searchutils.GetTime(r, "start", 0) start, err := searchutils.GetTime(r, "start", 0)
if err != nil { if err != nil {
return nil, err return nil, err
@ -1367,15 +1112,10 @@ func getExportParams(r *http.Request, startTime time.Time) (*exportParams, error
if end < start { if end < start {
end = start end = start
} }
matches := append([]string{}, r.Form["match[]"]...)
matches := r.Form["match[]"] matches = append(matches, r.Form["match"]...)
if len(matches) == 0 { if requireNonEmptyMatch && len(matches) == 0 {
// Maintain backwards compatibility return nil, fmt.Errorf("missing `match[]` arg")
match := r.FormValue("match")
if len(match) == 0 {
return nil, fmt.Errorf("missing `match[]` arg")
}
matches = []string{match}
} }
tagFilterss, err := getTagFilterssFromMatches(matches) tagFilterss, err := getTagFilterssFromMatches(matches)
if err != nil { if err != nil {
@ -1386,11 +1126,12 @@ func getExportParams(r *http.Request, startTime time.Time) (*exportParams, error
return nil, err return nil, err
} }
filterss := searchutils.JoinTagFilterss(tagFilterss, etfs) filterss := searchutils.JoinTagFilterss(tagFilterss, etfs)
cp := &commonParams{
return &exportParams{ deadline: deadline,
deadline: deadline, start: start,
start: start, end: end,
end: end, currentTimestamp: ct,
filterss: filterss, filterss: filterss,
}, nil }
return cp, nil
} }


@ -1,8 +1,11 @@
{% import "github.com/VictoriaMetrics/VictoriaMetrics/lib/storage" %} {% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
) %}
{% stripspace %} {% stripspace %}
TSDBStatusResponse generates response for /api/v1/status/tsdb . TSDBStatusResponse generates response for /api/v1/status/tsdb .
{% func TSDBStatusResponse(status *storage.TSDBStatus) %} {% func TSDBStatusResponse(status *storage.TSDBStatus, qt *querytracer.Tracer) %}
{ {
"status":"success", "status":"success",
"data":{ "data":{
@ -12,6 +15,8 @@ TSDBStatusResponse generates response for /api/v1/status/tsdb .
"seriesCountByLabelValuePair":{%= tsdbStatusEntries(status.SeriesCountByLabelValuePair) %}, "seriesCountByLabelValuePair":{%= tsdbStatusEntries(status.SeriesCountByLabelValuePair) %},
"labelValueCountByLabelName":{%= tsdbStatusEntries(status.LabelValueCountByLabelName) %} "labelValueCountByLabelName":{%= tsdbStatusEntries(status.LabelValueCountByLabelName) %}
} }
{% code qt.Done() %}
{%= dumpQueryTrace(qt) %}
} }
{% endfunc %} {% endfunc %}


@ -5,127 +5,137 @@
package prometheus package prometheus
//line app/vmselect/prometheus/tsdb_status_response.qtpl:1 //line app/vmselect/prometheus/tsdb_status_response.qtpl:1
import "github.com/VictoriaMetrics/VictoriaMetrics/lib/storage" import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)
// TSDBStatusResponse generates response for /api/v1/status/tsdb . // TSDBStatusResponse generates response for /api/v1/status/tsdb .
//line app/vmselect/prometheus/tsdb_status_response.qtpl:5 //line app/vmselect/prometheus/tsdb_status_response.qtpl:8
import ( import (
qtio422016 "io" qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate" qt422016 "github.com/valyala/quicktemplate"
) )
//line app/vmselect/prometheus/tsdb_status_response.qtpl:5 //line app/vmselect/prometheus/tsdb_status_response.qtpl:8
var ( var (
_ = qtio422016.Copy _ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer _ = qt422016.AcquireByteBuffer
) )
//line app/vmselect/prometheus/tsdb_status_response.qtpl:5 //line app/vmselect/prometheus/tsdb_status_response.qtpl:8
func StreamTSDBStatusResponse(qw422016 *qt422016.Writer, status *storage.TSDBStatus) { func StreamTSDBStatusResponse(qw422016 *qt422016.Writer, status *storage.TSDBStatus, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:5 //line app/vmselect/prometheus/tsdb_status_response.qtpl:8
qw422016.N().S(`{"status":"success","data":{"totalSeries":`) qw422016.N().S(`{"status":"success","data":{"totalSeries":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9 //line app/vmselect/prometheus/tsdb_status_response.qtpl:12
qw422016.N().DUL(status.TotalSeries) qw422016.N().DUL(status.TotalSeries)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:9 //line app/vmselect/prometheus/tsdb_status_response.qtpl:12
qw422016.N().S(`,"totalLabelValuePairs":`) qw422016.N().S(`,"totalLabelValuePairs":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:10 //line app/vmselect/prometheus/tsdb_status_response.qtpl:13
qw422016.N().DUL(status.TotalLabelValuePairs) qw422016.N().DUL(status.TotalLabelValuePairs)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:10 //line app/vmselect/prometheus/tsdb_status_response.qtpl:13
qw422016.N().S(`,"seriesCountByMetricName":`) qw422016.N().S(`,"seriesCountByMetricName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:11 //line app/vmselect/prometheus/tsdb_status_response.qtpl:14
streamtsdbStatusEntries(qw422016, status.SeriesCountByMetricName) streamtsdbStatusEntries(qw422016, status.SeriesCountByMetricName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:11 //line app/vmselect/prometheus/tsdb_status_response.qtpl:14
qw422016.N().S(`,"seriesCountByLabelValuePair":`) qw422016.N().S(`,"seriesCountByLabelValuePair":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:12 //line app/vmselect/prometheus/tsdb_status_response.qtpl:15
streamtsdbStatusEntries(qw422016, status.SeriesCountByLabelValuePair) streamtsdbStatusEntries(qw422016, status.SeriesCountByLabelValuePair)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:12 //line app/vmselect/prometheus/tsdb_status_response.qtpl:15
qw422016.N().S(`,"labelValueCountByLabelName":`) qw422016.N().S(`,"labelValueCountByLabelName":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13 //line app/vmselect/prometheus/tsdb_status_response.qtpl:16
streamtsdbStatusEntries(qw422016, status.LabelValueCountByLabelName) streamtsdbStatusEntries(qw422016, status.LabelValueCountByLabelName)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:13
qw422016.N().S(`}}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16 //line app/vmselect/prometheus/tsdb_status_response.qtpl:16
} qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
func WriteTSDBStatusResponse(qq422016 qtio422016.Writer, status *storage.TSDBStatus) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
StreamTSDBStatusResponse(qw422016, status)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
func TSDBStatusResponse(status *storage.TSDBStatus) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
WriteTSDBStatusResponse(qb422016, status)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:16
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18 //line app/vmselect/prometheus/tsdb_status_response.qtpl:18
qt.Done()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:19
streamdumpQueryTrace(qw422016, qt)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:19
qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
func WriteTSDBStatusResponse(qq422016 qtio422016.Writer, status *storage.TSDBStatus, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
StreamTSDBStatusResponse(qw422016, status, qt)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
func TSDBStatusResponse(status *storage.TSDBStatus, qt *querytracer.Tracer) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
WriteTSDBStatusResponse(qb422016, status, qt)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:21
}
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23
func streamtsdbStatusEntries(qw422016 *qt422016.Writer, a []storage.TopHeapEntry) { func streamtsdbStatusEntries(qw422016 *qt422016.Writer, a []storage.TopHeapEntry) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:18 //line app/vmselect/prometheus/tsdb_status_response.qtpl:23
qw422016.N().S(`[`) qw422016.N().S(`[`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:20 //line app/vmselect/prometheus/tsdb_status_response.qtpl:25
for i, e := range a { for i, e := range a {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:20 //line app/vmselect/prometheus/tsdb_status_response.qtpl:25
qw422016.N().S(`{"name":`) qw422016.N().S(`{"name":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:22 //line app/vmselect/prometheus/tsdb_status_response.qtpl:27
qw422016.N().Q(e.Name) qw422016.N().Q(e.Name)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:22 //line app/vmselect/prometheus/tsdb_status_response.qtpl:27
qw422016.N().S(`,"value":`) qw422016.N().S(`,"value":`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23 //line app/vmselect/prometheus/tsdb_status_response.qtpl:28
qw422016.N().D(int(e.Count)) qw422016.N().D(int(e.Count))
//line app/vmselect/prometheus/tsdb_status_response.qtpl:23 //line app/vmselect/prometheus/tsdb_status_response.qtpl:28
qw422016.N().S(`}`) qw422016.N().S(`}`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:25 //line app/vmselect/prometheus/tsdb_status_response.qtpl:30
if i+1 < len(a) { if i+1 < len(a) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:25 //line app/vmselect/prometheus/tsdb_status_response.qtpl:30
qw422016.N().S(`,`) qw422016.N().S(`,`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:25 //line app/vmselect/prometheus/tsdb_status_response.qtpl:30
} }
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26 //line app/vmselect/prometheus/tsdb_status_response.qtpl:31
} }
//line app/vmselect/prometheus/tsdb_status_response.qtpl:26 //line app/vmselect/prometheus/tsdb_status_response.qtpl:31
qw422016.N().S(`]`) qw422016.N().S(`]`)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
} }
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
func writetsdbStatusEntries(qq422016 qtio422016.Writer, a []storage.TopHeapEntry) { func writetsdbStatusEntries(qq422016 qtio422016.Writer, a []storage.TopHeapEntry) {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
qw422016 := qt422016.AcquireWriter(qq422016) qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
streamtsdbStatusEntries(qw422016, a) streamtsdbStatusEntries(qw422016, a)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
qt422016.ReleaseWriter(qw422016) qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
} }
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
func tsdbStatusEntries(a []storage.TopHeapEntry) string { func tsdbStatusEntries(a []storage.TopHeapEntry) string {
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
qb422016 := qt422016.AcquireByteBuffer() qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
writetsdbStatusEntries(qb422016, a) writetsdbStatusEntries(qb422016, a)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
qs422016 := string(qb422016.B) qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
qt422016.ReleaseByteBuffer(qb422016) qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
return qs422016 return qs422016
//line app/vmselect/prometheus/tsdb_status_response.qtpl:28 //line app/vmselect/prometheus/tsdb_status_response.qtpl:33
} }


@ -25,6 +25,19 @@ func roundToSeconds(ms int64) int64 {
return ms - ms%1000 return ms - ms%1000
} }
// GetInt returns the integer value from the given argKey query arg.
func GetInt(r *http.Request, argKey string) (int, error) {
argValue := r.FormValue(argKey)
if len(argValue) == 0 {
return 0, nil
}
n, err := strconv.Atoi(argValue)
if err != nil {
return 0, fmt.Errorf("cannot parse integer %q=%q: %w", argKey, argValue, err)
}
return n, nil
}
// GetTime returns time from the given argKey query arg. // GetTime returns time from the given argKey query arg.
// //
// If argKey is missing in r, then defaultMs rounded to seconds is returned. // If argKey is missing in r, then defaultMs rounded to seconds is returned.
@ -213,12 +226,16 @@ func GetExtraTagFilters(r *http.Request) ([][]storage.TagFilter, error) {
if len(tmp) != 2 { if len(tmp) != 2 {
return nil, fmt.Errorf("`extra_label` query arg must have the format `name=value`; got %q", match) return nil, fmt.Errorf("`extra_label` query arg must have the format `name=value`; got %q", match)
} }
if tmp[0] == "__name__" {
// This is required for storage.Search.
tmp[0] = ""
}
tagFilters = append(tagFilters, storage.TagFilter{ tagFilters = append(tagFilters, storage.TagFilter{
Key: []byte(tmp[0]), Key: []byte(tmp[0]),
Value: []byte(tmp[1]), Value: []byte(tmp[1]),
}) })
} }
extraFilters := r.Form["extra_filters"] extraFilters := append([]string{}, r.Form["extra_filters"]...)
extraFilters = append(extraFilters, r.Form["extra_filters[]"]...) extraFilters = append(extraFilters, r.Form["extra_filters[]"]...)
if len(extraFilters) == 0 { if len(extraFilters) == 0 {
if len(tagFilters) == 0 { if len(tagFilters) == 0 {
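Two behavior notes on this searchutils hunk: `extra_label=__name__=<value>` is rewritten to an empty tag key because storage.Search addresses the metric name that way, and the extra_filters slice is now copied before appending, presumably to avoid aliasing r.Form's backing array. A standalone sketch of the label rewrite (types are simplified stand-ins for storage.TagFilter):

```go
package main

import (
	"fmt"
	"strings"
)

type tagFilter struct{ Key, Value string }

// extraLabelToFilter converts an extra_label=name=value arg into a filter,
// mapping __name__ to the empty key used by storage.Search for metric names.
func extraLabelToFilter(arg string) (tagFilter, error) {
	kv := strings.SplitN(arg, "=", 2)
	if len(kv) != 2 {
		return tagFilter{}, fmt.Errorf("`extra_label` query arg must have the format `name=value`; got %q", arg)
	}
	if kv[0] == "__name__" {
		kv[0] = "" // storage.Search stores the metric name under the empty tag key
	}
	return tagFilter{Key: kv[0], Value: kv[1]}, nil
}

func main() {
	f, _ := extraLabelToFilter("__name__=node_cpu_seconds_total")
	fmt.Printf("%+v\n", f)
}
```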


@ -1,12 +1,12 @@
{ {
"files": { "files": {
"main.css": "./static/css/main.d8362c27.css", "main.css": "./static/css/main.7e6d0c89.css",
"main.js": "./static/js/main.105dbc4f.js", "main.js": "./static/js/main.42cb1c78.js",
"static/js/27.939f971b.chunk.js": "./static/js/27.939f971b.chunk.js", "static/js/27.939f971b.chunk.js": "./static/js/27.939f971b.chunk.js",
"index.html": "./index.html" "index.html": "./index.html"
}, },
"entrypoints": [ "entrypoints": [
"static/css/main.d8362c27.css", "static/css/main.7e6d0c89.css",
"static/js/main.105dbc4f.js" "static/js/main.42cb1c78.js"
] ]
} }


@ -1 +1 @@
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script src="./dashboards/index.js" type="module"></script><script defer="defer" src="./static/js/main.105dbc4f.js"></script><link href="./static/css/main.d8362c27.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html> <!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script src="./dashboards/index.js" type="module"></script><script defer="defer" src="./static/js/main.42cb1c78.js"></script><link href="./static/css/main.7e6d0c89.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>


@ -1 +1 @@
File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@@ -183,36 +183,21 @@ func SearchMetricNames(qt *querytracer.Tracer, tfss []*storage.TagFilters, tr st
 	return mns, err
 }

-// SearchTagKeysOnTimeRange searches for tag keys on tr.
-func SearchTagKeysOnTimeRange(tr storage.TimeRange, maxTagKeys int, deadline uint64) ([]string, error) {
+// SearchLabelNamesWithFiltersOnTimeRange searches for tag keys matching the given tfss on tr.
+func SearchLabelNamesWithFiltersOnTimeRange(qt *querytracer.Tracer, tfss []*storage.TagFilters, tr storage.TimeRange, maxTagKeys, maxMetrics int, deadline uint64) ([]string, error) {
 	WG.Add(1)
-	keys, err := Storage.SearchTagKeysOnTimeRange(tr, maxTagKeys, deadline)
+	labelNames, err := Storage.SearchLabelNamesWithFiltersOnTimeRange(qt, tfss, tr, maxTagKeys, maxMetrics, deadline)
 	WG.Done()
-	return keys, err
+	return labelNames, err
 }

-// SearchTagKeys searches for tag keys
-func SearchTagKeys(maxTagKeys int, deadline uint64) ([]string, error) {
+// SearchLabelValuesWithFiltersOnTimeRange searches for label values for the given labelName, tfss and tr.
+func SearchLabelValuesWithFiltersOnTimeRange(qt *querytracer.Tracer, labelName string, tfss []*storage.TagFilters,
+	tr storage.TimeRange, maxLabelValues, maxMetrics int, deadline uint64) ([]string, error) {
 	WG.Add(1)
-	keys, err := Storage.SearchTagKeys(maxTagKeys, deadline)
+	labelValues, err := Storage.SearchLabelValuesWithFiltersOnTimeRange(qt, labelName, tfss, tr, maxLabelValues, maxMetrics, deadline)
 	WG.Done()
-	return keys, err
-}
-
-// SearchTagValuesOnTimeRange searches for tag values for the given tagKey on tr.
-func SearchTagValuesOnTimeRange(tagKey []byte, tr storage.TimeRange, maxTagValues int, deadline uint64) ([]string, error) {
-	WG.Add(1)
-	values, err := Storage.SearchTagValuesOnTimeRange(tagKey, tr, maxTagValues, deadline)
-	WG.Done()
-	return values, err
-}
-
-// SearchTagValues searches for tag values for the given tagKey
-func SearchTagValues(tagKey []byte, maxTagValues int, deadline uint64) ([]string, error) {
-	WG.Add(1)
-	values, err := Storage.SearchTagValues(tagKey, maxTagValues, deadline)
-	WG.Done()
-	return values, err
+	return labelValues, err
 }

 // SearchTagValueSuffixes returns all the tag value suffixes for the given tagKey and tagValuePrefix on the given tr.
@@ -233,26 +218,18 @@ func SearchGraphitePaths(tr storage.TimeRange, query []byte, maxPaths int, deadl
 	return paths, err
 }

-// SearchTagEntries searches for tag entries.
-func SearchTagEntries(maxTagKeys, maxTagValues int, deadline uint64) ([]storage.TagEntry, error) {
-	WG.Add(1)
-	tagEntries, err := Storage.SearchTagEntries(maxTagKeys, maxTagValues, deadline)
-	WG.Done()
-	return tagEntries, err
-}
-
 // GetTSDBStatusForDate returns TSDB status for the given date.
-func GetTSDBStatusForDate(date uint64, topN, maxMetrics int, deadline uint64) (*storage.TSDBStatus, error) {
+func GetTSDBStatusForDate(qt *querytracer.Tracer, date uint64, topN, maxMetrics int, deadline uint64) (*storage.TSDBStatus, error) {
 	WG.Add(1)
-	status, err := Storage.GetTSDBStatusWithFiltersForDate(nil, date, topN, maxMetrics, deadline)
+	status, err := Storage.GetTSDBStatusWithFiltersForDate(qt, nil, date, topN, maxMetrics, deadline)
 	WG.Done()
 	return status, err
 }

 // GetTSDBStatusWithFiltersForDate returns TSDB status for given filters on the given date.
-func GetTSDBStatusWithFiltersForDate(tfss []*storage.TagFilters, date uint64, topN, maxMetrics int, deadline uint64) (*storage.TSDBStatus, error) {
+func GetTSDBStatusWithFiltersForDate(qt *querytracer.Tracer, tfss []*storage.TagFilters, date uint64, topN, maxMetrics int, deadline uint64) (*storage.TSDBStatus, error) {
 	WG.Add(1)
-	status, err := Storage.GetTSDBStatusWithFiltersForDate(tfss, date, topN, maxMetrics, deadline)
+	status, err := Storage.GetTSDBStatusWithFiltersForDate(qt, tfss, date, topN, maxMetrics, deadline)
 	WG.Done()
 	return status, err
 }

View file

@@ -112,15 +112,6 @@ func GetLabelNamesOnTimeRange(tr storage.TimeRange, deadline searchutils.Deadlin
 	return names, err
 }

-// GetLabelNames returns label names.
-func GetLabelNames(deadline searchutils.Deadline) ([]string, error) {
-	tr := storage.TimeRange{
-		MinTimestamp: 0,
-		MaxTimestamp: time.Now().UnixNano() / 1e6,
-	}
-	return GetLabelNamesOnTimeRange(tr, deadline)
-}
-
 // GetLabelValuesOnTimeRange returns values for the given labelName on the given tr.
 func GetLabelValuesOnTimeRange(labelName string, tr storage.TimeRange, deadline searchutils.Deadline) ([]string, error) {
 	d := time.Unix(int64(deadline.Deadline()), 0)
@@ -138,15 +129,6 @@ func GetLabelValuesOnTimeRange(labelName string, tr storage.TimeRange, deadline
 	return values, err
 }

-// GetLabelValues returns values for the given labelName.
-func GetLabelValues(labelName string, deadline searchutils.Deadline) ([]string, error) {
-	tr := storage.TimeRange{
-		MinTimestamp: 0,
-		MaxTimestamp: time.Now().UnixNano() / 1e6,
-	}
-	return GetLabelValuesOnTimeRange(labelName, tr, deadline)
-}
-
 func copyStringsWithMemory(a []string) []string {
 	result := make([]string, len(a))
 	for i, s := range a {

View file

@@ -1,4 +1,4 @@
-import React, {FC, useEffect, useRef} from "preact/compat";
+import React, {FC, useState} from "preact/compat";
 import Box from "@mui/material/Box";
 import IconButton from "@mui/material/IconButton";
 import Tooltip from "@mui/material/Tooltip";
@@ -18,15 +18,12 @@ export interface QueryConfiguratorProps {
 const QueryConfigurator: FC<QueryConfiguratorProps> = ({error, queryOptions}) => {

   const {query, queryHistory, queryControls: {autocomplete}} = useAppState();
+  const [stateQuery, setStateQuery] = useState(query || []);
   const dispatch = useAppDispatch();
-  const queryRef = useRef(query);
-
-  useEffect(() => {
-    queryRef.current = query;
-  }, [query]);

   const updateHistory = () => {
     dispatch({
-      type: "SET_QUERY_HISTORY", payload: query.map((q, i) => {
+      type: "SET_QUERY_HISTORY", payload: stateQuery.map((q, i) => {
        const h = queryHistory[i] || {values: []};
        const queryEqual = q === h.values[h.values.length - 1];
        return {
@@ -39,22 +36,20 @@ const QueryConfigurator: FC<QueryConfiguratorProps> = ({error, queryOptions}) =>
   const onRunQuery = () => {
     updateHistory();
-    dispatch({type: "SET_QUERY", payload: query});
+    dispatch({type: "SET_QUERY", payload: stateQuery});
     dispatch({type: "RUN_QUERY"});
   };

-  const onAddQuery = () => dispatch({type: "SET_QUERY", payload: [...queryRef.current, ""]});
+  const onAddQuery = () => {
+    setStateQuery(prev => [...prev, ""]);
+  };

   const onRemoveQuery = (index: number) => {
-    const newQuery = [...queryRef.current];
-    newQuery.splice(index, 1);
-    dispatch({type: "SET_QUERY", payload: newQuery});
+    setStateQuery(prev => prev.filter((q, i) => i !== index));
   };

   const onSetQuery = (value: string, index: number) => {
-    const newQuery = [...queryRef.current];
-    newQuery[index] = value;
-    dispatch({type: "SET_QUERY", payload: newQuery});
+    setStateQuery(prev => prev.map((q, i) => i === index ? value : q));
   };

   const setHistoryIndex = (step: number, indexQuery: number) => {
@@ -69,11 +64,11 @@ const QueryConfigurator: FC<QueryConfiguratorProps> = ({error, queryOptions}) =>
   };

   return <Box boxShadow="rgba(99, 99, 99, 0.2) 0px 2px 8px 0px;" p={4} pb={2} m={-4} mb={2}>
     <Box>
-      {query.map((q, i) =>
+      {stateQuery.map((q, i) =>
        <Box key={i} display="grid" gridTemplateColumns="1fr auto auto" gap="4px" width="100%"
-          mb={i === query.length - 1 ? 0 : 2.5}>
+          mb={i === stateQuery.length - 1 ? 0 : 2.5}>
          <QueryEditor
-            query={query[i]} index={i} autocomplete={autocomplete} queryOptions={queryOptions}
+            query={stateQuery[i]} index={i} autocomplete={autocomplete} queryOptions={queryOptions}
            error={error} setHistoryIndex={setHistoryIndex} runQuery={onRunQuery} setQuery={onSetQuery}
            label={`Query ${i + 1}`}/>
          {i === 0 && <Tooltip title="Execute Query">
@@ -81,7 +76,7 @@ const QueryConfigurator: FC<QueryConfiguratorProps> = ({error, queryOptions}) =>
            <PlayCircleOutlineIcon/>
          </IconButton>
        </Tooltip>}
-        {query.length < 2 && <Tooltip title="Add Query">
+        {stateQuery.length < 2 && <Tooltip title="Add Query">
          <IconButton onClick={onAddQuery} sx={{height: "49px", width: "49px"}}>
            <AddCircleOutlineIcon/>
          </IconButton>

View file

@@ -76,7 +76,7 @@ export const TimeSelector: FC = () => {
        }}
        startIcon={<QueryBuilderIcon/>}
        onClick={(e) => setAnchorEl(e.currentTarget)}>
-        {relativeTime
+        {relativeTime && relativeTime !== "none"
          ? relativeTime.replace(/_/g, " ")
          : `${formatRange.start} - ${formatRange.end}`}
      </Button>

View file

@@ -28,13 +28,13 @@ const Legend: FC<LegendProps> = ({labels, query, onChange}) => {
    <div className="legendWrapper">
      {groups.map((group) => <div className="legendGroup" key={group}>
        <div className="legendGroupTitle">
-          <span className="legendGroupQuery">Query {group}</span>
          <svg className="legendGroupLine" width="33" height="3" version="1.1" xmlns="http://www.w3.org/2000/svg">
            <line strokeWidth="3" x1="0" y1="0" x2="33" y2="0" stroke="#363636"
              strokeDasharray={getDashLine(group).join(",")}
            />
          </svg>
-          <b>&quot;{query[group - 1]}&quot;:</b>
+          <span className="legendGroupQuery">Query {group}</span>
+          <span>(&quot;{query[group - 1]}&quot;)</span>
        </div>
        <div>
          {labels.filter(l => l.group === group).map((legendItem: LegendItem) =>

View file

@@ -8,19 +8,21 @@
 .legendGroup {
   margin: 0 12px 24px 0;
+  padding: 10px 6px;
 }

 .legendGroupTitle {
-  display: grid;
-  grid-template-columns: 43px auto;
+  display: flex;
   align-items: center;
-  padding: 10px;
+  padding: 0 10px 5px;
+  margin-bottom: 5px;
   font-size: 11px;
+  border-bottom: 1px solid #ECEBE6;
 }

 .legendGroupQuery {
-  grid-column: 1/3;
-  opacity: 0.6;
+  font-weight: bold;
+  margin-right: 4px;
 }

 .legendGroupLine {
@@ -97,4 +99,4 @@
   font-size: 10px;
   color: #0a0a0a;
   word-wrap: break-word;
 }

View file

@@ -5,7 +5,7 @@ import {InstantMetricResult, MetricBase, MetricResult} from "../api/types";
 import {isValidHttpUrl} from "../utils/url";
 import {ErrorTypes} from "../types";
 import {getAppModeEnable, getAppModeParams} from "../utils/app-mode";
-import throttle from "lodash.throttle";
+import debounce from "lodash.debounce";
 import {DisplayType} from "../components/CustomPanel/Configurator/DisplayTypeSwitch";
 import {CustomStep} from "../state/graph/reducer";
 import usePrevious from "./usePrevious";
@@ -43,11 +43,9 @@ export const useFetchQuery = ({predefinedQuery, visible, display, customStep}: F
    }
  }, [error]);

-  const fetchData = async (fetchUrl: string[] | undefined, fetchQueue: AbortController[], displayType: DisplayType) => {
-    if (!fetchUrl?.length) return;
+  const fetchData = async (fetchUrl: string[], fetchQueue: AbortController[], displayType: DisplayType) => {
    const controller = new AbortController();
    setFetchQueue([...fetchQueue, controller]);
-    setIsLoading(true);
    try {
      const responses = await Promise.all(fetchUrl.map(url => fetch(url, {signal: controller.signal})));
      const tempData = [];
@@ -74,7 +72,7 @@ export const useFetchQuery = ({predefinedQuery, visible, display, customStep}: F
    setIsLoading(false);
  };

-  const throttledFetchData = useCallback(throttle(fetchData, 1000), []);
+  const throttledFetchData = useCallback(debounce(fetchData, 600), []);

  const fetchUrl = useMemo(() => {
    const server = appModeEnable ? appServerUrl : serverUrl;
@@ -100,7 +98,8 @@ export const useFetchQuery = ({predefinedQuery, visible, display, customStep}: F
  const prevFetchUrl = usePrevious(fetchUrl);

  useEffect(() => {
-    if (!visible || (fetchUrl && prevFetchUrl && arrayEquals(fetchUrl, prevFetchUrl))) return;
+    if (!visible || (fetchUrl && prevFetchUrl && arrayEquals(fetchUrl, prevFetchUrl)) || !fetchUrl?.length) return;
+    setIsLoading(true);
    throttledFetchData(fetchUrl, fetchQueue, (display || displayType));
  }, [fetchUrl, visible]);

View file

@@ -117,7 +117,7 @@ export function reducer(state: AppState, action: Action): AppState {
        ...state.time,
        duration: action.payload,
        period: getTimeperiodForDuration(action.payload, dateFromSeconds(state.time.period.end)),
-        relativeTime: ""
+        relativeTime: "none"
      }
    };
  case "SET_RELATIVE_TIME":
@@ -125,6 +125,7 @@ export function reducer(state: AppState, action: Action): AppState {
      ...state,
      time: {
        ...state.time,
+        duration: action.payload.duration,
        period: getTimeperiodForDuration(action.payload.duration, new Date(action.payload.until)),
        relativeTime: action.payload.id,
      }
@@ -135,7 +136,7 @@ export function reducer(state: AppState, action: Action): AppState {
      time: {
        ...state.time,
        period: getTimeperiodForDuration(state.time.duration, action.payload),
-        relativeTime: ""
+        relativeTime: "none"
      }
    };
  case "SET_FROM":
@@ -151,7 +152,7 @@ export function reducer(state: AppState, action: Action): AppState {
        ...state.time,
        duration: durationFrom,
        period: getTimeperiodForDuration(durationFrom, dayjs(state.time.period.end*1000).toDate()),
-        relativeTime: ""
+        relativeTime: "none"
      }
    };
  case "SET_PERIOD":
@@ -167,7 +168,7 @@ export function reducer(state: AppState, action: Action): AppState {
        ...state.time,
        duration: durationPeriod,
        period: getTimeperiodForDuration(durationPeriod, action.payload.to),
-        relativeTime: ""
+        relativeTime: "none"
      }
    };
  case "TOGGLE_AUTOREFRESH":

View file

@@ -67,4 +67,5 @@ export interface RelativeTimeOption {
  duration: string,
  until: () => Date,
  title: string,
+  isDefault?: boolean,
 }

View file

@@ -2,10 +2,10 @@ import {MetricBase} from "../api/types";

 export const getNameForMetric = (result: MetricBase, alias?: string): string => {
  const { __name__, ...freeFormFields } = result.metric;
-  const name = alias || __name__ || `Query ${result.group} result`;
+  const name = alias || __name__ || "";
  if (Object.keys(result.metric).length === 0) {
-    return name; // a bit better than just {} for case of aggregation functions
+    return name || `Result ${result.group}`; // a bit better than just {} for case of aggregation functions
  }
  return `${name} {${Object.entries(freeFormFields).map(e => `${e[0]}: ${e[1]}`).join(", ")}}`;

View file

@@ -52,9 +52,9 @@ export const setQueryStringWithoutPageReload = (qsValue: string): void => {
 export const setQueryStringValue = (newValue: Record<string, unknown>): void => {
  const route = window.location.hash.replace("#", "");
-  const params = stateToUrlParams[route] || {};
+  const params = stateToUrlParams[route] || graphStateToUrlParams;
  const queryMap = new Map(Object.entries(params));
-  const isGraphRoute = route === router.home || route === router.dashboards;
+  const isGraphRoute = route === router.home || route === router.dashboards || !route;
  const newQsValue = isGraphRoute ? getGraphQsValue(newValue, queryMap) : getQsValue(newValue, queryMap);
  setQueryStringWithoutPageReload(newQsValue.join("&"));
 };

View file

@@ -111,7 +111,7 @@ export const dateFromSeconds = (epochTimeInSeconds: number): Date => new Date(ep
 export const relativeTimeOptions: RelativeTimeOption[] = [
  {title: "Last 5 minutes", duration: "5m"},
  {title: "Last 15 minutes", duration: "15m"},
-  {title: "Last 30 minutes", duration: "30m"},
+  {title: "Last 30 minutes", duration: "30m", isDefault: true},
  {title: "Last 1 hour", duration: "1h"},
  {title: "Last 3 hours", duration: "3h"},
  {title: "Last 6 hours", duration: "6h"},
@@ -133,10 +133,11 @@ export const relativeTimeOptions: RelativeTimeOption[] = [
 export const getRelativeTime = ({relativeTimeId, defaultDuration, defaultEndInput}:
  { relativeTimeId?: string, defaultDuration: string, defaultEndInput: Date }) => {
-  const id = relativeTimeId || getQueryStringValue("g0.relative_time", "") as string;
+  const defaultId = relativeTimeOptions.find(t => t.isDefault)?.id;
+  const id = relativeTimeId || getQueryStringValue("g0.relative_time", defaultId) as string;
  const target = relativeTimeOptions.find(d => d.id === id);
  return {
-    relativeTimeId: id,
+    relativeTimeId: target ? id : "none",
    duration: target ? target.duration : defaultDuration,
    endInput: target ? target.until() : defaultEndInput
  };

View file

@@ -21,7 +21,9 @@ export const setTooltip = ({u, tooltipIdx, metrics, series, tooltip, tooltipOffs
  tooltip.style.display = "grid";
  tooltip.style.top = `${tooltipOffset.top + top + 10 - (overflowY ? tooltipHeight + 10 : 0)}px`;
  tooltip.style.left = `${tooltipOffset.left + lft + 10 - (overflowX ? tooltipWidth + 20 : 0)}px`;
-  const name = (selectedSeries.label || "").replace(/{.+}/gmi, "");
+  const metricName = (selectedSeries.label || "").replace(/{.+}/gmi, "").trim();
+  const groupName = `Query ${selectedSeries.scale}`;
+  const name = metricName || groupName;
  const date = dayjs(new Date(dataTime * 1000)).format("YYYY-MM-DD HH:mm:ss:SSS (Z)");
  const info = Object.keys(metric).filter(k => k !== "__name__").map(k => `<div><b>${k}</b>: ${metric[k]}</div>`).join("");
  const marker = `<div class="u-tooltip__marker" style="background: ${color}"></div>`;

View file

@@ -19,10 +19,13 @@ The following tip changes can be tested by building VictoriaMetrics components f
 * FEATURE: support query tracing, which allows determining bottlenecks during query processing. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#query-tracing) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1403).
 * FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): add `cardinality` tab, which can help identifying the source of [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) and [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) issues. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2233) and [these docs](https://docs.victoriametrics.com/#cardinality-explorer).
+* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): small UX enhancements according to [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2638).
 * FEATURE: allow overriding default limits for in-memory cache `indexdb/tagFilters` via flag `-storage.cacheSizeIndexDBTagFilters`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2663).
 * FEATURE: add support of `lowercase` and `uppercase` relabeling actions in the same way as [Prometheus 2.36.0 does](https://github.com/prometheus/prometheus/releases/tag/v2.36.0). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2664).
 * FEATURE: add ability to change the `indexdb` rotation timezone offset via `-retentionTimezoneOffset` command-line flag. Previously it was performed at 4am UTC time. This could lead to performance degradation in the middle of the day when VictoriaMetrics runs in time zones located too far from UTC. Thanks to @cnych for [the pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2574).
 * FEATURE: limit the number of background merge threads on systems with big number of CPU cores by default. This increases the max size of parts, which can be created during background merge when `-storageDataPath` directory has limited free disk space. This may improve on-disk data compression efficiency and query performance. The limits can be tuned if needed with `-smallMergeConcurrency` and `-bigMergeConcurrency` command-line flags. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2673).
+* FEATURE: accept optional `limit` query arg at [/api/v1/labels](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names) and [/api/v1/label/.../values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values) for limiting the number of sample entries returned from these endpoints. See [these docs](https://docs.victoriametrics.com/#prometheus-querying-api-enhancements).
+* FEATURE: optimize performance for [/api/v1/labels](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names) and [/api/v1/label/.../values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values) endpoints when `match[]`, `extra_label` or `extra_filters[]` query args are passed to these endpoints. This should help with [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1533).
 * FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): support `limit` param per-group for limiting number of produced samples per each rule. Thanks to @Howie59 for [implementation](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2676).
 * FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): remove dependency on Internet access at [web API pages](https://docs.victoriametrics.com/vmalert.html#web). Previously the functionality and the layout of these pages was broken without Internet access. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2594).
 * FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): implement the `http://vmagent:8429/service-discovery` page in the same way as Prometheus does. This page shows the original labels for all the discovered targets alongside the resulting labels after the relabeling. This simplifies service discovery debugging.
@@ -35,8 +38,10 @@ The following tip changes can be tested by building VictoriaMetrics components f
 * BUGFIX: support for data ingestion in [DataDog format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-datadog-agent) from legacy clients / agents. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2670). Thanks to @elProxy for the fix.
 * BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent.html): do not expose `vm_promscrape_service_discovery_duration_seconds_bucket` metric for unused service discovery types. This reduces the number of metrics exported at `http://vmagent:8429/metrics`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2671).
 * BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly apply `alert_relabel_configs` relabeling rules to `-notifier.config` according to [these docs](https://docs.victoriametrics.com/vmalert.html#notifier-configuration-file). Thanks to @spectvtor for [the bugfix](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2633).
-* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly add `Content-Encoding: snappy` request header when `vmalert` sends [evaluated recording rules' data](https://docs.victoriametrics.com/vmalert.html#recording-rules) to `-remoteWrite.url`. This header is needed by some remote storage systems in order to properly decode snappy-encoded request body. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2685). Thanks to @manji-0 for th fix.
+* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly add `Content-Encoding: snappy`, `Content-Type: application/x-protobuf` and `X-Prometheus-Remote-Write-Version: 0.1.0` request headers when `vmalert` sends [evaluated recording rules' data](https://docs.victoriametrics.com/vmalert.html#recording-rules) to `-remoteWrite.url`. These headers are needed by some remote storage systems in order to properly decode snappy-encoded request body. See [this](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2685) and [this](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2701) pull requests. Thanks to @manji-0 for the fix.
 * BUGFIX: deny [background merge](https://valyala.medium.com/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282) when the storage enters read-only mode, e.g. when free disk space becomes lower than `-storage.minFreeDiskSpaceBytes`. Background merge needs additional disk space, so it could result in `no space left on device` errors. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2603).
+* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): properly apply the selected time range when auto-refresh is enabled. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2693).
+* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): properly update the url with vmui state when new query is entered. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2692).

 ## [v1.77.2](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.77.2)

View file

@@ -33,7 +33,7 @@ VictoriaMetrics has the following prominent features:
 * It can be used as long-term storage for Prometheus. See [these docs](#prometheus-setup) for details.
 * It can be used as a drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
 * It can be used as a drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
 * It features easy setup and operation:
   * VictoriaMetrics consists of a single [small executable](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d) without external dependencies.
   * All the configuration is done via explicit command-line flags with reasonable defaults.
@@ -275,6 +275,8 @@ By default cardinality explorer analyzes time series for the current date. It pr
 By default all the time series for the selected date are analyzed. It is possible to narrow down the analysis to series
 matching the specified [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors).

+Cardinality explorer takes into account [deleted time series](#how-to-delete-time-series), because they stay in the inverted index for up to [-retentionPeriod](#retention). This means that the deleted time series take RAM, CPU, disk IO and disk space for the inverted index in the same way as other time series.
+
 Cardinality explorer is built on top of [/api/v1/status/tsdb](#tsdb-stats).

 See [cardinality explorer playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/cardinality).
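For reference, the same stats that power the cardinality explorer can be fetched directly from the endpoint mentioned above (a minimal sketch; the `topN` and `date` query args are described in the [TSDB stats](#tsdb-stats) section, and the values here are illustrative):

```bash
# Fetch the top 5 metric names by series count for the given date
curl http://localhost:8428/api/v1/status/tsdb -d 'topN=5' -d 'date=2022-06-14'
```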
@@ -350,13 +352,17 @@ echo '
 The imported data can be read via [export API](https://docs.victoriametrics.com/#how-to-export-data-in-json-line-format):

+<div class="with-copy" markdown="1">
+
 ```bash
 curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
 ```

+</div>
+
 This command should return the following output if everything is OK:

-```
+```json
 {"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}
 ```
@@ -458,13 +464,17 @@ VictoriaMetrics sets the current time if the timestamp is omitted.
 An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go.
 After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

+<div class="with-copy" markdown="1">
+
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
 ```

+</div>
+
 The `/api/v1/export` endpoint should return the following response:

-```bash
+```json
 {"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277406000]}
 ```
@@ -607,6 +617,8 @@ For example, the following query would return data for the last 30 minutes: `/ap
 VictoriaMetrics accepts `round_digits` query arg for `/api/v1/query` and `/api/v1/query_range` handlers. It can be used for rounding response values to the given number of digits after the decimal point. For example, `/api/v1/query?query=avg_over_time(temperature[1h])&round_digits=2` would round response values to up to two digits after the decimal point.
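For instance, the following request (the metric name is just an example) asks for values rounded to two digits after the decimal point:

```bash
curl http://localhost:8428/api/v1/query -d 'query=avg_over_time(temperature[1h])' -d 'round_digits=2'
```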
+VictoriaMetrics accepts `limit` query arg for `/api/v1/labels` and `/api/v1/label/<labelName>/values` handlers for limiting the number of returned entries. For example, the query to `/api/v1/labels?limit=5` returns a sample of up to 5 unique labels, while ignoring the rest of labels. If the provided `limit` value exceeds the corresponding `-search.maxTagKeys` / `-search.maxTagValues` command-line flag values, then limits specified in the command-line flags are used.
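A minimal sketch of this arg in action (the `job` label name is illustrative):

```bash
# Return at most 5 values for the `job` label
curl http://localhost:8428/api/v1/label/job/values -d 'limit=5'
```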
 By default, VictoriaMetrics returns time series for the last 5 minutes from `/api/v1/series`, while the Prometheus API defaults to all time. Use `start` and `end` to select a different time range.

 Additionally, VictoriaMetrics provides the following handlers:
@@ -615,7 +627,6 @@ Additionally, VictoriaMetrics provides the following handlers:
 * `/api/v1/series/count` - returns the total number of time series in the database. Some notes:
   * the handler scans all the inverted index, so it can be slow if the database contains tens of millions of time series;
   * the handler may count [deleted time series](#how-to-delete-time-series) additionally to normal time series due to internal implementation restrictions;
-* `/api/v1/labels/count` - returns a list of `label: values_count` entries. It can be used for determining labels with the maximum number of values.
 * `/api/v1/status/active_queries` - returns a list of currently running queries.
 * `/api/v1/status/top_queries` - returns the following query lists:
   * the most frequently executed queries - `topByCount`
@@ -742,8 +753,8 @@ More details may be found [here](https://github.com/VictoriaMetrics/VictoriaMetr
 ## Setting up service

-Read [these instructions](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/43) on how to set up VictoriaMetrics as a service in your OS.
-There is also [snap package for Ubuntu](https://snapcraft.io/victoriametrics).
+Read [instructions](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/43) on how to set up VictoriaMetrics
+as a service for your OS. A [snap package](https://snapcraft.io/victoriametrics) is available for Ubuntu.

 ## How to work with snapshots
@@ -836,7 +847,7 @@ for metrics to export. Use `{__name__!=""}` selector for fetching all the time s
 The response would contain all the data for the selected time series in [JSON streaming format](https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON).
 Each JSON line contains samples for a single time series. An example output:

-```jsonl
+```json
 {"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}
 {"metric":{"__name__":"up","job":"prometheus","instance":"localhost:9090"},"values":[1,1,1],"timestamps":[1549891461511,1549891476511,1549891491511]}
 ```
@@ -856,10 +867,14 @@ In this case the output may contain multiple lines with samples for the same tim
 Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth during exporting big amounts
 of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:

+<div class="with-copy" markdown="1">
+
 ```bash
 curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
 ```

+</div>
+
 The maximum duration for each request to `/api/v1/export` is limited by `-search.maxExportDuration` command-line flag.

 Exported data can be imported via POST'ing it to [/api/v1/import](#how-to-import-data-in-json-line-format).
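Putting the two endpoints together, a simple export/import round-trip could look like this (the source and destination host names are hypothetical):

```bash
# Export all the time series from the source instance...
curl http://source-victoriametrics:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl
# ...then import them into the destination instance
curl -X POST http://destination-victoriametrics:8428/api/v1/import -T data.jsonl
```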
@@ -1032,7 +1047,7 @@ curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
 The following response should be returned:

-```bash
+```json
 {"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]}
 {"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]}
 {"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]}
@@ -1050,29 +1065,41 @@ VictoriaMetrics accepts data in [Prometheus exposition format](https://github.co
 and in [OpenMetrics format](https://github.com/OpenObservability/OpenMetrics/blob/master/specification/OpenMetrics.md)
 via `/api/v1/import/prometheus` path. For example, the following line imports a single line in Prometheus exposition format into VictoriaMetrics:

+<div class="with-copy" markdown="1">
+
 ```bash
 curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
 ```

+</div>
+
 The following command may be used for verifying the imported data:

+<div class="with-copy" markdown="1">
+
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
 ```

+</div>
+
 It should return something like the following:

-```
+```json
 {"metric":{"__name__":"foo","bar":"baz"},"values":[123],"timestamps":[1594370496905]}
 ```

 Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data:

+<div class="with-copy" markdown="1">
+
 ```bash
 # Import gzipped data to <destination-victoriametrics>:
 curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
 ```

+</div>
+
 Extra labels may be added to all the imported metrics by passing `extra_label=name=value` query args.
 For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics.
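An illustrative request combining the import path with an extra label (the label name and value are hypothetical):

```bash
# Import a sample and attach the {datacenter="dc1"} label to it
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus?extra_label=datacenter=dc1'
```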
@@ -1242,16 +1269,17 @@ Information about merging process is available in [single-node VictoriaMetrics](
 and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176) Grafana dashboards.
 See more details in [monitoring docs](#monitoring).

-The `merge` process is usually named "compaction", because the resulting `part` size is usually smaller than
-the sum of the source `parts`. There are following benefits of doing the merge process:
+The `merge` process improves compression rate and keeps the number of `parts` on disk relatively low.
+Benefits of doing the merge process are the following:

-* it improves query performance, since lower number of `parts` are inspected with each query;
-* it reduces the number of data files, since each `part`contains fixed number of files;
-* better compression rate for the resulting part.
+* it improves query performance, since a lower number of `parts` is inspected with each query
+* it reduces the number of data files, since each `part` contains a fixed number of files
+* various background maintenance tasks such as [de-duplication](#deduplication), [downsampling](#downsampling)
+  and [freeing up disk space for the deleted time series](#how-to-delete-time-series) are performed during the merge

 Newly added `parts` either appear in the storage or fail to appear.
 Storage never contains partially created parts. The same applies to merge process — `parts` are either fully
-merged into a new `part` or fail to merge. There are no partially merged `parts` in MergeTree.
+merged into a new `part` or fail to merge. MergeTree doesn't contain partially merged `parts`.

 `Part` contents in MergeTree never change. Parts are immutable. They may be only deleted after the merge
 to a bigger `part` or when the `part` contents goes outside the configured `-retentionPeriod`.
@@ -1331,8 +1359,13 @@ Additionally, alerting can be set up with the following tools:
 ## Security

-Do not forget protecting sensitive endpoints in VictoriaMetrics when exposing it to untrusted networks such as the internet.
-Consider setting the following command-line flags:
+General security recommendations:
+
+- All the VictoriaMetrics components must run in protected private networks without direct access from untrusted networks such as Internet. The exception is [vmauth](https://docs.victoriametrics.com/vmauth.html) and [vmgateway](https://docs.victoriametrics.com/vmgateway.html).
+- All the requests from untrusted networks to VictoriaMetrics components must go through auth proxy such as vmauth or vmgateway. The proxy must be set up with proper authentication and authorization.
+- Prefer using lists of allowed API endpoints, while disallowing access to other endpoints when configuring auth proxy in front of VictoriaMetrics components.
+
+VictoriaMetrics provides the following security-related command-line flags:

 * `-tls`, `-tlsCertFile` and `-tlsKeyFile` for switching from HTTP to HTTPS.
 * `-httpAuth.username` and `-httpAuth.password` for protecting all the HTTP endpoints
@@ -1342,20 +1375,17 @@ Consider setting the following command-line flags:
 * `-forceMergeAuthKey` for protecting `/internal/force_merge` endpoint. See [force merge docs](#forced-merge).
 * `-search.resetCacheAuthKey` for protecting `/internal/resetRollupResultCache` endpoint. See [backfilling](#backfilling) for more details.
 * `-configAuthKey` for protecting `/config` endpoint, since it may contain sensitive information such as passwords.
-- `-pprofAuthKey` for protecting `/debug/pprof/*` endpoints, which can be used for [profiling](#profiling).
+* `-pprofAuthKey` for protecting `/debug/pprof/*` endpoints, which can be used for [profiling](#profiling).
+* `-denyQueryTracing` for disallowing [query tracing](#query-tracing).

 Explicitly set internal network interface for TCP and UDP ports for data ingestion with Graphite and OpenTSDB formats.
-For example, substitute `-graphiteListenAddr=:2003` with `-graphiteListenAddr=<internal_iface_ip>:2003`.
-
-Prefer authorizing all the incoming requests from untrusted networks with [vmauth](https://docs.victoriametrics.com/vmauth.html)
-or similar auth proxy.
+For example, substitute `-graphiteListenAddr=:2003` with `-graphiteListenAddr=<internal_iface_ip>:2003`. This protects from unexpected requests from untrusted network interfaces.
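For reference, a hardened single-node startup might combine several of the flags listed above (a sketch; the binary path, certificate paths and credentials are placeholders):

```bash
# Enable HTTPS, protect all the HTTP endpoints with basic auth and disable query tracing
/path/to/victoria-metrics-prod \
  -tls -tlsCertFile=/path/to/cert.pem -tlsKeyFile=/path/to/key.pem \
  -httpAuth.username=vmuser -httpAuth.password=secret \
  -denyQueryTracing
```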
 ## Tuning

-* There is no need for VictoriaMetrics tuning since it uses reasonable defaults for command-line flags,
+* No need to tune VictoriaMetrics - it uses reasonable defaults for command-line flags,
   which are automatically adjusted for the available CPU and RAM resources.
-* There is no need for Operating System tuning since VictoriaMetrics is optimized for default OS settings.
+* No need to tune the Operating System - VictoriaMetrics is optimized for default OS settings.
   The only option is increasing the limit on [the number of open files in the OS](https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a).
   The recommendation is not specific for VictoriaMetrics only but also for any service which handles many HTTP connections and stores data on disk.
 * VictoriaMetrics is a write-heavy application and its performance depends on disk performance. So be careful with other
@ -1372,19 +1402,23 @@ mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
## Monitoring

VictoriaMetrics exports internal metrics in Prometheus exposition format at `/metrics` page.
These metrics can be scraped via [vmagent](https://docs.victoriametrics.com/vmagent.html) or Prometheus.
Alternatively, single-node VictoriaMetrics can self-scrape the metrics when the `-selfScrapeInterval` command-line flag is
set to a duration greater than 0. For example, `-selfScrapeInterval=10s` would enable self-scraping of the `/metrics` page
with a 10-second interval.
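As a quick sanity check, the exported metrics can be inspected directly (the `grep` filter below is just an example):

```bash
curl -s http://localhost:8428/metrics | grep vm_cache_entries
```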
Official Grafana dashboards are available for [single-node](https://grafana.com/dashboards/10229)
and [clustered](https://grafana.com/grafana/dashboards/11176) VictoriaMetrics.
There is also an [alternative dashboard for clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11831)
created by the community.

Graphs on the dashboards contain useful hints - hover the `i` icon in the top left corner of each graph to read it.

We recommend setting up [alerts](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts.yml)
via [vmalert](https://docs.victoriametrics.com/vmalert.html) or via Prometheus.

The most interesting health metrics are the following:

* `vm_cache_entries{type="storage/hour_metric_ids"}` - the number of time series with new data points during the last hour
  aka [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).

@@ -1400,16 +1434,14 @@ The most interesting metrics are:

  If this number remains high during extended periods of time, then it is likely more RAM is needed for optimal handling
  of the current number of [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).

VictoriaMetrics exposes currently running queries and their execution times at `/api/v1/status/active_queries` page.
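The endpoint can be checked with a plain request (the returned list is empty when no queries are in flight):

```bash
curl http://localhost:8428/api/v1/status/active_queries
```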
## TSDB stats

VictoriaMetrics returns TSDB stats at `/api/v1/status/tsdb` page in a way similar to Prometheus - see [these Prometheus docs](https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats). VictoriaMetrics accepts the following optional query args at `/api/v1/status/tsdb` page:

* `topN=N` where `N` is the number of top entries to return in the response. By default, the top 10 entries are returned.
* `date=YYYY-MM-DD` where `YYYY-MM-DD` is the date for collecting the stats. By default the stats are collected for the current day. Pass `date=1970-01-01` in order to collect global stats across all the days.
* `match[]=SELECTOR` where `SELECTOR` is an arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) for series to take into account during stats calculation. By default all the series are taken into account.
* `extra_label=LABEL=VALUE`. See [these docs](#prometheus-querying-api-enhancements) for more details.
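For example, a combined request using the args above (the date and selector values are illustrative):

```bash
curl -G 'http://localhost:8428/api/v1/status/tsdb' -d 'topN=5' -d 'date=2022-06-14' -d 'match[]={__name__=~"vm_.*"}'
```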
@@ -1787,9 +1819,10 @@ Files included in each folder:

### We kindly ask

* Please don't use any font other than the suggested one.
* Keep enough clear space around the logo.
* Do not change spacing, alignment, or relative locations of the design elements.
* Do not change the proportions of any of the design elements or the design itself.
  You may resize as needed but must retain all proportions.

## List of command-line flags
@@ -37,7 +37,7 @@ VictoriaMetrics has the following prominent features:

* It can be used as long-term storage for Prometheus. See [these docs](#prometheus-setup) for details.
* It can be used as a drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as a drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
* It features easy setup and operation:
  * VictoriaMetrics consists of a single [small executable](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d) without external dependencies.
  * All the configuration is done via explicit command-line flags with reasonable defaults.

@@ -279,6 +279,8 @@ By default cardinality explorer analyzes time series for the current date. It pr

By default all the time series for the selected date are analyzed. It is possible to narrow down the analysis to series
matching the specified [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors).

Cardinality explorer takes into account [deleted time series](#how-to-delete-time-series), because they stay in the inverted index for up to [-retentionPeriod](#retention). This means that the deleted time series take RAM, CPU, disk IO and disk space for the inverted index in the same way as other time series.

Cardinality explorer is built on top of [/api/v1/status/tsdb](#tsdb-stats).

See [cardinality explorer playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/cardinality).
@@ -354,13 +356,17 @@ echo '

The imported data can be read via [export API](https://docs.victoriametrics.com/#how-to-export-data-in-json-line-format):

<div class="with-copy" markdown="1">

```bash
curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
```

</div>

This command should return the following output if everything is OK:

```json
{"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}
```
@@ -462,13 +468,17 @@ VictoriaMetrics sets the current time if the timestamp is omitted.

An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go.
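For instance, a minimal sketch sending one Graphite plaintext line over TCP (this assumes the instance was started with `-graphiteListenAddr=:2003`; the metric name and tags below match the sample export output that follows):

```bash
echo "foo.bar.baz;tag1=value1;tag2=value2 123 $(date +%s)" | nc -N localhost 2003
```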
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

<div class="with-copy" markdown="1">

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```

</div>

The `/api/v1/export` endpoint should return the following response:

```json
{"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277406000]}
```
@@ -611,6 +621,8 @@ For example, the following query would return data for the last 30 minutes: `/ap

VictoriaMetrics accepts `round_digits` query arg for `/api/v1/query` and `/api/v1/query_range` handlers. It can be used for rounding response values to the given number of digits after the decimal point. For example, `/api/v1/query?query=avg_over_time(temperature[1h])&round_digits=2` would round response values to up to two digits after the decimal point.

VictoriaMetrics accepts `limit` query arg for `/api/v1/labels` and `/api/v1/label/<labelName>/values` handlers for limiting the number of returned entries. For example, the query to `/api/v1/labels?limit=5` returns a sample of up to 5 unique labels, while ignoring the rest of labels. If the provided `limit` value exceeds the corresponding `-search.maxTagKeys` / `-search.maxTagValues` command-line flag values, then limits specified in the command-line flags are used.
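A quick illustration of the `limit` arg (the value 5 mirrors the example above):

```bash
curl -G 'http://localhost:8428/api/v1/labels' -d 'limit=5'
```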
By default, VictoriaMetrics returns time series for the last 5 minutes from `/api/v1/series`, while the Prometheus API defaults to all time. Use `start` and `end` to select a different time range.

Additionally, VictoriaMetrics provides the following handlers:

@@ -619,7 +631,6 @@ Additionally, VictoriaMetrics provides the following handlers:

* `/api/v1/series/count` - returns the total number of time series in the database. Some notes:
  * the handler scans all the inverted index, so it can be slow if the database contains tens of millions of time series;
  * the handler may count [deleted time series](#how-to-delete-time-series) additionally to normal time series due to internal implementation restrictions;
* `/api/v1/status/active_queries` - returns a list of currently running queries.
* `/api/v1/status/top_queries` - returns the following query lists:
  * the most frequently executed queries - `topByCount`
@@ -746,8 +757,8 @@ More details may be found [here](https://github.com/VictoriaMetrics/VictoriaMetr

## Setting up service

Read [instructions](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/43) on how to set up VictoriaMetrics
as a service for your OS. A [snap package](https://snapcraft.io/victoriametrics) is available for Ubuntu.
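For those running systemd directly, a hedged sketch of a unit file (the binary path, user and flag values are placeholders, not taken from the linked instructions):

```ini
[Unit]
Description=VictoriaMetrics single-node
After=network.target

[Service]
# -storageDataPath and -retentionPeriod are standard VictoriaMetrics flags;
# the values here are examples only.
ExecStart=/usr/local/bin/victoria-metrics-prod -storageDataPath=/var/lib/victoria-metrics -retentionPeriod=12
Restart=always
User=victoriametrics

[Install]
WantedBy=multi-user.target
```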
## How to work with snapshots

@@ -840,7 +851,7 @@ for metrics to export. Use `{__name__!=""}` selector for fetching all the time s

The response would contain all the data for the selected time series in [JSON streaming format](https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON).
Each JSON line contains samples for a single time series. An example output:

```json
{"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}
{"metric":{"__name__":"up","job":"prometheus","instance":"localhost:9090"},"values":[1,1,1],"timestamps":[1549891461511,1549891476511,1549891491511]}
```
@@ -860,10 +871,14 @@ In this case the output may contain multiple lines with samples for the same tim

Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth during exporting big amounts
of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:

<div class="with-copy" markdown="1">

```bash
curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
```

</div>

The maximum duration for each request to `/api/v1/export` is limited by `-search.maxExportDuration` command-line flag.

Exported data can be imported via POST'ing it to [/api/v1/import](#how-to-import-data-in-json-line-format).
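For example, a hedged round-trip of the gzipped export above into another instance (this assumes `/api/v1/import` accepts `Content-Encoding: gzip` in the same way as `/api/v1/import/prometheus` shown below; the destination host is a placeholder):

```bash
curl -X POST -H 'Content-Encoding: gzip' http://another-victoriametrics:8428/api/v1/import -T data.jsonl.gz
```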
@@ -1036,7 +1051,7 @@ curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'

The following response should be returned:

```json
{"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]}
{"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]}
{"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]}
```
@@ -1054,29 +1069,41 @@ VictoriaMetrics accepts data in [Prometheus exposition format](https://github.co

and in [OpenMetrics format](https://github.com/OpenObservability/OpenMetrics/blob/master/specification/OpenMetrics.md)
via `/api/v1/import/prometheus` path. For example, the following line imports a single line in Prometheus exposition format into VictoriaMetrics:

<div class="with-copy" markdown="1">

```bash
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
```

</div>

The following command may be used for verifying the imported data:

<div class="with-copy" markdown="1">

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
```

</div>

It should return something like the following:

```json
{"metric":{"__name__":"foo","bar":"baz"},"values":[123],"timestamps":[1594370496905]}
```

Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data:

<div class="with-copy" markdown="1">

```bash
# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
```

</div>
Extra labels may be added to all the imported metrics by passing `extra_label=name=value` query args.
For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics.
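Combining this with the import example above gives (the `env=test` label pair is arbitrary):

```bash
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus?extra_label=env=test'
```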
@@ -1246,16 +1273,17 @@ Information about merging process is available in [single-node VictoriaMetrics](

and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176) Grafana dashboards.
See more details in [monitoring docs](#monitoring).

The `merge` process improves compression rate and keeps the number of `parts` on disk relatively low.
Benefits of the merge process are the following:

* it improves query performance, since a lower number of `parts` is inspected with each query
* it reduces the number of data files, since each `part` contains a fixed number of files
* various background maintenance tasks such as [de-duplication](#deduplication), [downsampling](#downsampling)
  and [freeing up disk space for the deleted time series](#how-to-delete-time-series) are performed during the merge

Newly added `parts` either appear in the storage or fail to appear.
Storage never contains partially created parts. The same applies to the merge process: `parts` are either fully
merged into a new `part` or fail to merge. MergeTree doesn't contain partially merged `parts`.

`Part` contents in MergeTree never change. Parts are immutable. They may only be deleted after the merge
to a bigger `part` or when the `part` contents go outside the configured `-retentionPeriod`.

@@ -1335,8 +1363,13 @@ Additionally, alerting can be set up with the following tools:
## Security

General security recommendations:

- All the VictoriaMetrics components must run in protected private networks without direct access from untrusted networks such as the Internet. The exception is [vmauth](https://docs.victoriametrics.com/vmauth.html) and [vmgateway](https://docs.victoriametrics.com/vmgateway.html).
- All the requests from untrusted networks to VictoriaMetrics components must go through an auth proxy such as vmauth or vmgateway. The proxy must be set up with proper authentication and authorization.
- Prefer using lists of allowed API endpoints, while disallowing access to other endpoints, when configuring an auth proxy in front of VictoriaMetrics components.

VictoriaMetrics provides the following security-related command-line flags:

* `-tls`, `-tlsCertFile` and `-tlsKeyFile` for switching from HTTP to HTTPS.
* `-httpAuth.username` and `-httpAuth.password` for protecting all the HTTP endpoints

@@ -1346,20 +1379,17 @@ Consider setting the following command-line flags:

* `-forceMergeAuthKey` for protecting `/internal/force_merge` endpoint. See [force merge docs](#forced-merge).
* `-search.resetCacheAuthKey` for protecting `/internal/resetRollupResultCache` endpoint. See [backfilling](#backfilling) for more details.
* `-configAuthKey` for protecting `/config` endpoint, since it may contain sensitive information such as passwords.
* `-pprofAuthKey` for protecting `/debug/pprof/*` endpoints, which can be used for [profiling](#profiling).
* `-denyQueryTracing` for disallowing [query tracing](#query-tracing).

Explicitly set an internal network interface for TCP and UDP ports used for data ingestion in Graphite and OpenTSDB formats.
For example, substitute `-graphiteListenAddr=:2003` with `-graphiteListenAddr=<internal_iface_ip>:2003`. This protects against unexpected requests from untrusted network interfaces.
## Tuning

* There is no need to tune VictoriaMetrics - it uses reasonable defaults for command-line flags,
  which are automatically adjusted for the available CPU and RAM resources.
* There is no need to tune the Operating System - VictoriaMetrics is optimized for default OS settings.
  The only option is increasing the limit on [the number of open files in the OS](https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a).
  This recommendation applies not only to VictoriaMetrics but to any service which handles many HTTP connections and stores data on disk.
* VictoriaMetrics is a write-heavy application and its performance depends on disk performance. So be careful with other

@@ -1376,19 +1406,23 @@ mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
## Monitoring

VictoriaMetrics exports internal metrics in Prometheus exposition format at `/metrics` page.
These metrics can be scraped via [vmagent](https://docs.victoriametrics.com/vmagent.html) or Prometheus.
Alternatively, single-node VictoriaMetrics can self-scrape the metrics when the `-selfScrapeInterval` command-line flag is
set to a duration greater than 0. For example, `-selfScrapeInterval=10s` would enable self-scraping of the `/metrics` page
with a 10-second interval.

Official Grafana dashboards are available for [single-node](https://grafana.com/dashboards/10229)
and [clustered](https://grafana.com/grafana/dashboards/11176) VictoriaMetrics.
There is also an [alternative dashboard for clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11831)
created by the community.

Graphs on the dashboards contain useful hints - hover the `i` icon in the top left corner of each graph to read it.

We recommend setting up [alerts](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts.yml)
via [vmalert](https://docs.victoriametrics.com/vmalert.html) or via Prometheus.

The most interesting health metrics are the following:

* `vm_cache_entries{type="storage/hour_metric_ids"}` - the number of time series with new data points during the last hour
  aka [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).

@@ -1404,16 +1438,14 @@ The most interesting metrics are:

  If this number remains high during extended periods of time, then it is likely more RAM is needed for optimal handling
  of the current number of [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).

VictoriaMetrics exposes currently running queries and their execution times at `/api/v1/status/active_queries` page.
## TSDB stats

VictoriaMetrics returns TSDB stats at `/api/v1/status/tsdb` page in a way similar to Prometheus - see [these Prometheus docs](https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats). VictoriaMetrics accepts the following optional query args at `/api/v1/status/tsdb` page:

* `topN=N` where `N` is the number of top entries to return in the response. By default, the top 10 entries are returned.
* `date=YYYY-MM-DD` where `YYYY-MM-DD` is the date for collecting the stats. By default the stats are collected for the current day. Pass `date=1970-01-01` in order to collect global stats across all the days.
* `match[]=SELECTOR` where `SELECTOR` is an arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) for series to take into account during stats calculation. By default all the series are taken into account.
* `extra_label=LABEL=VALUE`. See [these docs](#prometheus-querying-api-enhancements) for more details.
@@ -1791,9 +1823,10 @@ Files included in each folder:

### We kindly ask

* Please don't use any font other than the suggested one.
* Keep enough clear space around the logo.
* Do not change spacing, alignment, or relative locations of the design elements.
* Do not change the proportions of any of the design elements or the design itself.
  You may resize as needed but must retain all proportions.

## List of command-line flags
@@ -74,7 +74,7 @@ Run this command in your terminal:

<div class="with-copy" markdown="1">

```bash
helm install vmsingle vm/victoria-metrics-single -f https://docs.victoriametrics.com/guides/guide-vmsingle-values.yaml
```
@@ -145,13 +145,13 @@ vm_per_query_rows_processed_count_count 11

In practice, histogram `vm_per_query_rows_processed_count` may be used in the following way:

```go
// define the histogram
perQueryRowsProcessed := metrics.NewHistogram(`vm_per_query_rows_processed_count`)

// use the histogram during processing
for _, query := range queries {
	// Histogram.Update accepts a float64, so the int returned by len() is converted explicitly.
	perQueryRowsProcessed.Update(float64(len(query.Rows)))
}
```
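For completeness, a hedged sketch of how such a histogram is typically exposed with the `github.com/VictoriaMetrics/metrics` package (the handler path and listen address are illustrative assumptions, not taken from this repository):

```go
package main

import (
	"net/http"

	"github.com/VictoriaMetrics/metrics"
)

// The histogram is registered in the default metrics set on creation.
var perQueryRowsProcessed = metrics.NewHistogram(`vm_per_query_rows_processed_count`)

func main() {
	// Expose all registered metrics in Prometheus text format.
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		metrics.WritePrometheus(w, true)
	})
	http.ListenAndServe(":8080", nil)
}
```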
go.mod
@@ -11,7 +11,7 @@ require (

	github.com/VictoriaMetrics/fasthttp v1.1.0
	github.com/VictoriaMetrics/metrics v1.18.1
	github.com/VictoriaMetrics/metricsql v0.43.0
	github.com/aws/aws-sdk-go v1.44.32
	github.com/cespare/xxhash/v2 v2.1.2
	github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect

@@ -35,10 +35,10 @@ require (

	github.com/valyala/fasttemplate v1.2.1
	github.com/valyala/gozstd v1.17.0
	github.com/valyala/quicktemplate v1.7.0
	golang.org/x/net v0.0.0-20220607020251-c690dde0001d
	golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb
	golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d
	google.golang.org/api v0.83.0
	gopkg.in/yaml.v2 v2.4.0
)

@@ -73,9 +73,9 @@ require (

	go.uber.org/goleak v1.1.11-0.20210813005559-691160354723 // indirect
	golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f // indirect
	golang.org/x/text v0.3.7 // indirect
	golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f // indirect
	google.golang.org/appengine v1.6.7 // indirect
	google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac // indirect
	google.golang.org/grpc v1.47.0 // indirect
	google.golang.org/protobuf v1.28.0 // indirect
	gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
go.sum
@@ -142,8 +142,8 @@ github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQ

github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.44.32 h1:x5hBtpY/02sgRL158zzTclcCLwh3dx3YlSl1rAH4Op0=
github.com/aws/aws-sdk-go v1.44.32/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=

@@ -992,9 +992,8 @@ golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su

golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220412020605-290c469a71a5/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220607020251-c690dde0001d h1:4SFsTMi4UahlKoloni7L4eYzhFRifURQLw+yv0QDCx8=
golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=

@@ -1014,8 +1013,9 @@ golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ

golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb h1:8tDJ3aechhddbdPAxpycgXHJRMLpk/Ab+aa4OgdN5/g=
golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb/go.mod h1:jaDAt6Dkxork7LmZnYtzbRWj0W47D86a3TGe0YHBvmE=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=

@@ -1028,7 +1028,6 @@ golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJ

golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f h1:Ax0t5p6N38Ga0dThY21weqDEyz2oklo4IvDkpigvkD8=
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=

@@ -1123,8 +1122,9 @@ golang.org/x/sys v0.0.0-20220405052023-b1e9470b6e64/go.mod h1:oPkhp1MJrh7nUepCBc

golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220502124256-b6088ccd6cba/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d h1:Zu/JngovGLVi6t2J3nmAf3AoTDwuzw85YZ3b9o4yU7s=
golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=

@@ -1221,8 +1221,9 @@ golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8T

golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f h1:uF6paiQQebLeSXkrTqHqz0MXhXXS1KgF41eUdBNvxK0=
golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
gonum.org/v1/gonum v0.0.0-20180816165407-929014505bf4/go.mod h1:Y+Yx5eoAFn32cQvJDxZx5Dpnq+c3wtXuadVZAcxbbBo=
gonum.org/v1/gonum v0.0.0-20181121035319-3f7ecaa7e8ca/go.mod h1:Y+Yx5eoAFn32cQvJDxZx5Dpnq+c3wtXuadVZAcxbbBo=
gonum.org/v1/gonum v0.6.0/go.mod h1:9mxDZsDKxgMAuccQkewq682L+0eCu4dCN2yonUJTCLU=

@@ -1268,8 +1269,8 @@ google.golang.org/api v0.74.0/go.mod h1:ZpfMZOVRMywNyvJFeqL9HRWBgAuRfSjJFpe9QtRR

google.golang.org/api v0.75.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA=
google.golang.org/api v0.78.0/go.mod h1:1Sg78yoMLOhlQTeF+ARBoytAcH1NNyyl390YMy6rKmw=
google.golang.org/api v0.80.0/go.mod h1:xY3nI94gbvBrE0J6NHXhxOmW97HG7Khjkku6AFB3Hyg=
google.golang.org/api v0.83.0 h1:pMvST+6v+46Gabac4zlJlalxZjCeRcepwg2EdBU+nCc=
google.golang.org/api v0.83.0/go.mod h1:CNywQoj/AfhTw26ZWAa6LwOv+6WFxHmeLPZq2uncLZk=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=

@@ -1358,9 +1359,9 @@ google.golang.org/genproto v0.0.0-20220429170224-98d788798c3e/go.mod h1:8w6bsBMX

google.golang.org/genproto v0.0.0-20220505152158-f39f71e6c8f3/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220518221133-4f43b3371335/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220523171625-347a074981d8/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8/go.mod h1:yKyY4AMRwFiC8yMMNaMi+RkCnjZJt9LoWuvhXjMs+To=
google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac h1:ByeiW1F67iV9o8ipGskA+HWzSkMbRJuKLlwCdPxzn7A=
google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
(File diff suppressed because it is too large.)
@@ -703,9 +703,9 @@ func testIndexDBGetOrCreateTSIDByName(db *indexDB, metricGroups int) ([]MetricNa

}

func testIndexDBCheckTSIDByName(db *indexDB, mns []MetricName, tsids []TSID, isConcurrent bool) error {
	hasValue := func(lvs []string, v []byte) bool {
		for _, lv := range lvs {
			if string(v) == lv {
				return true
			}
		}

@@ -715,7 +715,7 @@ func testIndexDBCheckTSIDByName(db *indexDB, mns []MetricName, tsids []TSID, isC

	timeseriesCounters := make(map[uint64]bool)
	var tsidCopy TSID
	var metricNameCopy []byte
	allLabelNames := make(map[string]bool)
	for i := range mns {
		mn := &mns[i]
		tsid := &tsids[i]
@@ -757,38 +757,38 @@ func testIndexDBCheckTSIDByName(db *indexDB, mns []MetricName, tsids []TSID, isC

			return fmt.Errorf("expecting empty buf when searching for non-existent metricID; got %X", buf)
		}

		// Test SearchLabelValuesWithFiltersOnTimeRange
		lvs, err := db.SearchLabelValuesWithFiltersOnTimeRange(nil, "__name__", nil, TimeRange{}, 1e5, 1e9, noDeadline)
		if err != nil {
			return fmt.Errorf("error in SearchLabelValuesWithFiltersOnTimeRange(labelName=%q): %w", "__name__", err)
		}
		if !hasValue(lvs, mn.MetricGroup) {
			return fmt.Errorf("SearchLabelValuesWithFiltersOnTimeRange(labelName=%q): couldn't find %q; found %q", "__name__", mn.MetricGroup, lvs)
		}
		for i := range mn.Tags {
			tag := &mn.Tags[i]
			lvs, err := db.SearchLabelValuesWithFiltersOnTimeRange(nil, string(tag.Key), nil, TimeRange{}, 1e5, 1e9, noDeadline)
			if err != nil {
				return fmt.Errorf("error in SearchLabelValuesWithFiltersOnTimeRange(labelName=%q): %w", tag.Key, err)
			}
			if !hasValue(lvs, tag.Value) {
				return fmt.Errorf("SearchLabelValuesWithFiltersOnTimeRange(labelName=%q): couldn't find %q; found %q", tag.Key, tag.Value, lvs)
			}
			allLabelNames[string(tag.Key)] = true
		}
	}

	// Test SearchLabelNamesWithFiltersOnTimeRange (empty filters, global time range)
	lns, err := db.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, TimeRange{}, 1e5, 1e9, noDeadline)
	if err != nil {
		return fmt.Errorf("error in SearchLabelNamesWithFiltersOnTimeRange(empty filter, global time range): %w", err)
	}
	if !hasValue(lns, []byte("__name__")) {
		return fmt.Errorf("cannot find __name__ in %q", lns)
	}
	for labelName := range allLabelNames {
		if !hasValue(lns, []byte(labelName)) {
			return fmt.Errorf("cannot find %q in %q", labelName, lns)
		}
	}
@@ -1633,13 +1633,13 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
     var metricNameBuf []byte
     perDayMetricIDs := make(map[uint64]*uint64set.Set)
     var allMetricIDs uint64set.Set
-    tagKeys := []string{
-        "", "constant", "day", "uniqueid",
+    labelNames := []string{
+        "__name__", "constant", "day", "uniqueid",
     }
-    tagValues := []string{
+    labelValues := []string{
         "testMetric",
     }
-    sort.Strings(tagKeys)
+    sort.Strings(labelNames)
     for day := 0; day < days; day++ {
         var tsids []TSID
         var mns []MetricName
@@ -1709,30 +1709,28 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
         t.Fatalf("unexpected metricIDs found;\ngot\n%d\nwant\n%d", metricIDs.AppendTo(nil), allMetricIDs.AppendTo(nil))
     }

-    // Check SearchTagKeysOnTimeRange.
-    tks, err := db.SearchTagKeysOnTimeRange(TimeRange{
+    // Check SearchLabelNamesWithFiltersOnTimeRange with the specified time range.
+    tr := TimeRange{
         MinTimestamp: int64(now) - msecPerDay,
         MaxTimestamp: int64(now),
-    }, 10000, noDeadline)
-    if err != nil {
-        t.Fatalf("unexpected error in SearchTagKeysOnTimeRange: %s", err)
     }
-    sort.Strings(tks)
-    if !reflect.DeepEqual(tks, tagKeys) {
-        t.Fatalf("unexpected tagKeys; got\n%s\nwant\n%s", tks, tagKeys)
+    lns, err := db.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, tr, 10000, 1e9, noDeadline)
+    if err != nil {
+        t.Fatalf("unexpected error in SearchLabelNamesWithFiltersOnTimeRange(timeRange=%s): %s", &tr, err)
+    }
+    sort.Strings(lns)
+    if !reflect.DeepEqual(lns, labelNames) {
+        t.Fatalf("unexpected labelNames; got\n%s\nwant\n%s", lns, labelNames)
     }

-    // Check SearchTagValuesOnTimeRange.
-    tvs, err := db.SearchTagValuesOnTimeRange([]byte(""), TimeRange{
-        MinTimestamp: int64(now) - msecPerDay,
-        MaxTimestamp: int64(now),
-    }, 10000, noDeadline)
+    // Check SearchLabelValuesWithFiltersOnTimeRange with the specified time range.
+    lvs, err := db.SearchLabelValuesWithFiltersOnTimeRange(nil, "", nil, tr, 10000, 1e9, noDeadline)
     if err != nil {
-        t.Fatalf("unexpected error in SearchTagValuesOnTimeRange: %s", err)
+        t.Fatalf("unexpected error in SearchLabelValuesWithFiltersOnTimeRange(timeRange=%s): %s", &tr, err)
     }
-    sort.Strings(tvs)
-    if !reflect.DeepEqual(tvs, tagValues) {
-        t.Fatalf("unexpected tagValues; got\n%s\nwant\n%s", tvs, tagValues)
+    sort.Strings(lvs)
+    if !reflect.DeepEqual(lvs, labelValues) {
+        t.Fatalf("unexpected labelValues; got\n%s\nwant\n%s", lvs, labelValues)
     }
     // Create a filter that will match series that occur across multiple days
@@ -1743,7 +1741,7 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {

     // Perform a search within a day.
     // This should return the metrics for the day
-    tr := TimeRange{
+    tr = TimeRange{
         MinTimestamp: int64(now - 2*msecPerHour - 1),
         MaxTimestamp: int64(now),
     }
@@ -1755,6 +1753,46 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
         t.Fatalf("expected %d time series for current day, got %d time series", metricsPerDay, len(matchedTSIDs))
     }
+
+    // Check SearchLabelNamesWithFiltersOnTimeRange with the specified filter.
+    lns, err = db.SearchLabelNamesWithFiltersOnTimeRange(nil, []*TagFilters{tfs}, TimeRange{}, 10000, 1e9, noDeadline)
+    if err != nil {
+        t.Fatalf("unexpected error in SearchLabelNamesWithFiltersOnTimeRange(filters=%s): %s", tfs, err)
+    }
+    sort.Strings(lns)
+    if !reflect.DeepEqual(lns, labelNames) {
+        t.Fatalf("unexpected labelNames; got\n%s\nwant\n%s", lns, labelNames)
+    }
+
+    // Check SearchLabelNamesWithFiltersOnTimeRange with the specified filter and time range.
+    lns, err = db.SearchLabelNamesWithFiltersOnTimeRange(nil, []*TagFilters{tfs}, tr, 10000, 1e9, noDeadline)
+    if err != nil {
+        t.Fatalf("unexpected error in SearchLabelNamesWithFiltersOnTimeRange(filters=%s, timeRange=%s): %s", tfs, &tr, err)
+    }
+    sort.Strings(lns)
+    if !reflect.DeepEqual(lns, labelNames) {
+        t.Fatalf("unexpected labelNames; got\n%s\nwant\n%s", lns, labelNames)
+    }
+
+    // Check SearchLabelValuesWithFiltersOnTimeRange with the specified filter.
+    lvs, err = db.SearchLabelValuesWithFiltersOnTimeRange(nil, "", []*TagFilters{tfs}, TimeRange{}, 10000, 1e9, noDeadline)
+    if err != nil {
+        t.Fatalf("unexpected error in SearchLabelValuesWithFiltersOnTimeRange(filters=%s): %s", tfs, err)
+    }
+    sort.Strings(lvs)
+    if !reflect.DeepEqual(lvs, labelValues) {
+        t.Fatalf("unexpected labelValues; got\n%s\nwant\n%s", lvs, labelValues)
+    }
+
+    // Check SearchLabelValuesWithFiltersOnTimeRange with the specified filter and time range.
+    lvs, err = db.SearchLabelValuesWithFiltersOnTimeRange(nil, "", []*TagFilters{tfs}, tr, 10000, 1e9, noDeadline)
+    if err != nil {
+        t.Fatalf("unexpected error in SearchLabelValuesWithFiltersOnTimeRange(filters=%s, timeRange=%s): %s", tfs, &tr, err)
+    }
+    sort.Strings(lvs)
+    if !reflect.DeepEqual(lvs, labelValues) {
+        t.Fatalf("unexpected labelValues; got\n%s\nwant\n%s", lvs, labelValues)
+    }
     // Perform a search across all the days, should match all metrics
     tr = TimeRange{
         MinTimestamp: int64(now - msecPerDay*days),
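The new checks above exercise the optional filters argument of the reworked lookup API. As a minimal sketch of the same call shape at the exported Storage level (the helper name and limits are illustrative, not part of the changeset):

```go
package storage

// labelNamesForFilter is an illustrative helper mirroring the new test above:
// a nil query tracer, an optional list of filters, a time range, and two
// limits (max label names to return, max series to scan per filter).
func labelNamesForFilter(s *Storage, tfs *TagFilters, tr TimeRange, deadline uint64) ([]string, error) {
    return s.SearchLabelNamesWithFiltersOnTimeRange(nil, []*TagFilters{tfs}, tr, 10000, 1e9, deadline)
}
```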
@@ -1770,7 +1808,7 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
     }

     // Check GetTSDBStatusWithFiltersForDate with nil filters.
-    status, err := db.GetTSDBStatusWithFiltersForDate(nil, baseDate, 5, 1e6, noDeadline)
+    status, err := db.GetTSDBStatusWithFiltersForDate(nil, nil, baseDate, 5, 1e6, noDeadline)
     if err != nil {
         t.Fatalf("error in GetTSDBStatusWithFiltersForDate with nil filters: %s", err)
     }
@@ -1841,12 +1879,12 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
         t.Fatalf("unexpected TotalLabelValuePairs; got %d; want %d", status.TotalLabelValuePairs, expectedLabelValuePairs)
     }

-    // Check GetTSDBStatusWithFiltersForDate
+    // Check GetTSDBStatusWithFiltersForDate with non-nil filter, which matches all the series
     tfs = NewTagFilters()
     if err := tfs.Add([]byte("day"), []byte("0"), false, false); err != nil {
         t.Fatalf("cannot add filter: %s", err)
     }
-    status, err = db.GetTSDBStatusWithFiltersForDate([]*TagFilters{tfs}, baseDate, 5, 1e6, noDeadline)
+    status, err = db.GetTSDBStatusWithFiltersForDate(nil, []*TagFilters{tfs}, baseDate, 5, 1e6, noDeadline)
     if err != nil {
         t.Fatalf("error in GetTSDBStatusWithFiltersForDate: %s", err)
     }
@@ -1870,12 +1908,39 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
     if status.TotalLabelValuePairs != expectedLabelValuePairs {
         t.Fatalf("unexpected TotalLabelValuePairs; got %d; want %d", status.TotalLabelValuePairs, expectedLabelValuePairs)
     }
-    // Check GetTSDBStatusWithFiltersForDate
+    // Check GetTSDBStatusWithFiltersForDate with nil filters, which matches all the series on a global time range
+    status, err = db.GetTSDBStatusWithFiltersForDate(nil, nil, 0, 5, 1e6, noDeadline)
+    if err != nil {
+        t.Fatalf("error in GetTSDBStatusWithFiltersForDate: %s", err)
+    }
+    if !status.hasEntries() {
+        t.Fatalf("expecting non-empty TSDB status")
+    }
+    expectedSeriesCountByMetricName = []TopHeapEntry{
+        {
+            Name:  "testMetric",
+            Count: 5000,
+        },
+    }
+    if !reflect.DeepEqual(status.SeriesCountByMetricName, expectedSeriesCountByMetricName) {
+        t.Fatalf("unexpected SeriesCountByMetricName;\ngot\n%v\nwant\n%v", status.SeriesCountByMetricName, expectedSeriesCountByMetricName)
+    }
+    expectedTotalSeries = 5000
+    if status.TotalSeries != expectedTotalSeries {
+        t.Fatalf("unexpected TotalSeries; got %d; want %d", status.TotalSeries, expectedTotalSeries)
+    }
+    expectedLabelValuePairs = 20000
+    if status.TotalLabelValuePairs != expectedLabelValuePairs {
+        t.Fatalf("unexpected TotalLabelValuePairs; got %d; want %d", status.TotalLabelValuePairs, expectedLabelValuePairs)
+    }
+
+    // Check GetTSDBStatusWithFiltersForDate with non-nil filter, which matches only 3 series
     tfs = NewTagFilters()
     if err := tfs.Add([]byte("uniqueid"), []byte("0|1|3"), false, true); err != nil {
         t.Fatalf("cannot add filter: %s", err)
     }
-    status, err = db.GetTSDBStatusWithFiltersForDate([]*TagFilters{tfs}, baseDate, 5, 1e6, noDeadline)
+    status, err = db.GetTSDBStatusWithFiltersForDate(nil, []*TagFilters{tfs}, baseDate, 5, 1e6, noDeadline)
     if err != nil {
         t.Fatalf("error in GetTSDBStatusWithFiltersForDate: %s", err)
     }
@@ -1899,6 +1964,32 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
     if status.TotalLabelValuePairs != expectedLabelValuePairs {
         t.Fatalf("unexpected TotalLabelValuePairs; got %d; want %d", status.TotalLabelValuePairs, expectedLabelValuePairs)
     }
+
+    // Check GetTSDBStatusWithFiltersForDate with non-nil filter on global time range, which matches only 15 series
+    status, err = db.GetTSDBStatusWithFiltersForDate(nil, []*TagFilters{tfs}, 0, 5, 1e6, noDeadline)
+    if err != nil {
+        t.Fatalf("error in GetTSDBStatusWithFiltersForDate: %s", err)
+    }
+    if !status.hasEntries() {
+        t.Fatalf("expecting non-empty TSDB status")
+    }
+    expectedSeriesCountByMetricName = []TopHeapEntry{
+        {
+            Name:  "testMetric",
+            Count: 15,
+        },
+    }
+    if !reflect.DeepEqual(status.SeriesCountByMetricName, expectedSeriesCountByMetricName) {
+        t.Fatalf("unexpected SeriesCountByMetricName;\ngot\n%v\nwant\n%v", status.SeriesCountByMetricName, expectedSeriesCountByMetricName)
+    }
+    expectedTotalSeries = 15
+    if status.TotalSeries != expectedTotalSeries {
+        t.Fatalf("unexpected TotalSeries; got %d; want %d", status.TotalSeries, expectedTotalSeries)
+    }
+    expectedLabelValuePairs = 60
+    if status.TotalLabelValuePairs != expectedLabelValuePairs {
+        t.Fatalf("unexpected TotalLabelValuePairs; got %d; want %d", status.TotalLabelValuePairs, expectedLabelValuePairs)
+    }
 }

 func toTFPointers(tfs []tagFilter) []*tagFilter {

lib/storage/storage.go

@@ -1061,7 +1061,7 @@ func nextRetentionDuration(retentionMsecs int64) time.Duration {

 // SearchMetricNames returns metric names matching the given tfss on the given tr.
 func (s *Storage) SearchMetricNames(qt *querytracer.Tracer, tfss []*TagFilters, tr TimeRange, maxMetrics int, deadline uint64) ([]MetricName, error) {
-    qt = qt.NewChild("search for matching metric names")
+    qt = qt.NewChild("search for matching metric names: filters=%s, timeRange=%s", tfss, &tr)
     defer qt.Done()
     tsids, err := s.searchTSIDs(qt, tfss, tr, maxMetrics, deadline)
     if err != nil {
@@ -1104,7 +1104,7 @@ func (s *Storage) SearchMetricNames(qt *querytracer.Tracer, tfss []*TagFilters,

 // searchTSIDs returns sorted TSIDs for the given tfss and the given tr.
 func (s *Storage) searchTSIDs(qt *querytracer.Tracer, tfss []*TagFilters, tr TimeRange, maxMetrics int, deadline uint64) ([]TSID, error) {
-    qt = qt.NewChild("search for matching series ids")
+    qt = qt.NewChild("search for matching tsids: filters=%s, timeRange=%s", tfss, &tr)
     defer qt.Done()
     // Do not cache tfss -> tsids here, since the caching is performed
     // on idb level.
@@ -1154,7 +1154,7 @@ var (
 //
 // This should speed-up further searchMetricNameWithCache calls for metricIDs from tsids.
 func (s *Storage) prefetchMetricNames(qt *querytracer.Tracer, tsids []TSID, deadline uint64) error {
-    qt = qt.NewChild("prefetch metric names for %d series ids", len(tsids))
+    qt = qt.NewChild("prefetch metric names for %d tsids", len(tsids))
     defer qt.Done()
     if len(tsids) == 0 {
         qt.Printf("nothing to prefetch")
@@ -1263,24 +1263,15 @@ func (s *Storage) DeleteMetrics(tfss []*TagFilters) (int, error) {
     return deletedCount, nil
 }

-// SearchTagKeysOnTimeRange searches for tag keys on tr.
-func (s *Storage) SearchTagKeysOnTimeRange(tr TimeRange, maxTagKeys int, deadline uint64) ([]string, error) {
-    return s.idb().SearchTagKeysOnTimeRange(tr, maxTagKeys, deadline)
+// SearchLabelNamesWithFiltersOnTimeRange searches for label names matching the given tfss on tr.
+func (s *Storage) SearchLabelNamesWithFiltersOnTimeRange(qt *querytracer.Tracer, tfss []*TagFilters, tr TimeRange, maxLabelNames, maxMetrics int, deadline uint64) ([]string, error) {
+    return s.idb().SearchLabelNamesWithFiltersOnTimeRange(qt, tfss, tr, maxLabelNames, maxMetrics, deadline)
 }

-// SearchTagKeys searches for tag keys
-func (s *Storage) SearchTagKeys(maxTagKeys int, deadline uint64) ([]string, error) {
-    return s.idb().SearchTagKeys(maxTagKeys, deadline)
-}
-
-// SearchTagValuesOnTimeRange searches for tag values for the given tagKey on tr.
-func (s *Storage) SearchTagValuesOnTimeRange(tagKey []byte, tr TimeRange, maxTagValues int, deadline uint64) ([]string, error) {
-    return s.idb().SearchTagValuesOnTimeRange(tagKey, tr, maxTagValues, deadline)
-}
-
-// SearchTagValues searches for tag values for the given tagKey
-func (s *Storage) SearchTagValues(tagKey []byte, maxTagValues int, deadline uint64) ([]string, error) {
-    return s.idb().SearchTagValues(tagKey, maxTagValues, deadline)
+// SearchLabelValuesWithFiltersOnTimeRange searches for label values for the given labelName, filters and tr.
+func (s *Storage) SearchLabelValuesWithFiltersOnTimeRange(qt *querytracer.Tracer, labelName string, tfss []*TagFilters,
+    tr TimeRange, maxLabelValues, maxMetrics int, deadline uint64) ([]string, error) {
+    return s.idb().SearchLabelValuesWithFiltersOnTimeRange(qt, labelName, tfss, tr, maxLabelValues, maxMetrics, deadline)
 }

 // SearchTagValueSuffixes returns all the tag value suffixes for the given tagKey and tagValuePrefix on the given tr.
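For callers migrating off the removed helpers, a minimal sketch of the equivalent calls through the new combined API (the helper names and limits below are illustrative, not part of the codebase; a nil tracer, nil filters and a zero TimeRange appear to reproduce the old "everything, all time" behaviour):

```go
package storage

// listLabelNames stands in for the old s.SearchTagKeys(maxTagKeys, deadline):
// nil tracer, nil filters and a zero TimeRange keep the old semantics, while
// the extra maxMetrics argument (1e6 here) bounds the number of series scanned.
func listLabelNames(s *Storage, deadline uint64) ([]string, error) {
    return s.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, TimeRange{}, 100, 1e6, deadline)
}

// listLabelValues stands in for the old s.SearchTagValues(tagKey, maxTagValues, deadline);
// labelName is now a string, and "__name__" replaces the old empty key for metric names.
func listLabelValues(s *Storage, labelName string, deadline uint64) ([]string, error) {
    return s.SearchLabelValuesWithFiltersOnTimeRange(nil, labelName, nil, TimeRange{}, 100, 1e6, deadline)
}
```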
@@ -1468,39 +1459,6 @@ func getRegexpPartsForGraphiteQuery(q string) ([]string, string) {
     }
 }

-// SearchTagEntries returns a list of (tagName -> tagValues)
-func (s *Storage) SearchTagEntries(maxTagKeys, maxTagValues int, deadline uint64) ([]TagEntry, error) {
-    idb := s.idb()
-    keys, err := idb.SearchTagKeys(maxTagKeys, deadline)
-    if err != nil {
-        return nil, fmt.Errorf("cannot search tag keys: %w", err)
-    }
-
-    // Sort keys for faster seeks below
-    sort.Strings(keys)
-
-    tes := make([]TagEntry, len(keys))
-    for i, key := range keys {
-        values, err := idb.SearchTagValues([]byte(key), maxTagValues, deadline)
-        if err != nil {
-            return nil, fmt.Errorf("cannot search values for tag %q: %w", key, err)
-        }
-        te := &tes[i]
-        te.Key = key
-        te.Values = values
-    }
-    return tes, nil
-}
-
-// TagEntry contains (tagName -> tagValues) mapping
-type TagEntry struct {
-    // Key is tagName
-    Key string
-
-    // Values contains all the values for Key.
-    Values []string
-}
-
 // GetSeriesCount returns the approximate number of unique time series.
 //
 // It includes the deleted series too and may count the same series
@@ -1510,8 +1468,8 @@ func (s *Storage) GetSeriesCount(deadline uint64) (uint64, error) {
 }

 // GetTSDBStatusWithFiltersForDate returns TSDB status data for /api/v1/status/tsdb with match[] filters.
-func (s *Storage) GetTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint64, topN, maxMetrics int, deadline uint64) (*TSDBStatus, error) {
-    return s.idb().GetTSDBStatusWithFiltersForDate(tfss, date, topN, maxMetrics, deadline)
+func (s *Storage) GetTSDBStatusWithFiltersForDate(qt *querytracer.Tracer, tfss []*TagFilters, date uint64, topN, maxMetrics int, deadline uint64) (*TSDBStatus, error) {
+    return s.idb().GetTSDBStatusWithFiltersForDate(qt, tfss, date, topN, maxMetrics, deadline)
 }

 // MetricRow is a metric to insert into storage.
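A hedged sketch of the reworked call, based on the tests above (the wrapper name is illustrative): the new first argument is an optional query tracer, and the tests pass date=0 to request status over the global time range rather than a single day.

```go
package storage

// tsdbStatusGlobal is an illustrative wrapper, not part of the codebase:
// nil tracer and filters, date=0 for the global time range (as exercised by
// the tests above), topN=5, and at most 1e6 series scanned per filter.
func tsdbStatusGlobal(s *Storage, deadline uint64) (*TSDBStatus, error) {
    return s.GetTSDBStatusWithFiltersForDate(nil, nil, 0, 5, 1e6, deadline)
}
```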

lib/storage/storage_test.go

@@ -502,13 +502,13 @@ func TestStorageDeleteMetrics(t *testing.T) {
         t.Fatalf("cannot open storage: %s", err)
     }

-    // Verify no tag keys exist
-    tks, err := s.SearchTagKeys(1e5, noDeadline)
+    // Verify no label names exist
+    lns, err := s.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, TimeRange{}, 1e5, 1e9, noDeadline)
     if err != nil {
-        t.Fatalf("error in SearchTagKeys at the start: %s", err)
+        t.Fatalf("error in SearchLabelNamesWithFiltersOnTimeRange() at the start: %s", err)
     }
-    if len(tks) != 0 {
-        t.Fatalf("found non-empty tag keys at the start: %q", tks)
+    if len(lns) != 0 {
+        t.Fatalf("found non-empty tag keys at the start: %q", lns)
     }

     t.Run("serial", func(t *testing.T) {
@@ -554,12 +554,12 @@ func TestStorageDeleteMetrics(t *testing.T) {
     })

     // Verify no more tag keys exist
-    tks, err = s.SearchTagKeys(1e5, noDeadline)
+    lns, err = s.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, TimeRange{}, 1e5, 1e9, noDeadline)
     if err != nil {
-        t.Fatalf("error in SearchTagKeys after the test: %s", err)
+        t.Fatalf("error in SearchLabelNamesWithFiltersOnTimeRange after the test: %s", err)
     }
-    if len(tks) != 0 {
-        t.Fatalf("found non-empty tag keys after the test: %q", tks)
+    if len(lns) != 0 {
+        t.Fatalf("found non-empty tag keys after the test: %q", lns)
     }

     s.MustClose()
@@ -574,8 +574,8 @@ func testStorageDeleteMetrics(s *Storage, workerNum int) error {
     workerTag := []byte(fmt.Sprintf("workerTag_%d", workerNum))

-    tksAll := make(map[string]bool)
-    tksAll[""] = true // __name__
+    lnsAll := make(map[string]bool)
+    lnsAll["__name__"] = true
     for i := 0; i < metricsCount; i++ {
         var mrs []MetricRow
         var mn MetricName
@@ -587,7 +587,7 @@ func testStorageDeleteMetrics(s *Storage, workerNum int) error {
             {workerTag, []byte("foobar")},
         }
         for i := range mn.Tags {
-            tksAll[string(mn.Tags[i].Key)] = true
+            lnsAll[string(mn.Tags[i].Key)] = true
         }
         mn.MetricGroup = []byte(fmt.Sprintf("metric_%d_%d", i, workerNum))
         metricNameRaw := mn.marshalRaw(nil)
@@ -610,21 +610,21 @@ func testStorageDeleteMetrics(s *Storage, workerNum int) error {
     s.DebugFlush()

     // Verify tag values exist
-    tvs, err := s.SearchTagValues(workerTag, 1e5, noDeadline)
+    tvs, err := s.SearchLabelValuesWithFiltersOnTimeRange(nil, string(workerTag), nil, TimeRange{}, 1e5, 1e9, noDeadline)
     if err != nil {
-        return fmt.Errorf("error in SearchTagValues before metrics removal: %w", err)
+        return fmt.Errorf("error in SearchLabelValuesWithFiltersOnTimeRange before metrics removal: %w", err)
     }
     if len(tvs) == 0 {
         return fmt.Errorf("unexpected empty number of tag values for workerTag")
     }

     // Verify tag keys exist
-    tks, err := s.SearchTagKeys(1e5, noDeadline)
+    lns, err := s.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, TimeRange{}, 1e5, 1e9, noDeadline)
     if err != nil {
-        return fmt.Errorf("error in SearchTagKeys before metrics removal: %w", err)
+        return fmt.Errorf("error in SearchLabelNamesWithFiltersOnTimeRange before metrics removal: %w", err)
     }
-    if err := checkTagKeys(tks, tksAll); err != nil {
-        return fmt.Errorf("unexpected tag keys before metrics removal: %w", err)
+    if err := checkLabelNames(lns, lnsAll); err != nil {
+        return fmt.Errorf("unexpected label names before metrics removal: %w", err)
     }

     var sr Search
@@ -683,9 +683,9 @@ func testStorageDeleteMetrics(s *Storage, workerNum int) error {
     if n := metricBlocksCount(tfs); n != 0 {
         return fmt.Errorf("expecting zero metric blocks after deleting all the metrics; got %d blocks", n)
     }
-    tvs, err = s.SearchTagValues(workerTag, 1e5, noDeadline)
+    tvs, err = s.SearchLabelValuesWithFiltersOnTimeRange(nil, string(workerTag), nil, TimeRange{}, 1e5, 1e9, noDeadline)
     if err != nil {
-        return fmt.Errorf("error in SearchTagValues after all the metrics are removed: %w", err)
+        return fmt.Errorf("error in SearchLabelValuesWithFiltersOnTimeRange after all the metrics are removed: %w", err)
     }
     if len(tvs) != 0 {
         return fmt.Errorf("found non-empty tag values for %q after metrics removal: %q", workerTag, tvs)
@@ -694,21 +694,21 @@ func testStorageDeleteMetrics(s *Storage, workerNum int) error {
     return nil
 }

-func checkTagKeys(tks []string, tksExpected map[string]bool) error {
-    if len(tks) < len(tksExpected) {
-        return fmt.Errorf("unexpected number of tag keys found; got %d; want at least %d; tks=%q, tksExpected=%v", len(tks), len(tksExpected), tks, tksExpected)
+func checkLabelNames(lns []string, lnsExpected map[string]bool) error {
+    if len(lns) < len(lnsExpected) {
+        return fmt.Errorf("unexpected number of label names found; got %d; want at least %d; lns=%q, lnsExpected=%v", len(lns), len(lnsExpected), lns, lnsExpected)
     }
-    hasItem := func(k string, tks []string) bool {
-        for _, kk := range tks {
-            if k == kk {
+    hasItem := func(s string, lns []string) bool {
+        for _, labelName := range lns {
+            if s == labelName {
                 return true
             }
         }
         return false
     }
-    for k := range tksExpected {
-        if !hasItem(k, tks) {
-            return fmt.Errorf("cannot find %q in tag keys %q", k, tks)
+    for labelName := range lnsExpected {
+        if !hasItem(labelName, lns) {
+            return fmt.Errorf("cannot find %q in label names %q", labelName, lns)
         }
     }
     return nil
@@ -796,23 +796,23 @@ func testStorageRegisterMetricNames(s *Storage) error {
     // Verify the storage contains the added metric names.
     s.DebugFlush()

-    // Verify that SearchTagKeys returns correct result.
-    tksExpected := []string{
-        "",
+    // Verify that SearchLabelNamesWithFiltersOnTimeRange returns correct result.
+    lnsExpected := []string{
+        "__name__",
         "add_id",
         "instance",
         "job",
     }
-    tks, err := s.SearchTagKeys(100, noDeadline)
+    lns, err := s.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, TimeRange{}, 100, 1e9, noDeadline)
     if err != nil {
-        return fmt.Errorf("error in SearchTagKeys: %w", err)
+        return fmt.Errorf("error in SearchLabelNamesWithFiltersOnTimeRange: %w", err)
     }
-    sort.Strings(tks)
-    if !reflect.DeepEqual(tks, tksExpected) {
-        return fmt.Errorf("unexpected tag keys returned from SearchTagKeys;\ngot\n%q\nwant\n%q", tks, tksExpected)
+    sort.Strings(lns)
+    if !reflect.DeepEqual(lns, lnsExpected) {
+        return fmt.Errorf("unexpected label names returned from SearchLabelNamesWithFiltersOnTimeRange;\ngot\n%q\nwant\n%q", lns, lnsExpected)
     }

-    // Verify that SearchTagKeysOnTimeRange returns correct result.
+    // Verify that SearchLabelNamesWithFiltersOnTimeRange with the specified time range returns correct result.
     now := timestampFromTime(time.Now())
     start := now - msecPerDay
     end := now + 60*1000
@@ -820,33 +820,33 @@ func testStorageRegisterMetricNames(s *Storage) error {
         MinTimestamp: start,
         MaxTimestamp: end,
     }
-    tks, err = s.SearchTagKeysOnTimeRange(tr, 100, noDeadline)
+    lns, err = s.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, tr, 100, 1e9, noDeadline)
     if err != nil {
-        return fmt.Errorf("error in SearchTagKeysOnTimeRange: %w", err)
+        return fmt.Errorf("error in SearchLabelNamesWithFiltersOnTimeRange: %w", err)
     }
-    sort.Strings(tks)
-    if !reflect.DeepEqual(tks, tksExpected) {
-        return fmt.Errorf("unexpected tag keys returned from SearchTagKeysOnTimeRange;\ngot\n%q\nwant\n%q", tks, tksExpected)
+    sort.Strings(lns)
+    if !reflect.DeepEqual(lns, lnsExpected) {
+        return fmt.Errorf("unexpected label names returned from SearchLabelNamesWithFiltersOnTimeRange;\ngot\n%q\nwant\n%q", lns, lnsExpected)
     }

-    // Verify that SearchTagValues returns correct result.
-    addIDs, err := s.SearchTagValues([]byte("add_id"), addsCount+100, noDeadline)
+    // Verify that SearchLabelValuesWithFiltersOnTimeRange returns correct result.
+    addIDs, err := s.SearchLabelValuesWithFiltersOnTimeRange(nil, "add_id", nil, TimeRange{}, addsCount+100, 1e9, noDeadline)
     if err != nil {
-        return fmt.Errorf("error in SearchTagValues: %w", err)
+        return fmt.Errorf("error in SearchLabelValuesWithFiltersOnTimeRange: %w", err)
     }
     sort.Strings(addIDs)
     if !reflect.DeepEqual(addIDs, addIDsExpected) {
-        return fmt.Errorf("unexpected tag values returned from SearchTagValues;\ngot\n%q\nwant\n%q", addIDs, addIDsExpected)
+        return fmt.Errorf("unexpected tag values returned from SearchLabelValuesWithFiltersOnTimeRange;\ngot\n%q\nwant\n%q", addIDs, addIDsExpected)
     }

-    // Verify that SearchTagValuesOnTimeRange returns correct result.
-    addIDs, err = s.SearchTagValuesOnTimeRange([]byte("add_id"), tr, addsCount+100, noDeadline)
+    // Verify that SearchLabelValuesWithFiltersOnTimeRange with the specified time range returns correct result.
+    addIDs, err = s.SearchLabelValuesWithFiltersOnTimeRange(nil, "add_id", nil, tr, addsCount+100, 1e9, noDeadline)
     if err != nil {
-        return fmt.Errorf("error in SearchTagValuesOnTimeRange: %w", err)
+        return fmt.Errorf("error in SearchLabelValuesWithFiltersOnTimeRange: %w", err)
     }
     sort.Strings(addIDs)
     if !reflect.DeepEqual(addIDs, addIDsExpected) {
-        return fmt.Errorf("unexpected tag values returned from SearchTagValuesOnTimeRange;\ngot\n%q\nwant\n%q", addIDs, addIDsExpected)
+        return fmt.Errorf("unexpected tag values returned from SearchLabelValuesWithFiltersOnTimeRange;\ngot\n%q\nwant\n%q", addIDs, addIDsExpected)
     }

     // Verify that SearchMetricNames returns correct result.

lib/storage/tag_filters.go

@@ -211,16 +211,11 @@ func (tfs *TagFilters) addTagFilter() *tagFilter {

 // String returns human-readable value for tfs.
 func (tfs *TagFilters) String() string {
-    if len(tfs.tfs) == 0 {
-        return "{}"
+    a := make([]string, 0, len(tfs.tfs))
+    for _, tf := range tfs.tfs {
+        a = append(a, tf.String())
     }
-    var bb bytes.Buffer
-    fmt.Fprintf(&bb, "{%s", tfs.tfs[0].String())
-    for i := range tfs.tfs[1:] {
-        fmt.Fprintf(&bb, ", %s", tfs.tfs[i+1].String())
-    }
-    fmt.Fprintf(&bb, "}")
-    return bb.String()
+    return fmt.Sprintf("{%s}", strings.Join(a, ","))
 }

 // Reset resets the tf
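As a standalone illustration of the simplification (this snippet is not part of the changeset), strings.Join makes the empty-filter case fall out naturally, which is why the explicit "{}" branch and the bytes.Buffer bookkeeping could be dropped:

```go
package main

import (
    "fmt"
    "strings"
)

// Joining zero elements yields "", so "{%s}" prints "{}" without a special case,
// and non-empty lists are rendered with "," separators (no space, matching the
// updated test expectation below).
func main() {
    filters := []string{`__name__="metric_name"`, `tag_re=~"re.value"`}
    fmt.Printf("{%s}\n", strings.Join(filters, ",")) // {__name__="metric_name",tag_re=~"re.value"}
    fmt.Printf("{%s}\n", strings.Join(nil, ","))     // {}
}
```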
@@ -305,6 +300,16 @@ func (tf *tagFilter) String() string {
     } else if tf.isRegexp {
         op = "=~"
     }
+    if bytes.Equal(tf.key, graphiteReverseTagKey) {
+        return fmt.Sprintf("__graphite_reverse__%s%q", op, tf.value)
+    }
+    if tf.isComposite() {
+        metricName, key, err := unmarshalCompositeTagKey(tf.key)
+        if err != nil {
+            logger.Panicf("BUG: cannot unmarshal composite tag key: %s", err)
+        }
+        return fmt.Sprintf("composite(%s,%s)%s%q", metricName, key, op, tf.value)
+    }
     key := tf.key
     if len(key) == 0 {
         key = []byte("__name__")

lib/storage/tag_filters_test.go

@@ -1263,7 +1263,7 @@ func TestTagFiltersString(t *testing.T) {
     mustAdd("tag_n", "n_value", true, false)
     mustAdd("tag_re_graphite", "foo\\.bar", false, true)
     s := tfs.String()
-    sExpected := `{__name__="metric_name", tag_re=~"re.value", tag_nre!~"nre.value", tag_n!="n_value", tag_re_graphite="foo.bar"}`
+    sExpected := `{__name__="metric_name",tag_re=~"re.value",tag_nre!~"nre.value",tag_n!="n_value",tag_re_graphite="foo.bar"}`
     if s != sExpected {
         t.Fatalf("unexpected TagFilters.String(); got %q; want %q", s, sExpected)
     }

vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go

@@ -3301,6 +3301,13 @@ var awsPartition = partition{
             },
         },
     },
+    "catalog.marketplace": service{
+        Endpoints: serviceEndpoints{
+            endpointKey{
+                Region: "us-east-1",
+            }: endpoint{},
+        },
+    },
     "ce": service{
         PartitionEndpoint: "aws-global",
         IsRegionalized:    boxedFalse,
@@ -7568,6 +7575,9 @@ var awsPartition = partition{
         endpointKey{
             Region: "ap-southeast-2",
         }: endpoint{},
+        endpointKey{
+            Region: "ap-southeast-3",
+        }: endpoint{},
         endpointKey{
             Region: "ca-central-1",
         }: endpoint{},
@@ -11213,6 +11223,9 @@ var awsPartition = partition{
         endpointKey{
             Region: "ap-southeast-2",
         }: endpoint{},
+        endpointKey{
+            Region: "ca-central-1",
+        }: endpoint{},
         endpointKey{
             Region: "eu-central-1",
         }: endpoint{},
@@ -11275,6 +11288,14 @@ var awsPartition = partition{
                 Region: "ap-southeast-2",
             },
         },
+        endpointKey{
+            Region: "ca-central-1",
+        }: endpoint{
+            Hostname: "data.iotevents.ca-central-1.amazonaws.com",
+            CredentialScope: credentialScope{
+                Region: "ca-central-1",
+            },
+        },
         endpointKey{
             Region: "eu-central-1",
         }: endpoint{
@@ -11483,12 +11504,30 @@ var awsPartition = partition{
         endpointKey{
             Region: "ap-southeast-2",
         }: endpoint{},
+        endpointKey{
+            Region: "ca-central-1",
+        }: endpoint{},
+        endpointKey{
+            Region:  "ca-central-1",
+            Variant: fipsVariant,
+        }: endpoint{
+            Hostname: "iotsitewise-fips.ca-central-1.amazonaws.com",
+        },
         endpointKey{
             Region: "eu-central-1",
         }: endpoint{},
         endpointKey{
             Region: "eu-west-1",
         }: endpoint{},
+        endpointKey{
+            Region: "fips-ca-central-1",
+        }: endpoint{
+            Hostname: "iotsitewise-fips.ca-central-1.amazonaws.com",
+            CredentialScope: credentialScope{
+                Region: "ca-central-1",
+            },
+            Deprecated: boxedTrue,
+        },
         endpointKey{
             Region: "fips-us-east-1",
         }: endpoint{
@@ -11498,6 +11537,15 @@ var awsPartition = partition{
             },
             Deprecated: boxedTrue,
         },
+        endpointKey{
+            Region: "fips-us-east-2",
+        }: endpoint{
+            Hostname: "iotsitewise-fips.us-east-2.amazonaws.com",
+            CredentialScope: credentialScope{
+                Region: "us-east-2",
+            },
+            Deprecated: boxedTrue,
+        },
         endpointKey{
             Region: "fips-us-west-2",
         }: endpoint{
@@ -11516,6 +11564,15 @@ var awsPartition = partition{
         }: endpoint{
             Hostname: "iotsitewise-fips.us-east-1.amazonaws.com",
         },
+        endpointKey{
+            Region: "us-east-2",
+        }: endpoint{},
+        endpointKey{
+            Region:  "us-east-2",
+            Variant: fipsVariant,
+        }: endpoint{
+            Hostname: "iotsitewise-fips.us-east-2.amazonaws.com",
+        },
         endpointKey{
             Region: "us-west-2",
         }: endpoint{},
@@ -13259,6 +13316,31 @@ var awsPartition = partition{
         }: endpoint{},
     },
 },
+"m2": service{
+    Endpoints: serviceEndpoints{
+        endpointKey{
+            Region: "ap-southeast-2",
+        }: endpoint{},
+        endpointKey{
+            Region: "ca-central-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "eu-central-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "eu-west-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "sa-east-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "us-east-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "us-west-2",
+        }: endpoint{},
+    },
+},
 "machinelearning": service{
     Endpoints: serviceEndpoints{
         endpointKey{
@@ -13956,6 +14038,66 @@ var awsPartition = partition{
         },
     },
 },
+"memory-db": service{
+    Endpoints: serviceEndpoints{
+        endpointKey{
+            Region: "ap-east-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "ap-northeast-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "ap-northeast-2",
+        }: endpoint{},
+        endpointKey{
+            Region: "ap-south-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "ap-southeast-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "ap-southeast-2",
+        }: endpoint{},
+        endpointKey{
+            Region: "ca-central-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "eu-central-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "eu-north-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "eu-west-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "eu-west-2",
+        }: endpoint{},
+        endpointKey{
+            Region: "fips",
+        }: endpoint{
+            Hostname: "memory-db-fips.us-west-1.amazonaws.com",
+            CredentialScope: credentialScope{
+                Region: "us-west-1",
+            },
+        },
+        endpointKey{
+            Region: "sa-east-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "us-east-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "us-east-2",
+        }: endpoint{},
+        endpointKey{
+            Region: "us-west-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "us-west-2",
+        }: endpoint{},
+    },
+},
 "messaging-chime": service{
     Endpoints: serviceEndpoints{
         endpointKey{
@@ -18746,6 +18888,9 @@ var awsPartition = partition{
         endpointKey{
             Region: "ap-southeast-2",
         }: endpoint{},
+        endpointKey{
+            Region: "ap-southeast-3",
+        }: endpoint{},
         endpointKey{
             Region: "ca-central-1",
         }: endpoint{},
@@ -18972,6 +19117,9 @@ var awsPartition = partition{
         endpointKey{
             Region: "ap-southeast-2",
         }: endpoint{},
+        endpointKey{
+            Region: "ap-southeast-3",
+        }: endpoint{},
         endpointKey{
             Region: "ca-central-1",
         }: endpoint{},
@@ -24030,6 +24178,16 @@ var awscnPartition = partition{
         },
     },
 },
+"memory-db": service{
+    Endpoints: serviceEndpoints{
+        endpointKey{
+            Region: "cn-north-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "cn-northwest-1",
+        }: endpoint{},
+    },
+},
 "monitoring": service{
     Defaults: endpointDefaults{
         defaultKey{}: endpoint{
@@ -28582,42 +28740,12 @@ var awsusgovPartition = partition{
         },
     },
     Endpoints: serviceEndpoints{
-        endpointKey{
-            Region: "fips-us-gov-east-1",
-        }: endpoint{
-            Hostname: "servicecatalog-appregistry.us-gov-east-1.amazonaws.com",
-            CredentialScope: credentialScope{
-                Region: "us-gov-east-1",
-            },
-            Deprecated: boxedTrue,
-        },
-        endpointKey{
-            Region: "fips-us-gov-west-1",
-        }: endpoint{
-            Hostname: "servicecatalog-appregistry.us-gov-west-1.amazonaws.com",
-            CredentialScope: credentialScope{
-                Region: "us-gov-west-1",
-            },
-            Deprecated: boxedTrue,
-        },
         endpointKey{
             Region: "us-gov-east-1",
         }: endpoint{},
-        endpointKey{
-            Region:  "us-gov-east-1",
-            Variant: fipsVariant,
-        }: endpoint{
-            Hostname: "servicecatalog-appregistry.us-gov-east-1.amazonaws.com",
-        },
         endpointKey{
             Region: "us-gov-west-1",
         }: endpoint{},
-        endpointKey{
-            Region:  "us-gov-west-1",
-            Variant: fipsVariant,
-        }: endpoint{
-            Hostname: "servicecatalog-appregistry.us-gov-west-1.amazonaws.com",
-        },
     },
 },
 "servicediscovery": service{
@@ -29339,6 +29467,16 @@ var awsusgovPartition = partition{
         },
     },
 },
+"transcribestreaming": service{
+    Endpoints: serviceEndpoints{
+        endpointKey{
+            Region: "us-gov-east-1",
+        }: endpoint{},
+        endpointKey{
+            Region: "us-gov-west-1",
+        }: endpoint{},
+    },
+},
 "transfer": service{
     Endpoints: serviceEndpoints{
         endpointKey{
@@ -30820,5 +30958,12 @@ var awsisobPartition = partition{
         }: endpoint{},
     },
 },
+"workspaces": service{
+    Endpoints: serviceEndpoints{
+        endpointKey{
+            Region: "us-isob-east-1",
+        }: endpoint{},
+    },
+},
 },
 }

vendor/github.com/aws/aws-sdk-go/aws/version.go

@@ -5,4 +5,4 @@ package aws
 const SDKName = "aws-sdk-go"

 // SDKVersion is the version of this SDK
-const SDKVersion = "1.44.27"
+const SDKVersion = "1.44.32"

vendor/golang.org/x/oauth2/google/default.go

@@ -190,6 +190,7 @@ func CredentialsFromJSONWithParams(ctx context.Context, jsonData []byte, params
     if err != nil {
         return nil, err
     }
+    ts = newErrWrappingTokenSource(ts)
     return &DefaultCredentials{
         ProjectID:   f.ProjectID,
         TokenSource: ts,

vendor/golang.org/x/oauth2/google/error.go (new file)

@@ -0,0 +1,64 @@
+// Copyright 2022 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package google
+
+import (
+    "errors"
+
+    "golang.org/x/oauth2"
+)
+
+// AuthenticationError indicates there was an error in the authentication flow.
+//
+// Use (*AuthenticationError).Temporary to check if the error can be retried.
+type AuthenticationError struct {
+    err *oauth2.RetrieveError
+}
+
+func newAuthenticationError(err error) error {
+    re := &oauth2.RetrieveError{}
+    if !errors.As(err, &re) {
+        return err
+    }
+    return &AuthenticationError{
+        err: re,
+    }
+}
+
+// Temporary indicates that the network error has one of the following status codes and may be retried: 500, 503, 408, or 429.
+func (e *AuthenticationError) Temporary() bool {
+    if e.err.Response == nil {
+        return false
+    }
+    sc := e.err.Response.StatusCode
+    return sc == 500 || sc == 503 || sc == 408 || sc == 429
+}
+
+func (e *AuthenticationError) Error() string {
+    return e.err.Error()
+}
+
+func (e *AuthenticationError) Unwrap() error {
+    return e.err
+}
+
+type errWrappingTokenSource struct {
+    src oauth2.TokenSource
+}
+
+func newErrWrappingTokenSource(ts oauth2.TokenSource) oauth2.TokenSource {
+    return &errWrappingTokenSource{src: ts}
+}
+
+// Token returns the current token if it's still valid, else will
+// refresh the current token (using r.Context for HTTP client
+// information) and return the new one.
+func (s *errWrappingTokenSource) Token() (*oauth2.Token, error) {
+    t, err := s.src.Token()
+    if err != nil {
+        return nil, newAuthenticationError(err)
+    }
+    return t, nil
+}
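A usage sketch (not part of the change) showing how a caller might take advantage of the new error type; the helper name, retry policy and scope below are illustrative, and it assumes credentials obtained via CredentialsFromJSON route through the wrapping added above:

```go
package example

import (
    "context"
    "errors"
    "time"

    "golang.org/x/oauth2/google"
)

// tokenWithRetry retries token fetches that fail with a retryable HTTP status
// (500, 503, 408, 429), as reported by (*AuthenticationError).Temporary.
func tokenWithRetry(ctx context.Context, jsonKey []byte) error {
    creds, err := google.CredentialsFromJSON(ctx, jsonKey, "https://www.googleapis.com/auth/cloud-platform")
    if err != nil {
        return err
    }
    for attempt := 0; attempt < 3; attempt++ {
        _, err = creds.TokenSource.Token()
        var ae *google.AuthenticationError
        if errors.As(err, &ae) && ae.Temporary() {
            time.Sleep(time.Second << attempt) // simple exponential backoff
            continue
        }
        return err // nil on success, or a non-retryable error
    }
    return err
}
```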

vendor/golang.org/x/oauth2/google/jwt.go

@@ -66,7 +66,8 @@ func newJWTSource(jsonKey []byte, audience string, scopes []string) (oauth2.Toke
     if err != nil {
         return nil, err
     }
-    return oauth2.ReuseTokenSource(tok, ts), nil
+    rts := newErrWrappingTokenSource(oauth2.ReuseTokenSource(tok, ts))
+    return rts, nil
 }

 type jwtAccessTokenSource struct {

vendor/golang.org/x/sys/unix/syscall_solaris.go

@@ -618,6 +618,7 @@ func Sendfile(outfd int, infd int, offset *int64, count int) (written int, err e
 //sys	Getpriority(which int, who int) (n int, err error)
 //sysnb	Getrlimit(which int, lim *Rlimit) (err error)
 //sysnb	Getrusage(who int, rusage *Rusage) (err error)
+//sysnb	Getsid(pid int) (sid int, err error)
 //sysnb	Gettimeofday(tv *Timeval) (err error)
 //sysnb	Getuid() (uid int)
 //sys	Kill(pid int, signum syscall.Signal) (err error)

vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go

@@ -66,6 +66,7 @@ import (
 //go:cgo_import_dynamic libc_getpriority getpriority "libc.so"
 //go:cgo_import_dynamic libc_getrlimit getrlimit "libc.so"
 //go:cgo_import_dynamic libc_getrusage getrusage "libc.so"
+//go:cgo_import_dynamic libc_getsid getsid "libc.so"
 //go:cgo_import_dynamic libc_gettimeofday gettimeofday "libc.so"
 //go:cgo_import_dynamic libc_getuid getuid "libc.so"
 //go:cgo_import_dynamic libc_kill kill "libc.so"
@@ -202,6 +203,7 @@ import (
 //go:linkname procGetpriority libc_getpriority
 //go:linkname procGetrlimit libc_getrlimit
 //go:linkname procGetrusage libc_getrusage
+//go:linkname procGetsid libc_getsid
 //go:linkname procGettimeofday libc_gettimeofday
 //go:linkname procGetuid libc_getuid
 //go:linkname procKill libc_kill
@@ -339,6 +341,7 @@ var (
     procGetpriority,
     procGetrlimit,
     procGetrusage,
+    procGetsid,
     procGettimeofday,
     procGetuid,
     procKill,
@@ -1044,6 +1047,17 @@ func Getrusage(who int, rusage *Rusage) (err error) {

 // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

+func Getsid(pid int) (sid int, err error) {
+    r0, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procGetsid)), 1, uintptr(pid), 0, 0, 0, 0, 0)
+    sid = int(r0)
+    if e1 != 0 {
+        err = e1
+    }
+    return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
 func Gettimeofday(tv *Timeval) (err error) {
     _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procGettimeofday)), 1, uintptr(unsafe.Pointer(tv)), 0, 0, 0, 0, 0)
     if e1 != 0 {
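A small usage sketch of the newly exposed wrapper (illustrative only; the helper name is not part of the library): Getsid(0) reports the session ID of the calling process on the Unix platforms where the wrapper is generated.

```go
package example

import (
    "fmt"

    "golang.org/x/sys/unix"
)

// printSessionID calls the new Getsid wrapper; pid 0 means "the calling process".
func printSessionID() {
    sid, err := unix.Getsid(0)
    if err != nil {
        fmt.Println("getsid failed:", err)
        return
    }
    fmt.Println("session id:", sid)
}
```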

vendor/golang.org/x/xerrors/fmt.go

@@ -34,7 +34,8 @@ const percentBangString = "%!"
 // operand that does not implement the error interface. The %w verb is otherwise
 // a synonym for %v.
 //
-// Deprecated: As of Go 1.13, use fmt.Errorf instead.
+// Note that as of Go 1.13, the fmt.Errorf function will do error formatting,
+// but it will not capture a stack backtrace.
 func Errorf(format string, a ...interface{}) error {
     format = formatPlusW(format)
     // Support a ": %[wsv]" suffix, which works well with xerrors.Formatter.
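A standalone sketch (not part of the change) contrasting the two per the revised doc comment: both calls wrap with %w, but only the xerrors version records a frame that %+v can render.

```go
package example

import (
    "fmt"

    "golang.org/x/xerrors"
)

// compareWrapping prints the same wrapped error both ways; the xerrors value
// includes function and file:line detail under %+v, the fmt value does not.
func compareWrapping() {
    base := xerrors.New("base error")
    fmt.Printf("%+v\n", xerrors.Errorf("wrapped: %w", base))
    fmt.Printf("%+v\n", fmt.Errorf("wrapped: %w", base))
}
```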

vendor/google.golang.org/api/internal/version.go

@@ -5,4 +5,4 @@
 package internal

 // Version is the current tagged release of the library.
-const Version = "0.82.0"
+const Version = "0.83.0"

vendor/modules.txt

@@ -34,7 +34,7 @@ github.com/VictoriaMetrics/metricsql/binaryop
 # github.com/VividCortex/ewma v1.2.0
 ## explicit; go 1.12
 github.com/VividCortex/ewma
-# github.com/aws/aws-sdk-go v1.44.27
+# github.com/aws/aws-sdk-go v1.44.32
 ## explicit; go 1.11
 github.com/aws/aws-sdk-go/aws
 github.com/aws/aws-sdk-go/aws/arn
@@ -277,7 +277,7 @@ go.opencensus.io/trace/tracestate
 go.uber.org/atomic
 # go.uber.org/goleak v1.1.11-0.20210813005559-691160354723
 ## explicit; go 1.13
-# golang.org/x/net v0.0.0-20220531201128-c960675eff93
+# golang.org/x/net v0.0.0-20220607020251-c690dde0001d
 ## explicit; go 1.17
 golang.org/x/net/context
 golang.org/x/net/context/ctxhttp
@@ -289,8 +289,8 @@ golang.org/x/net/internal/socks
 golang.org/x/net/internal/timeseries
 golang.org/x/net/proxy
 golang.org/x/net/trace
-# golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401
-## explicit; go 1.11
+# golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb
+## explicit; go 1.15
 golang.org/x/oauth2
 golang.org/x/oauth2/authhandler
 golang.org/x/oauth2/clientcredentials
@@ -302,7 +302,7 @@ golang.org/x/oauth2/jwt
 # golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f
 ## explicit
 golang.org/x/sync/errgroup
-# golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a
+# golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d
 ## explicit; go 1.17
 golang.org/x/sys/internal/unsafeheader
 golang.org/x/sys/unix
@@ -313,11 +313,11 @@ golang.org/x/text/secure/bidirule
 golang.org/x/text/transform
 golang.org/x/text/unicode/bidi
 golang.org/x/text/unicode/norm
-# golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df
+# golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f
 ## explicit; go 1.17
 golang.org/x/xerrors
 golang.org/x/xerrors/internal
-# google.golang.org/api v0.82.0
+# google.golang.org/api v0.83.0
 ## explicit; go 1.15
 google.golang.org/api/googleapi
 google.golang.org/api/googleapi/transport
@@ -350,7 +350,7 @@ google.golang.org/appengine/internal/socket
 google.golang.org/appengine/internal/urlfetch
 google.golang.org/appengine/socket
 google.golang.org/appengine/urlfetch
-# google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8
+# google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac
 ## explicit; go 1.15
 google.golang.org/genproto/googleapis/api/annotations
 google.golang.org/genproto/googleapis/iam/v1