Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git (synced 2025-03-01 15:33:35 +00:00)

Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

Commit 0ea0168d98: 215 changed files with 8724 additions and 5211 deletions
Makefile
2
Makefile
|
@ -283,7 +283,7 @@ golangci-lint: install-golangci-lint
|
|||
golangci-lint run --exclude '(SA4003|SA1019|SA5011):' -D errcheck -D structcheck --timeout 2m
|
||||
|
||||
install-golangci-lint:
|
||||
which golangci-lint || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v1.46.1
|
||||
which golangci-lint || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v1.46.2
|
||||
|
||||
install-wwhrd:
|
||||
which wwhrd || GO111MODULE=off go get github.com/frapposelli/wwhrd
|
||||
|
|
README.md (183 changes)
@@ -14,12 +14,18 @@ VictoriaMetrics is a fast, cost-effective and scalable monitoring solution and t
 
 VictoriaMetrics is available in [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases),
 [Docker images](https://hub.docker.com/r/victoriametrics/victoria-metrics/), [Snap packages](https://snapcraft.io/victoriametrics)
-and [source code](https://github.com/VictoriaMetrics/VictoriaMetrics). Just download VictoriaMetrics and follow [these instructions](#how-to-start-victoriametrics).
-Then read [Prometheus setup](#prometheus-setup) and [Grafana setup](#grafana-setup) docs.
+and [source code](https://github.com/VictoriaMetrics/VictoriaMetrics).
+Just download VictoriaMetrics and follow [these instructions](https://docs.victoriametrics.com/Quick-Start.html).
 
 Cluster version of VictoriaMetrics is available [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html).
 
-[Contact us](mailto:info@victoriametrics.com) if you need enterprise support for VictoriaMetrics. See [features available in enterprise package](https://victoriametrics.com/products/enterprise/). Enterprise binaries can be downloaded and evaluated for free from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
+Learn more about [key concepts](https://docs.victoriametrics.com/keyConcepts.html) of VictoriaMetrics and follow
+[QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for better experience.
+
+[Contact us](mailto:info@victoriametrics.com) if you need enterprise support for VictoriaMetrics.
+See [features available in enterprise package](https://victoriametrics.com/products/enterprise/).
+Enterprise binaries can be downloaded and evaluated for free
+from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
 
 ## Prominent features
 
@@ -53,7 +59,7 @@ VictoriaMetrics has the following prominent features:
   * [JSON line format](#how-to-import-data-in-json-line-format).
   * [Arbitrary CSV data](#how-to-import-csv-data).
   * [Native binary format](#how-to-import-data-in-native-format).
-* It supports metrics' relabeling. See [these docs](#relabeling) for details.
+* It supports metrics [relabeling](#relabeling).
 * It can deal with [high cardinality issues](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) and [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) issues via [series limiter](#cardinality-limiter).
 * It ideally works with big amounts of time series data from APM, Kubernetes, IoT sensors, connected cars, industrial telemetry, financial data and various [Enterprise workloads](https://victoriametrics.com/products/enterprise/).
 * It has open source [cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
@@ -92,9 +98,10 @@ See also [articles and slides about VictoriaMetrics from our users](https://docs
 
 ## Operation
 
-## How to start VictoriaMetrics
+### How to start VictoriaMetrics
 
 Just download [VictoriaMetrics executable](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or [Docker image](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and start it with the desired command-line flags.
+See also [QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for additional information.
 
 The following command-line flags are used the most:
 
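The flag list itself is truncated at this hunk boundary. For orientation, a minimal start command usually combines the two flags most often mentioned in these docs, `-storageDataPath` and `-retentionPeriod` (values below are illustrative):

```bash
/path/to/victoria-metrics-prod -storageDataPath=/var/lib/victoria-metrics -retentionPeriod=12
```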
@@ -143,18 +150,26 @@ After changes were made, trigger config re-read with the command `curl 127.0.0.1
 
 Add the following lines to Prometheus config file (it is usually located at `/etc/prometheus/prometheus.yml`) in order to send data to VictoriaMetrics:
 
+<div class="with-copy" markdown="1">
+
 ```yml
 remote_write:
   - url: http://<victoriametrics-addr>:8428/api/v1/write
 ```
 
+</div>
+
 Substitute `<victoriametrics-addr>` with hostname or IP address of VictoriaMetrics.
 Then apply new config via the following command:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 kill -HUP `pidof prometheus`
 ```
 
+</div>
+
 Prometheus writes incoming data to local storage and replicates it to remote storage in parallel.
 This means that data remains available in local storage for `--storage.tsdb.retention.time` duration
 even if remote storage is unavailable.
@@ -174,6 +189,8 @@ across Prometheus instances, so time series could be filtered and grouped by thi
 
 For highly loaded Prometheus instances (200k+ samples per second) the following tuning may be applied:
 
+<div class="with-copy" markdown="1">
+
 ```yaml
 remote_write:
   - url: http://<victoriametrics-addr>:8428/api/v1/write
@@ -183,13 +200,18 @@ remote_write:
       max_shards: 30
 ```
 
+</div>
+
 Using remote write increases memory usage for Prometheus by up to ~25%. If you are experiencing issues with
-too high memory consumption of Prometheus, then try to lower `max_samples_per_send` and `capacity` params. Keep in mind that these two params are tightly connected.
+too high memory consumption of Prometheus, then try to lower `max_samples_per_send` and `capacity` params.
+Keep in mind that these two params are tightly connected.
 Read more about tuning remote write for Prometheus [here](https://prometheus.io/docs/practices/remote_write).
 
-It is recommended upgrading Prometheus to [v2.12.0](https://github.com/prometheus/prometheus/releases) or newer, since previous versions may have issues with `remote_write`.
+It is recommended upgrading Prometheus to [v2.12.0](https://github.com/prometheus/prometheus/releases) or newer,
+since previous versions may have issues with `remote_write`.
 
-Take a look also at [vmagent](https://docs.victoriametrics.com/vmagent.html) and [vmalert](https://docs.victoriametrics.com/vmalert.html),
+Take a look also at [vmagent](https://docs.victoriametrics.com/vmagent.html)
+and [vmalert](https://docs.victoriametrics.com/vmalert.html),
 which can be used as faster and less resource-hungry alternative to Prometheus.
 
 ## Grafana setup
 
|
|||
|
||||
Prometheus doesn't drop data during VictoriaMetrics restart. See [this article](https://grafana.com/blog/2019/03/25/whats-new-in-prometheus-2.8-wal-based-remote-write/) for details. The same applies also to [vmagent](https://docs.victoriametrics.com/vmagent.html).
|
||||
|
||||
## vmui
|
||||
|
||||
VictoriaMetrics provides UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`.
|
||||
The UI allows exploring query results via graphs and tables. Graphs support scrolling and zooming:
|
||||
|
||||
* Drag the graph to the left / right in order to move the displayed time range into the past / future.
|
||||
* Hold `Ctrl` (or `Cmd` on MacOS) and scroll up / down in order to zoom in / out the graph.
|
||||
|
||||
Query history can be navigated by holding `Ctrl` (or `Cmd` on MacOS) and pressing `up` or `down` arrows on the keyboard while the cursor is located in the query input field.
|
||||
|
||||
Multi-line queries can be entered by pressing `Shift-Enter` in query input field.
|
||||
|
||||
When querying the [backfilled data](https://docs.victoriametrics.com/#backfilling), it may be useful disabling response cache by clicking `Enable cache` checkbox.
|
||||
|
||||
VMUI automatically adjusts the interval between datapoints on the graph depending on the horizontal resolution and on the selected time range. The step value can be customized by clickhing `Override step value` checkbox.
|
||||
|
||||
VMUI allows investigating correlations between two queries on the same graph. Just click `+Query` button, enter the second query in the newly appeared input field and press `Ctrl+Enter`. Results for both queries should be displayed simultaneously on the same graph. Every query has its own vertical scale, which is displayed on the left and the right side of the graph. Lines for the second query are dashed.
|
||||
|
||||
See the [example VMUI at VictoriaMetrics playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/?g0.expr=100%20*%20sum(rate(process_cpu_seconds_total))%20by%20(job)&g0.range_input=1d).
|
||||
|
||||
|
||||
## How to apply new config to VictoriaMetrics
|
||||
|
||||
VictoriaMetrics is configured via command-line flags, so it must be restarted when new command-line flags should be applied:
|
||||
|
@@ -316,7 +359,7 @@ and stream plain InfluxDB line protocol data to the configured TCP and/or UDP ad
 
 VictoriaMetrics performs the following transformations to the ingested InfluxDB data:
 
-* [`db` query arg](https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint) is mapped into `db` label value
+* [db query arg](https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint) is mapped into `db` label value
   unless `db` tag exists in the InfluxDB line. The `db` label name can be overridden via `-influxDBLabel` command-line flag.
 * Field names are mapped to time series names prefixed with `{measurement}{separator}` value, where `{separator}` equals to `_` by default. It can be changed with `-influxMeasurementFieldSeparator` command-line flag. See also `-influxSkipSingleField` command-line flag. If `{measurement}` is empty or if `-influxSkipMeasurement` command-line flag is set, then time series names correspond to field names.
 * Field values are mapped to time series values.
@@ -338,20 +381,28 @@ foo_field2{tag1="value1", tag2="value2"} 40
 Example for writing data with [InfluxDB line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/)
 to local VictoriaMetrics using `curl`:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
 ```
 
+</div>
+
 An arbitrary number of lines delimited by '\n' (aka newline char) can be sent in a single request.
 After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
 ```
 
+</div>
+
 The `/api/v1/export` endpoint should return the following response:
 
-```jsonl
+```json
 {"metric":{"__name__":"measurement_field1","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560272508147]}
 {"metric":{"__name__":"measurement_field2","tag1":"value1","tag2":"value2"},"values":[1.23],"timestamps":[1560272508147]}
 ```
@@ -431,20 +482,28 @@ Send data to the given address from OpenTSDB-compatible agents.
 
 Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `nc`:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
 ```
 
+</div>
+
 An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go.
 After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
 ```
 
+</div>
+
 The `/api/v1/export` endpoint should return the following response:
 
-```bash
+```json
 {"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277292000]}
 ```
 
@@ -461,25 +520,37 @@ Send data to the given address from OpenTSDB-compatible agents.
 
 Example for writing a single data point:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put
 ```
 
+</div>
+
 Example for writing multiple data points in a single request:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
 ```
 
+</div>
+
 After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
 ```
 
+</div>
+
 The `/api/v1/export` endpoint should return the following response:
 
-```bash
+```json
 {"metric":{"__name__":"foo"},"values":[45.34],"timestamps":[1566464846000]}
 {"metric":{"__name__":"bar"},"values":[43],"timestamps":[1566464846000]}
 {"metric":{"__name__":"x.y.z","t1":"v1","t2":"v2"},"values":[45.34],"timestamps":[1566464763000]}
@@ -519,7 +590,7 @@ VictoriaMetrics accepts `round_digits` query arg for `/api/v1/query` and `/api/v
 
 By default, VictoriaMetrics returns time series for the last 5 minutes from `/api/v1/series`, while the Prometheus API defaults to all time. Use `start` and `end` to select a different time range.
 
-Additionally VictoriaMetrics provides the following handlers:
+Additionally, VictoriaMetrics provides the following handlers:
 
 * `/vmui` - Basic Web UI. See [these docs](#vmui).
 * `/api/v1/series/count` - returns the total number of time series in the database. Some notes:
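The `/api/v1/series` lookback note above is easy to check by hand; a hedged sketch (host, selector and timestamp are illustrative):

```bash
# Series seen within the last 5 minutes (the VictoriaMetrics default):
curl -G 'http://localhost:8428/api/v1/series' -d 'match[]=up'

# Widen the lookback explicitly via start (and optionally end):
curl -G 'http://localhost:8428/api/v1/series' -d 'match[]=up' -d 'start=2022-06-01T00:00:00Z'
```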
@@ -587,26 +658,6 @@ VictoriaMetrics supports the following handlers from [Graphite Tags API](https:/
 * [/tags/autoComplete/values](https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support)
 * [/tags/delSeries](https://graphite.readthedocs.io/en/stable/tags.html#removing-series-from-the-tagdb)
 
-## vmui
-
-VictoriaMetrics provides UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`.
-The UI allows exploring query results via graphs and tables. Graphs support scrolling and zooming:
-
-* Drag the graph to the left / right in order to move the displayed time range into the past / future.
-* Hold `Ctrl` (or `Cmd` on MacOS) and scroll up / down in order to zoom in / out the graph.
-
-Query history can be navigated by holding `Ctrl` (or `Cmd` on MacOS) and pressing `up` or `down` arrows on the keyboard while the cursor is located in the query input field.
-
-Multi-line queries can be entered by pressing `Shift-Enter` in query input field.
-
-When querying the [backfilled data](https://docs.victoriametrics.com/#backfilling), it may be useful disabling response cache by clicking `Enable cache` checkbox.
-
-VMUI automatically adjusts the interval between datapoints on the graph depending on the horizontal resolution and on the selected time range. The step value can be customized by clicking `Override step value` checkbox.
-
-VMUI allows investigating correlations between two queries on the same graph. Just click `+Query` button, enter the second query in the newly appeared input field and press `Ctrl+Enter`. Results for both queries should be displayed simultaneously on the same graph. Every query has its own vertical scale, which is displayed on the left and the right side of the graph. Lines for the second query are dashed.
-
-See the [example VMUI at VictoriaMetrics playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/?g0.expr=100%20*%20sum(rate(process_cpu_seconds_total))%20by%20(job)&g0.range_input=1d).
-
 ## How to build from sources
 
 We recommend using either [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or
|
|||
* `match[]=SELECTOR` where `SELECTOR` is an arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) for series to take into account during stats calculation. By default all the series are taken into account.
|
||||
* `extra_label=LABEL=VALUE`. See [these docs](#prometheus-querying-api-enhancements) for more details.
|
||||
|
||||
## Query tracing
|
||||
|
||||
VictoriaMetrics supports query tracing, which can be used for determining bottlenecks during query processing.
|
||||
|
||||
Query tracing can be enabled for a specific query by passing `trace=1` query arg.
|
||||
In this case VictoriaMetrics puts query trace into `trace` field in the output JSON.
|
||||
|
||||
For example, the following command:
|
||||
|
||||
```bash
|
||||
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq -r '.trace'
|
||||
```
|
||||
|
||||
would return the following trace:
|
||||
|
||||
```json
|
||||
{
|
||||
"duration_msec": 0.099,
|
||||
"message": "/api/v1/query_range: start=1654034340000, end=1654037880000, step=60000, query=\"2*rand()\": series=1",
|
||||
"children": [
|
||||
{
|
||||
"duration_msec": 0.034,
|
||||
"message": "eval: query=2 * rand(), timeRange=[1654034340000..1654037880000], step=60000, mayCache=true: series=1, points=60, pointsPerSeries=60",
|
||||
"children": [
|
||||
{
|
||||
"duration_msec": 0.032,
|
||||
"message": "binary op \"*\": series=1",
|
||||
"children": [
|
||||
{
|
||||
"duration_msec": 0.009,
|
||||
"message": "eval: query=2, timeRange=[1654034340000..1654037880000], step=60000, mayCache=true: series=1, points=60, pointsPerSeries=60"
|
||||
},
|
||||
{
|
||||
"duration_msec": 0.017,
|
||||
"message": "eval: query=rand(), timeRange=[1654034340000..1654037880000], step=60000, mayCache=true: series=1, points=60, pointsPerSeries=60",
|
||||
"children": [
|
||||
{
|
||||
"duration_msec": 0.015,
|
||||
"message": "transform rand(): series=1"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"duration_msec": 0.004,
|
||||
"message": "sort series by metric name and labels"
|
||||
},
|
||||
{
|
||||
"duration_msec": 0.044,
|
||||
"message": "generate /api/v1/query_range response for series=1, points=60"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
All the durations and timestamps in traces are in milliseconds.
|
||||
|
||||
Query tracing is allowed by default. It can be denied by passing `-denyQueryTracing` command-line flag to VictoriaMetrics.
|
||||
|
||||
|
||||
## Cardinality limiter
|
||||
|
||||
By default VictoriaMetrics doesn't limit the number of stored time series. The limit can be enforced by setting the following command-line flags:
|
||||
|
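The flag list itself falls outside this hunk. As an assumption drawn from the cardinality limiter docs rather than from this diff, the relevant flags are `-storage.maxHourlySeries` and `-storage.maxDailySeries`:

```bash
# Sketch: cap the number of active series per hour and per day (values illustrative):
/path/to/victoria-metrics-prod -storage.maxHourlySeries=1000000 -storage.maxDailySeries=5000000
```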
@@ -1423,7 +1537,8 @@ The panel `Cache usage %` in `Troubleshooting` section shows the percentage of u
 from the allowed size by type. If the percentage is below 100%, then no further tuning needed.
 
 Please note, default cache sizes were carefully adjusted accordingly to the most
-practical scenarios and workloads. Change the defaults only if you understand the implications.
+practical scenarios and workloads. Change the defaults only if you understand the implications
+and vmstorage has enough free memory to accommodate new cache sizes.
 
 To override the default values see command-line flags with `-storage.cacheSize` prefix.
 See the full description of flags [here](#list-of-command-line-flags).
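A hedged illustration of overriding one of the `-storage.cacheSize*` flags; the exact flag suffix is an assumption here, so verify it against the `-help` output before use:

```bash
# Sketch only: raise one cache size (flag suffix assumed, value illustrative):
/path/to/victoria-metrics-prod -storage.cacheSizeIndexDBDataBlocks=2GB
```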
@@ -371,8 +371,12 @@ start a cluster of three `vmagent` instances, where each target is scraped by tw
 ```
 
 If each target is scraped by multiple `vmagent` instances, then data deduplication must be enabled at remote storage pointed by `-remoteWrite.url`.
+The `-dedup.minScrapeInterval` must be set to the `scrape_interval` configured at `-promscrape.config`.
 See [these docs](https://docs.victoriametrics.com/#deduplication) for details.
 
+If multiple `vmagent` clusters scrape the same set of targets, then each cluster must have unique value for the `-promscrape.cluster.name` command-line flag.
+This is needed for proper data de-duplication. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2679) for details.
+
 ## Scraping targets via a proxy
 
 `vmagent` supports scraping targets via http, https and socks5 proxies. Proxy address must be specified in `proxy_url` option. For example, the following scrape config instructs
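A sketch of the replicated-scrape setup the hunk above describes; the `-promscrape.cluster.*` flag names come from the vmagent docs rather than this diff, and all values are illustrative:

```bash
# Two vmagent replicas scrape every target (memberNum differs per instance):
vmagent -promscrape.config=prometheus.yml \
  -promscrape.cluster.membersCount=2 -promscrape.cluster.memberNum=0 \
  -promscrape.cluster.replicationFactor=2 \
  -remoteWrite.url=http://victoria-metrics:8428/api/v1/write

# Remote storage must deduplicate the replicated samples; match scrape_interval:
/path/to/victoria-metrics-prod -dedup.minScrapeInterval=30s
```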
@@ -1,6 +1,7 @@
 package main
 
 import (
+	"embed"
 	"flag"
 	"fmt"
 	"io"
@@ -23,6 +24,7 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/vmimport"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
@@ -63,6 +65,12 @@ var (
 	opentsdbhttpServer *opentsdbhttpserver.Server
 )
 
+var (
+	//go:embed static
+	staticFiles  embed.FS
+	staticServer = http.FileServer(http.FS(staticFiles))
+)
+
 func main() {
 	// Write flags and help message to stdout, since it is easier to grep or pipe.
 	flag.CommandLine.SetOutput(os.Stdout)
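The `//go:embed static` block above is the whole mechanism for shipping the static assets inside the binary. A minimal standalone sketch of the same pattern (directory name and port are illustrative):

```go
package main

import (
	"embed"
	"log"
	"net/http"
)

// Embed everything under ./static into the binary at build time.
//
//go:embed static
var staticFiles embed.FS

func main() {
	// http.FS adapts embed.FS to http.FileSystem. Embedded paths keep the
	// "static/" prefix, so /static/css/app.css resolves inside the embedded FS.
	staticServer := http.FileServer(http.FS(staticFiles))
	http.Handle("/static/", staticServer)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```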
@@ -284,6 +292,22 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
 		w.Header().Set("Content-Type", "text/plain; charset=utf-8")
 		promscrape.WriteConfigData(w)
 		return true
+	case "/api/v1/status/config":
+		// See https://prometheus.io/docs/prometheus/latest/querying/api/#config
+		if *configAuthKey != "" && r.FormValue("authKey") != *configAuthKey {
+			err := &httpserver.ErrorWithStatusCode{
+				Err:        fmt.Errorf("The provided authKey doesn't match -configAuthKey"),
+				StatusCode: http.StatusUnauthorized,
+			}
+			httpserver.Errorf(w, r, "%s", err)
+			return true
+		}
+		promscrapeStatusConfigRequests.Inc()
+		w.Header().Set("Content-Type", "application/json")
+		var bb bytesutil.ByteBuffer
+		promscrape.WriteConfigData(&bb)
+		fmt.Fprintf(w, `{"status":"success","data":{"yaml":%q}}`, bb.B)
+		return true
 	case "/api/v1/targets":
 		promscrapeAPIV1TargetsRequests.Inc()
 		w.Header().Set("Content-Type", "application/json")
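A quick way to exercise the new endpoint; port 8429 is vmagent's default HTTP listen address, and the authKey value is illustrative:

```bash
curl 'http://localhost:8429/api/v1/status/config?authKey=secret' | jq -r '.data.yaml'
```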
@@ -305,11 +329,16 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
 			w.Write([]byte("OK"))
 		}
 		return true
 	default:
+		if strings.HasPrefix(r.URL.Path, "/static") {
+			staticServer.ServeHTTP(w, r)
+			return true
+		}
+		if remotewrite.MultitenancyEnabled() {
+			return processMultitenantRequest(w, r, path)
+		}
+		return false
 	}
-	if remotewrite.MultitenancyEnabled() {
-		return processMultitenantRequest(w, r, path)
-	}
-	return false
 }
 
 func processMultitenantRequest(w http.ResponseWriter, r *http.Request, path string) bool {
@@ -455,7 +484,8 @@ var (
 	promscrapeTargetResponseRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/target_response"}`)
 	promscrapeTargetResponseErrors   = metrics.NewCounter(`vmagent_http_request_errors_total{path="/target_response"}`)
 
-	promscrapeConfigRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/config"}`)
+	promscrapeConfigRequests       = metrics.NewCounter(`vmagent_http_requests_total{path="/config"}`)
+	promscrapeStatusConfigRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/status/config"}`)
 
 	promscrapeConfigReloadRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/-/reload"}`)
 )
app/vmagent/static/css/bootstrap.min.css (6 changes, vendored, new file): diff suppressed because one or more lines are too long
app/vmagent/static/js/bootstrap.bundle.min.js (6 changes, vendored, new file): diff suppressed because one or more lines are too long
app/vmagent/static/js/jquery-3.6.0.min.js (2 changes, vendored, new file): diff suppressed because one or more lines are too long
@@ -68,7 +68,7 @@ Then configure `vmalert` accordingly:
     -external.label=replica=a  # Multiple external labels may be set
 ```
 
-Note there's a separate `remoteRead.url` to allow writing results of
+Note there's a separate `remoteWrite.url` to allow writing results of
 alerting/recording rules into a different storage than the initial data that's
 queried. This allows using `vmalert` to aggregate data from a short-term,
 high-frequency, high-cardinality storage into a long-term storage with
@@ -525,7 +525,7 @@ There are following non-required `replay` flags:
   (rules which depend on each other) rules. It is expected, that remote storage will be able to persist
   previously accepted data during the delay, so data will be available for the subsequent queries.
   Keep it equal or bigger than `-remoteWrite.flushInterval`.
-* `replay.disableProgressBar` - whether to disable progress bar which shows progress work.
+* `-replay.disableProgressBar` - whether to disable progress bar which shows progress work.
   Progress bar may generate a lot of log records, which is not formatted as standard VictoriaMetrics logger.
   It could break logs parsing by external system and generate additional load on it.
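For context, a replay run that would hit this flag might look like the following; `-replay.timeFrom`/`-replay.timeTo` are taken from the vmalert docs rather than this diff, and all values are illustrative:

```bash
./vmalert -rule=rules.yml \
  -datasource.url=http://victoria-metrics:8428 \
  -remoteWrite.url=http://victoria-metrics:8428 \
  -replay.timeFrom=2022-05-01T00:00:00Z -replay.timeTo=2022-05-02T00:00:00Z \
  -replay.disableProgressBar=true
```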
@@ -163,9 +163,6 @@ func templateAnnotation(dst io.Writer, text string, data tplData, tmpl *textTpl.
 	if !execute {
 		return nil
 	}
-	if !execute {
-		return nil
-	}
 	if err = tpl.Execute(dst, data); err != nil {
 		return fmt.Errorf("error evaluating annotation template: %w", err)
 	}
@@ -166,7 +166,7 @@ func (cw *configWatcher) start() error {
 			if err != nil {
 				return fmt.Errorf("failed to parse labels for target %q: %s", target, err)
 			}
-			notifier, err := NewAlertManager(address, cw.genFn, cw.cfg.HTTPClientConfig, cw.cfg.parsedRelabelConfigs, cw.cfg.Timeout.Duration())
+			notifier, err := NewAlertManager(address, cw.genFn, cw.cfg.HTTPClientConfig, cw.cfg.parsedAlertRelabelConfigs, cw.cfg.Timeout.Duration())
 			if err != nil {
 				return fmt.Errorf("failed to init alertmanager for addr %q: %s", address, err)
 			}
app/vmalert/static/css/bootstrap.min.css (6 changes, vendored, new file): diff suppressed because one or more lines are too long
app/vmalert/static/js/bootstrap.bundle.min.js (6 changes, vendored, new file): diff suppressed because one or more lines are too long
app/vmalert/static/js/jquery-3.6.0.min.js (2 changes, vendored, new file): diff suppressed because one or more lines are too long
@@ -1,7 +1,7 @@
 {% func Footer() %}
 	</main>
-	<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/js/bootstrap.bundle.min.js" integrity="sha384-MrcW6ZMFYlzcLA8Nl+NtUVF0sA7MsXsP1UyJoMp4YLEuNSfAP+JcXn/tWtIaxVXM" crossorigin="anonymous"></script>
-	<script src="https://code.jquery.com/jquery-3.3.1.min.js"></script>
+	<script src="static/js/jquery-3.6.0.min.js" type="text/javascript"></script>
+	<script src="static/js/bootstrap.bundle.min.js" type="text/javascript"></script>
 	<script type="text/javascript">
 		function expandAll() {
 			$('.collapse').addClass('show');
@@ -18,14 +18,14 @@
 
 		$(".group-heading").click(function(e) {
 			let target = $(this).attr('data-bs-target');
-			let el = $('#'+target);
+			let el = $("#"+target);
 			new bootstrap.Collapse(el, {
 				toggle: true
 			});
 		});
 
 		var hash = window.location.hash.substr(1);
-		let group = $('#'+hash);
+		let group = $("#"+hash);
 		if (group.length > 0) {
 			group.click();
 		}
@@ -22,8 +22,8 @@ func StreamFooter(qw422016 *qt422016.Writer) {
 //line app/vmalert/tpl/footer.qtpl:1
 	qw422016.N().S(`
 	</main>
-	<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/js/bootstrap.bundle.min.js" integrity="sha384-MrcW6ZMFYlzcLA8Nl+NtUVF0sA7MsXsP1UyJoMp4YLEuNSfAP+JcXn/tWtIaxVXM" crossorigin="anonymous"></script>
-	<script src="https://code.jquery.com/jquery-3.3.1.min.js"></script>
+	<script src="static/js/jquery-3.6.0.min.js" type="text/javascript"></script>
+	<script src="static/js/bootstrap.bundle.min.js" type="text/javascript"></script>
 	<script type="text/javascript">
 		function expandAll() {
 			$('.collapse').addClass('show');
@@ -40,14 +40,14 @@ func StreamFooter(qw422016 *qt422016.Writer) {
 
 		$(".group-heading").click(function(e) {
 			let target = $(this).attr('data-bs-target');
-			let el = $('#'+target);
+			let el = $("#"+target);
 			new bootstrap.Collapse(el, {
 				toggle: true
 			});
 		});
 
 		var hash = window.location.hash.substr(1);
-		let group = $('#'+hash);
+		let group = $("#"+hash);
 		if (group.length > 0) {
 			group.click();
 		}
@@ -3,7 +3,7 @@
 <html lang="en">
 <head>
     <title>vmalert{% if title != "" %} - {%s title %}{% endif %}</title>
-    <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous">
+    <link href="static/css/bootstrap.min.css" rel="stylesheet" crossorigin="anonymous">
     <style>
         body{
           min-height: 75rem;
@@ -35,7 +35,7 @@ func StreamHeader(qw422016 *qt422016.Writer, title string, pages []NavItem) {
 	}
 //line app/vmalert/tpl/header.qtpl:5
 	qw422016.N().S(`</title>
-    <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous">
+    <link href="static/css/bootstrap.min.css" rel="stylesheet" crossorigin="anonymous">
     <style>
         body{
           min-height: 75rem;
@@ -1,6 +1,7 @@
 package main
 
 import (
+	"embed"
 	"encoding/json"
 	"fmt"
 	"net/http"
@@ -23,6 +24,12 @@
 	navItems []tpl.NavItem
 )
 
+var (
+	//go:embed static
+	staticFiles  embed.FS
+	staticServer = http.FileServer(http.FS(staticFiles))
+)
+
 func initLinks() {
 	pathPrefix := httpserver.GetPathPrefix()
 	if pathPrefix == "" {
@@ -99,6 +106,11 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
 		w.WriteHeader(http.StatusOK)
 		return true
 	default:
+		if strings.HasPrefix(r.URL.Path, "/static") {
+			staticServer.ServeHTTP(w, r)
+			return true
+		}
+
 		if !strings.HasSuffix(r.URL.Path, "/status") {
 			return false
 		}
@@ -197,7 +197,8 @@ One important note for OpenTSDB migration: Queries/HBase scans can "get stuck" w
 
 ## Migrating data from InfluxDB (1.x)
 
-`vmctl` supports the `influx` mode to migrate data from InfluxDB to VictoriaMetrics time-series database.
+`vmctl` supports the `influx` mode for [migrating data from InfluxDB to VictoriaMetrics](https://docs.victoriametrics.com/guides/migrate-from-influx.html)
+time-series database.
 
 See `./vmctl influx --help` for details and full list of flags.
 
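For orientation, an `influx`-mode invocation might look like the following; the flag names are an assumption to be checked against `./vmctl influx --help`, not something this diff specifies:

```bash
./vmctl influx --influx-addr http://localhost:8086 \
  --influx-database telegraf \
  --vm-addr http://localhost:8428
```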
@@ -20,6 +20,7 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/promremotewrite"
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/vmimport"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/influxutils"
 	graphiteserver "github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver/graphite"
@@ -222,6 +223,22 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
 		w.Header().Set("Content-Type", "text/plain; charset=utf-8")
 		promscrape.WriteConfigData(w)
 		return true
+	case "/prometheus/api/v1/status/config", "/api/v1/status/config":
+		// See https://prometheus.io/docs/prometheus/latest/querying/api/#config
+		if *configAuthKey != "" && r.FormValue("authKey") != *configAuthKey {
+			err := &httpserver.ErrorWithStatusCode{
+				Err:        fmt.Errorf("The provided authKey doesn't match -configAuthKey"),
+				StatusCode: http.StatusUnauthorized,
+			}
+			httpserver.Errorf(w, r, "%s", err)
+			return true
+		}
+		promscrapeStatusConfigRequests.Inc()
+		w.Header().Set("Content-Type", "application/json")
+		var bb bytesutil.ByteBuffer
+		promscrape.WriteConfigData(&bb)
+		fmt.Fprintf(w, `{"status":"success","data":{"yaml":%q}}`, bb.B)
+		return true
 	case "/prometheus/-/reload", "/-/reload":
 		promscrapeConfigReloadRequests.Inc()
 		procutil.SelfSIGHUP()
@@ -285,7 +302,8 @@ var (
 	promscrapeTargetResponseRequests = metrics.NewCounter(`vm_http_requests_total{path="/target_response"}`)
 	promscrapeTargetResponseErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/target_response"}`)
 
-	promscrapeConfigRequests = metrics.NewCounter(`vm_http_requests_total{path="/config"}`)
+	promscrapeConfigRequests       = metrics.NewCounter(`vm_http_requests_total{path="/config"}`)
+	promscrapeStatusConfigRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/status/config"}`)
 
 	promscrapeConfigReloadRequests = metrics.NewCounter(`vm_http_requests_total{path="/-/reload"}`)
@@ -206,7 +206,7 @@ func MetricsIndexHandler(startTime time.Time, w http.ResponseWriter, r *http.Req
 		return fmt.Errorf("cannot parse form values: %w", err)
 	}
 	jsonp := r.FormValue("jsonp")
-	metricNames, err := netstorage.GetLabelValues("__name__", deadline)
+	metricNames, err := netstorage.GetLabelValues(nil, "__name__", deadline)
 	if err != nil {
 		return fmt.Errorf(`cannot obtain metric names: %w`, err)
 	}
@@ -227,7 +227,7 @@ func metricsFind(tr storage.TimeRange, label, qHead, qTail string, delimiter byt
 	n := strings.IndexAny(qTail, "*{[")
 	if n < 0 {
 		query := qHead + qTail
-		suffixes, err := netstorage.GetTagValueSuffixes(tr, label, query, delimiter, deadline)
+		suffixes, err := netstorage.GetTagValueSuffixes(nil, tr, label, query, delimiter, deadline)
 		if err != nil {
 			return nil, err
 		}
@@ -247,7 +247,7 @@ func metricsFind(tr storage.TimeRange, label, qHead, qTail string, delimiter byt
 	}
 	if n == len(qTail)-1 && strings.HasSuffix(qTail, "*") {
 		query := qHead + qTail[:len(qTail)-1]
-		suffixes, err := netstorage.GetTagValueSuffixes(tr, label, query, delimiter, deadline)
+		suffixes, err := netstorage.GetTagValueSuffixes(nil, tr, label, query, delimiter, deadline)
 		if err != nil {
 			return nil, err
 		}
@@ -55,7 +55,7 @@ func TagsDelSeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Re
 	}
 	tfss := joinTagFilterss(tfs, etfs)
 	sq := storage.NewSearchQuery(0, ct, tfss, 0)
-	n, err := netstorage.DeleteSeries(sq, deadline)
+	n, err := netstorage.DeleteSeries(nil, sq, deadline)
 	if err != nil {
 		return fmt.Errorf("cannot delete series for %q: %w", sq, err)
 	}
@@ -190,7 +190,7 @@ func TagsAutoCompleteValuesHandler(startTime time.Time, w http.ResponseWriter, r
 		// Escape special chars in tagPrefix as Graphite does.
 		// See https://github.com/graphite-project/graphite-web/blob/3ad279df5cb90b211953e39161df416e54a84948/webapp/graphite/tags/base.py#L228
 		filter := regexp.QuoteMeta(valuePrefix)
-		tagValues, err = netstorage.GetGraphiteTagValues(tag, filter, limit, deadline)
+		tagValues, err = netstorage.GetGraphiteTagValues(nil, tag, filter, limit, deadline)
 		if err != nil {
 			return err
 		}
@@ -200,7 +200,7 @@ func TagsAutoCompleteValuesHandler(startTime time.Time, w http.ResponseWriter, r
 		if err != nil {
 			return err
 		}
-		mns, err := netstorage.SearchMetricNames(sq, deadline)
+		mns, err := netstorage.SearchMetricNames(nil, sq, deadline)
 		if err != nil {
 			return fmt.Errorf("cannot fetch metric names for %q: %w", sq, err)
 		}
@@ -276,7 +276,7 @@ func TagsAutoCompleteTagsHandler(startTime time.Time, w http.ResponseWriter, r *
 		// Escape special chars in tagPrefix as Graphite does.
 		// See https://github.com/graphite-project/graphite-web/blob/3ad279df5cb90b211953e39161df416e54a84948/webapp/graphite/tags/base.py#L181
 		filter := regexp.QuoteMeta(tagPrefix)
-		labels, err = netstorage.GetGraphiteTags(filter, limit, deadline)
+		labels, err = netstorage.GetGraphiteTags(nil, filter, limit, deadline)
 		if err != nil {
 			return err
 		}
@@ -286,7 +286,7 @@ func TagsAutoCompleteTagsHandler(startTime time.Time, w http.ResponseWriter, r *
 		if err != nil {
 			return err
 		}
-		mns, err := netstorage.SearchMetricNames(sq, deadline)
+		mns, err := netstorage.SearchMetricNames(nil, sq, deadline)
 		if err != nil {
 			return fmt.Errorf("cannot fetch metric names for %q: %w", sq, err)
 		}
@@ -353,7 +353,7 @@ func TagsFindSeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.R
 	if err != nil {
 		return err
 	}
-	mns, err := netstorage.SearchMetricNames(sq, deadline)
+	mns, err := netstorage.SearchMetricNames(nil, sq, deadline)
 	if err != nil {
 		return fmt.Errorf("cannot fetch metric names for %q: %w", sq, err)
 	}
@@ -413,7 +413,7 @@ func TagValuesHandler(startTime time.Time, tagName string, w http.ResponseWriter
 		return err
 	}
 	filter := r.FormValue("filter")
-	tagValues, err := netstorage.GetGraphiteTagValues(tagName, filter, limit, deadline)
+	tagValues, err := netstorage.GetGraphiteTagValues(nil, tagName, filter, limit, deadline)
 	if err != nil {
 		return err
 	}
@@ -444,7 +444,7 @@ func TagsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) er
 		return err
 	}
 	filter := r.FormValue("filter")
-	labels, err := netstorage.GetGraphiteTags(filter, limit, deadline)
+	labels, err := netstorage.GetGraphiteTags(nil, filter, limit, deadline)
 	if err != nil {
 		return err
 	}
@@ -21,6 +21,7 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
 	"github.com/VictoriaMetrics/metrics"
 )
@@ -85,12 +86,20 @@ var (
 //go:embed vmui
 var vmuiFiles embed.FS
 
-var vmuiFileServer = http.FileServer(http.FS(vmuiFiles))
+//go:embed static
+var staticFiles embed.FS
+
+var (
+	vmuiFileServer = http.FileServer(http.FS(vmuiFiles))
+	staticServer   = http.FileServer(http.FS(staticFiles))
+)
 
 // RequestHandler handles remote read API requests
 func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
 	startTime := time.Now()
 	defer requestDuration.UpdateDuration(startTime)
+	tracerEnabled := searchutils.GetBool(r, "trace")
+	qt := querytracer.New(tracerEnabled)
 
 	// Limit the number of concurrent queries.
 	select {
@@ -106,6 +115,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
 	t := timerpool.Get(d)
 	select {
 	case concurrencyCh <- struct{}{}:
+		qt.Printf("wait in queue because -search.maxConcurrentRequests=%d concurrent requests are executed", *maxConcurrentRequests)
 		timerpool.Put(t)
 		defer func() { <-concurrencyCh }()
 	case <-t.C:
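The queueing branch above is a bounded-concurrency gate. A self-contained sketch of the same pattern, with invented identifiers and limits rather than the actual vmselect code:

```go
package main

import (
	"fmt"
	"time"
)

// A buffered channel acts as a semaphore; its capacity mirrors
// -search.maxConcurrentRequests in the real code.
var concurrencyCh = make(chan struct{}, 4)

func handle(maxQueueDuration time.Duration, work func()) error {
	select {
	case concurrencyCh <- struct{}{}: // fast path: a slot is free
	default:
		// All slots busy: wait in queue, but no longer than maxQueueDuration.
		t := time.NewTimer(maxQueueDuration)
		select {
		case concurrencyCh <- struct{}{}:
			t.Stop()
		case <-t.C:
			return fmt.Errorf("couldn't get an execution slot in %s", maxQueueDuration)
		}
	}
	defer func() { <-concurrencyCh }() // release the slot
	work()
	return nil
}

func main() {
	_ = handle(time.Second, func() { fmt.Println("query executed") })
}
```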
@@ -177,6 +187,10 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
 		vmuiFileServer.ServeHTTP(w, r)
 		return true
 	}
+	if strings.HasPrefix(path, "/static") {
+		staticServer.ServeHTTP(w, r)
+		return true
+	}
 
 	if strings.HasPrefix(path, "/api/v1/label/") {
 		s := path[len("/api/v1/label/"):]
@@ -184,7 +198,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
 			labelValuesRequests.Inc()
 			labelName := s[:len(s)-len("/values")]
 			httpserver.EnableCORS(w, r)
-			if err := prometheus.LabelValuesHandler(startTime, labelName, w, r); err != nil {
+			if err := prometheus.LabelValuesHandler(qt, startTime, labelName, w, r); err != nil {
 				labelValuesErrors.Inc()
 				sendPrometheusError(w, r, err)
 				return true
@@ -212,7 +226,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
 	case "/api/v1/query":
 		queryRequests.Inc()
 		httpserver.EnableCORS(w, r)
-		if err := prometheus.QueryHandler(startTime, w, r); err != nil {
+		if err := prometheus.QueryHandler(qt, startTime, w, r); err != nil {
 			queryErrors.Inc()
 			sendPrometheusError(w, r, err)
 			return true
@@ -221,7 +235,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
 	case "/api/v1/query_range":
 		queryRangeRequests.Inc()
 		httpserver.EnableCORS(w, r)
-		if err := prometheus.QueryRangeHandler(startTime, w, r); err != nil {
+		if err := prometheus.QueryRangeHandler(qt, startTime, w, r); err != nil {
 			queryRangeErrors.Inc()
 			sendPrometheusError(w, r, err)
 			return true
@@ -230,7 +244,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
 	case "/api/v1/series":
 		seriesRequests.Inc()
 		httpserver.EnableCORS(w, r)
-		if err := prometheus.SeriesHandler(startTime, w, r); err != nil {
+		if err := prometheus.SeriesHandler(qt, startTime, w, r); err != nil {
 			seriesErrors.Inc()
 			sendPrometheusError(w, r, err)
 			return true
@@ -248,7 +262,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
 	case "/api/v1/labels":
 		labelsRequests.Inc()
 		httpserver.EnableCORS(w, r)
-		if err := prometheus.LabelsHandler(startTime, w, r); err != nil {
+		if err := prometheus.LabelsHandler(qt, startTime, w, r); err != nil {
 			labelsErrors.Inc()
 			sendPrometheusError(w, r, err)
 			return true
@@ -18,6 +18,7 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
 	"github.com/VictoriaMetrics/metrics"
 	"github.com/valyala/fastrand"
@@ -193,7 +194,8 @@ var resultPool sync.Pool
 // Data processing is immediately stopped if f returns non-nil error.
 //
 // rss becomes unusable after the call to RunParallel.
-func (rss *Results) RunParallel(f func(rs *Result, workerID uint) error) error {
+func (rss *Results) RunParallel(qt *querytracer.Tracer, f func(rs *Result, workerID uint) error) error {
+	qt = qt.NewChild()
 	defer rss.mustClose()
 
 	// Spin up local workers.
@@ -255,6 +257,7 @@ func (rss *Results) RunParallel(f func(rs *Result, workerID uint) error) error {
 		close(workCh)
 	}
 	workChsWG.Wait()
+	qt.Donef("parallel process of fetched data: series=%d, samples=%d", seriesProcessedTotal, rowsProcessedTotal)
 
 	return firstErr
 }
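Every function below repeats the same tracing prologue. A minimal runnable sketch of that pattern, using only the querytracer calls visible in this diff (the function body and label data are invented for illustration):

```go
package main

import (
	"fmt"
	"sort"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
)

// getLabels mirrors the pattern this diff threads through netstorage:
// derive a child span on entry, record milestones with Printf, close the
// span with Donef. A nil *Tracer is a no-op, which is why untraced callers
// (e.g. the graphite handlers above) simply pass nil.
func getLabels(qt *querytracer.Tracer) []string {
	qt = qt.NewChild()
	defer qt.Donef("get labels")
	labels := []string{"job", "instance", "__name__"} // invented sample data
	qt.Printf("get %d labels", len(labels))
	sort.Strings(labels)
	qt.Printf("sort %d labels", len(labels))
	return labels
}

func main() {
	qt := querytracer.New(true) // enabled; RequestHandler derives this from the trace=1 query arg
	labels := getLabels(qt)
	qt.Donef("got %d labels", len(labels))
	// In VictoriaMetrics the finished root trace is serialized into the
	// `trace` field of the JSON response; that step is outside this hunk.
	fmt.Println(labels)
}
```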
@@ -636,7 +639,9 @@ func (sbh *sortBlocksHeap) Pop() interface{} {
 }
 
 // DeleteSeries deletes time series matching the given tagFilterss.
-func DeleteSeries(sq *storage.SearchQuery, deadline searchutils.Deadline) (int, error) {
+func DeleteSeries(qt *querytracer.Tracer, sq *storage.SearchQuery, deadline searchutils.Deadline) (int, error) {
+	qt = qt.NewChild()
+	defer qt.Donef("delete series: %s", sq)
 	tr := storage.TimeRange{
 		MinTimestamp: sq.MinTimestamp,
 		MaxTimestamp: sq.MaxTimestamp,
@@ -649,11 +654,14 @@ func DeleteSeries(sq *storage.SearchQuery, deadline searchutils.Deadline) (int,
 }
 
 // GetLabelsOnTimeRange returns labels for the given tr until the given deadline.
-func GetLabelsOnTimeRange(tr storage.TimeRange, deadline searchutils.Deadline) ([]string, error) {
+func GetLabelsOnTimeRange(qt *querytracer.Tracer, tr storage.TimeRange, deadline searchutils.Deadline) ([]string, error) {
+	qt = qt.NewChild()
+	defer qt.Donef("get labels on timeRange=%s", &tr)
 	if deadline.Exceeded() {
 		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
 	}
 	labels, err := vmstorage.SearchTagKeysOnTimeRange(tr, *maxTagKeysPerSearch, deadline.Deadline())
+	qt.Printf("get %d labels", len(labels))
 	if err != nil {
 		return nil, fmt.Errorf("error during labels search on time range: %w", err)
 	}
@@ -673,15 +681,18 @@ func GetLabelsOnTimeRange(tr storage.TimeRange, deadline searchutils.Deadline) (
 
 	// Sort labels like Prometheus does
 	sort.Strings(labels)
+	qt.Printf("sort %d labels", len(labels))
 	return labels, nil
 }
 
 // GetGraphiteTags returns Graphite tags until the given deadline.
-func GetGraphiteTags(filter string, limit int, deadline searchutils.Deadline) ([]string, error) {
+func GetGraphiteTags(qt *querytracer.Tracer, filter string, limit int, deadline searchutils.Deadline) ([]string, error) {
+	qt = qt.NewChild()
+	defer qt.Donef("get graphite tags: filter=%s, limit=%d", filter, limit)
 	if deadline.Exceeded() {
 		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
 	}
-	labels, err := GetLabels(deadline)
+	labels, err := GetLabels(nil, deadline)
 	if err != nil {
 		return nil, err
 	}
@@ -722,11 +733,14 @@ func hasString(a []string, s string) bool {
 }
 
 // GetLabels returns labels until the given deadline.
-func GetLabels(deadline searchutils.Deadline) ([]string, error) {
+func GetLabels(qt *querytracer.Tracer, deadline searchutils.Deadline) ([]string, error) {
+	qt = qt.NewChild()
+	defer qt.Donef("get labels")
 	if deadline.Exceeded() {
 		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
 	}
 	labels, err := vmstorage.SearchTagKeys(*maxTagKeysPerSearch, deadline.Deadline())
+	qt.Printf("get %d labels from global index", len(labels))
 	if err != nil {
 		return nil, fmt.Errorf("error during labels search: %w", err)
 	}
@@ -746,6 +760,7 @@ func GetLabels(deadline searchutils.Deadline) ([]string, error) {
 
 	// Sort labels like Prometheus does
 	sort.Strings(labels)
+	qt.Printf("sort %d labels", len(labels))
 	return labels, nil
 }
 
@@ -772,7 +787,9 @@ func mergeStrings(a, b []string) []string {
 
 // GetLabelValuesOnTimeRange returns label values for the given labelName on the given tr
 // until the given deadline.
-func GetLabelValuesOnTimeRange(labelName string, tr storage.TimeRange, deadline searchutils.Deadline) ([]string, error) {
+func GetLabelValuesOnTimeRange(qt *querytracer.Tracer, labelName string, tr storage.TimeRange, deadline searchutils.Deadline) ([]string, error) {
+	qt = qt.NewChild()
+	defer qt.Donef("get values for label %s on a timeRange %s", labelName, &tr)
 	if deadline.Exceeded() {
 		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
 	}
@@ -781,6 +798,7 @@ func GetLabelValuesOnTimeRange(labelName string, tr storage.TimeRange, deadline
 	}
 	// Search for tag values
 	labelValues, err := vmstorage.SearchTagValuesOnTimeRange([]byte(labelName), tr, *maxTagValuesPerSearch, deadline.Deadline())
+	qt.Printf("get %d label values", len(labelValues))
 	if err != nil {
 		return nil, fmt.Errorf("error during label values search on time range for labelName=%q: %w", labelName, err)
 	}
@@ -794,18 +812,21 @@ func GetLabelValuesOnTimeRange(labelName string, tr storage.TimeRange, deadline
 
 	// Sort labelValues like Prometheus does
 	sort.Strings(labelValues)
+	qt.Printf("sort %d label values", len(labelValues))
 	return labelValues, nil
 }
 
 // GetGraphiteTagValues returns tag values for the given tagName until the given deadline.
-func GetGraphiteTagValues(tagName, filter string, limit int, deadline searchutils.Deadline) ([]string, error) {
+func GetGraphiteTagValues(qt *querytracer.Tracer, tagName, filter string, limit int, deadline searchutils.Deadline) ([]string, error) {
+	qt = qt.NewChild()
+	defer qt.Donef("get graphite tag values for tagName=%s, filter=%s, limit=%d", tagName, filter, limit)
 	if deadline.Exceeded() {
 		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
 	}
 	if tagName == "name" {
 		tagName = ""
 	}
-	tagValues, err := GetLabelValues(tagName, deadline)
+	tagValues, err := GetLabelValues(nil, tagName, deadline)
 	if err != nil {
 		return nil, err
 	}
@@ -823,7 +844,9 @@ func GetGraphiteTagValues(tagName, filter string, limit int, deadline searchutil
 
 // GetLabelValues returns label values for the given labelName
 // until the given deadline.
-func GetLabelValues(labelName string, deadline searchutils.Deadline) ([]string, error) {
+func GetLabelValues(qt *querytracer.Tracer, labelName string, deadline searchutils.Deadline) ([]string, error) {
+	qt = qt.NewChild()
+	defer qt.Donef("get values for label %s", labelName)
 	if deadline.Exceeded() {
 		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
 	}
@@ -832,6 +855,7 @@ func GetLabelValues(labelName string, deadline searchutils.Deadline) ([]string,
 	}
 	// Search for tag values
 	labelValues, err := vmstorage.SearchTagValues([]byte(labelName), *maxTagValuesPerSearch, deadline.Deadline())
+	qt.Printf("get %d label values", len(labelValues))
 	if err != nil {
 		return nil, fmt.Errorf("error during label values search for labelName=%q: %w", labelName, err)
 	}
@@ -845,13 +869,16 @@ func GetLabelValues(labelName string, deadline searchutils.Deadline) ([]string,
 
 	// Sort labelValues like Prometheus does
 	sort.Strings(labelValues)
+	qt.Printf("sort %d label values", len(labelValues))
 	return labelValues, nil
 }
 
 // GetTagValueSuffixes returns tag value suffixes for the given tagKey and the given tagValuePrefix.
 //
 // It can be used for implementing https://graphite-api.readthedocs.io/en/latest/api.html#metrics-find
-func GetTagValueSuffixes(tr storage.TimeRange, tagKey, tagValuePrefix string, delimiter byte, deadline searchutils.Deadline) ([]string, error) {
+func GetTagValueSuffixes(qt *querytracer.Tracer, tr storage.TimeRange, tagKey, tagValuePrefix string, delimiter byte, deadline searchutils.Deadline) ([]string, error) {
+	qt = qt.NewChild()
+	defer qt.Donef("get tag value suffixes for tagKey=%s, tagValuePrefix=%s, timeRange=%s", tagKey, tagValuePrefix, &tr)
 	if deadline.Exceeded() {
 		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
 	}
@@ -869,7 +896,9 @@ func GetTagValueSuffixes(tr storage.TimeRange, tagKey, tagValuePrefix string, de
 }
 
 // GetLabelEntries returns all the label entries until the given deadline.
-func GetLabelEntries(deadline searchutils.Deadline) ([]storage.TagEntry, error) {
+func GetLabelEntries(qt *querytracer.Tracer, deadline searchutils.Deadline) ([]storage.TagEntry, error) {
+	qt = qt.NewChild()
+	defer qt.Donef("get label entries")
 	if deadline.Exceeded() {
 		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
 	}
@@ -877,6 +906,7 @@ func GetLabelEntries(deadline searchutils.Deadline) ([]storage.TagEntry, error)
 	if err != nil {
 		return nil, fmt.Errorf("error during label entries request: %w", err)
 	}
+	qt.Printf("get %d label entries", len(labelEntries))
 
 	// Substitute "" with "__name__"
 	for i := range labelEntries {
@@ -894,12 +924,15 @@ func GetLabelEntries(deadline searchutils.Deadline) ([]storage.TagEntry, error)
 		}
 		return labelEntries[i].Key > labelEntries[j].Key
 	})
+	qt.Printf("sort %d label entries", len(labelEntries))
 
 	return labelEntries, nil
 }
 
 // GetTSDBStatusForDate returns tsdb status according to https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats
-func GetTSDBStatusForDate(deadline searchutils.Deadline, date uint64, topN, maxMetrics int) (*storage.TSDBStatus, error) {
+func GetTSDBStatusForDate(qt *querytracer.Tracer, deadline searchutils.Deadline, date uint64, topN, maxMetrics int) (*storage.TSDBStatus, error) {
+	qt = qt.NewChild()
+	defer qt.Donef("get tsdb stats for date=%d, topN=%d", date, topN)
 	if deadline.Exceeded() {
 		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
 	}
@@ -913,7 +946,9 @@ func GetTSDBStatusForDate(deadline searchutils.Deadline, date uint64, topN, maxM
 // GetTSDBStatusWithFilters returns tsdb status according to https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats
 //
 // It accepts arbitrary filters on time series in sq.
-func GetTSDBStatusWithFilters(deadline searchutils.Deadline, sq *storage.SearchQuery, topN int) (*storage.TSDBStatus, error) {
+func GetTSDBStatusWithFilters(qt *querytracer.Tracer, deadline searchutils.Deadline, sq *storage.SearchQuery, topN int) (*storage.TSDBStatus, error) {
+	qt = qt.NewChild()
+	defer qt.Donef("get tsdb stats: %s, topN=%d", sq, topN)
 	if deadline.Exceeded() {
 		return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
 	}
@@ -934,7 +969,9 @@ func GetTSDBStatusWithFilters(deadline searchutils.Deadline, sq *storage.SearchQ
 }
 
 // GetSeriesCount returns the number of unique series.
-func GetSeriesCount(deadline searchutils.Deadline) (uint64, error) {
+func GetSeriesCount(qt *querytracer.Tracer, deadline searchutils.Deadline) (uint64, error) {
+	qt = qt.NewChild()
+	defer qt.Donef("get series count")
 	if deadline.Exceeded() {
 		return 0, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
 	}
@@ -966,7 +1003,9 @@ var ssPool sync.Pool
 // Data processing is immediately stopped if f returns non-nil error.
 // It is the responsibility of f to call b.UnmarshalData before reading timestamps and values from the block.
 // It is the responsibility of f to filter blocks according to the given tr.
-func ExportBlocks(sq *storage.SearchQuery, deadline searchutils.Deadline, f func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error) error {
+func ExportBlocks(qt *querytracer.Tracer, sq *storage.SearchQuery, deadline searchutils.Deadline, f func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error) error {
+	qt = qt.NewChild()
+	defer qt.Donef("export blocks: %s", sq)
 	if deadline.Exceeded() {
 		return fmt.Errorf("timeout exceeded before starting data export: %s", deadline.String())
 	}
@@ -988,7 +1027,7 @@ func ExportBlocks(sq *storage.SearchQuery, deadline searchutils.Deadline, f func
 	sr := getStorageSearch()
 	defer putStorageSearch(sr)
 	startTime := time.Now()
-	sr.Init(vmstorage.Storage, tfss, tr, sq.MaxMetrics, deadline.Deadline())
+	sr.Init(qt, vmstorage.Storage, tfss, tr, sq.MaxMetrics, deadline.Deadline())
 	indexSearchDuration.UpdateDuration(startTime)
 
 	// Start workers that call f in parallel on available CPU cores.
@@ -1021,6 +1060,7 @@ func ExportBlocks(sq *storage.SearchQuery, deadline searchutils.Deadline, f func
 
 	// Feed workers with work
 	blocksRead := 0
+	samples := 0
 	for sr.NextMetricBlock() {
 		blocksRead++
 		if deadline.Exceeded() {
@@ -1033,13 +1073,16 @@ func ExportBlocks(sq *storage.SearchQuery, deadline searchutils.Deadline, f func
 		if err := xw.mn.Unmarshal(sr.MetricBlockRef.MetricName); err != nil {
 			return fmt.Errorf("cannot unmarshal metricName for block #%d: %w", blocksRead, err)
 		}
 		sr.MetricBlockRef.BlockRef.MustReadBlock(&xw.b, true)
|
||||
br := sr.MetricBlockRef.BlockRef
|
||||
br.MustReadBlock(&xw.b, true)
|
||||
samples += br.RowsCount()
|
||||
workCh <- xw
|
||||
}
|
||||
close(workCh)
|
||||
|
||||
// Wait for workers to finish.
|
||||
wg.Wait()
|
||||
qt.Printf("export blocks=%d, samples=%d", blocksRead, samples)
|
||||
|
||||
// Check errors.
|
||||
err = sr.Error()
|
||||
|
@ -1072,7 +1115,9 @@ var exportWorkPool = &sync.Pool{
|
|||
}
|
||||
|
||||
// SearchMetricNames returns all the metric names matching sq until the given deadline.
|
||||
func SearchMetricNames(sq *storage.SearchQuery, deadline searchutils.Deadline) ([]storage.MetricName, error) {
|
||||
func SearchMetricNames(qt *querytracer.Tracer, sq *storage.SearchQuery, deadline searchutils.Deadline) ([]storage.MetricName, error) {
|
||||
qt = qt.NewChild()
|
||||
defer qt.Donef("fetch metric names: %s", sq)
|
||||
if deadline.Exceeded() {
|
||||
return nil, fmt.Errorf("timeout exceeded before starting to search metric names: %s", deadline.String())
|
||||
}
|
||||
|
@ -1090,7 +1135,7 @@ func SearchMetricNames(sq *storage.SearchQuery, deadline searchutils.Deadline) (
|
|||
return nil, err
|
||||
}
|
||||
|
||||
mns, err := vmstorage.SearchMetricNames(tfss, tr, sq.MaxMetrics, deadline.Deadline())
|
||||
mns, err := vmstorage.SearchMetricNames(qt, tfss, tr, sq.MaxMetrics, deadline.Deadline())
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("cannot find metric names: %w", err)
|
||||
}
|
||||
|
@ -1100,7 +1145,9 @@ func SearchMetricNames(sq *storage.SearchQuery, deadline searchutils.Deadline) (
|
|||
// ProcessSearchQuery performs sq until the given deadline.
|
||||
//
|
||||
// Results.RunParallel or Results.Cancel must be called on the returned Results.
|
||||
func ProcessSearchQuery(sq *storage.SearchQuery, fetchData bool, deadline searchutils.Deadline) (*Results, error) {
|
||||
func ProcessSearchQuery(qt *querytracer.Tracer, sq *storage.SearchQuery, fetchData bool, deadline searchutils.Deadline) (*Results, error) {
|
||||
qt = qt.NewChild()
|
||||
defer qt.Donef("fetch matching series: %s, fetchData=%v", sq, fetchData)
|
||||
if deadline.Exceeded() {
|
||||
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
|
||||
}
|
||||
|
@ -1123,7 +1170,7 @@ func ProcessSearchQuery(sq *storage.SearchQuery, fetchData bool, deadline search
|
|||
|
||||
sr := getStorageSearch()
|
||||
startTime := time.Now()
|
||||
maxSeriesCount := sr.Init(vmstorage.Storage, tfss, tr, sq.MaxMetrics, deadline.Deadline())
|
||||
maxSeriesCount := sr.Init(qt, vmstorage.Storage, tfss, tr, sq.MaxMetrics, deadline.Deadline())
|
||||
indexSearchDuration.UpdateDuration(startTime)
|
||||
m := make(map[string][]blockRef, maxSeriesCount)
|
||||
orderedMetricNames := make([]string, 0, maxSeriesCount)
|
||||
|
@ -1180,6 +1227,7 @@ func ProcessSearchQuery(sq *storage.SearchQuery, fetchData bool, deadline search
|
|||
putStorageSearch(sr)
|
||||
return nil, fmt.Errorf("cannot finalize temporary file: %w", err)
|
||||
}
|
||||
qt.Printf("fetch unique series=%d, blocks=%d, samples=%d, bytes=%d", len(m), blocksRead, samples, tbf.Len())
|
||||
|
||||
// Fetch data from promdb.
|
||||
pm := make(map[string]*promData)
|
||||
|
|
|
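The hunks above repeat one prelude in every public function: open a child trace span, defer its completion message, record intermediate events with Printf, and bail out early when the deadline has already passed. The sketch below illustrates that pattern in isolation; the Tracer here is a toy stand-in written for this note, and only the NewChild/Donef/Printf call shapes are taken from the diff's use of lib/querytracer:

```go
package main

import (
	"fmt"
	"time"
)

// Tracer is a toy stand-in for lib/querytracer.Tracer; only the methods
// visible in the diff above (NewChild, Donef, Printf) are sketched here.
type Tracer struct {
	depth int
	start time.Time
}

// NewChild opens a nested span. A nil tracer stays nil, so tracing can be
// disabled without nil checks at every call site.
func (qt *Tracer) NewChild() *Tracer {
	if qt == nil {
		return nil
	}
	return &Tracer{depth: qt.depth + 1, start: time.Now()}
}

// Printf records an intermediate event inside the span.
func (qt *Tracer) Printf(format string, args ...interface{}) {
	if qt == nil {
		return
	}
	fmt.Printf("%*s- %s\n", qt.depth*2, "", fmt.Sprintf(format, args...))
}

// Donef closes the span with a summary message and its duration.
func (qt *Tracer) Donef(format string, args ...interface{}) {
	if qt == nil {
		return
	}
	fmt.Printf("%*s%s (%s)\n", qt.depth*2, "", fmt.Sprintf(format, args...), time.Since(qt.start))
}

// getLabelValues mirrors the prelude the diff adds to GetLabelValues.
func getLabelValues(qt *Tracer, labelName string) ([]string, error) {
	qt = qt.NewChild()
	defer qt.Donef("get label values for labelName=%q", labelName)

	values := []string{"a", "b"} // placeholder for the storage lookup
	qt.Printf("get %d label values", len(values))
	return values, nil
}

func main() {
	root := &Tracer{start: time.Now()}
	if _, err := getLabelValues(root, "job"); err != nil {
		fmt.Println(err)
	}
	root.Donef("query finished")
}
```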
@@ -124,6 +124,11 @@ func (tbf *tmpBlocksFile) WriteBlockRefData(b []byte) (tmpBlockAddr, error) {
return addr, nil
}

// Len() returns tbf size in bytes.
func (tbf *tmpBlocksFile) Len() uint64 {
return tbf.offset
}

func (tbf *tmpBlocksFile) Finalize() error {
if tbf.f == nil {
return nil
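The new Len() simply reports the current write offset, which works because tmpBlocksFile is append-only: the offset advanced by every write equals the total bytes stored. A hypothetical sketch of the same accounting on an in-memory writer (appendOnlyBuf is illustrative, not part of the codebase):

```go
package main

import "fmt"

// appendOnlyBuf mimics the tmpBlocksFile accounting: offset grows with
// every write, so it doubles as the total size in bytes.
type appendOnlyBuf struct {
	buf    []byte
	offset uint64
}

func (b *appendOnlyBuf) Write(p []byte) (int, error) {
	b.buf = append(b.buf, p...)
	b.offset += uint64(len(p))
	return len(p), nil
}

// Len returns the size in bytes, just as tmpBlocksFile.Len returns its
// own offset field.
func (b *appendOnlyBuf) Len() uint64 { return b.offset }

func main() {
	var b appendOnlyBuf
	b.Write([]byte("block-1"))
	b.Write([]byte("block-2"))
	fmt.Println(b.Len()) // 14
}
```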
@@ -5,6 +5,7 @@
"time"

"github.com/valyala/quicktemplate"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
) %}
@@ -125,8 +126,12 @@
}
{% endfunc %}

{% func ExportPromAPIResponse(resultsCh <-chan *quicktemplate.ByteBuffer) %}
{% func ExportPromAPIResponse(resultsCh <-chan *quicktemplate.ByteBuffer, qt *querytracer.Tracer) %}
{
{% code
lines := 0
bytesTotal := 0
%}
"status":"success",
"data":{
"resultType":"matrix",

@@ -134,18 +139,30 @@
{% code bb, ok := <-resultsCh %}
{% if ok %}
{%z= bb.B %}
{% code quicktemplate.ReleaseByteBuffer(bb) %}
{% code
lines++
bytesTotal += len(bb.B)
quicktemplate.ReleaseByteBuffer(bb)
%}
{% for bb := range resultsCh %}
,{%z= bb.B %}
{% code quicktemplate.ReleaseByteBuffer(bb) %}
{% code
lines++
bytesTotal += len(bb.B)
quicktemplate.ReleaseByteBuffer(bb)
%}
{% endfor %}
{% endif %}
]
}
{% code
qt.Donef("export format=promapi: lines=%d, bytes=%d", lines, bytesTotal)
%}
{%= dumpQueryTrace(qt) %}
}
{% endfunc %}

{% func ExportStdResponse(resultsCh <-chan *quicktemplate.ByteBuffer) %}
{% func ExportStdResponse(resultsCh <-chan *quicktemplate.ByteBuffer, qt *querytracer.Tracer) %}
{% for bb := range resultsCh %}
{%z= bb.B %}
{% code quicktemplate.ReleaseByteBuffer(bb) %}
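The template change above threads the tracer through the export response: while draining resultsCh it now counts emitted lines and bytes so the closing qt.Donef can report them. A minimal sketch of that accounting, assuming a plain bytes.Buffer in place of quicktemplate.ByteBuffer and collapsing the template's first-item/loop split into a flag (the real template also returns buffers to a pool):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// drain mimics the accounting added to ExportPromAPIResponse: stream each
// buffered result, comma-separated, while counting lines and bytes for the
// final trace message.
func drain(resultsCh <-chan *bytes.Buffer) (lines, bytesTotal int) {
	first := true
	for bb := range resultsCh {
		if !first {
			os.Stdout.WriteString(",")
		}
		first = false
		os.Stdout.Write(bb.Bytes())
		lines++
		bytesTotal += bb.Len()
	}
	return lines, bytesTotal
}

func main() {
	ch := make(chan *bytes.Buffer, 2)
	ch <- bytes.NewBufferString(`{"metric":{}}`)
	ch <- bytes.NewBufferString(`{"metric":{}}`)
	close(ch)
	lines, n := drain(ch)
	fmt.Printf("\nexport format=promapi: lines=%d, bytes=%d\n", lines, n)
}
```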
@@ -11,570 +11,588 @@ import (
"strings"
"time"

"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/valyala/quicktemplate"
)

//line app/vmselect/prometheus/export.qtpl:13
//line app/vmselect/prometheus/export.qtpl:14
import (
qtio422016 "io"

qt422016 "github.com/valyala/quicktemplate"
)

//line app/vmselect/prometheus/export.qtpl:13
//line app/vmselect/prometheus/export.qtpl:14
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)

//line app/vmselect/prometheus/export.qtpl:13
//line app/vmselect/prometheus/export.qtpl:14
func StreamExportCSVLine(qw422016 *qt422016.Writer, xb *exportBlock, fieldNames []string) {
//line app/vmselect/prometheus/export.qtpl:14
if len(xb.timestamps) == 0 || len(fieldNames) == 0 {
//line app/vmselect/prometheus/export.qtpl:14
return
//line app/vmselect/prometheus/export.qtpl:14
}
//line app/vmselect/prometheus/export.qtpl:15
for i, timestamp := range xb.timestamps {
if len(xb.timestamps) == 0 || len(fieldNames) == 0 {
//line app/vmselect/prometheus/export.qtpl:15
return
//line app/vmselect/prometheus/export.qtpl:15
}
//line app/vmselect/prometheus/export.qtpl:16
for i, timestamp := range xb.timestamps {
//line app/vmselect/prometheus/export.qtpl:17
value := xb.values[i]

//line app/vmselect/prometheus/export.qtpl:17
//line app/vmselect/prometheus/export.qtpl:18
streamexportCSVField(qw422016, xb.mn, fieldNames[0], timestamp, value)
//line app/vmselect/prometheus/export.qtpl:18
//line app/vmselect/prometheus/export.qtpl:19
for _, fieldName := range fieldNames[1:] {
//line app/vmselect/prometheus/export.qtpl:18
//line app/vmselect/prometheus/export.qtpl:19
qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:20
streamexportCSVField(qw422016, xb.mn, fieldName, timestamp, value)
//line app/vmselect/prometheus/export.qtpl:21
}
streamexportCSVField(qw422016, xb.mn, fieldName, timestamp, value)
//line app/vmselect/prometheus/export.qtpl:22
}
//line app/vmselect/prometheus/export.qtpl:23
qw422016.N().S(`
`)
//line app/vmselect/prometheus/export.qtpl:23
}
//line app/vmselect/prometheus/export.qtpl:24
}
//line app/vmselect/prometheus/export.qtpl:25
}

//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:25
func WriteExportCSVLine(qq422016 qtio422016.Writer, xb *exportBlock, fieldNames []string) {
//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:25
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:25
StreamExportCSVLine(qw422016, xb, fieldNames)
//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:25
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:25
}

//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:25
func ExportCSVLine(xb *exportBlock, fieldNames []string) string {
//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:25
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:25
WriteExportCSVLine(qb422016, xb, fieldNames)
//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:25
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:25
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:25
return qs422016
//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:25
}

//line app/vmselect/prometheus/export.qtpl:26
func streamexportCSVField(qw422016 *qt422016.Writer, mn *storage.MetricName, fieldName string, timestamp int64, value float64) {
//line app/vmselect/prometheus/export.qtpl:27
if fieldName == "__value__" {
func streamexportCSVField(qw422016 *qt422016.Writer, mn *storage.MetricName, fieldName string, timestamp int64, value float64) {
//line app/vmselect/prometheus/export.qtpl:28
qw422016.N().F(value)
if fieldName == "__value__" {
//line app/vmselect/prometheus/export.qtpl:29
return
qw422016.N().F(value)
//line app/vmselect/prometheus/export.qtpl:30
}
//line app/vmselect/prometheus/export.qtpl:31
if fieldName == "__timestamp__" {
//line app/vmselect/prometheus/export.qtpl:32
qw422016.N().DL(timestamp)
//line app/vmselect/prometheus/export.qtpl:33
return
//line app/vmselect/prometheus/export.qtpl:34
//line app/vmselect/prometheus/export.qtpl:31
}
//line app/vmselect/prometheus/export.qtpl:32
if fieldName == "__timestamp__" {
//line app/vmselect/prometheus/export.qtpl:33
qw422016.N().DL(timestamp)
//line app/vmselect/prometheus/export.qtpl:34
return
//line app/vmselect/prometheus/export.qtpl:35
if strings.HasPrefix(fieldName, "__timestamp__:") {
}
//line app/vmselect/prometheus/export.qtpl:36
if strings.HasPrefix(fieldName, "__timestamp__:") {
//line app/vmselect/prometheus/export.qtpl:37
timeFormat := fieldName[len("__timestamp__:"):]

//line app/vmselect/prometheus/export.qtpl:37
switch timeFormat {
//line app/vmselect/prometheus/export.qtpl:38
case "unix_s":
switch timeFormat {
//line app/vmselect/prometheus/export.qtpl:39
qw422016.N().DL(timestamp / 1000)
case "unix_s":
//line app/vmselect/prometheus/export.qtpl:40
case "unix_ms":
qw422016.N().DL(timestamp / 1000)
//line app/vmselect/prometheus/export.qtpl:41
qw422016.N().DL(timestamp)
case "unix_ms":
//line app/vmselect/prometheus/export.qtpl:42
case "unix_ns":
qw422016.N().DL(timestamp)
//line app/vmselect/prometheus/export.qtpl:43
qw422016.N().DL(timestamp * 1e6)
case "unix_ns":
//line app/vmselect/prometheus/export.qtpl:44
qw422016.N().DL(timestamp * 1e6)
//line app/vmselect/prometheus/export.qtpl:45
case "rfc3339":
//line app/vmselect/prometheus/export.qtpl:46
//line app/vmselect/prometheus/export.qtpl:47
bb := quicktemplate.AcquireByteBuffer()
bb.B = time.Unix(timestamp/1000, (timestamp%1000)*1e6).AppendFormat(bb.B[:0], time.RFC3339)

//line app/vmselect/prometheus/export.qtpl:49
//line app/vmselect/prometheus/export.qtpl:50
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:51
//line app/vmselect/prometheus/export.qtpl:52
quicktemplate.ReleaseByteBuffer(bb)

//line app/vmselect/prometheus/export.qtpl:53
default:
//line app/vmselect/prometheus/export.qtpl:54
default:
//line app/vmselect/prometheus/export.qtpl:55
if strings.HasPrefix(timeFormat, "custom:") {
//line app/vmselect/prometheus/export.qtpl:56
//line app/vmselect/prometheus/export.qtpl:57
layout := timeFormat[len("custom:"):]
bb := quicktemplate.AcquireByteBuffer()
bb.B = time.Unix(timestamp/1000, (timestamp%1000)*1e6).AppendFormat(bb.B[:0], layout)

//line app/vmselect/prometheus/export.qtpl:60
if bytes.ContainsAny(bb.B, `"`+",\n") {
//line app/vmselect/prometheus/export.qtpl:61
qw422016.E().QZ(bb.B)
if bytes.ContainsAny(bb.B, `"`+",\n") {
//line app/vmselect/prometheus/export.qtpl:62
} else {
qw422016.E().QZ(bb.B)
//line app/vmselect/prometheus/export.qtpl:63
qw422016.N().Z(bb.B)
} else {
//line app/vmselect/prometheus/export.qtpl:64
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:65
}
//line app/vmselect/prometheus/export.qtpl:66
//line app/vmselect/prometheus/export.qtpl:67
quicktemplate.ReleaseByteBuffer(bb)

//line app/vmselect/prometheus/export.qtpl:68
} else {
//line app/vmselect/prometheus/export.qtpl:68
qw422016.N().S(`Unsupported timeFormat=`)
//line app/vmselect/prometheus/export.qtpl:69
qw422016.N().S(timeFormat)
} else {
//line app/vmselect/prometheus/export.qtpl:69
qw422016.N().S(`Unsupported timeFormat=`)
//line app/vmselect/prometheus/export.qtpl:70
}
qw422016.N().S(timeFormat)
//line app/vmselect/prometheus/export.qtpl:71
}
}
//line app/vmselect/prometheus/export.qtpl:72
return
}
//line app/vmselect/prometheus/export.qtpl:73
}
return
//line app/vmselect/prometheus/export.qtpl:74
}
//line app/vmselect/prometheus/export.qtpl:75
v := mn.GetTagValue(fieldName)

//line app/vmselect/prometheus/export.qtpl:75
if bytes.ContainsAny(v, `"`+",\n") {
//line app/vmselect/prometheus/export.qtpl:76
qw422016.N().QZ(v)
if bytes.ContainsAny(v, `"`+",\n") {
//line app/vmselect/prometheus/export.qtpl:77
} else {
qw422016.N().QZ(v)
//line app/vmselect/prometheus/export.qtpl:78
qw422016.N().Z(v)
} else {
//line app/vmselect/prometheus/export.qtpl:79
}
qw422016.N().Z(v)
//line app/vmselect/prometheus/export.qtpl:80
}
//line app/vmselect/prometheus/export.qtpl:81
}

//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:81
func writeexportCSVField(qq422016 qtio422016.Writer, mn *storage.MetricName, fieldName string, timestamp int64, value float64) {
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:81
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:81
streamexportCSVField(qw422016, mn, fieldName, timestamp, value)
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:81
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:81
}

//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:81
func exportCSVField(mn *storage.MetricName, fieldName string, timestamp int64, value float64) string {
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:81
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:81
writeexportCSVField(qb422016, mn, fieldName, timestamp, value)
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:81
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:81
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:81
return qs422016
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:81
}

//line app/vmselect/prometheus/export.qtpl:82
//line app/vmselect/prometheus/export.qtpl:83
func StreamExportPrometheusLine(qw422016 *qt422016.Writer, xb *exportBlock) {
//line app/vmselect/prometheus/export.qtpl:83
if len(xb.timestamps) == 0 {
//line app/vmselect/prometheus/export.qtpl:83
return
//line app/vmselect/prometheus/export.qtpl:83
}
//line app/vmselect/prometheus/export.qtpl:84
if len(xb.timestamps) == 0 {
//line app/vmselect/prometheus/export.qtpl:84
return
//line app/vmselect/prometheus/export.qtpl:84
}
//line app/vmselect/prometheus/export.qtpl:85
bb := quicktemplate.AcquireByteBuffer()

//line app/vmselect/prometheus/export.qtpl:85
//line app/vmselect/prometheus/export.qtpl:86
writeprometheusMetricName(bb, xb.mn)

//line app/vmselect/prometheus/export.qtpl:86
//line app/vmselect/prometheus/export.qtpl:87
for i, ts := range xb.timestamps {
//line app/vmselect/prometheus/export.qtpl:87
//line app/vmselect/prometheus/export.qtpl:88
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:87
qw422016.N().S(` `)
//line app/vmselect/prometheus/export.qtpl:88
qw422016.N().S(` `)
//line app/vmselect/prometheus/export.qtpl:89
qw422016.N().F(xb.values[i])
//line app/vmselect/prometheus/export.qtpl:88
//line app/vmselect/prometheus/export.qtpl:89
qw422016.N().S(` `)
//line app/vmselect/prometheus/export.qtpl:89
//line app/vmselect/prometheus/export.qtpl:90
qw422016.N().DL(ts)
//line app/vmselect/prometheus/export.qtpl:89
//line app/vmselect/prometheus/export.qtpl:90
qw422016.N().S(`
`)
//line app/vmselect/prometheus/export.qtpl:90
}
//line app/vmselect/prometheus/export.qtpl:91
}
//line app/vmselect/prometheus/export.qtpl:92
quicktemplate.ReleaseByteBuffer(bb)

//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
}

//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
func WriteExportPrometheusLine(qq422016 qtio422016.Writer, xb *exportBlock) {
//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
StreamExportPrometheusLine(qw422016, xb)
//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
}

//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
func ExportPrometheusLine(xb *exportBlock) string {
//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
WriteExportPrometheusLine(qb422016, xb)
//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
return qs422016
//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:93
}

//line app/vmselect/prometheus/export.qtpl:94
//line app/vmselect/prometheus/export.qtpl:95
func StreamExportJSONLine(qw422016 *qt422016.Writer, xb *exportBlock) {
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:96
if len(xb.timestamps) == 0 {
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:96
return
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:96
}
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:96
qw422016.N().S(`{"metric":`)
//line app/vmselect/prometheus/export.qtpl:97
//line app/vmselect/prometheus/export.qtpl:98
streammetricNameObject(qw422016, xb.mn)
//line app/vmselect/prometheus/export.qtpl:97
//line app/vmselect/prometheus/export.qtpl:98
qw422016.N().S(`,"values":[`)
//line app/vmselect/prometheus/export.qtpl:99
if len(xb.values) > 0 {
//line app/vmselect/prometheus/export.qtpl:100
if len(xb.values) > 0 {
//line app/vmselect/prometheus/export.qtpl:101
values := xb.values

//line app/vmselect/prometheus/export.qtpl:101
qw422016.N().F(values[0])
//line app/vmselect/prometheus/export.qtpl:102
qw422016.N().F(values[0])
//line app/vmselect/prometheus/export.qtpl:103
values = values[1:]

//line app/vmselect/prometheus/export.qtpl:103
//line app/vmselect/prometheus/export.qtpl:104
for _, v := range values {
//line app/vmselect/prometheus/export.qtpl:103
//line app/vmselect/prometheus/export.qtpl:104
qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:104
if math.IsNaN(v) {
//line app/vmselect/prometheus/export.qtpl:104
qw422016.N().S(`null`)
//line app/vmselect/prometheus/export.qtpl:104
} else {
//line app/vmselect/prometheus/export.qtpl:104
qw422016.N().F(v)
//line app/vmselect/prometheus/export.qtpl:104
}
//line app/vmselect/prometheus/export.qtpl:105
if math.IsNaN(v) {
//line app/vmselect/prometheus/export.qtpl:105
qw422016.N().S(`null`)
//line app/vmselect/prometheus/export.qtpl:105
} else {
//line app/vmselect/prometheus/export.qtpl:105
qw422016.N().F(v)
//line app/vmselect/prometheus/export.qtpl:105
}
//line app/vmselect/prometheus/export.qtpl:106
}
//line app/vmselect/prometheus/export.qtpl:106
//line app/vmselect/prometheus/export.qtpl:107
}
//line app/vmselect/prometheus/export.qtpl:106
//line app/vmselect/prometheus/export.qtpl:107
qw422016.N().S(`],"timestamps":[`)
//line app/vmselect/prometheus/export.qtpl:109
if len(xb.timestamps) > 0 {
//line app/vmselect/prometheus/export.qtpl:110
if len(xb.timestamps) > 0 {
//line app/vmselect/prometheus/export.qtpl:111
timestamps := xb.timestamps

//line app/vmselect/prometheus/export.qtpl:111
qw422016.N().DL(timestamps[0])
//line app/vmselect/prometheus/export.qtpl:112
qw422016.N().DL(timestamps[0])
//line app/vmselect/prometheus/export.qtpl:113
timestamps = timestamps[1:]

//line app/vmselect/prometheus/export.qtpl:113
for _, ts := range timestamps {
//line app/vmselect/prometheus/export.qtpl:113
qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:114
qw422016.N().DL(ts)
for _, ts := range timestamps {
//line app/vmselect/prometheus/export.qtpl:114
qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:115
qw422016.N().DL(ts)
//line app/vmselect/prometheus/export.qtpl:116
}
//line app/vmselect/prometheus/export.qtpl:116
//line app/vmselect/prometheus/export.qtpl:117
}
//line app/vmselect/prometheus/export.qtpl:116
//line app/vmselect/prometheus/export.qtpl:117
qw422016.N().S(`]}`)
//line app/vmselect/prometheus/export.qtpl:118
//line app/vmselect/prometheus/export.qtpl:119
qw422016.N().S(`
`)
//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
}

//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
func WriteExportJSONLine(qq422016 qtio422016.Writer, xb *exportBlock) {
//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
StreamExportJSONLine(qw422016, xb)
//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
}

//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
func ExportJSONLine(xb *exportBlock) string {
//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
WriteExportJSONLine(qb422016, xb)
//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
return qs422016
//line app/vmselect/prometheus/export.qtpl:119
//line app/vmselect/prometheus/export.qtpl:120
}

//line app/vmselect/prometheus/export.qtpl:121
//line app/vmselect/prometheus/export.qtpl:122
func StreamExportPromAPILine(qw422016 *qt422016.Writer, xb *exportBlock) {
//line app/vmselect/prometheus/export.qtpl:121
//line app/vmselect/prometheus/export.qtpl:122
qw422016.N().S(`{"metric":`)
//line app/vmselect/prometheus/export.qtpl:123
//line app/vmselect/prometheus/export.qtpl:124
streammetricNameObject(qw422016, xb.mn)
//line app/vmselect/prometheus/export.qtpl:123
//line app/vmselect/prometheus/export.qtpl:124
qw422016.N().S(`,"values":`)
//line app/vmselect/prometheus/export.qtpl:124
//line app/vmselect/prometheus/export.qtpl:125
streamvaluesWithTimestamps(qw422016, xb.values, xb.timestamps)
//line app/vmselect/prometheus/export.qtpl:124
//line app/vmselect/prometheus/export.qtpl:125
qw422016.N().S(`}`)
//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
}

//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
func WriteExportPromAPILine(qq422016 qtio422016.Writer, xb *exportBlock) {
//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
StreamExportPromAPILine(qw422016, xb)
//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
}

//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
func ExportPromAPILine(xb *exportBlock) string {
//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
WriteExportPromAPILine(qb422016, xb)
//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
return qs422016
//line app/vmselect/prometheus/export.qtpl:126
//line app/vmselect/prometheus/export.qtpl:127
}

//line app/vmselect/prometheus/export.qtpl:128
func StreamExportPromAPIResponse(qw422016 *qt422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/export.qtpl:128
qw422016.N().S(`{"status":"success","data":{"resultType":"matrix","result":[`)
//line app/vmselect/prometheus/export.qtpl:129
func StreamExportPromAPIResponse(qw422016 *qt422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/export.qtpl:129
qw422016.N().S(`{`)
//line app/vmselect/prometheus/export.qtpl:132
lines := 0
bytesTotal := 0

//line app/vmselect/prometheus/export.qtpl:134
qw422016.N().S(`"status":"success","data":{"resultType":"matrix","result":[`)
//line app/vmselect/prometheus/export.qtpl:139
bb, ok := <-resultsCh

//line app/vmselect/prometheus/export.qtpl:135
//line app/vmselect/prometheus/export.qtpl:140
if ok {
//line app/vmselect/prometheus/export.qtpl:136
//line app/vmselect/prometheus/export.qtpl:141
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:137
//line app/vmselect/prometheus/export.qtpl:143
lines++
bytesTotal += len(bb.B)
quicktemplate.ReleaseByteBuffer(bb)

//line app/vmselect/prometheus/export.qtpl:138
//line app/vmselect/prometheus/export.qtpl:147
for bb := range resultsCh {
//line app/vmselect/prometheus/export.qtpl:138
//line app/vmselect/prometheus/export.qtpl:147
qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:139
//line app/vmselect/prometheus/export.qtpl:148
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:140
//line app/vmselect/prometheus/export.qtpl:150
lines++
bytesTotal += len(bb.B)
quicktemplate.ReleaseByteBuffer(bb)

//line app/vmselect/prometheus/export.qtpl:141
//line app/vmselect/prometheus/export.qtpl:154
}
//line app/vmselect/prometheus/export.qtpl:142
//line app/vmselect/prometheus/export.qtpl:155
}
//line app/vmselect/prometheus/export.qtpl:142
qw422016.N().S(`]}}`)
//line app/vmselect/prometheus/export.qtpl:146
//line app/vmselect/prometheus/export.qtpl:155
qw422016.N().S(`]}`)
//line app/vmselect/prometheus/export.qtpl:159
qt.Donef("export format=promapi: lines=%d, bytes=%d", lines, bytesTotal)

//line app/vmselect/prometheus/export.qtpl:161
streamdumpQueryTrace(qw422016, qt)
//line app/vmselect/prometheus/export.qtpl:161
qw422016.N().S(`}`)
//line app/vmselect/prometheus/export.qtpl:163
}

//line app/vmselect/prometheus/export.qtpl:146
func WriteExportPromAPIResponse(qq422016 qtio422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/export.qtpl:146
//line app/vmselect/prometheus/export.qtpl:163
func WriteExportPromAPIResponse(qq422016 qtio422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/export.qtpl:163
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:146
StreamExportPromAPIResponse(qw422016, resultsCh)
//line app/vmselect/prometheus/export.qtpl:146
//line app/vmselect/prometheus/export.qtpl:163
StreamExportPromAPIResponse(qw422016, resultsCh, qt)
//line app/vmselect/prometheus/export.qtpl:163
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:146
//line app/vmselect/prometheus/export.qtpl:163
}

//line app/vmselect/prometheus/export.qtpl:146
func ExportPromAPIResponse(resultsCh <-chan *quicktemplate.ByteBuffer) string {
//line app/vmselect/prometheus/export.qtpl:146
//line app/vmselect/prometheus/export.qtpl:163
func ExportPromAPIResponse(resultsCh <-chan *quicktemplate.ByteBuffer, qt *querytracer.Tracer) string {
//line app/vmselect/prometheus/export.qtpl:163
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:146
WriteExportPromAPIResponse(qb422016, resultsCh)
//line app/vmselect/prometheus/export.qtpl:146
//line app/vmselect/prometheus/export.qtpl:163
WriteExportPromAPIResponse(qb422016, resultsCh, qt)
//line app/vmselect/prometheus/export.qtpl:163
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:146
//line app/vmselect/prometheus/export.qtpl:163
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:146
//line app/vmselect/prometheus/export.qtpl:163
return qs422016
//line app/vmselect/prometheus/export.qtpl:146
//line app/vmselect/prometheus/export.qtpl:163
}

//line app/vmselect/prometheus/export.qtpl:148
func StreamExportStdResponse(qw422016 *qt422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/export.qtpl:149
//line app/vmselect/prometheus/export.qtpl:165
func StreamExportStdResponse(qw422016 *qt422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/export.qtpl:166
for bb := range resultsCh {
//line app/vmselect/prometheus/export.qtpl:150
//line app/vmselect/prometheus/export.qtpl:167
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:151
//line app/vmselect/prometheus/export.qtpl:168
quicktemplate.ReleaseByteBuffer(bb)

//line app/vmselect/prometheus/export.qtpl:152
//line app/vmselect/prometheus/export.qtpl:169
}
//line app/vmselect/prometheus/export.qtpl:153
//line app/vmselect/prometheus/export.qtpl:170
}

//line app/vmselect/prometheus/export.qtpl:153
func WriteExportStdResponse(qq422016 qtio422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/export.qtpl:153
//line app/vmselect/prometheus/export.qtpl:170
func WriteExportStdResponse(qq422016 qtio422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/export.qtpl:170
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:153
StreamExportStdResponse(qw422016, resultsCh)
//line app/vmselect/prometheus/export.qtpl:153
//line app/vmselect/prometheus/export.qtpl:170
StreamExportStdResponse(qw422016, resultsCh, qt)
//line app/vmselect/prometheus/export.qtpl:170
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:153
//line app/vmselect/prometheus/export.qtpl:170
}

//line app/vmselect/prometheus/export.qtpl:153
func ExportStdResponse(resultsCh <-chan *quicktemplate.ByteBuffer) string {
//line app/vmselect/prometheus/export.qtpl:153
//line app/vmselect/prometheus/export.qtpl:170
func ExportStdResponse(resultsCh <-chan *quicktemplate.ByteBuffer, qt *querytracer.Tracer) string {
//line app/vmselect/prometheus/export.qtpl:170
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:153
WriteExportStdResponse(qb422016, resultsCh)
//line app/vmselect/prometheus/export.qtpl:153
//line app/vmselect/prometheus/export.qtpl:170
WriteExportStdResponse(qb422016, resultsCh, qt)
//line app/vmselect/prometheus/export.qtpl:170
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:153
//line app/vmselect/prometheus/export.qtpl:170
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:153
//line app/vmselect/prometheus/export.qtpl:170
return qs422016
//line app/vmselect/prometheus/export.qtpl:153
//line app/vmselect/prometheus/export.qtpl:170
}

//line app/vmselect/prometheus/export.qtpl:155
//line app/vmselect/prometheus/export.qtpl:172
func streamprometheusMetricName(qw422016 *qt422016.Writer, mn *storage.MetricName) {
//line app/vmselect/prometheus/export.qtpl:156
//line app/vmselect/prometheus/export.qtpl:173
qw422016.N().Z(mn.MetricGroup)
//line app/vmselect/prometheus/export.qtpl:157
//line app/vmselect/prometheus/export.qtpl:174
if len(mn.Tags) > 0 {
//line app/vmselect/prometheus/export.qtpl:157
//line app/vmselect/prometheus/export.qtpl:174
qw422016.N().S(`{`)
//line app/vmselect/prometheus/export.qtpl:159
//line app/vmselect/prometheus/export.qtpl:176
tags := mn.Tags

//line app/vmselect/prometheus/export.qtpl:160
//line app/vmselect/prometheus/export.qtpl:177
qw422016.N().Z(tags[0].Key)
//line app/vmselect/prometheus/export.qtpl:160
//line app/vmselect/prometheus/export.qtpl:177
qw422016.N().S(`=`)
//line app/vmselect/prometheus/export.qtpl:160
//line app/vmselect/prometheus/export.qtpl:177
qw422016.N().QZ(tags[0].Value)
//line app/vmselect/prometheus/export.qtpl:161
//line app/vmselect/prometheus/export.qtpl:178
tags = tags[1:]

//line app/vmselect/prometheus/export.qtpl:162
//line app/vmselect/prometheus/export.qtpl:179
for i := range tags {
//line app/vmselect/prometheus/export.qtpl:163
//line app/vmselect/prometheus/export.qtpl:180
tag := &tags[i]

//line app/vmselect/prometheus/export.qtpl:163
//line app/vmselect/prometheus/export.qtpl:180
qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:164
//line app/vmselect/prometheus/export.qtpl:181
qw422016.N().Z(tag.Key)
//line app/vmselect/prometheus/export.qtpl:164
//line app/vmselect/prometheus/export.qtpl:181
qw422016.N().S(`=`)
//line app/vmselect/prometheus/export.qtpl:164
//line app/vmselect/prometheus/export.qtpl:181
qw422016.N().QZ(tag.Value)
//line app/vmselect/prometheus/export.qtpl:165
//line app/vmselect/prometheus/export.qtpl:182
}
//line app/vmselect/prometheus/export.qtpl:165
//line app/vmselect/prometheus/export.qtpl:182
qw422016.N().S(`}`)
//line app/vmselect/prometheus/export.qtpl:167
//line app/vmselect/prometheus/export.qtpl:184
}
//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
}

//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
func writeprometheusMetricName(qq422016 qtio422016.Writer, mn *storage.MetricName) {
//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
streamprometheusMetricName(qw422016, mn)
//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
}

//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
func prometheusMetricName(mn *storage.MetricName) string {
//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
writeprometheusMetricName(qb422016, mn)
//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
return qs422016
//line app/vmselect/prometheus/export.qtpl:168
//line app/vmselect/prometheus/export.qtpl:185
}
@@ -1,7 +1,12 @@
{% stripspace %}

{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
) %}

LabelValuesResponse generates response for /api/v1/label/<labelName>/values .
See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values
{% func LabelValuesResponse(labelValues []string) %}
{% func LabelValuesResponse(labelValues []string, qt *querytracer.Tracer, qtDone func()) %}
{
"status":"success",
"data":[

@@ -10,6 +15,11 @@ See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-va
{% if i+1 < len(labelValues) %},{% endif %}
{% endfor %}
]
{% code
qt.Printf("generate response for %d label values", len(labelValues))
qtDone()
%}
{%= dumpQueryTrace(qt) %}
}
{% endfunc %}
{% endstripspace %}
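Note the second new parameter here: besides the tracer itself, the template receives a qtDone callback, which it invokes after writing the data array but before dumping the trace. This lets the caller finalize the root span so the reported duration excludes serializing the trace into the same response body. A minimal sketch of the idea, assuming an illustrative writeLabelValues helper rather than the generated template code:

```go
package main

import (
	"fmt"
	"time"
)

// writeLabelValues mimics the qtDone pattern: the caller passes a callback
// that finalizes timing before the trace is serialized into the same
// response body.
func writeLabelValues(values []string, qtDone func()) {
	fmt.Print(`{"status":"success","data":[`)
	for i, v := range values {
		if i > 0 {
			fmt.Print(",")
		}
		fmt.Printf("%q", v)
	}
	fmt.Print("]")
	qtDone() // stop the clock before dumping the trace
	fmt.Println("}")
}

func main() {
	start := time.Now()
	qtDone := func() {
		fmt.Printf(`,"trace":"done in %s"`, time.Since(start))
	}
	writeLabelValues([]string{"dev", "prod"}, qtDone)
}
```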
@@ -1,67 +1,80 @@
// Code generated by qtc from "label_values_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.

// LabelValuesResponse generates response for /api/v1/label/<labelName>/values .See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values

//line app/vmselect/prometheus/label_values_response.qtpl:4
//line app/vmselect/prometheus/label_values_response.qtpl:3
package prometheus

//line app/vmselect/prometheus/label_values_response.qtpl:4
//line app/vmselect/prometheus/label_values_response.qtpl:3
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
)

// LabelValuesResponse generates response for /api/v1/label/<labelName>/values .See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values

//line app/vmselect/prometheus/label_values_response.qtpl:9
import (
qtio422016 "io"

qt422016 "github.com/valyala/quicktemplate"
)

//line app/vmselect/prometheus/label_values_response.qtpl:4
//line app/vmselect/prometheus/label_values_response.qtpl:9
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)

//line app/vmselect/prometheus/label_values_response.qtpl:4
func StreamLabelValuesResponse(qw422016 *qt422016.Writer, labelValues []string) {
//line app/vmselect/prometheus/label_values_response.qtpl:4
qw422016.N().S(`{"status":"success","data":[`)
//line app/vmselect/prometheus/label_values_response.qtpl:8
for i, labelValue := range labelValues {
//line app/vmselect/prometheus/label_values_response.qtpl:9
func StreamLabelValuesResponse(qw422016 *qt422016.Writer, labelValues []string, qt *querytracer.Tracer, qtDone func()) {
//line app/vmselect/prometheus/label_values_response.qtpl:9
qw422016.N().S(`{"status":"success","data":[`)
//line app/vmselect/prometheus/label_values_response.qtpl:13
for i, labelValue := range labelValues {
//line app/vmselect/prometheus/label_values_response.qtpl:14
qw422016.N().Q(labelValue)
//line app/vmselect/prometheus/label_values_response.qtpl:10
//line app/vmselect/prometheus/label_values_response.qtpl:15
if i+1 < len(labelValues) {
//line app/vmselect/prometheus/label_values_response.qtpl:10
//line app/vmselect/prometheus/label_values_response.qtpl:15
qw422016.N().S(`,`)
//line app/vmselect/prometheus/label_values_response.qtpl:10
//line app/vmselect/prometheus/label_values_response.qtpl:15
}
//line app/vmselect/prometheus/label_values_response.qtpl:11
//line app/vmselect/prometheus/label_values_response.qtpl:16
}
//line app/vmselect/prometheus/label_values_response.qtpl:11
qw422016.N().S(`]}`)
//line app/vmselect/prometheus/label_values_response.qtpl:14
//line app/vmselect/prometheus/label_values_response.qtpl:16
qw422016.N().S(`]`)
//line app/vmselect/prometheus/label_values_response.qtpl:19
qt.Printf("generate response for %d label values", len(labelValues))
qtDone()

//line app/vmselect/prometheus/label_values_response.qtpl:22
streamdumpQueryTrace(qw422016, qt)
//line app/vmselect/prometheus/label_values_response.qtpl:22
qw422016.N().S(`}`)
//line app/vmselect/prometheus/label_values_response.qtpl:24
}

//line app/vmselect/prometheus/label_values_response.qtpl:14
func WriteLabelValuesResponse(qq422016 qtio422016.Writer, labelValues []string) {
//line app/vmselect/prometheus/label_values_response.qtpl:14
//line app/vmselect/prometheus/label_values_response.qtpl:24
func WriteLabelValuesResponse(qq422016 qtio422016.Writer, labelValues []string, qt *querytracer.Tracer, qtDone func()) {
//line app/vmselect/prometheus/label_values_response.qtpl:24
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/label_values_response.qtpl:14
StreamLabelValuesResponse(qw422016, labelValues)
//line app/vmselect/prometheus/label_values_response.qtpl:14
//line app/vmselect/prometheus/label_values_response.qtpl:24
StreamLabelValuesResponse(qw422016, labelValues, qt, qtDone)
//line app/vmselect/prometheus/label_values_response.qtpl:24
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/label_values_response.qtpl:14
//line app/vmselect/prometheus/label_values_response.qtpl:24
}

//line app/vmselect/prometheus/label_values_response.qtpl:14
func LabelValuesResponse(labelValues []string) string {
//line app/vmselect/prometheus/label_values_response.qtpl:14
//line app/vmselect/prometheus/label_values_response.qtpl:24
func LabelValuesResponse(labelValues []string, qt *querytracer.Tracer, qtDone func()) string {
//line app/vmselect/prometheus/label_values_response.qtpl:24
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/label_values_response.qtpl:14
WriteLabelValuesResponse(qb422016, labelValues)
//line app/vmselect/prometheus/label_values_response.qtpl:14
//line app/vmselect/prometheus/label_values_response.qtpl:24
WriteLabelValuesResponse(qb422016, labelValues, qt, qtDone)
//line app/vmselect/prometheus/label_values_response.qtpl:24
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/label_values_response.qtpl:14
//line app/vmselect/prometheus/label_values_response.qtpl:24
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/label_values_response.qtpl:14
//line app/vmselect/prometheus/label_values_response.qtpl:24
return qs422016
//line app/vmselect/prometheus/label_values_response.qtpl:14
//line app/vmselect/prometheus/label_values_response.qtpl:24
}
@@ -1,7 +1,12 @@
{% stripspace %}

{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
) %}

LabelsResponse generates response for /api/v1/labels .
See https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names
{% func LabelsResponse(labels []string) %}
{% func LabelsResponse(labels []string, qt *querytracer.Tracer, qtDone func()) %}
{
"status":"success",
"data":[

@@ -10,6 +15,11 @@ See https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-nam
{% if i+1 < len(labels) %},{% endif %}
{% endfor %}
]
{% code
qt.Printf("generate response for %d labels", len(labels))
qtDone()
%}
{%= dumpQueryTrace(qt) %}
}
{% endfunc %}
{% endstripspace %}
app/vmselect/prometheus/labels_response.qtpl.go

@@ -1,67 +1,80 @@
 // Code generated by qtc from "labels_response.qtpl". DO NOT EDIT.
 // See https://github.com/valyala/quicktemplate for details.
 
-// LabelsResponse generates response for /api/v1/labels .See https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names
-
-//line app/vmselect/prometheus/labels_response.qtpl:4
+//line app/vmselect/prometheus/labels_response.qtpl:3
 package prometheus
 
-//line app/vmselect/prometheus/labels_response.qtpl:4
+//line app/vmselect/prometheus/labels_response.qtpl:3
 import (
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
+)
+
+// LabelsResponse generates response for /api/v1/labels .See https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names
+
+//line app/vmselect/prometheus/labels_response.qtpl:9
+import (
 	qtio422016 "io"
 
 	qt422016 "github.com/valyala/quicktemplate"
 )
 
-//line app/vmselect/prometheus/labels_response.qtpl:4
+//line app/vmselect/prometheus/labels_response.qtpl:9
 var (
 	_ = qtio422016.Copy
 	_ = qt422016.AcquireByteBuffer
 )
 
-//line app/vmselect/prometheus/labels_response.qtpl:4
-func StreamLabelsResponse(qw422016 *qt422016.Writer, labels []string) {
-//line app/vmselect/prometheus/labels_response.qtpl:4
-	qw422016.N().S(`{"status":"success","data":[`)
-//line app/vmselect/prometheus/labels_response.qtpl:8
-	for i, label := range labels {
-//line app/vmselect/prometheus/labels_response.qtpl:9
+//line app/vmselect/prometheus/labels_response.qtpl:9
+func StreamLabelsResponse(qw422016 *qt422016.Writer, labels []string, qt *querytracer.Tracer, qtDone func()) {
+//line app/vmselect/prometheus/labels_response.qtpl:9
+	qw422016.N().S(`{"status":"success","data":[`)
+//line app/vmselect/prometheus/labels_response.qtpl:13
+	for i, label := range labels {
+//line app/vmselect/prometheus/labels_response.qtpl:14
 		qw422016.N().Q(label)
-//line app/vmselect/prometheus/labels_response.qtpl:10
+//line app/vmselect/prometheus/labels_response.qtpl:15
 		if i+1 < len(labels) {
-//line app/vmselect/prometheus/labels_response.qtpl:10
+//line app/vmselect/prometheus/labels_response.qtpl:15
 			qw422016.N().S(`,`)
-//line app/vmselect/prometheus/labels_response.qtpl:10
+//line app/vmselect/prometheus/labels_response.qtpl:15
 		}
-//line app/vmselect/prometheus/labels_response.qtpl:11
+//line app/vmselect/prometheus/labels_response.qtpl:16
 	}
-//line app/vmselect/prometheus/labels_response.qtpl:11
-	qw422016.N().S(`]}`)
-//line app/vmselect/prometheus/labels_response.qtpl:14
+//line app/vmselect/prometheus/labels_response.qtpl:16
+	qw422016.N().S(`]`)
+//line app/vmselect/prometheus/labels_response.qtpl:19
+	qt.Printf("generate response for %d labels", len(labels))
+	qtDone()
+
+//line app/vmselect/prometheus/labels_response.qtpl:22
+	streamdumpQueryTrace(qw422016, qt)
+//line app/vmselect/prometheus/labels_response.qtpl:22
+	qw422016.N().S(`}`)
+//line app/vmselect/prometheus/labels_response.qtpl:24
 }
 
-//line app/vmselect/prometheus/labels_response.qtpl:14
-func WriteLabelsResponse(qq422016 qtio422016.Writer, labels []string) {
-//line app/vmselect/prometheus/labels_response.qtpl:14
+//line app/vmselect/prometheus/labels_response.qtpl:24
+func WriteLabelsResponse(qq422016 qtio422016.Writer, labels []string, qt *querytracer.Tracer, qtDone func()) {
+//line app/vmselect/prometheus/labels_response.qtpl:24
 	qw422016 := qt422016.AcquireWriter(qq422016)
-//line app/vmselect/prometheus/labels_response.qtpl:14
-	StreamLabelsResponse(qw422016, labels)
-//line app/vmselect/prometheus/labels_response.qtpl:14
+//line app/vmselect/prometheus/labels_response.qtpl:24
+	StreamLabelsResponse(qw422016, labels, qt, qtDone)
+//line app/vmselect/prometheus/labels_response.qtpl:24
 	qt422016.ReleaseWriter(qw422016)
-//line app/vmselect/prometheus/labels_response.qtpl:14
+//line app/vmselect/prometheus/labels_response.qtpl:24
 }
 
-//line app/vmselect/prometheus/labels_response.qtpl:14
-func LabelsResponse(labels []string) string {
-//line app/vmselect/prometheus/labels_response.qtpl:14
+//line app/vmselect/prometheus/labels_response.qtpl:24
+func LabelsResponse(labels []string, qt *querytracer.Tracer, qtDone func()) string {
+//line app/vmselect/prometheus/labels_response.qtpl:24
 	qb422016 := qt422016.AcquireByteBuffer()
-//line app/vmselect/prometheus/labels_response.qtpl:14
-	WriteLabelsResponse(qb422016, labels)
-//line app/vmselect/prometheus/labels_response.qtpl:14
+//line app/vmselect/prometheus/labels_response.qtpl:24
+	WriteLabelsResponse(qb422016, labels, qt, qtDone)
+//line app/vmselect/prometheus/labels_response.qtpl:24
 	qs422016 := string(qb422016.B)
-//line app/vmselect/prometheus/labels_response.qtpl:14
+//line app/vmselect/prometheus/labels_response.qtpl:24
 	qt422016.ReleaseByteBuffer(qb422016)
-//line app/vmselect/prometheus/labels_response.qtpl:14
+//line app/vmselect/prometheus/labels_response.qtpl:24
 	return qs422016
-//line app/vmselect/prometheus/labels_response.qtpl:14
+//line app/vmselect/prometheus/labels_response.qtpl:24
 }
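A minimal usage sketch of the regenerated writer, assuming the package is importable from a module that depends on VictoriaMetrics and that a nil tracer disables tracing (which is how the untraced call sites below use it):

```go
package main

import (
	"os"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/prometheus"
)

func main() {
	labels := []string{"__name__", "instance", "job"}
	// nil tracer: tracing disabled; qtDone can then be a no-op.
	prometheus.WriteLabelsResponse(os.Stdout, labels, nil, func() {})
}
```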
app/vmselect/prometheus/prometheus.go

@@ -23,6 +23,7 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
 	"github.com/VictoriaMetrics/metrics"
 	"github.com/valyala/fastjson/fastfloat"
@@ -85,7 +86,7 @@ func FederateHandler(startTime time.Time, w http.ResponseWriter, r *http.Request
 		return err
 	}
 	sq := storage.NewSearchQuery(start, end, tagFilterss, *maxFederateSeries)
-	rss, err := netstorage.ProcessSearchQuery(sq, true, deadline)
+	rss, err := netstorage.ProcessSearchQuery(nil, sq, true, deadline)
 	if err != nil {
 		return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
 	}
@@ -93,7 +94,7 @@ func FederateHandler(startTime time.Time, w http.ResponseWriter, r *http.Request
 	w.Header().Set("Content-Type", "text/plain; charset=utf-8")
 	bw := bufferedwriter.Get(w)
 	defer bufferedwriter.Put(bw)
-	err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
+	err = rss.RunParallel(nil, func(rs *netstorage.Result, workerID uint) error {
 		if err := bw.Error(); err != nil {
 			return err
 		}
@@ -148,12 +149,12 @@ func ExportCSVHandler(startTime time.Time, w http.ResponseWriter, r *http.Reques
 	}
 	doneCh := make(chan error, 1)
 	if !reduceMemUsage {
-		rss, err := netstorage.ProcessSearchQuery(sq, true, ep.deadline)
+		rss, err := netstorage.ProcessSearchQuery(nil, sq, true, ep.deadline)
 		if err != nil {
 			return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
 		}
 		go func() {
-			err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
+			err := rss.RunParallel(nil, func(rs *netstorage.Result, workerID uint) error {
 				if err := bw.Error(); err != nil {
 					return err
 				}
@@ -171,7 +172,7 @@ func ExportCSVHandler(startTime time.Time, w http.ResponseWriter, r *http.Reques
 		}()
 	} else {
 		go func() {
-			err := netstorage.ExportBlocks(sq, ep.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
+			err := netstorage.ExportBlocks(nil, sq, ep.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
 				if err := bw.Error(); err != nil {
 					return err
 				}
@@ -232,7 +233,7 @@ func ExportNativeHandler(startTime time.Time, w http.ResponseWriter, r *http.Req
 	_, _ = bw.Write(trBuf)
 
 	// Marshal native blocks.
-	err = netstorage.ExportBlocks(sq, ep.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
+	err = netstorage.ExportBlocks(nil, sq, ep.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
 		if err := bw.Error(); err != nil {
 			return err
 		}
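Handlers that are not yet wired for tracing simply pass nil, which works because the tracer's methods are nil-safe: an untraced code path costs one pointer check. A toy stand-in illustrating the pattern (Tracer here is illustrative, the real one lives in lib/querytracer):

```go
package main

import "fmt"

// Tracer is a toy stand-in showing why the call sites above can pass nil:
// every method begins with a nil check.
type Tracer struct{ events []string }

func (t *Tracer) Printf(format string, args ...interface{}) {
	if t == nil {
		return // tracing disabled
	}
	t.events = append(t.events, fmt.Sprintf(format, args...))
}

func main() {
	var qt *Tracer                   // nil: tracing disabled
	qt.Printf("fetch %d series", 42) // safe no-op on a nil receiver
	qt = &Tracer{}
	qt.Printf("fetch %d series", 42)
	fmt.Println(qt.events)
}
```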
@@ -287,7 +288,7 @@ func ExportHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
 	format := r.FormValue("format")
 	maxRowsPerLine := int(fastfloat.ParseInt64BestEffort(r.FormValue("max_rows_per_line")))
 	reduceMemUsage := searchutils.GetBool(r, "reduce_mem_usage")
-	if err := exportHandler(w, ep, format, maxRowsPerLine, reduceMemUsage); err != nil {
+	if err := exportHandler(nil, w, ep, format, maxRowsPerLine, reduceMemUsage); err != nil {
 		return fmt.Errorf("error when exporting data on the time range (start=%d, end=%d): %w", ep.start, ep.end, err)
 	}
 	return nil
@@ -295,7 +296,7 @@ func ExportHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
 
 var exportDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/export"}`)
 
-func exportHandler(w http.ResponseWriter, ep *exportParams, format string, maxRowsPerLine int, reduceMemUsage bool) error {
+func exportHandler(qt *querytracer.Tracer, w http.ResponseWriter, ep *exportParams, format string, maxRowsPerLine int, reduceMemUsage bool) error {
 	writeResponseFunc := WriteExportStdResponse
 	writeLineFunc := func(xb *exportBlock, resultsCh chan<- *quicktemplate.ByteBuffer) {
 		bb := quicktemplate.AcquireByteBuffer()
@@ -356,12 +357,13 @@ func exportHandler(w http.ResponseWriter, ep *exportParams, format string, maxRo
 	resultsCh := make(chan *quicktemplate.ByteBuffer, cgroup.AvailableCPUs())
 	doneCh := make(chan error, 1)
 	if !reduceMemUsage {
-		rss, err := netstorage.ProcessSearchQuery(sq, true, ep.deadline)
+		rss, err := netstorage.ProcessSearchQuery(qt, sq, true, ep.deadline)
 		if err != nil {
 			return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
 		}
+		qtChild := qt.NewChild()
 		go func() {
-			err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
+			err := rss.RunParallel(qtChild, func(rs *netstorage.Result, workerID uint) error {
 				if err := bw.Error(); err != nil {
 					return err
 				}
@@ -374,12 +376,14 @@ func exportHandler(w http.ResponseWriter, ep *exportParams, format string, maxRo
 				exportBlockPool.Put(xb)
 				return nil
 			})
+			qtChild.Donef("background export format=%s", format)
 			close(resultsCh)
 			doneCh <- err
 		}()
 	} else {
+		qtChild := qt.NewChild()
 		go func() {
-			err := netstorage.ExportBlocks(sq, ep.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
+			err := netstorage.ExportBlocks(qtChild, sq, ep.deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
 				if err := bw.Error(); err != nil {
 					return err
 				}
@@ -396,13 +400,14 @@ func exportHandler(w http.ResponseWriter, ep *exportParams, format string, maxRo
 				exportBlockPool.Put(xb)
 				return nil
 			})
+			qtChild.Donef("background export format=%s", format)
 			close(resultsCh)
 			doneCh <- err
 		}()
 	}
 
 	// writeResponseFunc must consume all the data from resultsCh.
-	writeResponseFunc(bw, resultsCh)
+	writeResponseFunc(bw, resultsCh, qt)
 	if err := bw.Flush(); err != nil {
 		return err
 	}
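The NewChild/Donef pattern above hands a child tracer to the background goroutine so the export work is recorded as a nested span, finalized independently of the parent. A toy sketch of the shape (types are illustrative, not the lib/querytracer implementation):

```go
package main

import "fmt"

// Tracer is a toy stand-in: NewChild creates a nested span, Donef closes it
// with a formatted message. Both are nil-safe, mirroring the pattern above.
type Tracer struct{ msg string }

func (t *Tracer) NewChild() *Tracer {
	if t == nil {
		return nil
	}
	return &Tracer{}
}

func (t *Tracer) Donef(format string, args ...interface{}) {
	if t == nil {
		return
	}
	t.msg = fmt.Sprintf(format, args...)
}

func main() {
	qt := &Tracer{}
	qtChild := qt.NewChild()
	done := make(chan struct{})
	go func() {
		// ... export blocks in the background ...
		qtChild.Donef("background export format=%s", "csv")
		close(done)
	}()
	<-done
	qt.Donef("/api/v1/export")
	fmt.Println(qtChild.msg, "|", qt.msg)
}
```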
@@ -450,7 +455,7 @@ func DeleteHandler(startTime time.Time, r *http.Request) error {
 	}
 	ct := startTime.UnixNano() / 1e6
 	sq := storage.NewSearchQuery(0, ct, tagFilterss, 0)
-	deletedCount, err := netstorage.DeleteSeries(sq, deadline)
+	deletedCount, err := netstorage.DeleteSeries(nil, sq, deadline)
 	if err != nil {
 		return fmt.Errorf("cannot delete time series: %w", err)
 	}
@@ -465,7 +470,7 @@ var deleteDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/
 // LabelValuesHandler processes /api/v1/label/<labelName>/values request.
 //
 // See https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values
-func LabelValuesHandler(startTime time.Time, labelName string, w http.ResponseWriter, r *http.Request) error {
+func LabelValuesHandler(qt *querytracer.Tracer, startTime time.Time, labelName string, w http.ResponseWriter, r *http.Request) error {
 	defer labelValuesDuration.UpdateDuration(startTime)
 
 	deadline := searchutils.GetDeadlineForQuery(r, startTime)
@@ -481,7 +486,7 @@ func LabelValuesHandler(startTime time.Time, labelName string, w http.ResponseWr
 	if len(matches) == 0 && len(etfs) == 0 {
 		if len(r.Form["start"]) == 0 && len(r.Form["end"]) == 0 {
 			var err error
-			labelValues, err = netstorage.GetLabelValues(labelName, deadline)
+			labelValues, err = netstorage.GetLabelValues(qt, labelName, deadline)
 			if err != nil {
 				return fmt.Errorf(`cannot obtain label values for %q: %w`, labelName, err)
 			}
@@ -499,7 +504,7 @@ func LabelValuesHandler(startTime time.Time, labelName string, w http.ResponseWr
 				MinTimestamp: start,
 				MaxTimestamp: end,
 			}
-			labelValues, err = netstorage.GetLabelValuesOnTimeRange(labelName, tr, deadline)
+			labelValues, err = netstorage.GetLabelValuesOnTimeRange(qt, labelName, tr, deadline)
 			if err != nil {
 				return fmt.Errorf(`cannot obtain label values on time range for %q: %w`, labelName, err)
 			}
@@ -521,7 +526,7 @@ func LabelValuesHandler(startTime time.Time, labelName string, w http.ResponseWr
 		if err != nil {
 			return err
 		}
-		labelValues, err = labelValuesWithMatches(labelName, matches, etfs, start, end, deadline)
+		labelValues, err = labelValuesWithMatches(qt, labelName, matches, etfs, start, end, deadline)
 		if err != nil {
 			return fmt.Errorf("cannot obtain label values for %q, match[]=%q, start=%d, end=%d: %w", labelName, matches, start, end, err)
 		}
@@ -530,14 +535,18 @@ func LabelValuesHandler(startTime time.Time, labelName string, w http.ResponseWr
 	w.Header().Set("Content-Type", "application/json")
 	bw := bufferedwriter.Get(w)
 	defer bufferedwriter.Put(bw)
-	WriteLabelValuesResponse(bw, labelValues)
+	qtDone := func() {
+		qt.Donef("/api/v1/labels")
+	}
+	WriteLabelValuesResponse(bw, labelValues, qt, qtDone)
 	if err := bw.Flush(); err != nil {
 		return fmt.Errorf("canot flush label values to remote client: %w", err)
 	}
 	return nil
 }
 
-func labelValuesWithMatches(labelName string, matches []string, etfs [][]storage.TagFilter, start, end int64, deadline searchutils.Deadline) ([]string, error) {
+func labelValuesWithMatches(qt *querytracer.Tracer, labelName string, matches []string, etfs [][]storage.TagFilter,
+	start, end int64, deadline searchutils.Deadline) ([]string, error) {
 	tagFilterss, err := getTagFilterssFromMatches(matches)
 	if err != nil {
 		return nil, err
@@ -566,7 +575,7 @@ func labelValuesWithMatches(labelName string, matches []string, etfs [][]storage
 	m := make(map[string]struct{})
 	if end-start > 24*3600*1000 {
 		// It is cheaper to call SearchMetricNames on time ranges exceeding a day.
-		mns, err := netstorage.SearchMetricNames(sq, deadline)
+		mns, err := netstorage.SearchMetricNames(qt, sq, deadline)
 		if err != nil {
 			return nil, fmt.Errorf("cannot fetch time series for %q: %w", sq, err)
 		}
@@ -578,12 +587,12 @@ func labelValuesWithMatches(labelName string, matches []string, etfs [][]storage
 			m[string(labelValue)] = struct{}{}
 		}
 	} else {
-		rss, err := netstorage.ProcessSearchQuery(sq, false, deadline)
+		rss, err := netstorage.ProcessSearchQuery(qt, sq, false, deadline)
 		if err != nil {
 			return nil, fmt.Errorf("cannot fetch data for %q: %w", sq, err)
 		}
 		var mLock sync.Mutex
-		err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
+		err = rss.RunParallel(qt, func(rs *netstorage.Result, workerID uint) error {
 			labelValue := rs.MetricName.GetTagValue(labelName)
 			if len(labelValue) == 0 {
 				return nil
@@ -602,6 +611,7 @@ func labelValuesWithMatches(labelName string, matches []string, etfs [][]storage
 		labelValues = append(labelValues, labelValue)
 	}
 	sort.Strings(labelValues)
+	qt.Printf("sort %d label values", len(labelValues))
 	return labelValues, nil
 }
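The aggregation shape above, parallel workers inserting into a shared set under a mutex, then a sorted flatten, is worth seeing in isolation. A condensed, self-contained sketch (runParallel is a toy stand-in for netstorage's RunParallel):

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// runParallel applies f to every item concurrently, like RunParallel does
// for search results.
func runParallel(items []string, f func(item string)) {
	var wg sync.WaitGroup
	for _, it := range items {
		wg.Add(1)
		go func(it string) {
			defer wg.Done()
			f(it)
		}(it)
	}
	wg.Wait()
}

func main() {
	m := make(map[string]struct{})
	var mLock sync.Mutex
	runParallel([]string{"b", "a", "b", "c"}, func(v string) {
		mLock.Lock() // the callback runs on many goroutines, so guard the map
		m[v] = struct{}{}
		mLock.Unlock()
	})
	values := make([]string, 0, len(m))
	for v := range m {
		values = append(values, v)
	}
	sort.Strings(values)
	fmt.Println(values) // [a b c]
}
```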
@@ -612,7 +622,7 @@ func LabelsCountHandler(startTime time.Time, w http.ResponseWriter, r *http.Requ
 	defer labelsCountDuration.UpdateDuration(startTime)
 
 	deadline := searchutils.GetDeadlineForStatusRequest(r, startTime)
-	labelEntries, err := netstorage.GetLabelEntries(deadline)
+	labelEntries, err := netstorage.GetLabelEntries(nil, deadline)
 	if err != nil {
 		return fmt.Errorf(`cannot obtain label entries: %w`, err)
 	}
@@ -674,7 +684,7 @@ func TSDBStatusHandler(startTime time.Time, w http.ResponseWriter, r *http.Reque
 	}
 	var status *storage.TSDBStatus
 	if len(matches) == 0 && len(etfs) == 0 {
-		status, err = netstorage.GetTSDBStatusForDate(deadline, date, topN, *maxTSDBStatusSeries)
+		status, err = netstorage.GetTSDBStatusForDate(nil, deadline, date, topN, *maxTSDBStatusSeries)
 		if err != nil {
 			return fmt.Errorf(`cannot obtain tsdb status for date=%d, topN=%d: %w`, date, topN, err)
 		}
@@ -706,7 +716,7 @@ func tsdbStatusWithMatches(matches []string, etfs [][]storage.TagFilter, date ui
 	start := int64(date*secsPerDay) * 1000
 	end := int64(date*secsPerDay+secsPerDay) * 1000
 	sq := storage.NewSearchQuery(start, end, tagFilterss, maxMetrics)
-	status, err := netstorage.GetTSDBStatusWithFilters(deadline, sq, topN)
+	status, err := netstorage.GetTSDBStatusWithFilters(nil, deadline, sq, topN)
 	if err != nil {
 		return nil, err
 	}
@@ -718,7 +728,7 @@ var tsdbStatusDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/
 // LabelsHandler processes /api/v1/labels request.
 //
 // See https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names
-func LabelsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
+func LabelsHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, r *http.Request) error {
 	defer labelsDuration.UpdateDuration(startTime)
 
 	deadline := searchutils.GetDeadlineForQuery(r, startTime)
@@ -734,7 +744,7 @@ func LabelsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
 	if len(matches) == 0 && len(etfs) == 0 {
 		if len(r.Form["start"]) == 0 && len(r.Form["end"]) == 0 {
 			var err error
-			labels, err = netstorage.GetLabels(deadline)
+			labels, err = netstorage.GetLabels(qt, deadline)
 			if err != nil {
 				return fmt.Errorf("cannot obtain labels: %w", err)
 			}
@@ -752,7 +762,7 @@ func LabelsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
 				MinTimestamp: start,
 				MaxTimestamp: end,
 			}
-			labels, err = netstorage.GetLabelsOnTimeRange(tr, deadline)
+			labels, err = netstorage.GetLabelsOnTimeRange(qt, tr, deadline)
 			if err != nil {
 				return fmt.Errorf("cannot obtain labels on time range: %w", err)
 			}
@@ -772,7 +782,7 @@ func LabelsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
 		if err != nil {
 			return err
 		}
-		labels, err = labelsWithMatches(matches, etfs, start, end, deadline)
+		labels, err = labelsWithMatches(qt, matches, etfs, start, end, deadline)
 		if err != nil {
 			return fmt.Errorf("cannot obtain labels for match[]=%q, start=%d, end=%d: %w", matches, start, end, err)
 		}
@@ -781,14 +791,17 @@ func LabelsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
 	w.Header().Set("Content-Type", "application/json")
 	bw := bufferedwriter.Get(w)
 	defer bufferedwriter.Put(bw)
-	WriteLabelsResponse(bw, labels)
+	qtDone := func() {
+		qt.Donef("/api/v1/labels")
+	}
+	WriteLabelsResponse(bw, labels, qt, qtDone)
 	if err := bw.Flush(); err != nil {
 		return fmt.Errorf("cannot send labels response to remote client: %w", err)
 	}
 	return nil
 }
 
-func labelsWithMatches(matches []string, etfs [][]storage.TagFilter, start, end int64, deadline searchutils.Deadline) ([]string, error) {
+func labelsWithMatches(qt *querytracer.Tracer, matches []string, etfs [][]storage.TagFilter, start, end int64, deadline searchutils.Deadline) ([]string, error) {
 	tagFilterss, err := getTagFilterssFromMatches(matches)
 	if err != nil {
 		return nil, err
@@ -804,7 +817,7 @@ func labelsWithMatches(matches []string, etfs [][]storage.TagFilter, start, end
 	m := make(map[string]struct{})
 	if end-start > 24*3600*1000 {
 		// It is cheaper to call SearchMetricNames on time ranges exceeding a day.
-		mns, err := netstorage.SearchMetricNames(sq, deadline)
+		mns, err := netstorage.SearchMetricNames(qt, sq, deadline)
 		if err != nil {
 			return nil, fmt.Errorf("cannot fetch time series for %q: %w", sq, err)
 		}
@@ -817,12 +830,12 @@ func labelsWithMatches(matches []string, etfs [][]storage.TagFilter, start, end
 			m["__name__"] = struct{}{}
 		}
 	} else {
-		rss, err := netstorage.ProcessSearchQuery(sq, false, deadline)
+		rss, err := netstorage.ProcessSearchQuery(qt, sq, false, deadline)
 		if err != nil {
 			return nil, fmt.Errorf("cannot fetch data for %q: %w", sq, err)
 		}
 		var mLock sync.Mutex
-		err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
+		err = rss.RunParallel(qt, func(rs *netstorage.Result, workerID uint) error {
 			mLock.Lock()
 			for _, tag := range rs.MetricName.Tags {
 				m[string(tag.Key)] = struct{}{}
@@ -840,6 +853,7 @@ func labelsWithMatches(matches []string, etfs [][]storage.TagFilter, start, end
 		labels = append(labels, label)
 	}
 	sort.Strings(labels)
+	qt.Printf("sort %d labels", len(labels))
 	return labels, nil
 }
@@ -850,7 +864,7 @@ func SeriesCountHandler(startTime time.Time, w http.ResponseWriter, r *http.Requ
 	defer seriesCountDuration.UpdateDuration(startTime)
 
 	deadline := searchutils.GetDeadlineForStatusRequest(r, startTime)
-	n, err := netstorage.GetSeriesCount(deadline)
+	n, err := netstorage.GetSeriesCount(nil, deadline)
 	if err != nil {
 		return fmt.Errorf("cannot obtain series count: %w", err)
 	}
@@ -869,7 +883,7 @@ var seriesCountDuration = metrics.NewSummary(`vm_request_duration_seconds{path="
 // SeriesHandler processes /api/v1/series request.
 //
 // See https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers
-func SeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
+func SeriesHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, r *http.Request) error {
 	defer seriesDuration.UpdateDuration(startTime)
 
 	ct := startTime.UnixNano() / 1e6
@@ -899,9 +913,12 @@ func SeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
 		end = start + defaultStep
 	}
 	sq := storage.NewSearchQuery(start, end, tagFilterss, *maxSeriesLimit)
+	qtDone := func() {
+		qt.Donef("/api/v1/series: start=%d, end=%d", start, end)
+	}
 	if end-start > 24*3600*1000 {
 		// It is cheaper to call SearchMetricNames on time ranges exceeding a day.
-		mns, err := netstorage.SearchMetricNames(sq, deadline)
+		mns, err := netstorage.SearchMetricNames(qt, sq, deadline)
 		if err != nil {
 			return fmt.Errorf("cannot fetch time series for %q: %w", sq, err)
 		}
@@ -918,14 +935,14 @@ func SeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
 			close(resultsCh)
 		}()
 		// WriteSeriesResponse must consume all the data from resultsCh.
-		WriteSeriesResponse(bw, resultsCh)
+		WriteSeriesResponse(bw, resultsCh, qt, qtDone)
 		if err := bw.Flush(); err != nil {
 			return err
 		}
 		seriesDuration.UpdateDuration(startTime)
 		return nil
 	}
-	rss, err := netstorage.ProcessSearchQuery(sq, false, deadline)
+	rss, err := netstorage.ProcessSearchQuery(qt, sq, false, deadline)
 	if err != nil {
 		return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
 	}
@@ -936,7 +953,7 @@ func SeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
 	resultsCh := make(chan *quicktemplate.ByteBuffer)
 	doneCh := make(chan error)
 	go func() {
-		err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
+		err := rss.RunParallel(qt, func(rs *netstorage.Result, workerID uint) error {
 			if err := bw.Error(); err != nil {
 				return err
 			}
@@ -949,7 +966,7 @@ func SeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
 		doneCh <- err
 	}()
 	// WriteSeriesResponse must consume all the data from resultsCh.
-	WriteSeriesResponse(bw, resultsCh)
+	WriteSeriesResponse(bw, resultsCh, qt, qtDone)
 	if err := bw.Flush(); err != nil {
 		return fmt.Errorf("cannot flush series response to remote client: %w", err)
 	}
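The repeated comment "WriteSeriesResponse must consume all the data from resultsCh" matters because the producer goroutine blocks on an unbuffered channel; a writer that returned early would leak that goroutine. A toy sketch of the producer/consumer contract:

```go
package main

import "fmt"

func main() {
	resultsCh := make(chan string) // unbuffered, like the series channel above
	doneCh := make(chan error)
	go func() {
		for _, s := range []string{`{"a":1}`, `{"b":2}`} {
			resultsCh <- s // blocks until the writer consumes it
		}
		close(resultsCh)
		doneCh <- nil
	}()
	first := true
	for s := range resultsCh { // drain everything, even after a write error
		if !first {
			fmt.Print(",")
		}
		fmt.Print(s)
		first = false
	}
	fmt.Println()
	if err := <-doneCh; err != nil {
		panic(err)
	}
}
```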
@@ -965,10 +982,11 @@ var seriesDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/
 // QueryHandler processes /api/v1/query request.
 //
 // See https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries
-func QueryHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
+func QueryHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, r *http.Request) error {
 	defer queryDuration.UpdateDuration(startTime)
 
 	ct := startTime.UnixNano() / 1e6
+	mayCache := !searchutils.GetBool(r, "nocache")
 	query := r.FormValue("query")
 	if len(query) == 0 {
 		return fmt.Errorf("missing `query` arg")
@@ -1021,7 +1039,7 @@ func QueryHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) e
 			end: end,
 			filterss: filterss,
 		}
-		if err := exportHandler(w, ep, "promapi", 0, false); err != nil {
+		if err := exportHandler(qt, w, ep, "promapi", 0, false); err != nil {
 			return fmt.Errorf("error when exporting data for query=%q on the time range (start=%d, end=%d): %w", childQuery, start, end, err)
 		}
 		queryDuration.UpdateDuration(startTime)
@@ -1037,7 +1055,7 @@ func QueryHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) e
 		start -= offset
 		end := start
 		start = end - window
-		if err := queryRangeHandler(startTime, w, childQuery, start, end, step, r, ct, etfs); err != nil {
+		if err := queryRangeHandler(qt, startTime, w, childQuery, start, end, step, r, ct, etfs); err != nil {
 			return fmt.Errorf("error when executing query=%q on the time range (start=%d, end=%d, step=%d): %w", childQuery, start, end, step, err)
 		}
 		queryDuration.UpdateDuration(startTime)
@@ -1061,11 +1079,12 @@ func QueryHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) e
 		MaxSeries: *maxUniqueTimeseries,
 		QuotedRemoteAddr: httpserver.GetQuotedRemoteAddr(r),
 		Deadline: deadline,
+		MayCache: mayCache,
 		LookbackDelta: lookbackDelta,
 		RoundDigits: getRoundDigits(r),
 		EnforcedTagFilterss: etfs,
 	}
-	result, err := promql.Exec(&ec, query, true)
+	result, err := promql.Exec(qt, &ec, query, true)
 	if err != nil {
 		return fmt.Errorf("error when executing query=%q for (time=%d, step=%d): %w", query, start, step, err)
 	}
@@ -1081,7 +1100,10 @@ func QueryHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) e
 	w.Header().Set("Content-Type", "application/json")
 	bw := bufferedwriter.Get(w)
 	defer bufferedwriter.Put(bw)
-	WriteQueryResponse(bw, result)
+	qtDone := func() {
+		qt.Donef("/api/v1/query: query=%s, time=%d: series=%d", query, start, len(result))
+	}
+	WriteQueryResponse(bw, result, qt, qtDone)
 	if err := bw.Flush(); err != nil {
 		return fmt.Errorf("cannot flush query response to remote client: %w", err)
 	}
@@ -1093,7 +1115,7 @@ var queryDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v
 // QueryRangeHandler processes /api/v1/query_range request.
 //
 // See https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries
-func QueryRangeHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
+func QueryRangeHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, r *http.Request) error {
 	defer queryRangeDuration.UpdateDuration(startTime)
 
 	ct := startTime.UnixNano() / 1e6
@@ -1117,13 +1139,14 @@ func QueryRangeHandler(startTime time.Time, w http.ResponseWriter, r *http.Reque
 	if err != nil {
 		return err
 	}
-	if err := queryRangeHandler(startTime, w, query, start, end, step, r, ct, etfs); err != nil {
+	if err := queryRangeHandler(qt, startTime, w, query, start, end, step, r, ct, etfs); err != nil {
 		return fmt.Errorf("error when executing query=%q on the time range (start=%d, end=%d, step=%d): %w", query, start, end, step, err)
 	}
 	return nil
 }
 
-func queryRangeHandler(startTime time.Time, w http.ResponseWriter, query string, start, end, step int64, r *http.Request, ct int64, etfs [][]storage.TagFilter) error {
+func queryRangeHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWriter, query string,
+	start, end, step int64, r *http.Request, ct int64, etfs [][]storage.TagFilter) error {
 	deadline := searchutils.GetDeadlineForQuery(r, startTime)
 	mayCache := !searchutils.GetBool(r, "nocache")
 	lookbackDelta, err := getMaxLookback(r)
@@ -1157,7 +1180,7 @@ func queryRangeHandler(startTime time.Time, w http.ResponseWriter, query string,
 		RoundDigits: getRoundDigits(r),
 		EnforcedTagFilterss: etfs,
 	}
-	result, err := promql.Exec(&ec, query, false)
+	result, err := promql.Exec(qt, &ec, query, false)
 	if err != nil {
 		return fmt.Errorf("cannot execute query: %w", err)
 	}
@@ -1175,7 +1198,10 @@ func queryRangeHandler(startTime time.Time, w http.ResponseWriter, query string,
 	w.Header().Set("Content-Type", "application/json")
 	bw := bufferedwriter.Get(w)
 	defer bufferedwriter.Put(bw)
-	WriteQueryRangeResponse(bw, result)
+	qtDone := func() {
+		qt.Donef("/api/v1/query_range: start=%d, end=%d, step=%d, query=%q: series=%d", start, end, step, query, len(result))
+	}
+	WriteQueryRangeResponse(bw, result, qt, qtDone)
 	if err := bw.Flush(); err != nil {
 		return fmt.Errorf("cannot send query range response to remote client: %w", err)
 	}
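The handlers build qtDone as a closure so the final trace message can embed values that are only known after query execution but before the response is flushed, such as the series count. A self-contained sketch of that shape (all values illustrative):

```go
package main

import "fmt"

func main() {
	query, start, end, step := "up", int64(0), int64(3600_000), int64(15_000)
	result := []string{"series1", "series2"} // stand-in for query results
	// The closure captures the final values; the template invokes it after
	// the body has been rendered.
	qtDone := func() {
		fmt.Printf("/api/v1/query_range: start=%d, end=%d, step=%d, query=%q: series=%d\n",
			start, end, step, query, len(result))
	}
	// ... WriteQueryRangeResponse(bw, result, qt, qtDone) would call:
	qtDone()
}
```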
app/vmselect/prometheus/query_range_response.qtpl

@@ -1,25 +1,37 @@
 {% import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
 ) %}
 
 {% stripspace %}
 QueryRangeResponse generates response for /api/v1/query_range.
 See https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries
-{% func QueryRangeResponse(rs []netstorage.Result) %}
+{% func QueryRangeResponse(rs []netstorage.Result, qt *querytracer.Tracer, qtDone func()) %}
 {
+	{% code
+	seriesCount := len(rs)
+	pointsCount := 0
+	%}
 	"status":"success",
 	"data":{
 		"resultType":"matrix",
 		"result":[
 			{% if len(rs) > 0 %}
 				{%= queryRangeLine(&rs[0]) %}
+				{% code pointsCount += len(rs[0].Values) %}
 				{% code rs = rs[1:] %}
 				{% for i := range rs %}
 					,{%= queryRangeLine(&rs[i]) %}
+					{% code pointsCount += len(rs[i].Values) %}
 				{% endfor %}
 			{% endif %}
 		]
 	}
+	{% code
+	qt.Printf("generate /api/v1/query_range response for series=%d, points=%d", seriesCount, pointsCount)
+	qtDone()
+	%}
+	{%= dumpQueryTrace(qt) %}
 }
 {% endfunc %}
 {% endstripspace %}
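The template captures seriesCount before it starts streaming because rs is re-sliced during rendering, and it accumulates pointsCount on both the first-element path and the loop path. The same accounting in plain Go, as a self-contained sketch:

```go
package main

import "fmt"

type result struct{ Values []float64 }

func main() {
	rs := []result{{Values: []float64{1, 2}}, {Values: []float64{3}}}
	seriesCount := len(rs) // must be saved before rs is re-sliced below
	pointsCount := 0
	if len(rs) > 0 {
		pointsCount += len(rs[0].Values) // first series, no leading comma
		rs = rs[1:]
		for i := range rs { // remaining series, comma-prefixed in the template
			pointsCount += len(rs[i].Values)
		}
	}
	fmt.Printf("generate /api/v1/query_range response for series=%d, points=%d\n",
		seriesCount, pointsCount)
}
```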
app/vmselect/prometheus/query_range_response.qtpl.go

@@ -7,112 +7,133 @@ package prometheus
 //line app/vmselect/prometheus/query_range_response.qtpl:1
 import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
 )
 
 // QueryRangeResponse generates response for /api/v1/query_range.See https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries
 
-//line app/vmselect/prometheus/query_range_response.qtpl:8
+//line app/vmselect/prometheus/query_range_response.qtpl:9
 import (
 	qtio422016 "io"
 
 	qt422016 "github.com/valyala/quicktemplate"
 )
 
-//line app/vmselect/prometheus/query_range_response.qtpl:8
+//line app/vmselect/prometheus/query_range_response.qtpl:9
 var (
 	_ = qtio422016.Copy
 	_ = qt422016.AcquireByteBuffer
 )
 
-//line app/vmselect/prometheus/query_range_response.qtpl:8
-func StreamQueryRangeResponse(qw422016 *qt422016.Writer, rs []netstorage.Result) {
-//line app/vmselect/prometheus/query_range_response.qtpl:8
-	qw422016.N().S(`{"status":"success","data":{"resultType":"matrix","result":[`)
+//line app/vmselect/prometheus/query_range_response.qtpl:9
+func StreamQueryRangeResponse(qw422016 *qt422016.Writer, rs []netstorage.Result, qt *querytracer.Tracer, qtDone func()) {
+//line app/vmselect/prometheus/query_range_response.qtpl:9
+	qw422016.N().S(`{`)
+//line app/vmselect/prometheus/query_range_response.qtpl:12
+	seriesCount := len(rs)
+	pointsCount := 0
+
+//line app/vmselect/prometheus/query_range_response.qtpl:14
+	qw422016.N().S(`"status":"success","data":{"resultType":"matrix","result":[`)
+//line app/vmselect/prometheus/query_range_response.qtpl:19
 	if len(rs) > 0 {
-//line app/vmselect/prometheus/query_range_response.qtpl:15
+//line app/vmselect/prometheus/query_range_response.qtpl:20
 		streamqueryRangeLine(qw422016, &rs[0])
-//line app/vmselect/prometheus/query_range_response.qtpl:16
+//line app/vmselect/prometheus/query_range_response.qtpl:21
+		pointsCount += len(rs[0].Values)
+
+//line app/vmselect/prometheus/query_range_response.qtpl:22
 		rs = rs[1:]
 
-//line app/vmselect/prometheus/query_range_response.qtpl:17
+//line app/vmselect/prometheus/query_range_response.qtpl:23
 		for i := range rs {
-//line app/vmselect/prometheus/query_range_response.qtpl:17
+//line app/vmselect/prometheus/query_range_response.qtpl:23
 			qw422016.N().S(`,`)
-//line app/vmselect/prometheus/query_range_response.qtpl:18
+//line app/vmselect/prometheus/query_range_response.qtpl:24
 			streamqueryRangeLine(qw422016, &rs[i])
-//line app/vmselect/prometheus/query_range_response.qtpl:19
+//line app/vmselect/prometheus/query_range_response.qtpl:25
+			pointsCount += len(rs[i].Values)
+
+//line app/vmselect/prometheus/query_range_response.qtpl:26
 		}
-//line app/vmselect/prometheus/query_range_response.qtpl:20
+//line app/vmselect/prometheus/query_range_response.qtpl:27
 	}
-//line app/vmselect/prometheus/query_range_response.qtpl:20
-	qw422016.N().S(`]}}`)
-//line app/vmselect/prometheus/query_range_response.qtpl:24
-}
+//line app/vmselect/prometheus/query_range_response.qtpl:27
+	qw422016.N().S(`]}`)
+//line app/vmselect/prometheus/query_range_response.qtpl:31
+	qt.Printf("generate /api/v1/query_range response for series=%d, points=%d", seriesCount, pointsCount)
+	qtDone()
+
+//line app/vmselect/prometheus/query_range_response.qtpl:34
+	streamdumpQueryTrace(qw422016, qt)
+//line app/vmselect/prometheus/query_range_response.qtpl:34
+	qw422016.N().S(`}`)
+//line app/vmselect/prometheus/query_range_response.qtpl:36
+}
 
-//line app/vmselect/prometheus/query_range_response.qtpl:24
-func WriteQueryRangeResponse(qq422016 qtio422016.Writer, rs []netstorage.Result) {
-//line app/vmselect/prometheus/query_range_response.qtpl:24
+//line app/vmselect/prometheus/query_range_response.qtpl:36
+func WriteQueryRangeResponse(qq422016 qtio422016.Writer, rs []netstorage.Result, qt *querytracer.Tracer, qtDone func()) {
+//line app/vmselect/prometheus/query_range_response.qtpl:36
 	qw422016 := qt422016.AcquireWriter(qq422016)
-//line app/vmselect/prometheus/query_range_response.qtpl:24
-	StreamQueryRangeResponse(qw422016, rs)
-//line app/vmselect/prometheus/query_range_response.qtpl:24
+//line app/vmselect/prometheus/query_range_response.qtpl:36
+	StreamQueryRangeResponse(qw422016, rs, qt, qtDone)
+//line app/vmselect/prometheus/query_range_response.qtpl:36
 	qt422016.ReleaseWriter(qw422016)
-//line app/vmselect/prometheus/query_range_response.qtpl:24
+//line app/vmselect/prometheus/query_range_response.qtpl:36
 }
 
-//line app/vmselect/prometheus/query_range_response.qtpl:24
-func QueryRangeResponse(rs []netstorage.Result) string {
-//line app/vmselect/prometheus/query_range_response.qtpl:24
+//line app/vmselect/prometheus/query_range_response.qtpl:36
+func QueryRangeResponse(rs []netstorage.Result, qt *querytracer.Tracer, qtDone func()) string {
+//line app/vmselect/prometheus/query_range_response.qtpl:36
 	qb422016 := qt422016.AcquireByteBuffer()
-//line app/vmselect/prometheus/query_range_response.qtpl:24
-	WriteQueryRangeResponse(qb422016, rs)
-//line app/vmselect/prometheus/query_range_response.qtpl:24
+//line app/vmselect/prometheus/query_range_response.qtpl:36
+	WriteQueryRangeResponse(qb422016, rs, qt, qtDone)
+//line app/vmselect/prometheus/query_range_response.qtpl:36
 	qs422016 := string(qb422016.B)
-//line app/vmselect/prometheus/query_range_response.qtpl:24
+//line app/vmselect/prometheus/query_range_response.qtpl:36
 	qt422016.ReleaseByteBuffer(qb422016)
-//line app/vmselect/prometheus/query_range_response.qtpl:24
+//line app/vmselect/prometheus/query_range_response.qtpl:36
 	return qs422016
-//line app/vmselect/prometheus/query_range_response.qtpl:24
+//line app/vmselect/prometheus/query_range_response.qtpl:36
 }
 
-//line app/vmselect/prometheus/query_range_response.qtpl:26
+//line app/vmselect/prometheus/query_range_response.qtpl:38
 func streamqueryRangeLine(qw422016 *qt422016.Writer, r *netstorage.Result) {
-//line app/vmselect/prometheus/query_range_response.qtpl:26
+//line app/vmselect/prometheus/query_range_response.qtpl:38
 	qw422016.N().S(`{"metric":`)
-//line app/vmselect/prometheus/query_range_response.qtpl:28
+//line app/vmselect/prometheus/query_range_response.qtpl:40
 	streammetricNameObject(qw422016, &r.MetricName)
-//line app/vmselect/prometheus/query_range_response.qtpl:28
+//line app/vmselect/prometheus/query_range_response.qtpl:40
 	qw422016.N().S(`,"values":`)
-//line app/vmselect/prometheus/query_range_response.qtpl:29
+//line app/vmselect/prometheus/query_range_response.qtpl:41
 	streamvaluesWithTimestamps(qw422016, r.Values, r.Timestamps)
-//line app/vmselect/prometheus/query_range_response.qtpl:29
+//line app/vmselect/prometheus/query_range_response.qtpl:41
 	qw422016.N().S(`}`)
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 }
 
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 func writequeryRangeLine(qq422016 qtio422016.Writer, r *netstorage.Result) {
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 	qw422016 := qt422016.AcquireWriter(qq422016)
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 	streamqueryRangeLine(qw422016, r)
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 	qt422016.ReleaseWriter(qw422016)
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 }
 
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 func queryRangeLine(r *netstorage.Result) string {
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 	qb422016 := qt422016.AcquireByteBuffer()
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 	writequeryRangeLine(qb422016, r)
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 	qs422016 := string(qb422016.B)
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 	qt422016.ReleaseByteBuffer(qb422016)
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 	return qs422016
-//line app/vmselect/prometheus/query_range_response.qtpl:31
+//line app/vmselect/prometheus/query_range_response.qtpl:43
 }
app/vmselect/prometheus/query_response.qtpl

@@ -1,12 +1,14 @@
 {% import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
 ) %}
 
 {% stripspace %}
 QueryResponse generates response for /api/v1/query.
 See https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries
-{% func QueryResponse(rs []netstorage.Result) %}
+{% func QueryResponse(rs []netstorage.Result, qt *querytracer.Tracer, qtDone func()) %}
 {
+	{% code seriesCount := len(rs) %}
 	"status":"success",
 	"data":{
 		"resultType":"vector",
@@ -27,6 +29,11 @@ See https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries
 	{% endif %}
 	]
 	}
+	{% code
+	qt.Printf("generate /api/v1/query response for series=%d", seriesCount)
+	qtDone()
+	%}
+	{%= dumpQueryTrace(qt) %}
 }
 {% endfunc %}
 {% endstripspace %}
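Like the range template, this one renders the first element outside the loop and prefixes every later element with a comma, which produces a valid JSON array with no trailing-comma bookkeeping. The same shape in plain Go:

```go
package main

import "fmt"

func main() {
	rs := []string{`{"v":1}`, `{"v":2}`, `{"v":3}`}
	fmt.Print(`[`)
	if len(rs) > 0 {
		fmt.Print(rs[0]) // first element: no comma
		for _, r := range rs[1:] {
			fmt.Print(",", r) // every later element: comma-prefixed
		}
	}
	fmt.Println(`]`)
}
```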
app/vmselect/prometheus/query_response.qtpl.go

@@ -7,88 +7,102 @@ package prometheus
 //line app/vmselect/prometheus/query_response.qtpl:1
 import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
 )
 
 // QueryResponse generates response for /api/v1/query.See https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries
 
-//line app/vmselect/prometheus/query_response.qtpl:8
+//line app/vmselect/prometheus/query_response.qtpl:9
 import (
 	qtio422016 "io"
 
 	qt422016 "github.com/valyala/quicktemplate"
 )
 
-//line app/vmselect/prometheus/query_response.qtpl:8
+//line app/vmselect/prometheus/query_response.qtpl:9
 var (
 	_ = qtio422016.Copy
 	_ = qt422016.AcquireByteBuffer
 )
 
-//line app/vmselect/prometheus/query_response.qtpl:8
-func StreamQueryResponse(qw422016 *qt422016.Writer, rs []netstorage.Result) {
-//line app/vmselect/prometheus/query_response.qtpl:8
-	qw422016.N().S(`{"status":"success","data":{"resultType":"vector","result":[`)
-//line app/vmselect/prometheus/query_response.qtpl:14
+//line app/vmselect/prometheus/query_response.qtpl:9
+func StreamQueryResponse(qw422016 *qt422016.Writer, rs []netstorage.Result, qt *querytracer.Tracer, qtDone func()) {
+//line app/vmselect/prometheus/query_response.qtpl:9
+	qw422016.N().S(`{`)
+//line app/vmselect/prometheus/query_response.qtpl:11
+	seriesCount := len(rs)
+
+//line app/vmselect/prometheus/query_response.qtpl:11
+	qw422016.N().S(`"status":"success","data":{"resultType":"vector","result":[`)
+//line app/vmselect/prometheus/query_response.qtpl:16
 	if len(rs) > 0 {
-//line app/vmselect/prometheus/query_response.qtpl:14
+//line app/vmselect/prometheus/query_response.qtpl:16
 		qw422016.N().S(`{"metric":`)
-//line app/vmselect/prometheus/query_response.qtpl:16
+//line app/vmselect/prometheus/query_response.qtpl:18
 		streammetricNameObject(qw422016, &rs[0].MetricName)
-//line app/vmselect/prometheus/query_response.qtpl:16
+//line app/vmselect/prometheus/query_response.qtpl:18
 		qw422016.N().S(`,"value":`)
-//line app/vmselect/prometheus/query_response.qtpl:17
-		streammetricRow(qw422016, rs[0].Timestamps[0], rs[0].Values[0])
-//line app/vmselect/prometheus/query_response.qtpl:17
-		qw422016.N().S(`}`)
+//line app/vmselect/prometheus/query_response.qtpl:19
+		streammetricRow(qw422016, rs[0].Timestamps[0], rs[0].Values[0])
+//line app/vmselect/prometheus/query_response.qtpl:19
+		qw422016.N().S(`}`)
+//line app/vmselect/prometheus/query_response.qtpl:21
 		rs = rs[1:]
 
-//line app/vmselect/prometheus/query_response.qtpl:20
+//line app/vmselect/prometheus/query_response.qtpl:22
 		for i := range rs {
-//line app/vmselect/prometheus/query_response.qtpl:21
+//line app/vmselect/prometheus/query_response.qtpl:23
 			r := &rs[i]
 
-//line app/vmselect/prometheus/query_response.qtpl:21
+//line app/vmselect/prometheus/query_response.qtpl:23
 			qw422016.N().S(`,{"metric":`)
-//line app/vmselect/prometheus/query_response.qtpl:23
+//line app/vmselect/prometheus/query_response.qtpl:25
 			streammetricNameObject(qw422016, &r.MetricName)
-//line app/vmselect/prometheus/query_response.qtpl:23
+//line app/vmselect/prometheus/query_response.qtpl:25
 			qw422016.N().S(`,"value":`)
-//line app/vmselect/prometheus/query_response.qtpl:24
-			streammetricRow(qw422016, r.Timestamps[0], r.Values[0])
-//line app/vmselect/prometheus/query_response.qtpl:24
-			qw422016.N().S(`}`)
+//line app/vmselect/prometheus/query_response.qtpl:26
+			streammetricRow(qw422016, r.Timestamps[0], r.Values[0])
+//line app/vmselect/prometheus/query_response.qtpl:26
+			qw422016.N().S(`}`)
+//line app/vmselect/prometheus/query_response.qtpl:28
 		}
-//line app/vmselect/prometheus/query_response.qtpl:27
+//line app/vmselect/prometheus/query_response.qtpl:29
 	}
-//line app/vmselect/prometheus/query_response.qtpl:27
-	qw422016.N().S(`]}}`)
-//line app/vmselect/prometheus/query_response.qtpl:31
+//line app/vmselect/prometheus/query_response.qtpl:29
+	qw422016.N().S(`]}`)
+//line app/vmselect/prometheus/query_response.qtpl:33
+	qt.Printf("generate /api/v1/query response for series=%d", seriesCount)
+	qtDone()
+
+//line app/vmselect/prometheus/query_response.qtpl:36
+	streamdumpQueryTrace(qw422016, qt)
+//line app/vmselect/prometheus/query_response.qtpl:36
+	qw422016.N().S(`}`)
+//line app/vmselect/prometheus/query_response.qtpl:38
 }
 
-//line app/vmselect/prometheus/query_response.qtpl:31
-func WriteQueryResponse(qq422016 qtio422016.Writer, rs []netstorage.Result) {
-//line app/vmselect/prometheus/query_response.qtpl:31
+//line app/vmselect/prometheus/query_response.qtpl:38
+func WriteQueryResponse(qq422016 qtio422016.Writer, rs []netstorage.Result, qt *querytracer.Tracer, qtDone func()) {
+//line app/vmselect/prometheus/query_response.qtpl:38
 	qw422016 := qt422016.AcquireWriter(qq422016)
-//line app/vmselect/prometheus/query_response.qtpl:31
-	StreamQueryResponse(qw422016, rs)
-//line app/vmselect/prometheus/query_response.qtpl:31
+//line app/vmselect/prometheus/query_response.qtpl:38
+	StreamQueryResponse(qw422016, rs, qt, qtDone)
+//line app/vmselect/prometheus/query_response.qtpl:38
 	qt422016.ReleaseWriter(qw422016)
-//line app/vmselect/prometheus/query_response.qtpl:31
+//line app/vmselect/prometheus/query_response.qtpl:38
 }
 
-//line app/vmselect/prometheus/query_response.qtpl:31
-func QueryResponse(rs []netstorage.Result) string {
-//line app/vmselect/prometheus/query_response.qtpl:31
+//line app/vmselect/prometheus/query_response.qtpl:38
+func QueryResponse(rs []netstorage.Result, qt *querytracer.Tracer, qtDone func()) string {
+//line app/vmselect/prometheus/query_response.qtpl:38
 	qb422016 := qt422016.AcquireByteBuffer()
-//line app/vmselect/prometheus/query_response.qtpl:31
-	WriteQueryResponse(qb422016, rs)
-//line app/vmselect/prometheus/query_response.qtpl:31
+//line app/vmselect/prometheus/query_response.qtpl:38
+	WriteQueryResponse(qb422016, rs, qt, qtDone)
+//line app/vmselect/prometheus/query_response.qtpl:38
 	qs422016 := string(qb422016.B)
-//line app/vmselect/prometheus/query_response.qtpl:31
+//line app/vmselect/prometheus/query_response.qtpl:38
 	qt422016.ReleaseByteBuffer(qb422016)
-//line app/vmselect/prometheus/query_response.qtpl:31
+//line app/vmselect/prometheus/query_response.qtpl:38
 	return qs422016
-//line app/vmselect/prometheus/query_response.qtpl:31
+//line app/vmselect/prometheus/query_response.qtpl:38
 }
app/vmselect/prometheus/series_response.qtpl

@@ -1,24 +1,37 @@
 {% import (
 	"github.com/valyala/quicktemplate"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
 ) %}
 
 {% stripspace %}
 SeriesResponse generates response for /api/v1/series.
 See https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers
-{% func SeriesResponse(resultsCh <-chan *quicktemplate.ByteBuffer) %}
+{% func SeriesResponse(resultsCh <-chan *quicktemplate.ByteBuffer, qt *querytracer.Tracer, qtDone func()) %}
 {
+	{% code seriesCount := 0 %}
 	"status":"success",
 	"data":[
 		{% code bb, ok := <-resultsCh %}
 		{% if ok %}
 			{%z= bb.B %}
-			{% code quicktemplate.ReleaseByteBuffer(bb) %}
+			{% code
+			quicktemplate.ReleaseByteBuffer(bb)
+			seriesCount++
+			%}
 			{% for bb := range resultsCh %}
 				,{%z= bb.B %}
-				{% code quicktemplate.ReleaseByteBuffer(bb) %}
+				{% code
+				quicktemplate.ReleaseByteBuffer(bb)
+				seriesCount++
+				%}
 			{% endfor %}
 		{% endif %}
 	]
+	{% code
+	qt.Printf("generate response: series=%d", seriesCount)
+	qtDone()
+	%}
+	{%= dumpQueryTrace(qt) %}
 }
 {% endfunc %}
 {% endstripspace %}
@ -6,78 +6,94 @@ package prometheus
|
|||
|
||||
//line app/vmselect/prometheus/series_response.qtpl:1
|
||||
import (
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
|
||||
"github.com/valyala/quicktemplate"
|
||||
)
|
||||
|
||||
// SeriesResponse generates response for /api/v1/series.See https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers
|
||||
|
||||
//line app/vmselect/prometheus/series_response.qtpl:8
|
||||
//line app/vmselect/prometheus/series_response.qtpl:9
|
||||
import (
|
||||
qtio422016 "io"
|
||||
|
||||
qt422016 "github.com/valyala/quicktemplate"
|
||||
)
|
||||
|
||||
//line app/vmselect/prometheus/series_response.qtpl:8
|
||||
//line app/vmselect/prometheus/series_response.qtpl:9
|
||||
var (
|
||||
_ = qtio422016.Copy
|
||||
_ = qt422016.AcquireByteBuffer
|
||||
)
|
||||
|
||||
//line app/vmselect/prometheus/series_response.qtpl:8
|
||||
func StreamSeriesResponse(qw422016 *qt422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
|
||||
//line app/vmselect/prometheus/series_response.qtpl:8
|
||||
qw422016.N().S(`{"status":"success","data":[`)
|
||||
//line app/vmselect/prometheus/series_response.qtpl:12
|
||||
//line app/vmselect/prometheus/series_response.qtpl:9
|
||||
func StreamSeriesResponse(qw422016 *qt422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer, qt *querytracer.Tracer, qtDone func()) {
|
||||
//line app/vmselect/prometheus/series_response.qtpl:9
|
||||
qw422016.N().S(`{`)
|
||||
//line app/vmselect/prometheus/series_response.qtpl:11
seriesCount := 0

//line app/vmselect/prometheus/series_response.qtpl:11
qw422016.N().S(`"status":"success","data":[`)
//line app/vmselect/prometheus/series_response.qtpl:14
bb, ok := <-resultsCh

//line app/vmselect/prometheus/series_response.qtpl:13
if ok {
//line app/vmselect/prometheus/series_response.qtpl:14
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/series_response.qtpl:15
quicktemplate.ReleaseByteBuffer(bb)

if ok {
//line app/vmselect/prometheus/series_response.qtpl:16
for bb := range resultsCh {
//line app/vmselect/prometheus/series_response.qtpl:16
qw422016.N().S(`,`)
//line app/vmselect/prometheus/series_response.qtpl:17
qw422016.N().Z(bb.B)
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/series_response.qtpl:18
quicktemplate.ReleaseByteBuffer(bb)
seriesCount++

//line app/vmselect/prometheus/series_response.qtpl:21
for bb := range resultsCh {
//line app/vmselect/prometheus/series_response.qtpl:21
qw422016.N().S(`,`)
//line app/vmselect/prometheus/series_response.qtpl:22
qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/series_response.qtpl:24
quicktemplate.ReleaseByteBuffer(bb)
seriesCount++

//line app/vmselect/prometheus/series_response.qtpl:19
//line app/vmselect/prometheus/series_response.qtpl:27
}
//line app/vmselect/prometheus/series_response.qtpl:20
//line app/vmselect/prometheus/series_response.qtpl:28
}
//line app/vmselect/prometheus/series_response.qtpl:20
qw422016.N().S(`]}`)
//line app/vmselect/prometheus/series_response.qtpl:23
//line app/vmselect/prometheus/series_response.qtpl:28
qw422016.N().S(`]`)
//line app/vmselect/prometheus/series_response.qtpl:31
qt.Printf("generate response: series=%d", seriesCount)
qtDone()

//line app/vmselect/prometheus/series_response.qtpl:34
streamdumpQueryTrace(qw422016, qt)
//line app/vmselect/prometheus/series_response.qtpl:34
qw422016.N().S(`}`)
//line app/vmselect/prometheus/series_response.qtpl:36
}

//line app/vmselect/prometheus/series_response.qtpl:23
func WriteSeriesResponse(qq422016 qtio422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/series_response.qtpl:23
//line app/vmselect/prometheus/series_response.qtpl:36
func WriteSeriesResponse(qq422016 qtio422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer, qt *querytracer.Tracer, qtDone func()) {
//line app/vmselect/prometheus/series_response.qtpl:36
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/series_response.qtpl:23
StreamSeriesResponse(qw422016, resultsCh)
//line app/vmselect/prometheus/series_response.qtpl:23
//line app/vmselect/prometheus/series_response.qtpl:36
StreamSeriesResponse(qw422016, resultsCh, qt, qtDone)
//line app/vmselect/prometheus/series_response.qtpl:36
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/series_response.qtpl:23
//line app/vmselect/prometheus/series_response.qtpl:36
}

//line app/vmselect/prometheus/series_response.qtpl:23
func SeriesResponse(resultsCh <-chan *quicktemplate.ByteBuffer) string {
//line app/vmselect/prometheus/series_response.qtpl:23
//line app/vmselect/prometheus/series_response.qtpl:36
func SeriesResponse(resultsCh <-chan *quicktemplate.ByteBuffer, qt *querytracer.Tracer, qtDone func()) string {
//line app/vmselect/prometheus/series_response.qtpl:36
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/series_response.qtpl:23
WriteSeriesResponse(qb422016, resultsCh)
//line app/vmselect/prometheus/series_response.qtpl:23
//line app/vmselect/prometheus/series_response.qtpl:36
WriteSeriesResponse(qb422016, resultsCh, qt, qtDone)
//line app/vmselect/prometheus/series_response.qtpl:36
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/series_response.qtpl:23
//line app/vmselect/prometheus/series_response.qtpl:36
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/series_response.qtpl:23
//line app/vmselect/prometheus/series_response.qtpl:36
return qs422016
//line app/vmselect/prometheus/series_response.qtpl:23
//line app/vmselect/prometheus/series_response.qtpl:36
}
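The regenerated series_response.qtpl.go above now threads a *querytracer.Tracer plus a qtDone callback through StreamSeriesResponse, WriteSeriesResponse and SeriesResponse, so the series response can embed a query trace and report the streamed series count. A minimal caller sketch under stated assumptions: the wiring below is invented for illustration, and a nil tracer is assumed to keep tracing disabled (the updated tests call the new signatures with nil):

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/prometheus"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
	"github.com/valyala/quicktemplate"
)

func main() {
	// One pre-marshaled series object per buffer; the template releases each buffer.
	resultsCh := make(chan *quicktemplate.ByteBuffer, 1)
	bb := quicktemplate.AcquireByteBuffer()
	bb.B = append(bb.B, `{"__name__":"up","job":"node"}`...)
	resultsCh <- bb
	close(resultsCh)

	var qt *querytracer.Tracer // nil: tracing disabled, so no "trace" field is appended (assumption: nil-tracer methods are no-ops)
	qtDone := func() {
		// Invoked by the template once all series are written, so the trace
		// also covers response generation time.
	}
	var buf bytes.Buffer
	prometheus.WriteSeriesResponse(&buf, resultsCh, qt, qtDone)
	fmt.Println(buf.String())
}
```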
@@ -1,4 +1,5 @@
{% import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
) %}

@@ -45,4 +46,9 @@
]
{% endfunc %}

{% func dumpQueryTrace(qt *querytracer.Tracer) %}
{% code traceJSON := qt.ToJSON() %}
{% if traceJSON != "" %},"trace":{%s= traceJSON %}{% endif %}
{% endfunc %}

{% endstripspace %}
@@ -6,212 +6,255 @@ package prometheus

//line app/vmselect/prometheus/util.qtpl:1
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)

//line app/vmselect/prometheus/util.qtpl:7
//line app/vmselect/prometheus/util.qtpl:8
import (
qtio422016 "io"

qt422016 "github.com/valyala/quicktemplate"
)

//line app/vmselect/prometheus/util.qtpl:7
//line app/vmselect/prometheus/util.qtpl:8
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)

//line app/vmselect/prometheus/util.qtpl:7
//line app/vmselect/prometheus/util.qtpl:8
func streammetricNameObject(qw422016 *qt422016.Writer, mn *storage.MetricName) {
//line app/vmselect/prometheus/util.qtpl:7
//line app/vmselect/prometheus/util.qtpl:8
qw422016.N().S(`{`)
//line app/vmselect/prometheus/util.qtpl:9
//line app/vmselect/prometheus/util.qtpl:10
if len(mn.MetricGroup) > 0 {
//line app/vmselect/prometheus/util.qtpl:9
//line app/vmselect/prometheus/util.qtpl:10
qw422016.N().S(`"__name__":`)
//line app/vmselect/prometheus/util.qtpl:10
qw422016.N().QZ(mn.MetricGroup)
//line app/vmselect/prometheus/util.qtpl:10
if len(mn.Tags) > 0 {
//line app/vmselect/prometheus/util.qtpl:10
qw422016.N().S(`,`)
//line app/vmselect/prometheus/util.qtpl:10
}
//line app/vmselect/prometheus/util.qtpl:11
}
qw422016.N().QZ(mn.MetricGroup)
//line app/vmselect/prometheus/util.qtpl:11
if len(mn.Tags) > 0 {
//line app/vmselect/prometheus/util.qtpl:11
qw422016.N().S(`,`)
//line app/vmselect/prometheus/util.qtpl:11
}
//line app/vmselect/prometheus/util.qtpl:12
for j := range mn.Tags {
}
//line app/vmselect/prometheus/util.qtpl:13
for j := range mn.Tags {
//line app/vmselect/prometheus/util.qtpl:14
tag := &mn.Tags[j]

//line app/vmselect/prometheus/util.qtpl:14
//line app/vmselect/prometheus/util.qtpl:15
qw422016.N().QZ(tag.Key)
//line app/vmselect/prometheus/util.qtpl:14
//line app/vmselect/prometheus/util.qtpl:15
qw422016.N().S(`:`)
//line app/vmselect/prometheus/util.qtpl:14
//line app/vmselect/prometheus/util.qtpl:15
qw422016.N().QZ(tag.Value)
//line app/vmselect/prometheus/util.qtpl:14
//line app/vmselect/prometheus/util.qtpl:15
if j+1 < len(mn.Tags) {
//line app/vmselect/prometheus/util.qtpl:14
//line app/vmselect/prometheus/util.qtpl:15
qw422016.N().S(`,`)
//line app/vmselect/prometheus/util.qtpl:14
//line app/vmselect/prometheus/util.qtpl:15
}
//line app/vmselect/prometheus/util.qtpl:15
//line app/vmselect/prometheus/util.qtpl:16
}
//line app/vmselect/prometheus/util.qtpl:15
//line app/vmselect/prometheus/util.qtpl:16
qw422016.N().S(`}`)
//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
}

//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
func writemetricNameObject(qq422016 qtio422016.Writer, mn *storage.MetricName) {
//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
streammetricNameObject(qw422016, mn)
//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
}

//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
func metricNameObject(mn *storage.MetricName) string {
//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
writemetricNameObject(qb422016, mn)
//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
return qs422016
//line app/vmselect/prometheus/util.qtpl:17
//line app/vmselect/prometheus/util.qtpl:18
}

//line app/vmselect/prometheus/util.qtpl:19
//line app/vmselect/prometheus/util.qtpl:20
func streammetricRow(qw422016 *qt422016.Writer, timestamp int64, value float64) {
//line app/vmselect/prometheus/util.qtpl:19
qw422016.N().S(`[`)
//line app/vmselect/prometheus/util.qtpl:20
qw422016.N().S(`[`)
//line app/vmselect/prometheus/util.qtpl:21
qw422016.N().F(float64(timestamp) / 1e3)
//line app/vmselect/prometheus/util.qtpl:20
//line app/vmselect/prometheus/util.qtpl:21
qw422016.N().S(`,"`)
//line app/vmselect/prometheus/util.qtpl:20
//line app/vmselect/prometheus/util.qtpl:21
qw422016.N().F(value)
//line app/vmselect/prometheus/util.qtpl:20
//line app/vmselect/prometheus/util.qtpl:21
qw422016.N().S(`"]`)
//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
}

//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
func writemetricRow(qq422016 qtio422016.Writer, timestamp int64, value float64) {
//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
streammetricRow(qw422016, timestamp, value)
//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
}

//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
func metricRow(timestamp int64, value float64) string {
//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
writemetricRow(qb422016, timestamp, value)
//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
return qs422016
//line app/vmselect/prometheus/util.qtpl:21
//line app/vmselect/prometheus/util.qtpl:22
}

//line app/vmselect/prometheus/util.qtpl:23
//line app/vmselect/prometheus/util.qtpl:24
func streamvaluesWithTimestamps(qw422016 *qt422016.Writer, values []float64, timestamps []int64) {
//line app/vmselect/prometheus/util.qtpl:24
//line app/vmselect/prometheus/util.qtpl:25
if len(values) == 0 {
//line app/vmselect/prometheus/util.qtpl:24
//line app/vmselect/prometheus/util.qtpl:25
qw422016.N().S(`[]`)
//line app/vmselect/prometheus/util.qtpl:26
//line app/vmselect/prometheus/util.qtpl:27
return
//line app/vmselect/prometheus/util.qtpl:27
//line app/vmselect/prometheus/util.qtpl:28
}
//line app/vmselect/prometheus/util.qtpl:27
//line app/vmselect/prometheus/util.qtpl:28
qw422016.N().S(`[`)
//line app/vmselect/prometheus/util.qtpl:29
//line app/vmselect/prometheus/util.qtpl:30
/* inline metricRow call here for the sake of performance optimization */

//line app/vmselect/prometheus/util.qtpl:29
//line app/vmselect/prometheus/util.qtpl:30
qw422016.N().S(`[`)
//line app/vmselect/prometheus/util.qtpl:30
//line app/vmselect/prometheus/util.qtpl:31
qw422016.N().F(float64(timestamps[0]) / 1e3)
//line app/vmselect/prometheus/util.qtpl:30
//line app/vmselect/prometheus/util.qtpl:31
qw422016.N().S(`,"`)
//line app/vmselect/prometheus/util.qtpl:30
//line app/vmselect/prometheus/util.qtpl:31
qw422016.N().F(values[0])
//line app/vmselect/prometheus/util.qtpl:30
//line app/vmselect/prometheus/util.qtpl:31
qw422016.N().S(`"]`)
//line app/vmselect/prometheus/util.qtpl:32
//line app/vmselect/prometheus/util.qtpl:33
timestamps = timestamps[1:]
values = values[1:]

//line app/vmselect/prometheus/util.qtpl:35
//line app/vmselect/prometheus/util.qtpl:36
if len(values) > 0 {
//line app/vmselect/prometheus/util.qtpl:37
//line app/vmselect/prometheus/util.qtpl:38
// Remove bounds check inside the loop below
_ = timestamps[len(values)-1]

//line app/vmselect/prometheus/util.qtpl:40
for i, v := range values {
//line app/vmselect/prometheus/util.qtpl:41
for i, v := range values {
//line app/vmselect/prometheus/util.qtpl:42
/* inline metricRow call here for the sake of performance optimization */

//line app/vmselect/prometheus/util.qtpl:41
//line app/vmselect/prometheus/util.qtpl:42
qw422016.N().S(`,[`)
//line app/vmselect/prometheus/util.qtpl:42
qw422016.N().F(float64(timestamps[i]) / 1e3)
//line app/vmselect/prometheus/util.qtpl:42
qw422016.N().S(`,"`)
//line app/vmselect/prometheus/util.qtpl:42
qw422016.N().F(v)
//line app/vmselect/prometheus/util.qtpl:42
qw422016.N().S(`"]`)
//line app/vmselect/prometheus/util.qtpl:43
qw422016.N().F(float64(timestamps[i]) / 1e3)
//line app/vmselect/prometheus/util.qtpl:43
qw422016.N().S(`,"`)
//line app/vmselect/prometheus/util.qtpl:43
qw422016.N().F(v)
//line app/vmselect/prometheus/util.qtpl:43
qw422016.N().S(`"]`)
//line app/vmselect/prometheus/util.qtpl:44
}
//line app/vmselect/prometheus/util.qtpl:44
//line app/vmselect/prometheus/util.qtpl:45
}
//line app/vmselect/prometheus/util.qtpl:44
//line app/vmselect/prometheus/util.qtpl:45
qw422016.N().S(`]`)
//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
}

//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
func writevaluesWithTimestamps(qq422016 qtio422016.Writer, values []float64, timestamps []int64) {
//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
streamvaluesWithTimestamps(qw422016, values, timestamps)
//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
}

//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
func valuesWithTimestamps(values []float64, timestamps []int64) string {
//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
writevaluesWithTimestamps(qb422016, values, timestamps)
//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
return qs422016
//line app/vmselect/prometheus/util.qtpl:46
//line app/vmselect/prometheus/util.qtpl:47
}

//line app/vmselect/prometheus/util.qtpl:49
func streamdumpQueryTrace(qw422016 *qt422016.Writer, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/util.qtpl:50
traceJSON := qt.ToJSON()

//line app/vmselect/prometheus/util.qtpl:51
if traceJSON != "" {
//line app/vmselect/prometheus/util.qtpl:51
qw422016.N().S(`,"trace":`)
//line app/vmselect/prometheus/util.qtpl:51
qw422016.N().S(traceJSON)
//line app/vmselect/prometheus/util.qtpl:51
}
//line app/vmselect/prometheus/util.qtpl:52
}

//line app/vmselect/prometheus/util.qtpl:52
func writedumpQueryTrace(qq422016 qtio422016.Writer, qt *querytracer.Tracer) {
//line app/vmselect/prometheus/util.qtpl:52
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/util.qtpl:52
streamdumpQueryTrace(qw422016, qt)
//line app/vmselect/prometheus/util.qtpl:52
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/util.qtpl:52
}

//line app/vmselect/prometheus/util.qtpl:52
func dumpQueryTrace(qt *querytracer.Tracer) string {
//line app/vmselect/prometheus/util.qtpl:52
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/util.qtpl:52
writedumpQueryTrace(qb422016, qt)
//line app/vmselect/prometheus/util.qtpl:52
qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/util.qtpl:52
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/util.qtpl:52
return qs422016
//line app/vmselect/prometheus/util.qtpl:52
}
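The new dumpQueryTrace helper generated above appends the trace to an almost-finished JSON object only when qt.ToJSON() returns something, i.e. when tracing was actually enabled for the request. A hand-written sketch of the same logic (an illustration, not the generated code; ToJSON is assumed to return "" for a disabled or nil tracer):

```go
package tracesketch

import "github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"

// appendQueryTrace mirrors streamdumpQueryTrace: emit `,"trace":<json>` only
// when the tracer collected data, which keeps the field optional in responses.
func appendQueryTrace(dst []byte, qt *querytracer.Tracer) []byte {
	traceJSON := qt.ToJSON() // assumed "" when tracing is disabled
	if traceJSON == "" {
		return dst
	}
	dst = append(dst, `,"trace":`...)
	return append(dst, traceJSON...)
}
```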
@@ -16,6 +16,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/memory"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/metricsql"
@@ -102,6 +103,7 @@ type EvalConfig struct {

Deadline searchutils.Deadline

// Whether the response can be cached.
MayCache bool

// LookbackDelta is analog to `-query.lookback-delta` from Prometheus.
@@ -190,19 +192,40 @@ func getTimestamps(start, end, step int64) []int64 {
return timestamps
}

func evalExpr(ec *EvalConfig, e metricsql.Expr) ([]*timeseries, error) {
func evalExpr(qt *querytracer.Tracer, ec *EvalConfig, e metricsql.Expr) ([]*timeseries, error) {
qt = qt.NewChild()
rv, err := evalExprInternal(qt, ec, e)
if err != nil {
return nil, err
}
if qt.Enabled() {
query := e.AppendString(nil)
seriesCount := len(rv)
pointsPerSeries := 0
if len(rv) > 0 {
pointsPerSeries = len(rv[0].Timestamps)
}
pointsCount := seriesCount * pointsPerSeries
mayCache := ec.mayCache()
qt.Donef("eval: query=%s, timeRange=[%d..%d], step=%d, mayCache=%v: series=%d, points=%d, pointsPerSeries=%d",
query, ec.Start, ec.End, ec.Step, mayCache, seriesCount, pointsCount, pointsPerSeries)
}
return rv, nil
}

func evalExprInternal(qt *querytracer.Tracer, ec *EvalConfig, e metricsql.Expr) ([]*timeseries, error) {
if me, ok := e.(*metricsql.MetricExpr); ok {
re := &metricsql.RollupExpr{
Expr: me,
}
rv, err := evalRollupFunc(ec, "default_rollup", rollupDefault, e, re, nil)
rv, err := evalRollupFunc(qt, ec, "default_rollup", rollupDefault, e, re, nil)
if err != nil {
return nil, fmt.Errorf(`cannot evaluate %q: %w`, me.AppendString(nil), err)
}
return rv, nil
}
if re, ok := e.(*metricsql.RollupExpr); ok {
rv, err := evalRollupFunc(ec, "default_rollup", rollupDefault, e, re, nil)
rv, err := evalRollupFunc(qt, ec, "default_rollup", rollupDefault, e, re, nil)
if err != nil {
return nil, fmt.Errorf(`cannot evaluate %q: %w`, re.AppendString(nil), err)
}
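The evalExpr split above is the core tracing pattern of this commit: create a child tracer, delegate to an internal function, then close the child with a Donef summary, keeping the relatively expensive AppendString/formatting work behind qt.Enabled(). The same pattern, sketched generically (the work callback is a placeholder; nil and disabled tracers are assumed to make NewChild/Donef no-ops):

```go
package tracesketch

import "github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"

// withTrace wraps a unit of query work in a child tracer, the same way
// evalExpr wraps evalExprInternal.
func withTrace(qt *querytracer.Tracer, label string, work func() (seriesCount int, err error)) (int, error) {
	qt = qt.NewChild()
	n, err := work()
	if err != nil {
		return 0, err
	}
	if qt.Enabled() {
		// Only pay for message formatting when tracing is on.
		qt.Donef("%s: series=%d", label, n)
	}
	return n, nil
}
```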
@@ -211,26 +234,12 @@ func evalExpr(ec *EvalConfig, e metricsql.Expr) ([]*timeseries, error) {
if fe, ok := e.(*metricsql.FuncExpr); ok {
nrf := getRollupFunc(fe.Name)
if nrf == nil {
args, err := evalExprs(ec, fe.Args)
if err != nil {
return nil, err
}
tf := getTransformFunc(fe.Name)
if tf == nil {
return nil, fmt.Errorf(`unknown func %q`, fe.Name)
}
tfa := &transformFuncArg{
ec: ec,
fe: fe,
args: args,
}
rv, err := tf(tfa)
if err != nil {
return nil, fmt.Errorf(`cannot evaluate %q: %w`, fe.AppendString(nil), err)
}
return rv, nil
qtChild := qt.NewChild()
rv, err := evalTransformFunc(qtChild, ec, fe)
qtChild.Donef("transform %s(): series=%d", fe.Name, len(rv))
return rv, err
}
args, re, err := evalRollupFuncArgs(ec, fe)
args, re, err := evalRollupFuncArgs(qt, ec, fe)
if err != nil {
return nil, err
}
@@ -238,79 +247,23 @@ func evalExpr(ec *EvalConfig, e metricsql.Expr) ([]*timeseries, error) {
if err != nil {
return nil, err
}
rv, err := evalRollupFunc(ec, fe.Name, rf, e, re, nil)
rv, err := evalRollupFunc(qt, ec, fe.Name, rf, e, re, nil)
if err != nil {
return nil, fmt.Errorf(`cannot evaluate %q: %w`, fe.AppendString(nil), err)
}
return rv, nil
}
if ae, ok := e.(*metricsql.AggrFuncExpr); ok {
if callbacks := getIncrementalAggrFuncCallbacks(ae.Name); callbacks != nil {
fe, nrf := tryGetArgRollupFuncWithMetricExpr(ae)
if fe != nil {
// There is an optimized path for calculating metricsql.AggrFuncExpr over rollupFunc over metricsql.MetricExpr.
// The optimized path saves RAM for aggregates over big number of time series.
args, re, err := evalRollupFuncArgs(ec, fe)
if err != nil {
return nil, err
}
rf, err := nrf(args)
if err != nil {
return nil, err
}
iafc := newIncrementalAggrFuncContext(ae, callbacks)
return evalRollupFunc(ec, fe.Name, rf, e, re, iafc)
}
}
args, err := evalExprs(ec, ae.Args)
if err != nil {
return nil, err
}
af := getAggrFunc(ae.Name)
if af == nil {
return nil, fmt.Errorf(`unknown func %q`, ae.Name)
}
afa := &aggrFuncArg{
ae: ae,
args: args,
ec: ec,
}
rv, err := af(afa)
if err != nil {
return nil, fmt.Errorf(`cannot evaluate %q: %w`, ae.AppendString(nil), err)
}
return rv, nil
qtChild := qt.NewChild()
rv, err := evalAggrFunc(qtChild, ec, ae)
qtChild.Donef("aggregate %s(): series=%d", ae.Name, len(rv))
return rv, err
}
if be, ok := e.(*metricsql.BinaryOpExpr); ok {
bf := getBinaryOpFunc(be.Op)
if bf == nil {
return nil, fmt.Errorf(`unknown binary op %q`, be.Op)
}
var err error
var tssLeft, tssRight []*timeseries
switch strings.ToLower(be.Op) {
case "and", "if":
// Fetch right-side series at first, since it usually contains
// lower number of time series for `and` and `if` operator.
// This should produce more specific label filters for the left side of the query.
// This, in turn, should reduce the time to select series for the left side of the query.
tssRight, tssLeft, err = execBinaryOpArgs(ec, be.Right, be.Left, be)
default:
tssLeft, tssRight, err = execBinaryOpArgs(ec, be.Left, be.Right, be)
}
if err != nil {
return nil, fmt.Errorf("cannot execute %q: %w", be.AppendString(nil), err)
}
bfa := &binaryOpFuncArg{
be: be,
left: tssLeft,
right: tssRight,
}
rv, err := bf(bfa)
if err != nil {
return nil, fmt.Errorf(`cannot evaluate %q: %w`, be.AppendString(nil), err)
}
return rv, nil
qtChild := qt.NewChild()
rv, err := evalBinaryOp(qtChild, ec, be)
qtChild.Donef("binary op %q: series=%d", be.Op, len(rv))
return rv, err
}
if ne, ok := e.(*metricsql.NumberExpr); ok {
rv := evalNumber(ec, ne.N)
@@ -329,7 +282,98 @@ func evalExpr(ec *EvalConfig, e metricsql.Expr) ([]*timeseries, error) {
return nil, fmt.Errorf("unexpected expression %q", e.AppendString(nil))
}

func execBinaryOpArgs(ec *EvalConfig, exprFirst, exprSecond metricsql.Expr, be *metricsql.BinaryOpExpr) ([]*timeseries, []*timeseries, error) {
func evalTransformFunc(qt *querytracer.Tracer, ec *EvalConfig, fe *metricsql.FuncExpr) ([]*timeseries, error) {
args, err := evalExprs(qt, ec, fe.Args)
if err != nil {
return nil, err
}
tf := getTransformFunc(fe.Name)
if tf == nil {
return nil, fmt.Errorf(`unknown func %q`, fe.Name)
}
tfa := &transformFuncArg{
ec: ec,
fe: fe,
args: args,
}
rv, err := tf(tfa)
if err != nil {
return nil, fmt.Errorf(`cannot evaluate %q: %w`, fe.AppendString(nil), err)
}
return rv, nil
}

func evalAggrFunc(qt *querytracer.Tracer, ec *EvalConfig, ae *metricsql.AggrFuncExpr) ([]*timeseries, error) {
if callbacks := getIncrementalAggrFuncCallbacks(ae.Name); callbacks != nil {
fe, nrf := tryGetArgRollupFuncWithMetricExpr(ae)
if fe != nil {
// There is an optimized path for calculating metricsql.AggrFuncExpr over rollupFunc over metricsql.MetricExpr.
// The optimized path saves RAM for aggregates over big number of time series.
args, re, err := evalRollupFuncArgs(qt, ec, fe)
if err != nil {
return nil, err
}
rf, err := nrf(args)
if err != nil {
return nil, err
}
iafc := newIncrementalAggrFuncContext(ae, callbacks)
return evalRollupFunc(qt, ec, fe.Name, rf, ae, re, iafc)
}
}
args, err := evalExprs(qt, ec, ae.Args)
if err != nil {
return nil, err
}
af := getAggrFunc(ae.Name)
if af == nil {
return nil, fmt.Errorf(`unknown func %q`, ae.Name)
}
afa := &aggrFuncArg{
ae: ae,
args: args,
ec: ec,
}
rv, err := af(afa)
if err != nil {
return nil, fmt.Errorf(`cannot evaluate %q: %w`, ae.AppendString(nil), err)
}
return rv, nil
}

func evalBinaryOp(qt *querytracer.Tracer, ec *EvalConfig, be *metricsql.BinaryOpExpr) ([]*timeseries, error) {
bf := getBinaryOpFunc(be.Op)
if bf == nil {
return nil, fmt.Errorf(`unknown binary op %q`, be.Op)
}
var err error
var tssLeft, tssRight []*timeseries
switch strings.ToLower(be.Op) {
case "and", "if":
// Fetch right-side series at first, since it usually contains
// lower number of time series for `and` and `if` operator.
// This should produce more specific label filters for the left side of the query.
// This, in turn, should reduce the time to select series for the left side of the query.
tssRight, tssLeft, err = execBinaryOpArgs(qt, ec, be.Right, be.Left, be)
default:
tssLeft, tssRight, err = execBinaryOpArgs(qt, ec, be.Left, be.Right, be)
}
if err != nil {
return nil, fmt.Errorf("cannot execute %q: %w", be.AppendString(nil), err)
}
bfa := &binaryOpFuncArg{
be: be,
left: tssLeft,
right: tssRight,
}
rv, err := bf(bfa)
if err != nil {
return nil, fmt.Errorf(`cannot evaluate %q: %w`, be.AppendString(nil), err)
}
return rv, nil
}

func execBinaryOpArgs(qt *querytracer.Tracer, ec *EvalConfig, exprFirst, exprSecond metricsql.Expr, be *metricsql.BinaryOpExpr) ([]*timeseries, []*timeseries, error) {
// Execute binary operation in the following way:
//
// 1) execute the exprFirst
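The `and`/`if` ordering comment above is the interesting part: the right-hand side is evaluated first because its result series can be converted into label filters and pushed down into the left-hand expression, shrinking what has to be read from storage. A rough illustration of the pushdown step using the metricsql helpers referenced in the hunk (the filter values are made up, and the LabelFilter field names are an assumption about the metricsql API):

```go
package tracesketch

import (
	"fmt"

	"github.com/VictoriaMetrics/metricsql"
)

func ExamplePushdown() {
	left, err := metricsql.Parse(`node_cpu_seconds_total`)
	if err != nil {
		panic(err)
	}
	// Pretend the right side of `... and up{job="node"}` matched only job="node".
	lfs := []metricsql.LabelFilter{{Label: "job", Value: "node"}}
	left = metricsql.PushdownBinaryOpFilters(left, lfs)
	fmt.Println(string(left.AppendString(nil)))
	// The left side now carries the job="node" filter, reducing the series
	// selected from storage before the binary op runs.
}
```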
@@ -353,7 +397,7 @@ func execBinaryOpArgs(ec *EvalConfig, exprFirst, exprSecond metricsql.Expr, be *
//
// - Queries, which get additional labels from `info` metrics.
// See https://www.robustperception.io/exposing-the-software-version-to-prometheus
tssFirst, err := evalExpr(ec, exprFirst)
tssFirst, err := evalExpr(qt, ec, exprFirst)
if err != nil {
return nil, nil, err
}
@@ -366,7 +410,7 @@ func execBinaryOpArgs(ec *EvalConfig, exprFirst, exprSecond metricsql.Expr, be *
lfs = metricsql.TrimFiltersByGroupModifier(lfs, be)
exprSecond = metricsql.PushdownBinaryOpFilters(exprSecond, lfs)
}
tssSecond, err := evalExpr(ec, exprSecond)
tssSecond, err := evalExpr(qt, ec, exprSecond)
if err != nil {
return nil, nil, err
}
@@ -503,10 +547,10 @@ func tryGetArgRollupFuncWithMetricExpr(ae *metricsql.AggrFuncExpr) (*metricsql.F
return nil, nil
}

func evalExprs(ec *EvalConfig, es []metricsql.Expr) ([][]*timeseries, error) {
func evalExprs(qt *querytracer.Tracer, ec *EvalConfig, es []metricsql.Expr) ([][]*timeseries, error) {
var rvs [][]*timeseries
for _, e := range es {
rv, err := evalExpr(ec, e)
rv, err := evalExpr(qt, ec, e)
if err != nil {
return nil, err
}
@@ -515,7 +559,7 @@ func evalExprs(ec *EvalConfig, es []metricsql.Expr) ([][]*timeseries, error) {
return rvs, nil
}

func evalRollupFuncArgs(ec *EvalConfig, fe *metricsql.FuncExpr) ([]interface{}, *metricsql.RollupExpr, error) {
func evalRollupFuncArgs(qt *querytracer.Tracer, ec *EvalConfig, fe *metricsql.FuncExpr) ([]interface{}, *metricsql.RollupExpr, error) {
var re *metricsql.RollupExpr
rollupArgIdx := metricsql.GetRollupArgIdx(fe)
if len(fe.Args) <= rollupArgIdx {
@@ -528,7 +572,7 @@ func evalRollupFuncArgs(ec *EvalConfig, fe *metricsql.FuncExpr) ([]interface{},
args[i] = re
continue
}
ts, err := evalExpr(ec, arg)
ts, err := evalExpr(qt, ec, arg)
if err != nil {
return nil, nil, fmt.Errorf("cannot evaluate arg #%d for %q: %w", i+1, fe.AppendString(nil), err)
}
@@ -568,11 +612,12 @@ func getRollupExprArg(arg metricsql.Expr) *metricsql.RollupExpr {
// expr may contain:
// - rollupFunc(m) if iafc is nil
// - aggrFunc(rollupFunc(m)) if iafc isn't nil
func evalRollupFunc(ec *EvalConfig, funcName string, rf rollupFunc, expr metricsql.Expr, re *metricsql.RollupExpr, iafc *incrementalAggrFuncContext) ([]*timeseries, error) {
func evalRollupFunc(qt *querytracer.Tracer, ec *EvalConfig, funcName string, rf rollupFunc, expr metricsql.Expr,
re *metricsql.RollupExpr, iafc *incrementalAggrFuncContext) ([]*timeseries, error) {
if re.At == nil {
return evalRollupFuncWithoutAt(ec, funcName, rf, expr, re, iafc)
return evalRollupFuncWithoutAt(qt, ec, funcName, rf, expr, re, iafc)
}
tssAt, err := evalExpr(ec, re.At)
tssAt, err := evalExpr(qt, ec, re.At)
if err != nil {
return nil, fmt.Errorf("cannot evaluate `@` modifier: %w", err)
}
@@ -583,7 +628,7 @@ func evalRollupFunc(ec *EvalConfig, funcName string, rf rollupFunc, expr metrics
ecNew := copyEvalConfig(ec)
ecNew.Start = atTimestamp
ecNew.End = atTimestamp
tss, err := evalRollupFuncWithoutAt(ecNew, funcName, rf, expr, re, iafc)
tss, err := evalRollupFuncWithoutAt(qt, ecNew, funcName, rf, expr, re, iafc)
if err != nil {
return nil, err
}
@@ -601,7 +646,8 @@ func evalRollupFunc(ec *EvalConfig, funcName string, rf rollupFunc, expr metrics
return tss, nil
}

func evalRollupFuncWithoutAt(ec *EvalConfig, funcName string, rf rollupFunc, expr metricsql.Expr, re *metricsql.RollupExpr, iafc *incrementalAggrFuncContext) ([]*timeseries, error) {
func evalRollupFuncWithoutAt(qt *querytracer.Tracer, ec *EvalConfig, funcName string, rf rollupFunc,
expr metricsql.Expr, re *metricsql.RollupExpr, iafc *incrementalAggrFuncContext) ([]*timeseries, error) {
funcName = strings.ToLower(funcName)
ecNew := ec
var offset int64
@@ -628,12 +674,12 @@ func evalRollupFuncWithoutAt(ec *EvalConfig, funcName string, rf rollupFunc, exp
var rvs []*timeseries
var err error
if me, ok := re.Expr.(*metricsql.MetricExpr); ok {
rvs, err = evalRollupFuncWithMetricExpr(ecNew, funcName, rf, expr, me, iafc, re.Window)
rvs, err = evalRollupFuncWithMetricExpr(qt, ecNew, funcName, rf, expr, me, iafc, re.Window)
} else {
if iafc != nil {
logger.Panicf("BUG: iafc must be nil for rollup %q over subquery %q", funcName, re.AppendString(nil))
}
rvs, err = evalRollupFuncWithSubquery(ecNew, funcName, rf, expr, re)
rvs, err = evalRollupFuncWithSubquery(qt, ecNew, funcName, rf, expr, re)
}
if err != nil {
return nil, err
@@ -676,8 +722,10 @@ func aggregateAbsentOverTime(ec *EvalConfig, expr metricsql.Expr, tss []*timeser
return rvs
}

func evalRollupFuncWithSubquery(ec *EvalConfig, funcName string, rf rollupFunc, expr metricsql.Expr, re *metricsql.RollupExpr) ([]*timeseries, error) {
func evalRollupFuncWithSubquery(qt *querytracer.Tracer, ec *EvalConfig, funcName string, rf rollupFunc, expr metricsql.Expr, re *metricsql.RollupExpr) ([]*timeseries, error) {
// TODO: determine whether to use rollupResultCacheV here.
qt = qt.NewChild()
defer qt.Donef("subquery")
step := re.Step.Duration(ec.Step)
if step == 0 {
step = ec.Step
@@ -693,7 +741,7 @@ func evalRollupFuncWithSubquery(ec *EvalConfig, funcName string, rf rollupFunc,
}
// unconditionally align start and end args to step for subquery as Prometheus does.
ecSQ.Start, ecSQ.End = alignStartEnd(ecSQ.Start, ecSQ.End, ecSQ.Step)
tssSQ, err := evalExpr(ecSQ, re.Expr)
tssSQ, err := evalExpr(qt, ecSQ, re.Expr)
if err != nil {
return nil, err
}
@@ -727,6 +775,7 @@ func evalRollupFuncWithSubquery(ec *EvalConfig, funcName string, rf rollupFunc,
}
return values, timestamps
})
qt.Printf("rollup %s() over %d series returned by subquery: series=%d", funcName, len(tssSQ), len(tss))
return tss, nil
}

@@ -802,15 +851,20 @@ var (
rollupResultCacheMiss = metrics.NewCounter(`vm_rollup_result_cache_miss_total`)
)

func evalRollupFuncWithMetricExpr(ec *EvalConfig, funcName string, rf rollupFunc,
func evalRollupFuncWithMetricExpr(qt *querytracer.Tracer, ec *EvalConfig, funcName string, rf rollupFunc,
expr metricsql.Expr, me *metricsql.MetricExpr, iafc *incrementalAggrFuncContext, windowExpr *metricsql.DurationExpr) ([]*timeseries, error) {
var rollupMemorySize int64
window := windowExpr.Duration(ec.Step)
qt = qt.NewChild()
defer func() {
qt.Donef("rollup %s(): timeRange=[%d..%d], step=%d, window=%d, neededMemoryBytes=%d", funcName, ec.Start, ec.End, ec.Step, window, rollupMemorySize)
}()
if me.IsEmpty() {
return evalNumber(ec, nan), nil
}
window := windowExpr.Duration(ec.Step)

// Search for partial results in cache.
tssCached, start := rollupResultCacheV.Get(ec, expr, window)
tssCached, start := rollupResultCacheV.Get(qt, ec, expr, window)
if start > ec.End {
// The result is fully cached.
rollupResultCacheFullHits.Inc()
@@ -840,7 +894,7 @@ func evalRollupFuncWithMetricExpr(ec *EvalConfig, funcName string, rf rollupFunc
minTimestamp -= ec.Step
}
sq := storage.NewSearchQuery(minTimestamp, ec.End, tfss, ec.MaxSeries)
rss, err := netstorage.ProcessSearchQuery(sq, true, ec.Deadline)
rss, err := netstorage.ProcessSearchQuery(qt, sq, true, ec.Deadline)
if err != nil {
return nil, err
}
@@ -874,7 +928,7 @@ func evalRollupFuncWithMetricExpr(ec *EvalConfig, funcName string, rf rollupFunc
}
}
rollupPoints := mulNoOverflow(pointsPerTimeseries, int64(timeseriesLen*len(rcs)))
rollupMemorySize := mulNoOverflow(rollupPoints, 16)
rollupMemorySize = mulNoOverflow(rollupPoints, 16)
rml := getRollupMemoryLimiter()
if !rml.Get(uint64(rollupMemorySize)) {
rss.Cancel()
@@ -891,15 +945,15 @@ func evalRollupFuncWithMetricExpr(ec *EvalConfig, funcName string, rf rollupFunc
keepMetricNames := getKeepMetricNames(expr)
var tss []*timeseries
if iafc != nil {
tss, err = evalRollupWithIncrementalAggregate(funcName, keepMetricNames, iafc, rss, rcs, preFunc, sharedTimestamps)
tss, err = evalRollupWithIncrementalAggregate(qt, funcName, keepMetricNames, iafc, rss, rcs, preFunc, sharedTimestamps)
} else {
tss, err = evalRollupNoIncrementalAggregate(funcName, keepMetricNames, rss, rcs, preFunc, sharedTimestamps)
tss, err = evalRollupNoIncrementalAggregate(qt, funcName, keepMetricNames, rss, rcs, preFunc, sharedTimestamps)
}
if err != nil {
return nil, err
}
tss = mergeTimeseries(tssCached, tss, start, ec)
rollupResultCacheV.Put(ec, expr, window, tss)
rollupResultCacheV.Put(qt, ec, expr, window, tss)
return tss, nil
}

@@ -915,9 +969,12 @@ func getRollupMemoryLimiter() *memoryLimiter {
return &rollupMemoryLimiter
}

func evalRollupWithIncrementalAggregate(funcName string, keepMetricNames bool, iafc *incrementalAggrFuncContext, rss *netstorage.Results, rcs []*rollupConfig,
func evalRollupWithIncrementalAggregate(qt *querytracer.Tracer, funcName string, keepMetricNames bool,
iafc *incrementalAggrFuncContext, rss *netstorage.Results, rcs []*rollupConfig,
preFunc func(values []float64, timestamps []int64), sharedTimestamps []int64) ([]*timeseries, error) {
err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
qt = qt.NewChild()
defer qt.Donef("rollup %s() with incremental aggregation %s() over %d series", funcName, iafc.ae.Name, rss.Len())
err := rss.RunParallel(qt, func(rs *netstorage.Result, workerID uint) error {
rs.Values, rs.Timestamps = dropStaleNaNs(funcName, rs.Values, rs.Timestamps)
preFunc(rs.Values, rs.Timestamps)
ts := getTimeseries()

@@ -944,14 +1001,17 @@ func evalRollupWithIncrementalAggregate(funcName string, keepMetricNames bool, i
return nil, err
}
tss := iafc.finalizeTimeseries()
qt.Printf("series after aggregation with %s(): %d", iafc.ae.Name, len(tss))
return tss, nil
}

func evalRollupNoIncrementalAggregate(funcName string, keepMetricNames bool, rss *netstorage.Results, rcs []*rollupConfig,
func evalRollupNoIncrementalAggregate(qt *querytracer.Tracer, funcName string, keepMetricNames bool, rss *netstorage.Results, rcs []*rollupConfig,
preFunc func(values []float64, timestamps []int64), sharedTimestamps []int64) ([]*timeseries, error) {
qt = qt.NewChild()
defer qt.Donef("rollup %s() over %d series", funcName, rss.Len())
tss := make([]*timeseries, 0, rss.Len()*len(rcs))
var tssLock sync.Mutex
err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
err := rss.RunParallel(qt, func(rs *netstorage.Result, workerID uint) error {
rs.Values, rs.Timestamps = dropStaleNaNs(funcName, rs.Values, rs.Timestamps)
preFunc(rs.Values, rs.Timestamps)
for _, rc := range rcs {
@@ -13,6 +13,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/querystats"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/metricsql"

@@ -26,7 +27,7 @@ var (
)

// Exec executes q for the given ec.
func Exec(ec *EvalConfig, q string, isFirstPointOnly bool) ([]netstorage.Result, error) {
func Exec(qt *querytracer.Tracer, ec *EvalConfig, q string, isFirstPointOnly bool) ([]netstorage.Result, error) {
if querystats.Enabled() {
startTime := time.Now()
defer querystats.RegisterQuery(q, ec.End-ec.Start, startTime)

@@ -40,25 +41,29 @@ func Exec(ec *EvalConfig, q string, isFirstPointOnly bool) ([]netstorage.Result,
}

qid := activeQueriesV.Add(ec, q)
rv, err := evalExpr(ec, e)
rv, err := evalExpr(qt, ec, e)
activeQueriesV.Remove(qid)
if err != nil {
return nil, err
}

if isFirstPointOnly {
// Remove all the points except the first one from every time series.
for _, ts := range rv {
ts.Values = ts.Values[:1]
ts.Timestamps = ts.Timestamps[:1]
}
qt.Printf("leave only the first point in every series")
}

maySort := maySortResults(e, rv)
result, err := timeseriesToResult(rv, maySort)
if err != nil {
return nil, err
}
if maySort {
qt.Printf("sort series by metric name and labels")
} else {
qt.Printf("do not sort series by metric name and labels")
}
if n := ec.RoundDigits; n < 100 {
for i := range result {
values := result[i].Values

@@ -66,6 +71,7 @@ func Exec(ec *EvalConfig, q string, isFirstPointOnly bool) ([]netstorage.Result,
values[j] = decimal.RoundToDecimalDigits(v, n)
}
}
qt.Printf("round series values to %d decimal digits after the point", n)
}
return result, err
}
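Exec now takes the tracer as its first argument; the updated tests below simply pass nil to keep tracing off. A hedged sketch of calling the traced Exec from outside the package: the EvalConfig fields shown are the ones this diff references (Start/End/Step/MaxSeries/Deadline/RoundDigits), the remaining fields keep zero values here, and the searchutils.NewDeadline signature is an assumption:

```go
package main

import (
	"fmt"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
)

func main() {
	ec := &promql.EvalConfig{
		Start:       time.Now().Add(-time.Hour).UnixMilli(), // timestamps are in milliseconds
		End:         time.Now().UnixMilli(),
		Step:        60e3, // 1m step
		MaxSeries:   1000,
		Deadline:    searchutils.NewDeadline(time.Now(), 30*time.Second, ""), // signature assumed
		RoundDigits: 100,
	}
	// nil tracer: behavior identical to the pre-tracing Exec.
	result, err := promql.Exec(nil, ec, `rate(http_requests_total[5m])`, false)
	if err != nil {
		panic(err)
	}
	fmt.Printf("got %d series\n", len(result))
}
```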
@@ -66,7 +66,7 @@ func TestExecSuccess(t *testing.T) {
RoundDigits: 100,
}
for i := 0; i < 5; i++ {
result, err := Exec(ec, q, false)
result, err := Exec(nil, ec, q, false)
if err != nil {
t.Fatalf(`unexpected error when executing %q: %s`, q, err)
}

@@ -7728,14 +7728,14 @@ func TestExecError(t *testing.T) {
RoundDigits: 100,
}
for i := 0; i < 4; i++ {
rv, err := Exec(ec, q, false)
rv, err := Exec(nil, ec, q, false)
if err == nil {
t.Fatalf(`expecting non-nil error on %q`, q)
}
if rv != nil {
t.Fatalf(`expecting nil rv`)
}
rv, err = Exec(ec, q, true)
rv, err = Exec(nil, ec, q, true)
if err == nil {
t.Fatalf(`expecting non-nil error on %q`, q)
}
@@ -4,6 +4,7 @@ import (
"crypto/rand"
"flag"
"fmt"
"io/ioutil"
"sync"
"sync/atomic"
"time"

@@ -11,8 +12,10 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/memory"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/workingsetcache"
"github.com/VictoriaMetrics/fastcache"

@@ -110,8 +113,10 @@ func InitRollupResultCache(cachePath string) {
if len(rollupResultCachePath) > 0 {
logger.Infof("loading rollupResult cache from %q...", rollupResultCachePath)
c = workingsetcache.Load(rollupResultCachePath, cacheSize)
mustLoadRollupResultCacheKeyPrefix(rollupResultCachePath)
} else {
c = workingsetcache.New(cacheSize)
rollupResultCacheKeyPrefix = newRollupResultCacheKeyPrefix()
}
if *disableCache {
c.Reset()

@@ -169,9 +174,10 @@ func StopRollupResultCache() {
logger.Infof("saving rollupResult cache to %q...", rollupResultCachePath)
startTime := time.Now()
if err := rollupResultCacheV.c.Save(rollupResultCachePath); err != nil {
logger.Errorf("cannot close rollupResult cache at %q: %s", rollupResultCachePath, err)
logger.Errorf("cannot save rollupResult cache at %q: %s", rollupResultCachePath, err)
return
}
mustSaveRollupResultCacheKeyPrefix(rollupResultCachePath)
var fcs fastcache.Stats
rollupResultCacheV.c.UpdateStats(&fcs)
rollupResultCacheV.c.Stop()
@@ -193,8 +199,14 @@ func ResetRollupResultCache() {
logger.Infof("rollupResult cache has been cleared")
}

func (rrc *rollupResultCache) Get(ec *EvalConfig, expr metricsql.Expr, window int64) (tss []*timeseries, newStart int64) {
func (rrc *rollupResultCache) Get(qt *querytracer.Tracer, ec *EvalConfig, expr metricsql.Expr, window int64) (tss []*timeseries, newStart int64) {
qt = qt.NewChild()
if qt.Enabled() {
query := expr.AppendString(nil)
defer qt.Donef("rollup cache get: query=%s, timeRange=[%d..%d], step=%d, window=%d", query, ec.Start, ec.End, ec.Step, window)
}
if !ec.mayCache() {
qt.Printf("do not fetch series from cache, since it is disabled in the current context")
return nil, ec.Start
}

@@ -205,6 +217,7 @@ func (rrc *rollupResultCache) Get(ec *EvalConfig, expr metricsql.Expr, window in
bb.B = marshalRollupResultCacheKey(bb.B[:0], expr, window, ec.Step, ec.EnforcedTagFilterss)
metainfoBuf := rrc.c.Get(nil, bb.B)
if len(metainfoBuf) == 0 {
qt.Printf("nothing found")
return nil, ec.Start
}
var mi rollupResultCacheMetainfo

@@ -213,6 +226,7 @@ func (rrc *rollupResultCache) Get(ec *EvalConfig, expr metricsql.Expr, window in
}
key := mi.GetBestKey(ec.Start, ec.End)
if key.prefix == 0 && key.suffix == 0 {
qt.Printf("nothing found on the timeRange")
return nil, ec.Start
}
bb.B = key.Marshal(bb.B[:0])

@@ -224,18 +238,22 @@ func (rrc *rollupResultCache) Get(ec *EvalConfig, expr metricsql.Expr, window in
metainfoBuf = mi.Marshal(metainfoBuf[:0])
bb.B = marshalRollupResultCacheKey(bb.B[:0], expr, window, ec.Step, ec.EnforcedTagFilterss)
rrc.c.Set(bb.B, metainfoBuf)
qt.Printf("missing cache entry")
return nil, ec.Start
}
// Decompress into newly allocated byte slice, since tss returned from unmarshalTimeseriesFast
// refers to the byte slice, so it cannot be returned to the resultBufPool.
qt.Printf("load compressed entry from cache with size %d bytes", len(compressedResultBuf.B))
resultBuf, err := encoding.DecompressZSTD(nil, compressedResultBuf.B)
if err != nil {
logger.Panicf("BUG: cannot decompress resultBuf from rollupResultCache: %s; it looks like it was improperly saved", err)
}
qt.Printf("unpack the entry into %d bytes", len(resultBuf))
tss, err = unmarshalTimeseriesFast(resultBuf)
if err != nil {
logger.Panicf("BUG: cannot unmarshal timeseries from rollupResultCache: %s; it looks like it was improperly saved", err)
}
qt.Printf("unmarshal %d series", len(tss))

// Extract values for the matching timestamps
timestamps := tss[0].Timestamps

@@ -245,10 +263,12 @@ func (rrc *rollupResultCache) Get(ec *EvalConfig, expr metricsql.Expr, window in
}
if i == len(timestamps) {
// no matches.
qt.Printf("no datapoints found in the cached series on the given timeRange")
return nil, ec.Start
}
if timestamps[i] != ec.Start {
// The cached range doesn't cover the requested range.
qt.Printf("cached series don't cover the given timeRange")
return nil, ec.Start
}

@@ -269,13 +289,20 @@ func (rrc *rollupResultCache) Get(ec *EvalConfig, expr metricsql.Expr, window in

timestamps = tss[0].Timestamps
newStart = timestamps[len(timestamps)-1] + ec.Step
qt.Printf("return %d series on a timeRange=[%d..%d]", len(tss), ec.Start, newStart-ec.Step)
return tss, newStart
}

var resultBufPool bytesutil.ByteBufferPool

func (rrc *rollupResultCache) Put(ec *EvalConfig, expr metricsql.Expr, window int64, tss []*timeseries) {
func (rrc *rollupResultCache) Put(qt *querytracer.Tracer, ec *EvalConfig, expr metricsql.Expr, window int64, tss []*timeseries) {
qt = qt.NewChild()
if qt.Enabled() {
query := expr.AppendString(nil)
defer qt.Donef("rollup cache put: query=%s, timeRange=[%d..%d], step=%d, window=%d, series=%d", query, ec.Start, ec.End, ec.Step, window, len(tss))
}
if len(tss) == 0 || !ec.mayCache() {
qt.Printf("do not store series to cache, since it is disabled in the current context")
return
}

@@ -290,6 +317,7 @@ func (rrc *rollupResultCache) Put(ec *EvalConfig, expr metricsql.Expr, window in
i++
if i == 0 {
// Nothing to store in the cache.
qt.Printf("nothing to store in the cache, since all the points have timestamps bigger than %d", deadline)
return
}
if i < len(timestamps) {
@@ -304,52 +332,96 @@ func (rrc *rollupResultCache) Put(ec *EvalConfig, expr metricsql.Expr, window in
}

// Store tss in the cache.
metainfoKey := bbPool.Get()
defer bbPool.Put(metainfoKey)
metainfoBuf := bbPool.Get()
defer bbPool.Put(metainfoBuf)

metainfoKey.B = marshalRollupResultCacheKey(metainfoKey.B[:0], expr, window, ec.Step, ec.EnforcedTagFilterss)
metainfoBuf.B = rrc.c.Get(metainfoBuf.B[:0], metainfoKey.B)
var mi rollupResultCacheMetainfo
if len(metainfoBuf.B) > 0 {
if err := mi.Unmarshal(metainfoBuf.B); err != nil {
logger.Panicf("BUG: cannot unmarshal rollupResultCacheMetainfo: %s; it looks like it was improperly saved", err)
}
}
start := timestamps[0]
end := timestamps[len(timestamps)-1]
if mi.CoversTimeRange(start, end) {
qt.Printf("series on the given timeRange=[%d..%d] already exist in the cache", start, end)
return
}

maxMarshaledSize := getRollupResultCacheSize() / 4
resultBuf := resultBufPool.Get()
defer resultBufPool.Put(resultBuf)
resultBuf.B = marshalTimeseriesFast(resultBuf.B[:0], tss, maxMarshaledSize, ec.Step)
if len(resultBuf.B) == 0 {
tooBigRollupResults.Inc()
qt.Printf("cannot store series in the cache, since they would occupy more than %d bytes", maxMarshaledSize)
return
}
qt.Printf("marshal %d series on a timeRange=[%d..%d] into %d bytes", len(tss), start, end, len(resultBuf.B))
compressedResultBuf := resultBufPool.Get()
defer resultBufPool.Put(compressedResultBuf)
compressedResultBuf.B = encoding.CompressZSTDLevel(compressedResultBuf.B[:0], resultBuf.B, 1)

bb := bbPool.Get()
defer bbPool.Put(bb)
qt.Printf("compress %d bytes into %d bytes", len(resultBuf.B), len(compressedResultBuf.B))

var key rollupResultCacheKey
key.prefix = rollupResultCacheKeyPrefix
key.suffix = atomic.AddUint64(&rollupResultCacheKeySuffix, 1)
bb.B = key.Marshal(bb.B[:0])
rrc.c.SetBig(bb.B, compressedResultBuf.B)
rollupResultKey := key.Marshal(nil)
rrc.c.SetBig(rollupResultKey, compressedResultBuf.B)
qt.Printf("store %d bytes in the cache", len(compressedResultBuf.B))

bb.B = marshalRollupResultCacheKey(bb.B[:0], expr, window, ec.Step, ec.EnforcedTagFilterss)
metainfoBuf := rrc.c.Get(nil, bb.B)
var mi rollupResultCacheMetainfo
if len(metainfoBuf) > 0 {
if err := mi.Unmarshal(metainfoBuf); err != nil {
logger.Panicf("BUG: cannot unmarshal rollupResultCacheMetainfo: %s; it looks like it was improperly saved", err)
}
}
mi.AddKey(key, timestamps[0], timestamps[len(timestamps)-1])
metainfoBuf = mi.Marshal(metainfoBuf[:0])
rrc.c.Set(bb.B, metainfoBuf)
metainfoBuf.B = mi.Marshal(metainfoBuf.B[:0])
rrc.c.Set(metainfoKey.B, metainfoBuf.B)
}

var (
rollupResultCacheKeyPrefix = func() uint64 {
var buf [8]byte
if _, err := rand.Read(buf[:]); err != nil {
// do not use logger.Panicf, since it isn't initialized yet.
panic(fmt.Errorf("FATAL: cannot read random data for rollupResultCacheKeyPrefix: %w", err))
}
return encoding.UnmarshalUint64(buf[:])
}()
rollupResultCacheKeyPrefix uint64
rollupResultCacheKeySuffix = uint64(time.Now().UnixNano())
)

func newRollupResultCacheKeyPrefix() uint64 {
var buf [8]byte
if _, err := rand.Read(buf[:]); err != nil {
// do not use logger.Panicf, since it isn't initialized yet.
panic(fmt.Errorf("FATAL: cannot read random data for rollupResultCacheKeyPrefix: %w", err))
}
return encoding.UnmarshalUint64(buf[:])
}

func mustLoadRollupResultCacheKeyPrefix(path string) {
path = path + ".key.prefix"
if !fs.IsPathExist(path) {
rollupResultCacheKeyPrefix = newRollupResultCacheKeyPrefix()
return
}
data, err := ioutil.ReadFile(path)
if err != nil {
logger.Errorf("cannot load %s: %s; reset rollupResult cache", path, err)
rollupResultCacheKeyPrefix = newRollupResultCacheKeyPrefix()
return
}
if len(data) != 8 {
logger.Errorf("unexpected size of %s; want 8 bytes; got %d bytes; reset rollupResult cache", path, len(data))
rollupResultCacheKeyPrefix = newRollupResultCacheKeyPrefix()
return
}
rollupResultCacheKeyPrefix = encoding.UnmarshalUint64(data)
}

func mustSaveRollupResultCacheKeyPrefix(path string) {
path = path + ".key.prefix"
data := encoding.MarshalUint64(nil, rollupResultCacheKeyPrefix)
fs.MustRemoveAll(path)
if err := fs.WriteFileAtomically(path, data); err != nil {
logger.Fatalf("cannot store rollupResult cache key prefix to %q: %s", path, err)
}
}

var tooBigRollupResults = metrics.NewCounter("vm_too_big_rollup_results_total")

// Increment this value every time the format of the cache changes.
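The key-prefix persistence above exists so that entries written into the saved cache file remain addressable after a restart: previously the prefix was regenerated randomly at startup, silently orphaning everything in the loaded cache. The round-trip is just an 8-byte encoded uint64 stored at `<cachePath>.key.prefix`. A simplified sketch of that round-trip (error handling and atomic-write behavior are reduced compared with the real mustLoad/mustSave functions):

```go
package tracesketch

import (
	"fmt"
	"os"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
)

// saveKeyPrefix serializes the uint64 prefix as 8 bytes next to the cache file.
func saveKeyPrefix(cachePath string, prefix uint64) error {
	data := encoding.MarshalUint64(nil, prefix)
	return os.WriteFile(cachePath+".key.prefix", data, 0o600)
}

// loadKeyPrefix reads the prefix back, rejecting files of the wrong size.
func loadKeyPrefix(cachePath string) (uint64, error) {
	data, err := os.ReadFile(cachePath + ".key.prefix")
	if err != nil {
		return 0, err
	}
	if len(data) != 8 {
		return 0, fmt.Errorf("unexpected size of key prefix file: got %d bytes; want 8", len(data))
	}
	return encoding.UnmarshalUint64(data), nil
}
```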
@@ -490,20 +562,36 @@ func (mi *rollupResultCacheMetainfo) Unmarshal(src []byte) error {
return nil
}

func (mi *rollupResultCacheMetainfo) CoversTimeRange(start, end int64) bool {
if start > end {
logger.Panicf("BUG: start cannot exceed end; got %d vs %d", start, end)
}
for i := range mi.entries {
e := &mi.entries[i]
if start >= e.start && end <= e.end {
return true
}
}
return false
}

func (mi *rollupResultCacheMetainfo) GetBestKey(start, end int64) rollupResultCacheKey {
if start > end {
logger.Panicf("BUG: start cannot exceed end; got %d vs %d", start, end)
}
var bestKey rollupResultCacheKey
bestD := int64(1<<63 - 1)
dMax := int64(0)
for i := range mi.entries {
e := &mi.entries[i]
if start < e.start || end <= e.start {
if start < e.start {
continue
}
d := start - e.start
if d < bestD {
bestD = d
d := e.end - start
if end <= e.end {
d = end - start
}
if d >= dMax {
dMax = d
bestKey = e.key
}
}
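GetBestKey's selection rule changes above: the old code preferred the cache entry whose start was closest to the requested start, while the new code prefers the entry covering the largest prefix of the requested range. A small self-contained demonstration of the new scoring, with invented entry boundaries:

```go
package main

import "fmt"

// coveredDuration reproduces the new GetBestKey scoring: how much of the
// requested [start..end] range an entry [eStart..eEnd] can serve.
func coveredDuration(start, end, eStart, eEnd int64) int64 {
	if start < eStart {
		return -1 // entry begins after the requested start: unusable
	}
	if end <= eEnd {
		return end - start
	}
	return eEnd - start
}

func main() {
	// Request [1000..2000] against two cached entries.
	fmt.Println(coveredDuration(1000, 2000, 900, 1500))  // 500: picked by the new rule
	fmt.Println(coveredDuration(1000, 2000, 1000, 1100)) // 100: the old rule picked this one (start distance 0)
}
```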
@ -22,6 +22,7 @@ func TestRollupResultCacheInitStop(t *testing.T) {
|
|||
StopRollupResultCache()
|
||||
}
|
||||
fs.MustRemoveAll(cacheFilePath)
|
||||
fs.MustRemoveAll(cacheFilePath + ".key.prefix")
|
||||
})
|
||||
}
|
||||
|
||||
|
@ -55,7 +56,7 @@ func TestRollupResultCache(t *testing.T) {
|
|||
|
||||
// Try obtaining an empty value.
|
||||
t.Run("empty", func(t *testing.T) {
|
||||
tss, newStart := rollupResultCacheV.Get(ec, fe, window)
|
||||
tss, newStart := rollupResultCacheV.Get(nil, ec, fe, window)
|
||||
if newStart != ec.Start {
|
||||
t.Fatalf("unexpected newStart; got %d; want %d", newStart, ec.Start)
|
||||
}
|
||||
|
@ -73,8 +74,8 @@ func TestRollupResultCache(t *testing.T) {
|
|||
Values: []float64{0, 1, 2},
|
||||
},
|
||||
}
|
||||
rollupResultCacheV.Put(ec, fe, window, tss)
|
||||
tss, newStart := rollupResultCacheV.Get(ec, fe, window)
|
||||
rollupResultCacheV.Put(nil, ec, fe, window, tss)
|
||||
tss, newStart := rollupResultCacheV.Get(nil, ec, fe, window)
|
||||
if newStart != 1400 {
|
||||
t.Fatalf("unexpected newStart; got %d; want %d", newStart, 1400)
|
||||
}
|
||||
|
@@ -94,8 +95,8 @@ func TestRollupResultCache(t *testing.T) {
 			Values: []float64{0, 1, 2},
 		},
 	}
-	rollupResultCacheV.Put(ec, ae, window, tss)
-	tss, newStart := rollupResultCacheV.Get(ec, ae, window)
+	rollupResultCacheV.Put(nil, ec, ae, window, tss)
+	tss, newStart := rollupResultCacheV.Get(nil, ec, ae, window)
 	if newStart != 1400 {
 		t.Fatalf("unexpected newStart; got %d; want %d", newStart, 1400)
 	}
@@ -117,8 +118,8 @@ func TestRollupResultCache(t *testing.T) {
 			Values: []float64{333, 0, 1, 2},
 		},
 	}
-	rollupResultCacheV.Put(ec, fe, window, tss)
-	tss, newStart := rollupResultCacheV.Get(ec, fe, window)
+	rollupResultCacheV.Put(nil, ec, fe, window, tss)
+	tss, newStart := rollupResultCacheV.Get(nil, ec, fe, window)
 	if newStart != 1000 {
 		t.Fatalf("unexpected newStart; got %d; want %d", newStart, 1000)
 	}
@@ -136,8 +137,8 @@ func TestRollupResultCache(t *testing.T) {
 			Values: []float64{0, 1, 2},
 		},
 	}
-	rollupResultCacheV.Put(ec, fe, window, tss)
-	tss, newStart := rollupResultCacheV.Get(ec, fe, window)
+	rollupResultCacheV.Put(nil, ec, fe, window, tss)
+	tss, newStart := rollupResultCacheV.Get(nil, ec, fe, window)
 	if newStart != 1000 {
 		t.Fatalf("unexpected newStart; got %d; want %d", newStart, 1000)
 	}
@@ -155,8 +156,8 @@ func TestRollupResultCache(t *testing.T) {
 			Values: []float64{0, 1, 2},
 		},
 	}
-	rollupResultCacheV.Put(ec, fe, window, tss)
-	tss, newStart := rollupResultCacheV.Get(ec, fe, window)
+	rollupResultCacheV.Put(nil, ec, fe, window, tss)
+	tss, newStart := rollupResultCacheV.Get(nil, ec, fe, window)
 	if newStart != 1000 {
 		t.Fatalf("unexpected newStart; got %d; want %d", newStart, 1000)
 	}
@@ -174,8 +175,8 @@ func TestRollupResultCache(t *testing.T) {
 			Values: []float64{0, 1, 2},
 		},
 	}
-	rollupResultCacheV.Put(ec, fe, window, tss)
-	tss, newStart := rollupResultCacheV.Get(ec, fe, window)
+	rollupResultCacheV.Put(nil, ec, fe, window, tss)
+	tss, newStart := rollupResultCacheV.Get(nil, ec, fe, window)
 	if newStart != 1000 {
 		t.Fatalf("unexpected newStart; got %d; want %d", newStart, 1000)
 	}
@@ -193,8 +194,8 @@ func TestRollupResultCache(t *testing.T) {
 			Values: []float64{0, 1, 2, 3, 4, 5, 6, 7},
 		},
 	}
-	rollupResultCacheV.Put(ec, fe, window, tss)
-	tss, newStart := rollupResultCacheV.Get(ec, fe, window)
+	rollupResultCacheV.Put(nil, ec, fe, window, tss)
+	tss, newStart := rollupResultCacheV.Get(nil, ec, fe, window)
 	if newStart != 2200 {
 		t.Fatalf("unexpected newStart; got %d; want %d", newStart, 2200)
 	}
@@ -216,8 +217,8 @@ func TestRollupResultCache(t *testing.T) {
 			Values: []float64{1, 2, 3, 4, 5, 6},
 		},
 	}
-	rollupResultCacheV.Put(ec, fe, window, tss)
-	tss, newStart := rollupResultCacheV.Get(ec, fe, window)
+	rollupResultCacheV.Put(nil, ec, fe, window, tss)
+	tss, newStart := rollupResultCacheV.Get(nil, ec, fe, window)
 	if newStart != 2200 {
 		t.Fatalf("unexpected newStart; got %d; want %d", newStart, 2200)
 	}
@@ -241,8 +242,8 @@ func TestRollupResultCache(t *testing.T) {
 		}
 		tss = append(tss, ts)
 	}
-	rollupResultCacheV.Put(ec, fe, window, tss)
-	tssResult, newStart := rollupResultCacheV.Get(ec, fe, window)
+	rollupResultCacheV.Put(nil, ec, fe, window, tss)
+	tssResult, newStart := rollupResultCacheV.Get(nil, ec, fe, window)
 	if newStart != 2200 {
 		t.Fatalf("unexpected newStart; got %d; want %d", newStart, 2200)
 	}
@@ -270,10 +271,10 @@ func TestRollupResultCache(t *testing.T) {
 			Values: []float64{0, 1, 2},
 		},
 	}
-	rollupResultCacheV.Put(ec, fe, window, tss1)
-	rollupResultCacheV.Put(ec, fe, window, tss2)
-	rollupResultCacheV.Put(ec, fe, window, tss3)
-	tss, newStart := rollupResultCacheV.Get(ec, fe, window)
+	rollupResultCacheV.Put(nil, ec, fe, window, tss1)
+	rollupResultCacheV.Put(nil, ec, fe, window, tss2)
+	rollupResultCacheV.Put(nil, ec, fe, window, tss3)
+	tss, newStart := rollupResultCacheV.Get(nil, ec, fe, window)
 	if newStart != 1400 {
 		t.Fatalf("unexpected newStart; got %d; want %d", newStart, 1400)
 	}
6	app/vmselect/static/css/bootstrap.min.css	vendored	Normal file
File diff suppressed because one or more lines are too long

6	app/vmselect/static/js/bootstrap.bundle.min.js	vendored	Normal file
File diff suppressed because one or more lines are too long

2	app/vmselect/static/js/jquery-3.6.0.min.js	vendored	Normal file
File diff suppressed because one or more lines are too long
@@ -1,12 +1,12 @@
 {
   "files": {
     "main.css": "./static/css/main.d8362c27.css",
-    "main.js": "./static/js/main.348f50e1.js",
+    "main.js": "./static/js/main.a35e61a3.js",
     "static/js/27.939f971b.chunk.js": "./static/js/27.939f971b.chunk.js",
     "index.html": "./index.html"
   },
   "entrypoints": [
     "static/css/main.d8362c27.css",
-    "static/js/main.348f50e1.js"
+    "static/js/main.a35e61a3.js"
   ]
 }
@@ -1,7 +1,7 @@
 ### Setup
 1. Create `.json` config file in a folder `dashboards`
 2. Import your config file into the `dashboards/index.js`
-3. Add imported variable into the array `window.__VMUI_PREDEFINED_DASHBOARDS__`
+3. Add filename into the array `window.__VMUI_PREDEFINED_DASHBOARDS__`
 
 ### Configuration options
 
@@ -1,5 +1,3 @@
-import perJob from "./perJobUsage.json" assert { type: "json" };
-
 window.__VMUI_PREDEFINED_DASHBOARDS__ = [
-	perJob
+	"perJobUsage.json"
];
@@ -1 +1 @@
-<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script src="./dashboards/index.js" type="module"></script><script defer="defer" src="./static/js/main.348f50e1.js"></script><link href="./static/css/main.d8362c27.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
+<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script src="./dashboards/index.js" type="module"></script><script defer="defer" src="./static/js/main.a35e61a3.js"></script><link href="./static/css/main.d8362c27.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
File diff suppressed because one or more lines are too long
@@ -17,6 +17,7 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/syncwg"
 	"github.com/VictoriaMetrics/metrics"
@@ -37,8 +38,11 @@ var (
 	finalMergeDelay = flag.Duration("finalMergeDelay", 0, "The delay before starting final merge for per-month partition after no new data is ingested into it. "+
 		"Final merge may require additional disk IO and CPU resources. Final merge may increase query speed and reduce disk space usage in some cases. "+
 		"Zero value disables final merge")
-	bigMergeConcurrency   = flag.Int("bigMergeConcurrency", 0, "The maximum number of CPU cores to use for big merges. Default value is used if set to 0")
-	smallMergeConcurrency = flag.Int("smallMergeConcurrency", 0, "The maximum number of CPU cores to use for small merges. Default value is used if set to 0")
+	bigMergeConcurrency     = flag.Int("bigMergeConcurrency", 0, "The maximum number of CPU cores to use for big merges. Default value is used if set to 0")
+	smallMergeConcurrency   = flag.Int("smallMergeConcurrency", 0, "The maximum number of CPU cores to use for small merges. Default value is used if set to 0")
+	retentionTimezoneOffset = flag.Duration("retentionTimezoneOffset", 0, "The offset for performing indexdb rotation. "+
+		"If set to 0, then the indexdb rotation is performed at 4am UTC time per each -retentionPeriod. "+
+		"If set to 2h, then the indexdb rotation is performed at 4am EET time (the timezone with +2h offset)")
 
 	logNewSeries = flag.Bool("logNewSeries", false, "Whether to log new series. This option is for debug purposes only. It can lead to performance issues "+
 		"when big number of new series are ingested into VictoriaMetrics")
@@ -55,6 +59,7 @@ var (
+	cacheSizeStorageTSID        = flagutil.NewBytes("storage.cacheSizeStorageTSID", 0, "Overrides max size for storage/tsid cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning")
 	cacheSizeIndexDBIndexBlocks = flagutil.NewBytes("storage.cacheSizeIndexDBIndexBlocks", 0, "Overrides max size for indexdb/indexBlocks cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning")
 	cacheSizeIndexDBDataBlocks  = flagutil.NewBytes("storage.cacheSizeIndexDBDataBlocks", 0, "Overrides max size for indexdb/dataBlocks cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning")
 	cacheSizeIndexDBTagFilters  = flagutil.NewBytes("storage.cacheSizeIndexDBTagFilters", 0, "Overrides max size for indexdb/tagFilters cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning")
 )
 
 // CheckTimeRange returns true if the given tr is denied for querying.
@@ -91,8 +96,10 @@ func InitWithoutMetrics(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
 	storage.SetFinalMergeDelay(*finalMergeDelay)
 	storage.SetBigMergeWorkersCount(*bigMergeConcurrency)
 	storage.SetSmallMergeWorkersCount(*smallMergeConcurrency)
+	storage.SetRetentionTimezoneOffset(*retentionTimezoneOffset)
 	storage.SetFreeDiskSpaceLimit(minFreeDiskSpaceBytes.N)
+	storage.SetTSIDCacheSize(cacheSizeStorageTSID.N)
 	storage.SetTagFilterCacheSize(cacheSizeIndexDBTagFilters.N)
 	mergeset.SetIndexBlocksCacheSize(cacheSizeIndexDBIndexBlocks.N)
 	mergeset.SetDataBlocksCacheSize(cacheSizeIndexDBDataBlocks.N)
@@ -169,9 +176,9 @@ func DeleteMetrics(tfss []*storage.TagFilters) (int, error) {
 }
 
 // SearchMetricNames returns metric names for the given tfss on the given tr.
-func SearchMetricNames(tfss []*storage.TagFilters, tr storage.TimeRange, maxMetrics int, deadline uint64) ([]storage.MetricName, error) {
+func SearchMetricNames(qt *querytracer.Tracer, tfss []*storage.TagFilters, tr storage.TimeRange, maxMetrics int, deadline uint64) ([]storage.MetricName, error) {
 	WG.Add(1)
-	mns, err := Storage.SearchMetricNames(tfss, tr, maxMetrics, deadline)
+	mns, err := Storage.SearchMetricNames(qt, tfss, tr, maxMetrics, deadline)
 	WG.Done()
 	return mns, err
 }
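The `nil` passed as the new first argument in the tests above, and the `qt *querytracer.Tracer` parameter threaded through SearchMetricNames, both rely on a nil tracer being a valid no-op. That nil-receiver pattern can be sketched in isolation like this (an illustrative type, not the real lib/querytracer API):

```go
package main

import "fmt"

// tracer records timing messages; a nil *tracer is a valid no-op, so call
// sites may simply pass nil when tracing is disabled.
type tracer struct{ events []string }

func (t *tracer) Printf(format string, args ...interface{}) {
	if t == nil {
		return // tracing disabled
	}
	t.events = append(t.events, fmt.Sprintf(format, args...))
}

func search(qt *tracer, query string) {
	qt.Printf("search: query=%q", query) // safe even when qt is nil
}

func main() {
	search(nil, "up") // no tracing
	qt := &tracer{}
	search(qt, "up") // traced
	fmt.Println(qt.events)
}
```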
@@ -6,7 +6,7 @@ COPY web/ /build/
 RUN GOOS=linux GOARCH=amd64 GO111MODULE=on CGO_ENABLED=0 go build -o web-amd64 github.com/VictoriMetrics/vmui/ && \
 	GOOS=windows GOARCH=amd64 GO111MODULE=on CGO_ENABLED=0 go build -o web-windows github.com/VictoriMetrics/vmui/
 
-FROM alpine:3.15.4
+FROM alpine:3.16.0
 USER root
 
 COPY --from=build-web-stage /build/web-amd64 /app/web
22	app/vmui/packages/vmui/package-lock.json	generated
@@ -17808,16 +17808,6 @@
         "js-yaml": "bin/js-yaml.js"
       }
     },
-    "node_modules/svgo/node_modules/nth-check": {
-      "version": "1.0.2",
-      "resolved": "https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz",
-      "integrity": "sha512-WeBOdju8SnzPN5vTUJYxYUxLeXpCaVP5i5e0LF8fg7WORF2Wd7wFX/pk0tYZk7s8T+J7VLy0Da6J1+wCT0AtHg==",
-      "dev": true,
-      "peer": true,
-      "dependencies": {
-        "boolbase": "~1.0.0"
-      }
-    },
     "node_modules/symbol-tree": {
       "version": "3.2.4",
       "resolved": "https://registry.npmjs.org/symbol-tree/-/symbol-tree-3.2.4.tgz",
@@ -32702,7 +32692,7 @@
         "boolbase": "^1.0.0",
         "css-what": "^3.2.1",
         "domutils": "^1.7.0",
-        "nth-check": "^1.0.2"
+        "nth-check": "^2.0.1"
       }
     },
     "css-what": {
@@ -32753,16 +32743,6 @@
         "argparse": "^1.0.7",
         "esprima": "^4.0.0"
       }
     },
-    "nth-check": {
-      "version": "1.0.2",
-      "resolved": "https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz",
-      "integrity": "sha512-WeBOdju8SnzPN5vTUJYxYUxLeXpCaVP5i5e0LF8fg7WORF2Wd7wFX/pk0tYZk7s8T+J7VLy0Da6J1+wCT0AtHg==",
-      "dev": true,
-      "peer": true,
-      "requires": {
-        "boolbase": "~1.0.0"
-      }
-    }
     }
   },
@@ -69,6 +69,9 @@
   "overrides": {
+    "react-app-rewired": {
+      "nth-check": "^2.0.1"
+    },
     "css-select": {
       "nth-check": "^2.0.1"
     }
   }
 }
@@ -1,7 +1,7 @@
 ### Setup
 1. Create `.json` config file in a folder `dashboards`
 2. Import your config file into the `dashboards/index.js`
-3. Add imported variable into the array `window.__VMUI_PREDEFINED_DASHBOARDS__`
+3. Add filename into the array `window.__VMUI_PREDEFINED_DASHBOARDS__`
 
 ### Configuration options
 
@@ -1,5 +1,3 @@
-import perJob from "./perJobUsage.json" assert { type: "json" };
-
 window.__VMUI_PREDEFINED_DASHBOARDS__ = [
-	perJob
+	"perJobUsage.json"
];
@@ -20,7 +20,7 @@ const DashboardLayout: FC = () => {
 	}, [dashboards, tab]);
 
 	useEffect(() => {
-		setDashboards(getDashboardSettings());
+		getDashboardSettings().then(d => d.length && setDashboards(d));
 	}, []);
 
 	return <>
@@ -1,6 +1,12 @@
 import {DashboardSettings} from "../../types";
 
-export default (): DashboardSettings[] => {
-	return window.__VMUI_PREDEFINED_DASHBOARDS__ || [];
+const importModule = async (filename: string) => {
+	const data = await fetch(`./dashboards/${filename}`);
+	const json = await data.json();
+	return json as DashboardSettings;
+};
+
+export default async () => {
+	const filenames = window.__VMUI_PREDEFINED_DASHBOARDS__;
+	return await Promise.all(filenames.map(async f => importModule(f)));
 };
@@ -2,7 +2,7 @@ import {MetricBase} from "../api/types";
 
 declare global {
 	interface Window {
-		__VMUI_PREDEFINED_DASHBOARDS__: DashboardSettings[];
+		__VMUI_PREDEFINED_DASHBOARDS__: string[];
 	}
 }
 
1289	dashboards/operator.json	Normal file
File diff suppressed because it is too large
@@ -2,8 +2,8 @@
 
 DOCKER_NAMESPACE := victoriametrics
 
-ROOT_IMAGE ?= alpine:3.15.4
-CERTS_IMAGE := alpine:3.15.4
+ROOT_IMAGE ?= alpine:3.16.0
+CERTS_IMAGE := alpine:3.16.0
 GO_BUILDER_IMAGE := golang:1.18.2-alpine
 BUILDER_IMAGE := local/builder:2.0.0-$(shell echo $(GO_BUILDER_IMAGE) | tr :/ __)-1
 BASE_IMAGE := local/base:1.1.3-$(shell echo $(ROOT_IMAGE) | tr :/ __)-$(shell echo $(CERTS_IMAGE) | tr :/ __)
@@ -15,6 +15,21 @@ The following tip changes can be tested by building VictoriaMetrics components f
 
 ## tip
 
+* FEATURE: add service discovery visualisation tab for the `/targets` page. It simplifies service discovery debugging. See [this PR](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2675).
+* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): allow using kubeconfig file within `kubernetes_sd_configs`. It may be useful for Kubernetes cluster monitoring by `vmagent` outside the Kubernetes cluster. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1464).
+* FEATURE: allow overriding default limits for the in-memory cache `indexdb/tagFilters` via the `-storage.cacheSizeIndexDBTagFilters` flag. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2663).
+* FEATURE: add support of `lowercase` and `uppercase` relabeling actions in the same way as [Prometheus 2.36.0 does](https://github.com/prometheus/prometheus/releases/tag/v2.36.0). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2664).
+* FEATURE: support query tracing, which allows determining bottlenecks during query processing. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#query-tracing) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1403).
+* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): remove dependency on Internet access in the `http://vmagent:8429/targets` page. Previously the page layout was broken without Internet access. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2594).
+* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): remove dependency on Internet access in [web API pages](https://docs.victoriametrics.com/vmalert.html#web). Previously the functionality and the layout of these pages were broken without Internet access. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2594).
+* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): expose the `/api/v1/status/config` endpoint in the same way as Prometheus does. See [these docs](https://prometheus.io/docs/prometheus/latest/querying/api/#config).
+* FEATURE: add ability to change the `indexdb` rotation timezone offset via the `-retentionTimezoneOffset` command-line flag. Previously the rotation was performed at 4am UTC time. This could lead to performance degradation in the middle of the day when VictoriaMetrics runs in time zones located too far from UTC. Thanks to @cnych for [the pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2574).
+* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add the `-promscrape.suppressScrapeErrorsDelay` command-line flag, which can be used for delaying and aggregating the logging of per-target scrape errors. This may reduce the amount of logs when `vmagent` scrapes many unreliable targets. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2575). Thanks to @jelmd for [the initial implementation](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2576).
+* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add the `-promscrape.cluster.name` command-line flag, which allows proper data de-duplication when the same target is scraped from multiple [vmagent clusters](https://docs.victoriametrics.com/vmagent.html#scraping-big-number-of-targets). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2679).
+
+* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly apply `alert_relabel_configs` relabeling rules to `-notifier.config` according to [these docs](https://docs.victoriametrics.com/vmalert.html#notifier-configuration-file). Thanks to @spectvtor for [the bugfix](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2633).
+* BUGFIX: deny [background merge](https://valyala.medium.com/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282) when the storage enters read-only mode, e.g. when free disk space becomes lower than `-storage.minFreeDiskSpaceBytes`. Background merge needs additional disk space, so it could result in `no space left on device` errors. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2603).
 
 ## [v1.77.2](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.77.2)
 
 Released at 21-05-2022
@@ -36,7 +36,7 @@ Each service may scale independently and may run on the most suitable hardware.
 This is [shared nothing architecture](https://en.wikipedia.org/wiki/Shared-nothing_architecture).
 It increases cluster availability, simplifies cluster maintenance and cluster scaling.
 
-<img src="https://docs.google.com/drawings/d/e/2PACX-1vTvk2raU9kFgZ84oF-OKolrGwHaePhHRsZEcfQ1I_EC5AB_XPWwB392XshxPramLJ8E4bqptTnFn5LL/pub?w=1104&h=746">
+[cluster scheme image]
 
 ## Multitenancy
 
183	docs/README.md
@@ -14,12 +14,18 @@ VictoriaMetrics is a fast, cost-effective and scalable monitoring solution and t
 
 VictoriaMetrics is available in [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases),
 [Docker images](https://hub.docker.com/r/victoriametrics/victoria-metrics/), [Snap packages](https://snapcraft.io/victoriametrics)
-and [source code](https://github.com/VictoriaMetrics/VictoriaMetrics). Just download VictoriaMetrics and follow [these instructions](#how-to-start-victoriametrics).
-Then read [Prometheus setup](#prometheus-setup) and [Grafana setup](#grafana-setup) docs.
+and [source code](https://github.com/VictoriaMetrics/VictoriaMetrics).
+Just download VictoriaMetrics and follow [these instructions](https://docs.victoriametrics.com/Quick-Start.html).
 
 Cluster version of VictoriaMetrics is available [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html).
 
-[Contact us](mailto:info@victoriametrics.com) if you need enterprise support for VictoriaMetrics. See [features available in enterprise package](https://victoriametrics.com/products/enterprise/). Enterprise binaries can be downloaded and evaluated for free from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
+Learn more about [key concepts](https://docs.victoriametrics.com/keyConcepts.html) of VictoriaMetrics and follow
+the [QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for a better experience.
+
+[Contact us](mailto:info@victoriametrics.com) if you need enterprise support for VictoriaMetrics.
+See [features available in enterprise package](https://victoriametrics.com/products/enterprise/).
+Enterprise binaries can be downloaded and evaluated for free
+from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
 
 ## Prominent features
 
@@ -53,7 +59,7 @@ VictoriaMetrics has the following prominent features:
   * [JSON line format](#how-to-import-data-in-json-line-format).
   * [Arbitrary CSV data](#how-to-import-csv-data).
   * [Native binary format](#how-to-import-data-in-native-format).
-* It supports metrics' relabeling. See [these docs](#relabeling) for details.
+* It supports metrics [relabeling](#relabeling).
 * It can deal with [high cardinality issues](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) and [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) via [series limiter](#cardinality-limiter).
 * It ideally works with big amounts of time series data from APM, Kubernetes, IoT sensors, connected cars, industrial telemetry, financial data and various [Enterprise workloads](https://victoriametrics.com/products/enterprise/).
 * It has open source [cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
@@ -92,9 +98,10 @@ See also [articles and slides about VictoriaMetrics from our users](https://docs
 
 ## Operation
 
-## How to start VictoriaMetrics
+### How to start VictoriaMetrics
 
 Just download [VictoriaMetrics executable](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or [Docker image](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and start it with the desired command-line flags.
+See also the [QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for additional information.
 
 The following command-line flags are used the most:
 
@@ -143,18 +150,26 @@ After changes were made, trigger config re-read with the command `curl 127.0.0.1
 
 Add the following lines to Prometheus config file (it is usually located at `/etc/prometheus/prometheus.yml`) in order to send data to VictoriaMetrics:
 
+<div class="with-copy" markdown="1">
+
 ```yml
 remote_write:
   - url: http://<victoriametrics-addr>:8428/api/v1/write
 ```
+
+</div>
+
 Substitute `<victoriametrics-addr>` with hostname or IP address of VictoriaMetrics.
 Then apply new config via the following command:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 kill -HUP `pidof prometheus`
 ```
+
+</div>
+
 Prometheus writes incoming data to local storage and replicates it to remote storage in parallel.
 This means that data remains available in local storage for `--storage.tsdb.retention.time` duration
 even if remote storage is unavailable.
@@ -174,6 +189,8 @@ across Prometheus instances, so time series could be filtered and grouped by thi
 
 For highly loaded Prometheus instances (200k+ samples per second) the following tuning may be applied:
 
+<div class="with-copy" markdown="1">
+
 ```yaml
 remote_write:
   - url: http://<victoriametrics-addr>:8428/api/v1/write
@@ -183,13 +200,18 @@ remote_write:
     max_shards: 30
 ```
 
+</div>
+
 Using remote write increases memory usage for Prometheus by up to ~25%. If you are experiencing issues with
-too high memory consumption of Prometheus, then try to lower `max_samples_per_send` and `capacity` params. Keep in mind that these two params are tightly connected.
+too high memory consumption of Prometheus, then try to lower `max_samples_per_send` and `capacity` params.
+Keep in mind that these two params are tightly connected.
 Read more about tuning remote write for Prometheus [here](https://prometheus.io/docs/practices/remote_write).
 
-It is recommended upgrading Prometheus to [v2.12.0](https://github.com/prometheus/prometheus/releases) or newer, since previous versions may have issues with `remote_write`.
+It is recommended to upgrade Prometheus to [v2.12.0](https://github.com/prometheus/prometheus/releases) or newer,
+since previous versions may have issues with `remote_write`.
 
-Take a look also at [vmagent](https://docs.victoriametrics.com/vmagent.html) and [vmalert](https://docs.victoriametrics.com/vmalert.html),
+Take a look also at [vmagent](https://docs.victoriametrics.com/vmagent.html)
+and [vmalert](https://docs.victoriametrics.com/vmalert.html),
 which can be used as a faster and less resource-hungry alternative to Prometheus.
 
 ## Grafana setup
@@ -218,6 +240,27 @@ The following steps must be performed during the upgrade / downgrade procedure:
 
 Prometheus doesn't drop data during VictoriaMetrics restart. See [this article](https://grafana.com/blog/2019/03/25/whats-new-in-prometheus-2.8-wal-based-remote-write/) for details. The same applies also to [vmagent](https://docs.victoriametrics.com/vmagent.html).
 
+## vmui
+
+VictoriaMetrics provides UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`.
+The UI allows exploring query results via graphs and tables. Graphs support scrolling and zooming:
+
+* Drag the graph to the left / right in order to move the displayed time range into the past / future.
+* Hold `Ctrl` (or `Cmd` on MacOS) and scroll up / down in order to zoom in / out the graph.
+
+Query history can be navigated by holding `Ctrl` (or `Cmd` on MacOS) and pressing `up` or `down` arrows on the keyboard while the cursor is located in the query input field.
+
+Multi-line queries can be entered by pressing `Shift-Enter` in the query input field.
+
+When querying the [backfilled data](https://docs.victoriametrics.com/#backfilling), it may be useful to disable the response cache via the `Enable cache` checkbox.
+
+VMUI automatically adjusts the interval between datapoints on the graph depending on the horizontal resolution and on the selected time range. The step value can be customized by clicking the `Override step value` checkbox.
+
+VMUI allows investigating correlations between two queries on the same graph. Just click the `+Query` button, enter the second query in the newly appeared input field and press `Ctrl+Enter`. Results for both queries should be displayed simultaneously on the same graph. Every query has its own vertical scale, which is displayed on the left and the right side of the graph. Lines for the second query are dashed.
+
+See the [example VMUI at VictoriaMetrics playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/?g0.expr=100%20*%20sum(rate(process_cpu_seconds_total))%20by%20(job)&g0.range_input=1d).
+
+
 ## How to apply new config to VictoriaMetrics
 
 VictoriaMetrics is configured via command-line flags, so it must be restarted when new command-line flags should be applied:
@@ -316,7 +359,7 @@ and stream plain InfluxDB line protocol data to the configured TCP and/or UDP ad
 
 VictoriaMetrics performs the following transformations to the ingested InfluxDB data:
 
-* [`db` query arg](https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint) is mapped into `db` label value
+* [db query arg](https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint) is mapped into `db` label value
   unless `db` tag exists in the InfluxDB line. The `db` label name can be overridden via `-influxDBLabel` command-line flag.
 * Field names are mapped to time series names prefixed with `{measurement}{separator}` value, where `{separator}` equals to `_` by default. It can be changed with `-influxMeasurementFieldSeparator` command-line flag. See also `-influxSkipSingleField` command-line flag. If `{measurement}` is empty or if `-influxSkipMeasurement` command-line flag is set, then time series names correspond to field names; a short sketch of this rule follows the list.
 * Field values are mapped to time series values.
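A compact way to see the naming rule from the list above is a tiny helper function; this is an illustrative sketch, not the actual ingestion code:

```go
package main

import "fmt"

// metricName mirrors the rule above: field names are prefixed with
// "{measurement}{separator}" unless the measurement is empty or skipped.
func metricName(measurement, field, separator string, skipMeasurement bool) string {
	if measurement == "" || skipMeasurement {
		return field
	}
	return measurement + separator + field
}

func main() {
	// With the default separator "_", measurement "foo" and field "field1":
	fmt.Println(metricName("foo", "field1", "_", false)) // foo_field1
	fmt.Println(metricName("foo", "field1", "_", true))  // field1
}
```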
@@ -338,20 +381,28 @@ foo_field2{tag1="value1", tag2="value2"} 40
 Example for writing data with [InfluxDB line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/)
 to local VictoriaMetrics using `curl`:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
 ```
+
+</div>
+
 An arbitrary number of lines delimited by '\n' (aka newline char) can be sent in a single request.
 After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
 ```
+
+</div>
+
 The `/api/v1/export` endpoint should return the following response:
 
-```jsonl
+```json
 {"metric":{"__name__":"measurement_field1","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560272508147]}
 {"metric":{"__name__":"measurement_field2","tag1":"value1","tag2":"value2"},"values":[1.23],"timestamps":[1560272508147]}
 ```
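The same write can be issued from Go instead of curl; a minimal sketch, with the endpoint and line-protocol payload taken from the example above:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// One line of InfluxDB line protocol, as in the curl example above;
	// multiple lines may be sent in a single request, separated by '\n'.
	body := "measurement,tag1=value1,tag2=value2 field1=123,field2=1.23"
	resp, err := http.Post("http://localhost:8428/write", "text/plain", strings.NewReader(body))
	if err != nil {
		log.Fatalf("cannot write line protocol data: %s", err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // a 2xx status indicates the data was accepted
}
```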
@@ -431,20 +482,28 @@ Send data to the given address from OpenTSDB-compatible agents.
 
 Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `nc`:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
 ```
+
+</div>
+
 An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go.
 After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
 ```
+
+</div>
+
 The `/api/v1/export` endpoint should return the following response:
 
-```bash
+```json
 {"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277292000]}
 ```
 
@@ -461,25 +520,37 @@ Send data to the given address from OpenTSDB-compatible agents.
 
 Example for writing a single data point:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put
 ```
+
+</div>
+
 Example for writing multiple data points in a single request:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
 ```
+
+</div>
+
 After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
 ```
+
+</div>
+
 The `/api/v1/export` endpoint should return the following response:
 
-```bash
+```json
 {"metric":{"__name__":"foo"},"values":[45.34],"timestamps":[1566464846000]}
 {"metric":{"__name__":"bar"},"values":[43],"timestamps":[1566464846000]}
 {"metric":{"__name__":"x.y.z","t1":"v1","t2":"v2"},"values":[45.34],"timestamps":[1566464763000]}
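The single-point write can also be issued from Go; a minimal net/http sketch, with the endpoint and JSON payload copied from the curl example above:

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Same payload as the single data point curl example above.
	payload := []byte(`{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}`)
	resp, err := http.Post("http://localhost:4242/api/put", "application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatalf("cannot send data point: %s", err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // a 2xx status indicates the point was accepted
}
```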
@@ -519,7 +590,7 @@ VictoriaMetrics accepts `round_digits` query arg for `/api/v1/query` and `/api/v
 
 By default, VictoriaMetrics returns time series for the last 5 minutes from `/api/v1/series`, while the Prometheus API defaults to all time. Use `start` and `end` to select a different time range.
 
-Additionally VictoriaMetrics provides the following handlers:
+Additionally, VictoriaMetrics provides the following handlers:
 
 * `/vmui` - Basic Web UI. See [these docs](#vmui).
 * `/api/v1/series/count` - returns the total number of time series in the database. Some notes:
@@ -587,26 +658,6 @@ VictoriaMetrics supports the following handlers from [Graphite Tags API](https:/
 * [/tags/autoComplete/values](https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support)
 * [/tags/delSeries](https://graphite.readthedocs.io/en/stable/tags.html#removing-series-from-the-tagdb)
 
-## vmui
-
-VictoriaMetrics provides UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`.
-The UI allows exploring query results via graphs and tables. Graphs support scrolling and zooming:
-
-* Drag the graph to the left / right in order to move the displayed time range into the past / future.
-* Hold `Ctrl` (or `Cmd` on MacOS) and scroll up / down in order to zoom in / out the graph.
-
-Query history can be navigated by holding `Ctrl` (or `Cmd` on MacOS) and pressing `up` or `down` arrows on the keyboard while the cursor is located in the query input field.
-
-Multi-line queries can be entered by pressing `Shift-Enter` in query input field.
-
-When querying the [backfilled data](https://docs.victoriametrics.com/#backfilling), it may be useful disabling response cache by clicking `Enable cache` checkbox.
-
-VMUI automatically adjusts the interval between datapoints on the graph depending on the horizontal resolution and on the selected time range. The step value can be customized by clickhing `Override step value` checkbox.
-
-VMUI allows investigating correlations between two queries on the same graph. Just click `+Query` button, enter the second query in the newly appeared input field and press `Ctrl+Enter`. Results for both queries should be displayed simultaneously on the same graph. Every query has its own vertical scale, which is displayed on the left and the right side of the graph. Lines for the second query are dashed.
-
-See the [example VMUI at VictoriaMetrics playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/?g0.expr=100%20*%20sum(rate(process_cpu_seconds_total))%20by%20(job)&g0.range_input=1d).
-
 ## How to build from sources
 
 We recommend using either [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or
@@ -1314,6 +1365,69 @@ VictoriaMetrics returns TSDB stats at `/api/v1/status/tsdb` page in the way simi
 * `match[]=SELECTOR` where `SELECTOR` is an arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) for series to take into account during stats calculation. By default all the series are taken into account.
 * `extra_label=LABEL=VALUE`. See [these docs](#prometheus-querying-api-enhancements) for more details.
 
+## Query tracing
+
+VictoriaMetrics supports query tracing, which can be used for determining bottlenecks during query processing.
+
+Query tracing can be enabled for a specific query by passing the `trace=1` query arg.
+In this case VictoriaMetrics puts the query trace into the `trace` field of the output JSON.
+
+For example, the following command:
+
+```bash
+curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq -r '.trace'
+```
+
+would return the following trace:
+
+```json
+{
+  "duration_msec": 0.099,
+  "message": "/api/v1/query_range: start=1654034340000, end=1654037880000, step=60000, query=\"2*rand()\": series=1",
+  "children": [
+    {
+      "duration_msec": 0.034,
+      "message": "eval: query=2 * rand(), timeRange=[1654034340000..1654037880000], step=60000, mayCache=true: series=1, points=60, pointsPerSeries=60",
+      "children": [
+        {
+          "duration_msec": 0.032,
+          "message": "binary op \"*\": series=1",
+          "children": [
+            {
+              "duration_msec": 0.009,
+              "message": "eval: query=2, timeRange=[1654034340000..1654037880000], step=60000, mayCache=true: series=1, points=60, pointsPerSeries=60"
+            },
+            {
+              "duration_msec": 0.017,
+              "message": "eval: query=rand(), timeRange=[1654034340000..1654037880000], step=60000, mayCache=true: series=1, points=60, pointsPerSeries=60",
+              "children": [
+                {
+                  "duration_msec": 0.015,
+                  "message": "transform rand(): series=1"
+                }
+              ]
+            }
+          ]
+        }
+      ]
+    },
+    {
+      "duration_msec": 0.004,
+      "message": "sort series by metric name and labels"
+    },
+    {
+      "duration_msec": 0.044,
+      "message": "generate /api/v1/query_range response for series=1, points=60"
+    }
+  ]
+}
+```
+
+All the durations and timestamps in traces are in milliseconds.
+
+Query tracing is allowed by default. It can be denied by passing the `-denyQueryTracing` command-line flag to VictoriaMetrics.
+
+
 ## Cardinality limiter
 
 By default VictoriaMetrics doesn't limit the number of stored time series. The limit can be enforced by setting the following command-line flags:
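Since the trace is a simple recursive JSON structure, it can also be post-processed programmatically; below is a hedged sketch of decoding it in Go, with field names taken from the sample output above (this is not an official client type):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"strings"
)

// traceNode mirrors the trace JSON shown above: a duration in milliseconds,
// a message, and optional children.
type traceNode struct {
	DurationMsec float64     `json:"duration_msec"`
	Message      string      `json:"message"`
	Children     []traceNode `json:"children"`
}

// printTrace renders the trace as an indented tree, one span per line.
func printTrace(n traceNode, depth int) {
	fmt.Printf("%s%.3fms %s\n", strings.Repeat("  ", depth), n.DurationMsec, n.Message)
	for _, c := range n.Children {
		printTrace(c, depth+1)
	}
}

func main() {
	// A truncated version of the trace shown above.
	data := []byte(`{"duration_msec":0.099,"message":"/api/v1/query_range: ...","children":[
		{"duration_msec":0.034,"message":"eval: query=2 * rand(), ..."}]}`)
	var root traceNode
	if err := json.Unmarshal(data, &root); err != nil {
		log.Fatalf("cannot parse trace: %s", err)
	}
	printTrace(root, 0)
}
```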
@@ -1423,7 +1537,8 @@ The panel `Cache usage %` in `Troubleshooting` section shows the percentage of u
 from the allowed size by type. If the percentage is below 100%, then no further tuning is needed.
 
 Please note, default cache sizes were carefully adjusted according to the most
-practical scenarios and workloads. Change the defaults only if you understand the implications.
+practical scenarios and workloads. Change the defaults only if you understand the implications
+and vmstorage has enough free memory to accommodate new cache sizes.
 
 To override the default values see command-line flags with `-storage.cacheSize` prefix.
 See the full description of flags [here](#list-of-command-line-flags).
@@ -18,12 +18,18 @@ VictoriaMetrics is a fast, cost-effective and scalable monitoring solution and t
 
 VictoriaMetrics is available in [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases),
 [Docker images](https://hub.docker.com/r/victoriametrics/victoria-metrics/), [Snap packages](https://snapcraft.io/victoriametrics)
-and [source code](https://github.com/VictoriaMetrics/VictoriaMetrics). Just download VictoriaMetrics and follow [these instructions](#how-to-start-victoriametrics).
-Then read [Prometheus setup](#prometheus-setup) and [Grafana setup](#grafana-setup) docs.
+and [source code](https://github.com/VictoriaMetrics/VictoriaMetrics).
+Just download VictoriaMetrics and follow [these instructions](https://docs.victoriametrics.com/Quick-Start.html).
 
 Cluster version of VictoriaMetrics is available [here](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html).
 
-[Contact us](mailto:info@victoriametrics.com) if you need enterprise support for VictoriaMetrics. See [features available in enterprise package](https://victoriametrics.com/products/enterprise/). Enterprise binaries can be downloaded and evaluated for free from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
+Learn more about [key concepts](https://docs.victoriametrics.com/keyConcepts.html) of VictoriaMetrics and follow
+the [QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for a better experience.
+
+[Contact us](mailto:info@victoriametrics.com) if you need enterprise support for VictoriaMetrics.
+See [features available in enterprise package](https://victoriametrics.com/products/enterprise/).
+Enterprise binaries can be downloaded and evaluated for free
+from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
 
 ## Prominent features
 
@@ -57,7 +63,7 @@ VictoriaMetrics has the following prominent features:
   * [JSON line format](#how-to-import-data-in-json-line-format).
   * [Arbitrary CSV data](#how-to-import-csv-data).
   * [Native binary format](#how-to-import-data-in-native-format).
-* It supports metrics' relabeling. See [these docs](#relabeling) for details.
+* It supports metrics [relabeling](#relabeling).
 * It can deal with [high cardinality issues](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) and [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) via [series limiter](#cardinality-limiter).
 * It ideally works with big amounts of time series data from APM, Kubernetes, IoT sensors, connected cars, industrial telemetry, financial data and various [Enterprise workloads](https://victoriametrics.com/products/enterprise/).
 * It has open source [cluster version](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster).
@@ -96,9 +102,10 @@ See also [articles and slides about VictoriaMetrics from our users](https://docs
 
 ## Operation
 
-## How to start VictoriaMetrics
+### How to start VictoriaMetrics
 
 Just download [VictoriaMetrics executable](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or [Docker image](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and start it with the desired command-line flags.
+See also the [QuickStart guide](https://docs.victoriametrics.com/Quick-Start.html) for additional information.
 
 The following command-line flags are used the most:
 
@@ -147,18 +154,26 @@ After changes were made, trigger config re-read with the command `curl 127.0.0.1
 
 Add the following lines to Prometheus config file (it is usually located at `/etc/prometheus/prometheus.yml`) in order to send data to VictoriaMetrics:
 
+<div class="with-copy" markdown="1">
+
 ```yml
 remote_write:
   - url: http://<victoriametrics-addr>:8428/api/v1/write
 ```
+
+</div>
+
 Substitute `<victoriametrics-addr>` with hostname or IP address of VictoriaMetrics.
 Then apply new config via the following command:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 kill -HUP `pidof prometheus`
 ```
+
+</div>
+
 Prometheus writes incoming data to local storage and replicates it to remote storage in parallel.
 This means that data remains available in local storage for `--storage.tsdb.retention.time` duration
 even if remote storage is unavailable.
@@ -178,6 +193,8 @@ across Prometheus instances, so time series could be filtered and grouped by thi
 
 For highly loaded Prometheus instances (200k+ samples per second) the following tuning may be applied:
 
+<div class="with-copy" markdown="1">
+
 ```yaml
 remote_write:
   - url: http://<victoriametrics-addr>:8428/api/v1/write
@@ -187,13 +204,18 @@ remote_write:
     max_shards: 30
 ```
 
+</div>
+
 Using remote write increases memory usage for Prometheus by up to ~25%. If you are experiencing issues with
-too high memory consumption of Prometheus, then try to lower `max_samples_per_send` and `capacity` params. Keep in mind that these two params are tightly connected.
+too high memory consumption of Prometheus, then try to lower `max_samples_per_send` and `capacity` params.
+Keep in mind that these two params are tightly connected.
 Read more about tuning remote write for Prometheus [here](https://prometheus.io/docs/practices/remote_write).
 
-It is recommended upgrading Prometheus to [v2.12.0](https://github.com/prometheus/prometheus/releases) or newer, since previous versions may have issues with `remote_write`.
+It is recommended to upgrade Prometheus to [v2.12.0](https://github.com/prometheus/prometheus/releases) or newer,
+since previous versions may have issues with `remote_write`.
 
-Take a look also at [vmagent](https://docs.victoriametrics.com/vmagent.html) and [vmalert](https://docs.victoriametrics.com/vmalert.html),
+Take a look also at [vmagent](https://docs.victoriametrics.com/vmagent.html)
+and [vmalert](https://docs.victoriametrics.com/vmalert.html),
 which can be used as a faster and less resource-hungry alternative to Prometheus.
 
 ## Grafana setup
@@ -222,6 +244,27 @@ The following steps must be performed during the upgrade / downgrade procedure:
 
 Prometheus doesn't drop data during VictoriaMetrics restart. See [this article](https://grafana.com/blog/2019/03/25/whats-new-in-prometheus-2.8-wal-based-remote-write/) for details. The same applies also to [vmagent](https://docs.victoriametrics.com/vmagent.html).
 
+## vmui
+
+VictoriaMetrics provides UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`.
+The UI allows exploring query results via graphs and tables. Graphs support scrolling and zooming:
+
+* Drag the graph to the left / right in order to move the displayed time range into the past / future.
+* Hold `Ctrl` (or `Cmd` on MacOS) and scroll up / down in order to zoom in / out the graph.
+
+Query history can be navigated by holding `Ctrl` (or `Cmd` on MacOS) and pressing `up` or `down` arrows on the keyboard while the cursor is located in the query input field.
+
+Multi-line queries can be entered by pressing `Shift-Enter` in the query input field.
+
+When querying the [backfilled data](https://docs.victoriametrics.com/#backfilling), it may be useful to disable the response cache via the `Enable cache` checkbox.
+
+VMUI automatically adjusts the interval between datapoints on the graph depending on the horizontal resolution and on the selected time range. The step value can be customized by clicking the `Override step value` checkbox.
+
+VMUI allows investigating correlations between two queries on the same graph. Just click the `+Query` button, enter the second query in the newly appeared input field and press `Ctrl+Enter`. Results for both queries should be displayed simultaneously on the same graph. Every query has its own vertical scale, which is displayed on the left and the right side of the graph. Lines for the second query are dashed.
+
+See the [example VMUI at VictoriaMetrics playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/?g0.expr=100%20*%20sum(rate(process_cpu_seconds_total))%20by%20(job)&g0.range_input=1d).
+
+
 ## How to apply new config to VictoriaMetrics
 
 VictoriaMetrics is configured via command-line flags, so it must be restarted when new command-line flags should be applied:
@@ -320,7 +363,7 @@ and stream plain InfluxDB line protocol data to the configured TCP and/or UDP ad
 
 VictoriaMetrics performs the following transformations to the ingested InfluxDB data:
 
-* [`db` query arg](https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint) is mapped into `db` label value
+* [db query arg](https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint) is mapped into `db` label value
   unless `db` tag exists in the InfluxDB line. The `db` label name can be overridden via `-influxDBLabel` command-line flag.
 * Field names are mapped to time series names prefixed with `{measurement}{separator}` value, where `{separator}` equals to `_` by default. It can be changed with `-influxMeasurementFieldSeparator` command-line flag. See also `-influxSkipSingleField` command-line flag. If `{measurement}` is empty or if `-influxSkipMeasurement` command-line flag is set, then time series names correspond to field names.
 * Field values are mapped to time series values.
@@ -342,20 +385,28 @@ foo_field2{tag1="value1", tag2="value2"} 40
 Example for writing data with [InfluxDB line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/)
 to local VictoriaMetrics using `curl`:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
 ```
+
+</div>
+
 An arbitrary number of lines delimited by '\n' (aka newline char) can be sent in a single request.
 After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
 ```
+
+</div>
+
 The `/api/v1/export` endpoint should return the following response:
 
-```jsonl
+```json
 {"metric":{"__name__":"measurement_field1","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560272508147]}
 {"metric":{"__name__":"measurement_field2","tag1":"value1","tag2":"value2"},"values":[1.23],"timestamps":[1560272508147]}
 ```
@@ -435,20 +486,28 @@ Send data to the given address from OpenTSDB-compatible agents.
 
 Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `nc`:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
 ```
+
+</div>
+
 An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go.
 After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
 ```
+
+</div>
+
 The `/api/v1/export` endpoint should return the following response:
 
-```bash
+```json
 {"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277292000]}
 ```
 
@@ -465,25 +524,37 @@ Send data to the given address from OpenTSDB-compatible agents.
 
 Example for writing a single data point:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put
 ```
+
+</div>
+
 Example for writing multiple data points in a single request:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
 ```
+
+</div>
+
 After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
 
+<div class="with-copy" markdown="1">
+
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
 ```
+
+</div>
+
 The `/api/v1/export` endpoint should return the following response:
 
-```bash
+```json
 {"metric":{"__name__":"foo"},"values":[45.34],"timestamps":[1566464846000]}
 {"metric":{"__name__":"bar"},"values":[43],"timestamps":[1566464846000]}
 {"metric":{"__name__":"x.y.z","t1":"v1","t2":"v2"},"values":[45.34],"timestamps":[1566464763000]}
@@ -523,7 +594,7 @@ VictoriaMetrics accepts `round_digits` query arg for `/api/v1/query` and `/api/v
 
 By default, VictoriaMetrics returns time series for the last 5 minutes from `/api/v1/series`, while the Prometheus API defaults to all time. Use `start` and `end` to select a different time range.
 
-Additionally VictoriaMetrics provides the following handlers:
+Additionally, VictoriaMetrics provides the following handlers:
 
 * `/vmui` - Basic Web UI. See [these docs](#vmui).
 * `/api/v1/series/count` - returns the total number of time series in the database. Some notes:
@ -591,26 +662,6 @@ VictoriaMetrics supports the following handlers from [Graphite Tags API](https:/
|
|||
* [/tags/autoComplete/values](https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support)
* [/tags/delSeries](https://graphite.readthedocs.io/en/stable/tags.html#removing-series-from-the-tagdb)

## vmui

VictoriaMetrics provides UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`.
The UI allows exploring query results via graphs and tables. Graphs support scrolling and zooming:

* Drag the graph to the left / right in order to move the displayed time range into the past / future.
* Hold `Ctrl` (or `Cmd` on MacOS) and scroll up / down in order to zoom in / out the graph.

Query history can be navigated by holding `Ctrl` (or `Cmd` on MacOS) and pressing `up` or `down` arrows on the keyboard while the cursor is located in the query input field.

Multi-line queries can be entered by pressing `Shift-Enter` in the query input field.

When querying the [backfilled data](https://docs.victoriametrics.com/#backfilling), it may be useful to disable the response cache by clicking the `Enable cache` checkbox.

VMUI automatically adjusts the interval between datapoints on the graph depending on the horizontal resolution and on the selected time range. The step value can be customized by clicking the `Override step value` checkbox.

VMUI allows investigating correlations between two queries on the same graph. Just click the `+Query` button, enter the second query in the newly appeared input field and press `Ctrl+Enter`. Results for both queries should be displayed simultaneously on the same graph. Every query has its own vertical scale, which is displayed on the left and the right side of the graph. Lines for the second query are dashed.

See the [example VMUI at VictoriaMetrics playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/?g0.expr=100%20*%20sum(rate(process_cpu_seconds_total))%20by%20(job)&g0.range_input=1d).

## How to build from sources

We recommend using either [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or

@@ -1318,6 +1369,69 @@ VictoriaMetrics returns TSDB stats at `/api/v1/status/tsdb` page in the way simi
* `match[]=SELECTOR` where `SELECTOR` is an arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) for series to take into account during stats calculation. By default all the series are taken into account.
* `extra_label=LABEL=VALUE`. See [these docs](#prometheus-querying-api-enhancements) for more details.

## Query tracing

VictoriaMetrics supports query tracing, which can be used for determining bottlenecks during query processing.

Query tracing can be enabled for a specific query by passing the `trace=1` query arg.
In this case VictoriaMetrics puts the query trace into the `trace` field in the output JSON.

For example, the following command:

```bash
curl http://localhost:8428/api/v1/query_range -d 'query=2*rand()' -d 'start=-1h' -d 'step=1m' -d 'trace=1' | jq -r '.trace'
```

would return the following trace:

```json
{
  "duration_msec": 0.099,
  "message": "/api/v1/query_range: start=1654034340000, end=1654037880000, step=60000, query=\"2*rand()\": series=1",
  "children": [
    {
      "duration_msec": 0.034,
      "message": "eval: query=2 * rand(), timeRange=[1654034340000..1654037880000], step=60000, mayCache=true: series=1, points=60, pointsPerSeries=60",
      "children": [
        {
          "duration_msec": 0.032,
          "message": "binary op \"*\": series=1",
          "children": [
            {
              "duration_msec": 0.009,
              "message": "eval: query=2, timeRange=[1654034340000..1654037880000], step=60000, mayCache=true: series=1, points=60, pointsPerSeries=60"
            },
            {
              "duration_msec": 0.017,
              "message": "eval: query=rand(), timeRange=[1654034340000..1654037880000], step=60000, mayCache=true: series=1, points=60, pointsPerSeries=60",
              "children": [
                {
                  "duration_msec": 0.015,
                  "message": "transform rand(): series=1"
                }
              ]
            }
          ]
        }
      ]
    },
    {
      "duration_msec": 0.004,
      "message": "sort series by metric name and labels"
    },
    {
      "duration_msec": 0.044,
      "message": "generate /api/v1/query_range response for series=1, points=60"
    }
  ]
}
```

All the durations and timestamps in traces are in milliseconds.

Query tracing is allowed by default. It can be denied by passing the `-denyQueryTracing` command-line flag to VictoriaMetrics.

## Cardinality limiter

By default VictoriaMetrics doesn't limit the number of stored time series. The limit can be enforced by setting the following command-line flags:

@@ -1427,7 +1541,8 @@ The panel `Cache usage %` in `Troubleshooting` section shows the percentage of u
from the allowed size by type. If the percentage is below 100%, then no further tuning is needed.

Please note, default cache sizes were carefully adjusted according to the most
practical scenarios and workloads. Change the defaults only if you understand the implications
and vmstorage has enough free memory to accommodate new cache sizes.

To override the default values see command-line flags with `-storage.cacheSize` prefix.
See the full description of flags [here](#list-of-command-line-flags).
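For illustration only, an override might look like the following. The exact flag names and safe sizes depend on your VictoriaMetrics version, so verify them via `-help` before use:

```bash
# Hypothetical example: increase two of the -storage.cacheSize* caches.
# Check `victoria-metrics-prod -help | grep storage.cacheSize` for the exact flags.
/path/to/victoria-metrics-prod \
  -storage.cacheSizeStorageTSID=2GB \
  -storage.cacheSizeIndexDBTagFilters=1GB
```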
BIN docs/assets/images/Naive_cluster_scheme.png (new binary file, 125 KiB; not shown)

@@ -13,6 +13,7 @@ compare it to other processes, perform some calculations with it, or even define
user-defined thresholds.

The most common use-cases for metrics are:

- check how the system behaves in a particular time period;
- correlate behavior changes to other measurements;
- observe or forecast trends;

@@ -22,79 +23,87 @@ Collecting and analyzing metrics provides advantages that are difficult to overe
### Structure of a metric

Let's start with an example. To track how many requests our application serves, we'll define a metric with the
name `requests_total`.

You can be more specific here by saying `requests_success_total` (for only successful requests)
or `request_errors_total` (for requests which failed). Choosing a metric name is very important and is supposed to clarify
what is actually measured to every person who reads it, just like variable names in programming.

Every metric can contain additional meta information in the form of label-value pairs:

```
requests_total{path="/", code="200"}
requests_total{path="/", code="403"}
```

The meta-information (set of `labels` in curly braces) gives us a context for which `path` and with what `code`
the `request` was served. Label-value pairs are always of a `string` type. VictoriaMetrics data model is schemaless,
which means there is no need to define metric names or their labels in advance. The user is free to add or change ingested
metrics anytime.
Actually, the metric's name is also a label with a special name `__name__`. So the following two series are identical:

```
requests_total{path="/", code="200"}
{__name__="requests_total", path="/", code="200"}
```

#### Time series

A combination of a metric name and its labels defines a `time series`. For
example, `requests_total{path="/", code="200"}` and `requests_total{path="/", code="403"}`
are two different time series.

The number of time series has an impact on database resource usage. See
also [What is an active time series?](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series)
and [What is high churn rate?](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate).

#### Cardinality

The number of all unique label combinations for one metric defines its `cardinality`. For example, if `requests_total`
has 3 unique `path` values and 5 unique `code` values, then its cardinality will be `3*5=15` unique time series. If
you add one more unique `path` value, cardinality will bump to `20`. See more in
[What is cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality).
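For a quick estimate, the number of currently active series behind a metric name can be counted with a [MetricsQL](#metricsql) query along these lines (a sketch using the example metric above):

```MetricsQL
count(requests_total)
```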
#### Data points

Every time series consists of `data points` (also called `samples`). A `data point` is a value-timestamp pair associated
with the specific series:

```
requests_total{path="/", code="200"} <float64 value> <unixtimestamp>
```

In the VictoriaMetrics data model, a data point's value is always of type `float64`, and its timestamp is unix time with
millisecond precision. Each series can contain an infinite number of data points.

### Types of metrics

Internally, VictoriaMetrics does not have a notion of a metric type. All metrics are the same. The concept of a metric
type exists specifically to help users to understand how the metric was measured. There are 4 common metric types.
#### Counter

Counter metric type is a [monotonically increasing counter](https://en.wikipedia.org/wiki/Monotonic_function)
used for capturing a number of events. It represents a cumulative metric whose value never goes down and always shows
the current number of captured events. In other words, `counter` always shows the number of observed events since the
application has started. In programming, `counter` is a variable that you **increment** each time something happens.

{% include img.html href="keyConcepts_counter.png" %}

`vm_http_requests_total` is a typical example of a counter - a metric which only grows. The interpretation of the graph
above is that the time series
`vm_http_requests_total{instance="localhost:8428", job="victoriametrics", path="api/v1/query_range"}`
was rapidly changing from 1:38 pm to 1:39 pm, then there were no changes until 1:41 pm.

Counter is used for measuring a number of events, like a number of requests, errors, logs, messages, etc. The most
common [MetricsQL](#metricsql) functions used with counters are:

* [rate](https://docs.victoriametrics.com/MetricsQL.html#rate) - calculates the speed of metric's change. For
  example, `rate(requests_total)` will show how many requests are served per second;
* [increase](https://docs.victoriametrics.com/MetricsQL.html#increase) - calculates the growth of a metric on the given
  time period. For example, `increase(requests_total[1h])` will show how many requests were served over `1h` interval.
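For illustration, here is a minimal sketch of a counter instrumented with the [VictoriaMetrics/metrics](https://github.com/VictoriaMetrics/metrics) Go library. The metric name, label and HTTP handler are example choices, not prescribed ones:

```Go
package main

import (
	"net/http"

	"github.com/VictoriaMetrics/metrics"
)

// requestsTotal is a counter: it only ever goes up while the process runs.
var requestsTotal = metrics.NewCounter(`requests_total{path="/"}`)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.Inc() // increment on every served request
		w.Write([]byte("ok"))
	})
	// Expose all registered metrics in Prometheus exposition format.
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		metrics.WritePrometheus(w, true)
	})
	http.ListenAndServe(":8080", nil)
}
```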
#### Gauge

@@ -102,22 +111,28 @@ Gauge is used for measuring a value that can go up and down:
{% include img.html href="keyConcepts_gauge.png" %}

The metric `process_resident_memory_anon_bytes` on the graph shows the number of bytes of memory used by the application
during the runtime. It is changing frequently, going up and down showing how the process allocates and frees the memory.
In programming, `gauge` is a variable to which you **set** a specific value as it changes.

Gauge is used in the following scenarios:

* measuring temperature, memory usage, disk usage etc;
* storing the state of some process. For example, gauge `config_reloaded_successful` can be set to `1` if everything is
  good, and to `0` if configuration failed to reload;
* storing the timestamp when an event happened. For example, `config_last_reload_success_timestamp_seconds`
  can store the timestamp of the last successful configuration reload.

The most common [MetricsQL](#metricsql)
functions used with gauges are [aggregation and grouping functions](#aggregation-and-grouping-functions).
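As an illustration, a gauge can be defined with the same [VictoriaMetrics/metrics](https://github.com/VictoriaMetrics/metrics) library by registering a callback that returns the current value, so the reported number is always fresh. The metric name below is just an example:

```Go
package main

import (
	"os"
	"runtime"

	"github.com/VictoriaMetrics/metrics"
)

func main() {
	// The callback is evaluated on every collection, so the gauge always
	// reports the current value instead of a previously stored one.
	metrics.NewGauge(`go_heap_alloc_bytes`, func() float64 {
		var ms runtime.MemStats
		runtime.ReadMemStats(&ms)
		return float64(ms.HeapAlloc)
	})

	// Print all registered metrics in Prometheus exposition format.
	metrics.WritePrometheus(os.Stdout, false)
}
```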
#### Histogram

Histogram is a set of [counter](#counter) metrics with different labels for tracking the dispersion
and [quantiles](https://prometheus.io/docs/practices/histograms/#quantiles) of the observed value. For example, in
VictoriaMetrics we track how many rows are processed per query using the histogram with the
name `vm_per_query_rows_processed_count`. The exposition format for this histogram has the following form:

```
vm_per_query_rows_processed_count_bucket{vmrange="4.084e+02...4.642e+02"} 2
vm_per_query_rows_processed_count_bucket{vmrange="5.275e+02...5.995e+02"} 1
...
vm_per_query_rows_processed_count_count 11
```
In practice, histogram `vm_per_query_rows_processed_count` may be used in the following way:

```Go
// define the histogram
perQueryRowsProcessed := metrics.NewHistogram(`vm_per_query_rows_processed_count`)

// use the histogram during processing
for _, query := range queries {
    perQueryRowsProcessed.Update(float64(len(query.Rows)))
}
```

Now let's see what happens each time when `perQueryRowsProcessed.Update` is called:

* counter `vm_per_query_rows_processed_count_sum` increments by value of `len(query.Rows)` expression and accounts for
  total sum of all observed values;
* counter `vm_per_query_rows_processed_count_count` increments by 1 and accounts for total number of observations;
* counter `vm_per_query_rows_processed_count_bucket` gets incremented only if observed value is within the
  range (`bucket`) defined in `vmrange`.

Such a combination of `counter` metrics allows
plotting [Heatmaps in Grafana](https://grafana.com/docs/grafana/latest/visualizations/heatmap/)
and calculating [quantiles](https://prometheus.io/docs/practices/histograms/#quantiles):

{% include img.html href="keyConcepts_histogram.png" %}
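For instance, a 99th percentile over such buckets can be estimated with [histogram_quantile](https://docs.victoriametrics.com/MetricsQL.html#histogram_quantile). This is a sketch; the `5m` window is an arbitrary choice:

```MetricsQL
histogram_quantile(0.99, sum(increase(vm_per_query_rows_processed_count_bucket[5m])) by (vmrange))
```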
Histograms are usually used for measuring latency, sizes of elements (batch size, for example) etc. There are two
implementations of a histogram supported by VictoriaMetrics:

1. [Prometheus histogram](https://prometheus.io/docs/practices/histograms/). The canonical histogram implementation
   supported by most of
   the [client libraries for metrics instrumentation](https://prometheus.io/docs/instrumenting/clientlibs/). Prometheus
   histogram requires a user to define ranges (`buckets`) statically.
2. [VictoriaMetrics histogram](https://valyala.medium.com/improving-histogram-usability-for-prometheus-and-grafana-bc7e5df0e350)
   supported by [VictoriaMetrics/metrics](https://github.com/VictoriaMetrics/metrics) instrumentation library.
   VictoriaMetrics histogram automatically adjusts buckets, so users don't need to think about them.

Histograms aren't trivial to learn and use. We recommend reading the following articles before you start:

1. [Prometheus histogram](https://prometheus.io/docs/concepts/metric_types/#histogram)
2. [Histograms and summaries](https://prometheus.io/docs/practices/histograms/)
3. [How does a Prometheus Histogram work?](https://www.robustperception.io/how-does-a-prometheus-histogram-work)
4. [Improving histogram usability for Prometheus and Grafana](https://valyala.medium.com/improving-histogram-usability-for-prometheus-and-grafana-bc7e5df0e350)
#### Summary

Summary is quite similar to [histogram](#histogram) and is used for
[quantiles](https://prometheus.io/docs/practices/histograms/#quantiles) calculations. The main difference to histograms
is that calculations are made on the client-side, so the metrics exposition format already contains pre-calculated
quantiles:

```
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
...
```

The visualisation of summaries is pretty straightforward:

{% include img.html href="keyConcepts_summary.png" %}
Such an approach makes summaries easier to use but also puts significant limitations - summaries can't be aggregated.
The [histogram](#histogram) exposes the raw values via counters. It means a user can aggregate these counters for
different metrics (for example, for metrics with different `instance` label) and **then calculate quantiles**. For
summary, quantiles are already calculated, so
they [can't be aggregated](https://latencytipoftheday.blogspot.de/2014/06/latencytipoftheday-you-cant-average.html)
with other metrics.

Summaries are usually used for measuring latency, sizes of elements (batch size, for example) etc., but keep in mind
the limitation mentioned above.
#### Instrumenting application with metrics

As was said at the beginning of the section [Types of metrics](#types-of-metrics), metric type defines how it was
measured. VictoriaMetrics TSDB doesn't know about metric types, all it sees are labels, values, and timestamps. And what
are these metrics, what do they measure, and how - all this depends on the application which emits them.

To instrument your application with metrics compatible with VictoriaMetrics TSDB we recommend
using [VictoriaMetrics/metrics](https://github.com/VictoriaMetrics/metrics) instrumentation library. See more about how
to use it in the
[How to monitor Go applications with VictoriaMetrics](https://victoriametrics.medium.com/how-to-monitor-go-applications-with-victoriametrics-c04703110870)
article.

VictoriaMetrics is also compatible with
Prometheus [client libraries for metrics instrumentation](https://prometheus.io/docs/instrumenting/clientlibs/).
#### Naming

We recommend following the [naming convention introduced by Prometheus](https://prometheus.io/docs/practices/naming/). There
are no strict restrictions (except allowed chars), and any metric name would be accepted by VictoriaMetrics. But the
convention helps to keep names meaningful, descriptive and clear to other people. Following the convention is a good
practice.
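For example, names that follow the convention read like this (the names below are illustrative):

```
process_cpu_seconds_total     # <name>_<base unit>_total for counters
http_request_duration_seconds # durations in base units (seconds, not milliseconds)
node_memory_usage_bytes       # sizes in bytes
```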
#### Labels

Every metric can contain an arbitrary number of label names. The good practice is to keep this number limited.
Otherwise, it would be difficult to use or plot on the graphs. By default, VictoriaMetrics limits the number of labels
per series to `30` and drops all excessive labels. This limit can be changed via the `-maxLabelsPerTimeseries` flag.

Every label value can contain an arbitrary string value. The good practice is to use short and meaningful label values to
describe the attribute of the metric, not to tell the story about it. For example, the label-value pair
`environment=prod` is ok, but `log_message=long log message with a lot of details...` is not ok. By default,
VictoriaMetrics limits a label's value size to 16kB. This limit can be changed via the `-maxLabelValueLen` flag.

It is very important to control the max number of unique label values since it defines the number
of [time series](#time-series). Try to avoid using volatile values such as session ID or query ID in label values to
avoid excessive resource usage and database slowdown.
## Write data

There are two main models in monitoring for data collection: [push](#push-model) and [pull](#pull-model). Both are used
in modern monitoring and both are supported by VictoriaMetrics.

### Push model

@@ -226,29 +266,41 @@ Push model is a traditional model of the client sending data to the server:
{% include img.html href="keyConcepts_push_model.png" %}

The client (application) decides when and where to send/ingest its metrics. VictoriaMetrics supports the following protocols
for ingesting:

* [Prometheus remote write API](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#prometheus-setup).
* [Prometheus exposition format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-import-data-in-prometheus-exposition-format).
* [InfluxDB line protocol](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) over HTTP, TCP and UDP.
* [Graphite plaintext protocol](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-graphite-compatible-agents-such-as-statsd) with [tags](https://graphite.readthedocs.io/en/latest/tags.html#carbon).
* [OpenTSDB put message](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#sending-data-via-telnet-put-protocol).
* [HTTP OpenTSDB /api/put requests](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#sending-opentsdb-data-via-http-apiput-requests).
* [JSON line format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-import-data-in-json-line-format).
* [Arbitrary CSV data](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-import-csv-data).
* [Native binary format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-import-data-in-native-format).

All the protocols are fully compatible with VictoriaMetrics [data model](#data-model) and can be used in production.
There are no officially supported clients by VictoriaMetrics team for data ingestion. We recommend choosing from already
existing clients compatible with the protocols listed above
(like [Telegraf](https://github.com/influxdata/telegraf)
for [InfluxDB line protocol](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf)).
Creating custom clients or instrumenting the application for metrics writing is as easy as sending a POST request:

```bash
curl -d '{"metric":{"__name__":"foo","job":"node_exporter"},"values":[0,1,2],"timestamps":[1549891472010,1549891487724,1549891503438]}' -X POST 'http://localhost:8428/api/v1/import'
```

It is allowed to push/write metrics
to [Single-server-VictoriaMetrics](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html),
[cluster component vminsert](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#architecture-overview)
and [vmagent](https://docs.victoriametrics.com/vmagent.html).

@@ -264,40 +316,37 @@ elaborating more on why Percona switched from pull to push model.
The cons of push protocol:

* it requires applications to be more complex, since they need to be responsible for metrics delivery;
* applications need to be aware of monitoring systems;
* using a monitoring system it is hard to tell whether the application went down or just stopped sending metrics for a
  different reason;
* applications can overload the monitoring system by pushing too many metrics.

### Pull model

Pull model is an approach popularized by [Prometheus](https://prometheus.io/), where the monitoring system decides when
and where to pull metrics from:
{% include img.html href="keyConcepts_pull_model.png" %}

In pull model, the monitoring system needs to be aware of all the applications it needs to monitor. The metrics are
scraped (pulled) with fixed intervals via HTTP protocol.

For metrics scraping VictoriaMetrics
supports [Prometheus exposition format](https://docs.victoriametrics.com/#how-to-scrape-prometheus-exporters-such-as-node-exporter)
and needs to be configured with the `-promscrape.config` flag pointing to the file with scrape configuration. This
configuration may include a list of static `targets` (applications or services)
or `targets` discovered via various service discoveries.
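For example, a minimal scrape configuration file might look like this (the job name and target address are placeholders):

```yaml
# scrape.yml, passed to VictoriaMetrics via -promscrape.config=scrape.yml
scrape_configs:
  - job_name: "my-app"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8080"]
```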
Metrics scraping is supported
by [Single-server-VictoriaMetrics](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html)
and [vmagent](https://docs.victoriametrics.com/vmagent.html).

The pros of the pull model:

* monitoring system decides how and when to scrape data, so it can't be overloaded;
* applications aren't aware of the monitoring system and don't need to implement the logic for delivering metrics;
* the list of all monitored targets belongs to the monitoring system and can be quickly checked;
* easy to detect faulty or crashed services when they don't respond.

The cons of the pull model:

@@ -308,47 +357,51 @@ The cons of the pull model:
### Common approaches for data collection

VictoriaMetrics supports both [Push](#push-model) and [Pull](#pull-model)
models for data collection. Many installations use exclusively one of the models, or both at once.

The most common approach for data collection is using both models:

{% include img.html href="keyConcepts_data_collection.png" %}

In this approach an additional component is used - [vmagent](https://docs.victoriametrics.com/vmagent.html). Vmagent is
a lightweight agent whose main purpose is to collect and deliver metrics. It supports all the protocols
and approaches mentioned for both data collection models.

The basic setup for using VictoriaMetrics and vmagent for monitoring is described in the example
[docker-compose manifest](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker). In this
example,
vmagent [scrapes a list of targets](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/prometheus.yml)
and [forwards collected data to VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/9d7da130b5a873be334b38c8d8dec702c9e8fac5/deployment/docker/docker-compose.yml#L15).
VictoriaMetrics is then used as
a [datasource for Grafana](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/provisioning/datasources/datasource.yml)
installation for querying collected data.
VictoriaMetrics components allow building more advanced topologies. For example, vmagents pushing metrics from separate
datacenters to the central VictoriaMetrics:

{% include img.html href="keyConcepts_two_dcs.png" %}

VictoriaMetrics in this example may
be [Single-server-VictoriaMetrics](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html)
or [VictoriaMetrics Cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html). Vmagent also allows
fanning out the same data to multiple destinations.

## Query data

VictoriaMetrics provides
an [HTTP API](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#prometheus-querying-api-usage)
for serving read queries. The API is used in various integrations such as
[Grafana](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#grafana-setup). The same API is also used by
[VMUI](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#vmui) - a graphical User Interface for querying
and visualizing metrics.

The API consists of two main handlers: [instant](#instant-query) and [range queries](#range-query).
### Instant query

Instant query executes the query expression at the given moment of time:

```
GET | POST /api/v1/query

...
step - max lookback window if no datapoints found at the given time. If omitted, is set to 5m
```

To understand how instant queries work, let's begin with a data sample:

```
foo_bar 1.00 1652169600000 # 2022-05-10 10:00:00
foo_bar 2.00 1652169660000 # 2022-05-10 10:01:00
...
foo_bar 1.00 1652170500000 # 2022-05-10 10:15:00
foo_bar 4.00 1652170560000 # 2022-05-10 10:16:00
```

The data sample contains a list of samples for one time series with time intervals between samples from 1m to 3m. If we
plot this data sample on the system of coordinates, it will have the following form:

<p style="text-align: center">
<a href="keyConcepts_data_samples.png" target="_blank">
</a>
</p>
To get the value of `foo_bar` metric at some specific moment of time, for example `2022-05-10 10:03:00`, in
VictoriaMetrics we need to issue an **instant query**:

```bash
curl "http://<victoria-metrics-addr>/api/v1/query?query=foo_bar&time=2022-05-10T10:03:00.000Z"
```

```json
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "foo_bar"
        },
        "value": [
          1652169780,
          "3"
        ]
      }
    ]
  }
}
```
In response, VictoriaMetrics returns a single sample-timestamp pair with a value of `3` for the series

@@ -408,16 +480,17 @@ requested timestamp, VictoriaMetrics will try to locate the closest sample on th

The time range at which VictoriaMetrics will try to locate a missing data sample is equal to `5m`
by default and can be overridden via the `step` parameter.
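For example, the lookbehind window from the query above can be tightened to one minute by adding `step` (a sketch reusing the same query):

```bash
curl "http://<victoria-metrics-addr>/api/v1/query?query=foo_bar&time=2022-05-10T10:03:00.000Z&step=1m"
```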
Instant query can return multiple time series, but always only one data sample per series. Instant queries are used in
the following scenarios:

* Getting the last recorded value;
* For alerts and recording rules evaluation;
* Plotting Stat or Table panels in Grafana.
### Range query

Range query executes the query expression at the given time range with the given step:

```
GET | POST /api/v1/query_range

...
end - end (rfc3339 | unix_timestamp) of the time range. If omitted, the current time is used
step - step in seconds for evaluating query expression on the time range. If omitted, is set to 5m
```

To get the values of `foo_bar` on time range from `2022-05-10 09:59:00` to `2022-05-10 10:17:00`, in VictoriaMetrics we
need to issue a range query:

```bash
curl "http://<victoria-metrics-addr>/api/v1/query_range?query=foo_bar&step=1m&start=2022-05-10T09:59:00.000Z&end=2022-05-10T10:17:00.000Z"
```

```json
{
  "status": "success",
  "data": {
    "resultType": "matrix",
    "result": [
      {
        "metric": {
          "__name__": "foo_bar"
        },
        "values": [
          [1652169600, "1"],
          [1652169660, "2"],
          [1652169720, "3"],
          [1652169780, "3"],
          [1652169840, "7"],
          [1652169900, "7"],
          [1652169960, "7.5"],
          [1652170020, "7.5"],
          [1652170080, "6"],
          [1652170140, "6"],
          [1652170260, "5.5"],
          [1652170320, "5.25"],
          [1652170380, "5"],
          [1652170440, "3"],
          [1652170500, "1"],
          [1652170560, "4"],
          [1652170620, "4"]
        ]
      }
    ]
  }
}
```
In response, VictoriaMetrics returns `17` sample-timestamp pairs for the series `foo_bar` at the given time range
from `2022-05-10 09:59:00` to `2022-05-10 10:17:00`. But, if we take a look at the original data sample again, we'll
see that it contains only 13 data points. What happens here is that the range query is actually
an [instant query](#instant-query) executed `(end-start)/step` times on the time range from `start` to `end`. If we plot
this request in VictoriaMetrics the graph will be shown as the following:

<p style="text-align: center">
<a href="keyConcepts_range_query.png" target="_blank">
</a>
</p>
The blue dotted lines on the pic are the moments when instant query was executed. Since instant query retains the
ability to locate the missing point, the graph contains two types of points: `real` and `ephemeral` data
points. An `ephemeral` data point always repeats the closest
`real` data point to its left (see red arrow on the pic above).

This behavior of adding ephemeral data points comes from the specifics of the [Pull model](#pull-model):

* Metrics are scraped at fixed intervals;
* Scrape may be skipped if the monitoring system is overloaded;
* Scrape may fail due to network issues.

According to these specifics, the range query assumes that if there is a missing data point then it is likely a missed
scrape, so it fills it with the previous data point. The same will work for cases when `step` is lower than the actual
interval between samples. In fact, if we set `step=1s` for the same request, we'll get about 1 thousand data points in
response, where most of them are `ephemeral`.

Sometimes, the lookbehind window for locating the datapoint isn't big enough and the graph will contain a gap. For range
queries, the lookbehind window isn't equal to the `step` parameter. It is calculated as the median of the intervals between
the first 20 data points in the requested time range. In this way, VictoriaMetrics automatically adjusts the lookbehind
window to fill gaps and detect stale series at the same time.

Range queries are mostly used for plotting time series data over specified time ranges. These queries are extremely
useful in the following scenarios:

* Track the state of a metric on the time interval;
* Correlate changes between multiple metrics on the time interval;
* Observe trends and dynamics of the metric change.
### MetricsQL

VictoriaMetrics provides a special query language for executing read
queries: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html). MetricsQL is
a [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics)-like query language with a powerful set of
functions and features for working specifically with time series data. MetricsQL is backwards-compatible with PromQL,
so it shares most of the query concepts. For example, the basic concepts of PromQL
described [here](https://valyala.medium.com/promql-tutorial-for-beginners-9ab455142085)
are applicable to MetricsQL as well.
#### Filtering

In sections [instant query](#instant-query) and [range query](#range-query) we've already used MetricsQL to get data for
metric `foo_bar`. It is as simple as just writing a metric name in the query:

```MetricsQL
foo_bar
```

A single metric name may correspond to multiple time series with distinct label sets. For example:

```MetricsQL
requests_total{path="/", code="200"}
requests_total{path="/", code="403"}
```

To select only time series with a specific label value, specify the matching condition in curly braces:

```MetricsQL
requests_total{code="200"}
```

The query above will return all time series with the name `requests_total` and `code="200"`. We use the operator `=` to
match a label value. For a negative match use the `!=` operator. Filters also support regex matching `=~` for positive
and `!~` for negative matching:

```MetricsQL
requests_total{code=~"2.*"}
```

Filters can also be combined:

```MetricsQL
requests_total{code=~"200|204", path="/home"}
```

The query above will return all time series with the name `requests_total`, status `code` `200` or `204`, and `path="/home"`.
#### Filtering by name

Sometimes it is required to return all the time series for multiple metric names. As was mentioned in
the [data model section](#data-model), the metric name is just an ordinary label with a special name — `__name__`. So
filtering by multiple metric names may be performed by applying regexps on metric names:

```MetricsQL
{__name__=~"requests_(error|success)_total"}
```

The query above is supposed to return series for two metrics: `requests_error_total` and `requests_success_total`.
#### Arithmetic operations

MetricsQL supports all the basic arithmetic operations:

* addition (+)
* subtraction (-)
* multiplication (*)
* division (/)
* modulo (%)
* power (^)

This allows performing various calculations. For example, the following query will calculate the percentage of error
requests:

```MetricsQL
(requests_error_total / (requests_error_total + requests_success_total)) * 100
```
#### Combining multiple series

Combining multiple time series with arithmetic operations requires an understanding of matching rules. Otherwise, the
query may break or may lead to incorrect results. The basics of the matching rules are simple (see the sketch after this
list):

* MetricsQL engine strips metric names from all the time series on the left and right side of the arithmetic operation
  without touching labels.
* For each time series on the left side MetricsQL engine searches for the corresponding time series on the right side
  with the same set of labels, applies the operation for each data point and returns the resulting time series with the
  same set of labels. If there are no matches, then the time series is dropped from the result.
* The matching rules may be augmented with `ignoring`, `on`, `group_left` and `group_right` modifiers.
|
||||
|
||||
This could be complex, but in the majority of cases isn’t needed.
|
||||
|
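
As an illustration (the metric and label names are hypothetical): if `requests_error_total` has one series per `path`
and `instance` while `requests_total` has one series per `instance`, the per-path error ratio can be computed by
matching only on the `instance` label:

```MetricsQL
requests_error_total / on(instance) group_left requests_total
```

Here `on(instance)` restricts matching to the `instance` label, while `group_left` allows many series on the left side
to match a single series on the right side.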

#### Comparison operations

MetricsQL supports the following comparison operators:

* equal (==)
* not equal (!=)
* greater (>)
* greater-or-equal (>=)
* less (<)
* less-or-equal (<=)

These operators may be applied to arbitrary MetricsQL expressions as with arithmetic operators. The result of the
comparison operation is time series with only matching data points. For instance, the following query would return
series only for processes where memory usage is > 100MB:

```MetricsQL
process_resident_memory_bytes > 100*1024*1024
```

#### Aggregation and grouping functions

MetricsQL allows aggregating and grouping time series. Time series are grouped by the given set of labels and then the
given aggregation function is applied for each group. For instance, the following query would return memory used by
various processes grouped by instances (for the case when multiple processes run on the same instance):

```MetricsQL
sum(process_resident_memory_bytes) by (instance)
```

#### Calculating rates

One of the most widely used functions for [counters](#counter)
is [rate](https://docs.victoriametrics.com/MetricsQL.html#rate). It calculates the per-second rate for all the matching
time series. For example, the following query will show how many bytes are received by the network per second:

```MetricsQL
rate(node_network_receive_bytes_total)
```

To calculate the rate, the query engine needs at least two data points to compare. The simplified rate calculation for
each point looks like `(Vcurr-Vprev)/(Tcurr-Tprev)`, where `Vcurr` is the value at the current point `Tcurr` and `Vprev`
is the value at the point `Tprev=Tcurr-step`. The range `Tcurr-Tprev` is usually equal to the `step` parameter.
If the `step` value is lower than the real interval between data points, then it is ignored and the minimum real
interval is used.
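
To make the formula concrete (the numbers are invented for illustration): if a counter reads `Vprev=100` and thirty
seconds later `Vcurr=160`, then the per-second rate for that point is `(160-100)/30 = 2`.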

The interval on which `rate` needs to be calculated can be specified explicitly as `duration` in square brackets:

```MetricsQL
rate(node_network_receive_bytes_total[5m])
```

For this query the time duration to look back when calculating the per-second rate for each point on the graph will be
equal to `5m`.

`rate` strips the metric name while leaving all the labels for the inner time series. Do not apply `rate` to time series
which may go up and down, such as [gauges](#gauge). `rate` must be applied only to [counters](#counter), which always
go up. Even if a counter gets reset (for instance, on service restart), `rate` knows how to deal with it.

### Visualizing time series

VictoriaMetrics has a built-in graphical User Interface for querying and visualizing metrics,
[VMUI](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#vmui).
Open the `http://victoriametrics:8428/vmui` page, type the query and see the results:

{% include img.html href="keyConcepts_vmui.png" %}

VictoriaMetrics supports [Prometheus HTTP API](https://prometheus.io/docs/prometheus/latest/querying/api/)
which makes it possible
to [use it with Grafana](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#grafana-setup). Play more
with the Grafana integration in the VictoriaMetrics
sandbox: [https://play-grafana.victoriametrics.com](https://play-grafana.victoriametrics.com).

## Modify data

VictoriaMetrics stores time series data in [MergeTree](https://en.wikipedia.org/wiki/Log-structured_merge-tree)-like
data structures. While this approach is very efficient for write-heavy databases, it imposes some limitations on data
updates. In short, modifying already written [time series](#time-series) requires re-writing the whole data block where
they are stored. Due to this limitation, VictoriaMetrics does not support direct data modification.

### Deletion

See [How to delete time series](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-delete-time-series).
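
As a quick sketch (assuming a single-node instance listening on `victoriametrics:8428`), series matching a selector can
be deleted via the API described in those docs:

```
curl 'http://victoriametrics:8428/api/v1/admin/tsdb/delete_series?match[]=requests_total{code="200"}'
```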

### Relabeling

Relabeling is a powerful mechanism for modifying time series before they are written to the database. Relabeling
may be applied in both [push](#push-model) and [pull](#pull-model) models. See more
details [here](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#relabeling).
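
For illustration (the metric name pattern is hypothetical), a single relabeling rule that drops all metrics whose names
start with `go_` before they reach the storage could look like this:

```yaml
- action: drop
  source_labels: [__name__]
  regex: "go_.*"
```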

### Deduplication

VictoriaMetrics supports deduplication of data points after the data has been written to the storage. See more
details [here](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#deduplication).
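
As a minimal sketch (the interval value is arbitrary), deduplication which leaves at most a single data point per
30 seconds can be enabled at start-up via a command-line flag:

```
/path/to/victoria-metrics-prod -dedup.minScrapeInterval=30s
```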

@ -375,8 +375,12 @@ start a cluster of three `vmagent` instances, where each target is scraped by two
```

If each target is scraped by multiple `vmagent` instances, then data deduplication must be enabled at the remote storage pointed to by `-remoteWrite.url`.
The `-dedup.minScrapeInterval` must be set to the `scrape_interval` configured at `-promscrape.config`.
See [these docs](https://docs.victoriametrics.com/#deduplication) for details.

If multiple `vmagent` clusters scrape the same set of targets, then each cluster must have a unique value for the `-promscrape.cluster.name` command-line flag.
This is needed for proper data de-duplication. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2679) for details.
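
For illustration (binary paths and URLs are placeholders), two such clusters could be started as follows, differing only in the cluster name:

```
./vmagent-prod -promscrape.cluster.name=cluster-a -promscrape.config=prometheus.yml -remoteWrite.url=http://victoriametrics:8428/api/v1/write
./vmagent-prod -promscrape.cluster.name=cluster-b -promscrape.config=prometheus.yml -remoteWrite.url=http://victoriametrics:8428/api/v1/write
```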

## Scraping targets via a proxy

`vmagent` supports scraping targets via http, https and socks5 proxies. The proxy address must be specified in the `proxy_url` option. For example, the following scrape config instructs

@ -72,7 +72,7 @@ Then configure `vmalert` accordingly:
    -external.label=replica=a # Multiple external labels may be set
```

Note there's a separate `remoteWrite.url` to allow writing results of
alerting/recording rules into a different storage than the initial data that's
queried. This allows using `vmalert` to aggregate data from a short-term,
high-frequency, high-cardinality storage into a long-term storage with

@ -529,7 +529,7 @@ There are following non-required `replay` flags:
  (rules which depend on each other) rules. It is expected that the remote storage will be able to persist
  previously accepted data during the delay, so the data will be available for the subsequent queries.
  Keep it equal to or bigger than `-remoteWrite.flushInterval`.
* `-replay.disableProgressBar` - whether to disable the progress bar which shows the replay progress.
  The progress bar may generate a lot of log records, which are not formatted by the standard VictoriaMetrics logger.
  This could break log parsing by external systems and generate additional load on them.

@ -201,7 +201,8 @@ One important note for OpenTSDB migration: Queries/HBase scans can "get stuck" w

## Migrating data from InfluxDB (1.x)

`vmctl` supports the `influx` mode for [migrating data from InfluxDB to VictoriaMetrics](https://docs.victoriametrics.com/guides/migrate-from-influx.html)
time-series database.

See `./vmctl influx --help` for details and the full list of flags.
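
A minimal invocation sketch (addresses and database name are placeholders; verify the exact flag names via
`./vmctl influx --help`):

```
./vmctl influx --influx-addr=http://influx:8086 --influx-database=benchmark --vm-addr=http://victoriametrics:8428
```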

20 go.mod

@ -11,7 +11,7 @@ require (
	github.com/VictoriaMetrics/fasthttp v1.1.0
	github.com/VictoriaMetrics/metrics v1.18.1
	github.com/VictoriaMetrics/metricsql v0.43.0
	github.com/aws/aws-sdk-go v1.44.18
	github.com/aws/aws-sdk-go v1.44.24
	github.com/cespare/xxhash/v2 v2.1.2
	github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect

@ -23,31 +23,30 @@ require (
	github.com/go-kit/kit v0.12.0
	github.com/golang/snappy v0.0.4
	github.com/influxdata/influxdb v1.9.7
	github.com/klauspost/compress v1.15.4
	github.com/klauspost/compress v1.15.5
	github.com/mattn/go-colorable v0.1.12 // indirect
	github.com/mattn/go-runewidth v0.0.13 // indirect
	github.com/oklog/ulid v1.3.1
	github.com/prometheus/common v0.34.0 // indirect
	github.com/prometheus/prometheus v1.8.2-0.20201119142752-3ad25a6dc3d9
	github.com/urfave/cli/v2 v2.7.1
	github.com/urfave/cli/v2 v2.8.1
	github.com/valyala/fastjson v1.6.3
	github.com/valyala/fastrand v1.1.0
	github.com/valyala/fasttemplate v1.2.1
	github.com/valyala/gozstd v1.17.0
	github.com/valyala/quicktemplate v1.7.0
	golang.org/x/net v0.0.0-20220520000938-2e3eb7b945c2
	golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5
	golang.org/x/sys v0.0.0-20220519141025-dcacdad47464
	google.golang.org/api v0.80.0
	golang.org/x/net v0.0.0-20220526153639-5463443f8c37
	golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401
	golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a
	google.golang.org/api v0.81.0
	gopkg.in/yaml.v2 v2.4.0
)

require (
	cloud.google.com/go v0.101.1 // indirect
	cloud.google.com/go v0.102.0 // indirect
	cloud.google.com/go/compute v1.6.1 // indirect
	cloud.google.com/go/iam v0.3.0 // indirect
	github.com/VividCortex/ewma v1.2.0 // indirect
	github.com/antzucaro/matchr v0.0.0-20210222213004-b04723ef80f0 // indirect
	github.com/beorn7/perks v1.0.1 // indirect
	github.com/go-kit/log v0.2.1 // indirect
	github.com/go-logfmt/logfmt v0.5.1 // indirect

@ -68,6 +67,7 @@ require (
	github.com/russross/blackfriday/v2 v2.1.0 // indirect
	github.com/valyala/bytebufferpool v1.0.0 // indirect
	github.com/valyala/histogram v1.2.0 // indirect
	github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
	go.opencensus.io v0.23.0 // indirect
	go.uber.org/atomic v1.9.0 // indirect
	go.uber.org/goleak v1.1.11-0.20210813005559-691160354723 // indirect

@ -75,7 +75,7 @@ require (
	golang.org/x/text v0.3.7 // indirect
	golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df // indirect
	google.golang.org/appengine v1.6.7 // indirect
	google.golang.org/genproto v0.0.0-20220519153652-3a47de7e79bd // indirect
	google.golang.org/genproto v0.0.0-20220527130721-00d5c0f3be58 // indirect
	google.golang.org/grpc v1.46.2 // indirect
	google.golang.org/protobuf v1.28.0 // indirect
	gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect

44 go.sum

@ -29,8 +29,8 @@ cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW
cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc=
cloud.google.com/go v0.99.0/go.mod h1:w0Xx2nLzqWJPuozYQX+hFfCSI8WioryfRDzkoI/Y2ZA=
cloud.google.com/go v0.100.2/go.mod h1:4Xra9TjzAeYHrl5+oeLlzbM2k3mjVhZh4UqTZ//w99A=
cloud.google.com/go v0.101.1 h1:3+/0TAm9JD/PyhkrDWQWi2L197h3euCsM+H+J4iYTR8=
cloud.google.com/go v0.101.1/go.mod h1:55HwjsGW4CHD3JrNuMdZtSDsgTs0CuCB/bBTugD+7AA=
cloud.google.com/go v0.102.0 h1:DAq3r8y4mDgyB/ZPJ9v/5VJNqjgJAxTn6ZYLlUywOu8=
cloud.google.com/go v0.102.0/go.mod h1:oWcCzKlqJ5zgHQt9YsaeTY9KzIvjyy0ArmiBUgpQ+nc=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=

@ -57,7 +57,6 @@ cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0Zeo
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
cloud.google.com/go/storage v1.22.0/go.mod h1:GbaLEoMqbVm6sx3Z0R++gSiBlgMv6yUi2q1DeGFKQgE=
cloud.google.com/go/storage v1.22.1 h1:F6IlQJZrZM++apn9V5/VfS3gbTUYg98PS3EMQAzqtfg=
cloud.google.com/go/storage v1.22.1/go.mod h1:S8N1cAStu7BOeFfE8KAQzmyyLkK8p/vmRq6kuBTW58Y=
collectd.org v0.3.0/go.mod h1:A/8DzQBkF6abtvrT2j/AU/4tiBgJWYyh0y/oB/4MlWE=

@ -125,8 +124,6 @@ github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo
github.com/andybalholm/brotli v1.0.2/go.mod h1:loMXtMfwqflxFJPmdbJO0a3KNoPuLBgiu3qAvBg8x/Y=
github.com/andybalholm/brotli v1.0.3/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/antzucaro/matchr v0.0.0-20210222213004-b04723ef80f0 h1:R/qAiUxFT3mNgQaNqJe0IVznjKRNm23ohAIh9lgtlzc=
github.com/antzucaro/matchr v0.0.0-20210222213004-b04723ef80f0/go.mod h1:v3ZDlfVAL1OrkKHbGSFFK60k0/7hruHPDq2XMs9Gu6U=
github.com/apache/arrow/go/arrow v0.0.0-20191024131854-af6fa24be0db/go.mod h1:VTxUBvSJ3s3eHAg65PNgrsn5BtqCRPdmyXh6rAfdxN0=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=

@ -145,8 +142,8 @@ github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQ
github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.44.18 h1:rPDxVLNZL9R76yifC0kYOnfnkMswLfy89c8LBJSyvgY=
github.com/aws/aws-sdk-go v1.44.18/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/aws/aws-sdk-go v1.44.24 h1:3nOkwJBJLiGBmJKWp3z0utyXuBkxyGkRRwWjrTItJaY=
github.com/aws/aws-sdk-go v1.44.24/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=

@ -438,9 +435,8 @@ github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPg
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.2.1 h1:d8MncMlErDFTwQGBK1xhv026j9kqhvw1Qv9IbWT1VLQ=
github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
github.com/google/martian/v3 v3.3.2 h1:IqNFLAmvJOgVlpdEBiQbDc2EwKW77amAycfTuWKdfvw=
github.com/google/martian/v3 v3.3.2/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=

@ -570,8 +566,8 @@ github.com/klauspost/compress v1.4.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0
github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.13.4/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
github.com/klauspost/compress v1.13.5/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.15.4 h1:1kn4/7MepF/CHmYub99/nNX8az0IJjfSOU/jbnTVfqQ=
github.com/klauspost/compress v1.15.4/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU=
github.com/klauspost/compress v1.15.5 h1:qyCLMz2JCrKADihKOh9FxnW3houKeNsp2h5OEz0QSEA=
github.com/klauspost/compress v1.15.5/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU=
github.com/klauspost/cpuid v0.0.0-20170728055534-ae7887de9fa5/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/crc32 v0.0.0-20161016154125-cb6bfca970f6/go.mod h1:+ZoRqAPRLkC4NPOvfYeR5KNOrY6TD+/sAC3HXPZgDYg=
github.com/klauspost/pgzip v1.0.2-0.20170402124221-0bf5dcad4ada/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=

@ -819,8 +815,8 @@ github.com/uber/jaeger-client-go v2.25.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMW
github.com/uber/jaeger-lib v2.4.0+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli/v2 v2.7.1 h1:DsAOFeI9T0vmUW4LiGR5mhuCIn5kqGIE4WMU2ytmH00=
github.com/urfave/cli/v2 v2.7.1/go.mod h1:TYFbtzt/azQoJOrGH5mDfZtS0jIkl/OeFwlRWPR9KRM=
github.com/urfave/cli/v2 v2.8.1 h1:CGuYNZF9IKZY/rfBe3lJpccSoIY1ytfvmgQT90cNOl4=
github.com/urfave/cli/v2 v2.8.1/go.mod h1:Z41J9TPoffeoqP0Iza0YbAhGvymRdZAd2uPmZ5JxRdY=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.30.0/go.mod h1:2rsYD01CKFrjjsvFxx75KlEUNpWNBY9JWD3K/7o2Cus=

@ -844,6 +840,8 @@ github.com/xdg/stringprep v0.0.0-20180714160509-73f8eece6fdc/go.mod h1:Jhud4/sHM
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/treeprint v0.0.0-20180616005107-d6fb6747feb6/go.mod h1:ce1O1j6UtZfjr22oyGxGLbauSBp2YVXpARAosm7dHBg=
github.com/xlab/treeprint v1.0.0/go.mod h1:IoImgRak9i3zJyuxOKUP1v4UZd1tMoKkq/Cimt1uhCg=
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 h1:bAn7/zixMGCfxrRTfdpNzjtPYqr8smhKouy9mxVdGPU=
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673/go.mod h1:N3UwUGtsrSj3ccvlPHLoLsHnpR27oXr4ZE984MbSER8=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=

@ -994,8 +992,9 @@ golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su
golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220412020605-290c469a71a5/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220520000938-2e3eb7b945c2 h1:NWy5+hlRbC7HK+PmcXVUmW1IMyFce7to56IUvhUFm7Y=
golang.org/x/net v0.0.0-20220520000938-2e3eb7b945c2/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220526153639-5463443f8c37 h1:lUkvobShwKsOesNfWWlCS5q7fnbG1MEliIzwu886fn8=
golang.org/x/net v0.0.0-20220526153639-5463443f8c37/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=

@ -1014,8 +1013,9 @@ golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5 h1:OSnWWcOd/CtWQC2cYSBgbTSJv3ciqd8r54ySIW2y3RE=
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401 h1:zwrSfklXn0gxyLRX/aR+q6cgHbV/ItVyzbPlbA+dkAw=
golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=

@ -1122,8 +1122,8 @@ golang.org/x/sys v0.0.0-20220405052023-b1e9470b6e64/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220502124256-b6088ccd6cba/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220519141025-dcacdad47464 h1:MpIuURY70f0iKp/oooEFtB2oENcHITo/z1b6u41pKCw=
golang.org/x/sys v0.0.0-20220519141025-dcacdad47464/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a h1:dGzPydgVsqGcTRVwiLJ1jVbufYwmzD3LfVPLKsKg+0k=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=

@ -1265,10 +1265,10 @@ google.golang.org/api v0.70.0/go.mod h1:Bs4ZM2HGifEvXwd50TtW70ovgJffJYw2oRCOFU/S
google.golang.org/api v0.71.0/go.mod h1:4PyU6e6JogV1f9eA4voyrTY2batOLdgZ5qZ5HOCc4j8=
google.golang.org/api v0.74.0/go.mod h1:ZpfMZOVRMywNyvJFeqL9HRWBgAuRfSjJFpe9QtRRyDs=
google.golang.org/api v0.75.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA=
google.golang.org/api v0.77.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA=
google.golang.org/api v0.78.0/go.mod h1:1Sg78yoMLOhlQTeF+ARBoytAcH1NNyyl390YMy6rKmw=
google.golang.org/api v0.80.0 h1:IQWaGVCYnsm4MO3hh+WtSXMzMzuyFx/fuR8qkN3A0Qo=
google.golang.org/api v0.80.0/go.mod h1:xY3nI94gbvBrE0J6NHXhxOmW97HG7Khjkku6AFB3Hyg=
google.golang.org/api v0.81.0 h1:o8WF5AvfidafWbFjsRyupxyEQJNUWxLZJCK5NXrxZZ8=
google.golang.org/api v0.81.0/go.mod h1:FA6Mb/bZxj706H2j+j2d6mHEEaHBmbbWnkfvmorOCko=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=

@ -1349,17 +1349,17 @@ google.golang.org/genproto v0.0.0-20220222213610-43724f9ea8cf/go.mod h1:kGP+zUP2
google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220310185008-1973136f34c6/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220324131243-acbaeb5b85eb/go.mod h1:hAL49I2IFola2sVEjAn7MEwsja0xp51I0tlGAf9hz4E=
google.golang.org/genproto v0.0.0-20220405205423-9d709892a2bf/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220407144326-9054f6ed7bac/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220413183235-5e96e2839df9/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220414192740-2d67ff6cf2b4/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220421151946-72621c1f0bd3/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220429170224-98d788798c3e/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220502173005-c8bf987b8c21/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220505152158-f39f71e6c8f3/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220518221133-4f43b3371335/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220519153652-3a47de7e79bd h1:e0TwkXOdbnH/1x5rc5MZ/VYyiZ4v+RdVfrGMqEwT68I=
google.golang.org/genproto v0.0.0-20220519153652-3a47de7e79bd/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220523171625-347a074981d8/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220527130721-00d5c0f3be58 h1:a221mAAEAzq4Lz6ZWRkcS8ptb2mxoxYSt4N68aRyQHM=
google.golang.org/genproto v0.0.0-20220527130721-00d5c0f3be58/go.mod h1:yKyY4AMRwFiC8yMMNaMi+RkCnjZJt9LoWuvhXjMs+To=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=

@ -91,6 +91,7 @@ type Table struct {
	needFlushCallbackCall uint32

	prepareBlock PrepareBlockCallback
	isReadOnly   *uint32

	partsLock sync.Mutex
	parts     []*partWrapper

@ -254,7 +255,7 @@ func (pw *partWrapper) decRef() {
// to persistent storage.
//
// The table is created if it doesn't exist yet.
func OpenTable(path string, flushCallback func(), prepareBlock PrepareBlockCallback) (*Table, error) {
func OpenTable(path string, flushCallback func(), prepareBlock PrepareBlockCallback, isReadOnly *uint32) (*Table, error) {
	path = filepath.Clean(path)
	logger.Infof("opening table %q...", path)
	startTime := time.Now()

@ -280,6 +281,7 @@ func OpenTable(path string, flushCallback func(), prepareBlock PrepareBlockCallb
		path:          path,
		flushCallback: flushCallback,
		prepareBlock:  prepareBlock,
		isReadOnly:    isReadOnly,
		parts:         pws,
		mergeIdx:      uint64(time.Now().UnixNano()),
		flockF:        flockF,

@ -799,7 +801,17 @@ func (tb *Table) startPartMergers() {
	}
}

func (tb *Table) canBackgroundMerge() bool {
	return atomic.LoadUint32(tb.isReadOnly) == 0
}

func (tb *Table) mergeExistingParts(isFinal bool) error {
	if !tb.canBackgroundMerge() {
		// Do not perform background merge in read-only mode
		// in order to prevent from disk space shortage.
		// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2603
		return nil
	}
	n := fs.MustGetFreeSpace(tb.path)
	// Divide free space by the max number of concurrent merges.
	maxOutBytes := n / uint64(mergeWorkersCount)
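
For orientation, a minimal sketch (not part of this diff; the variable and function names are illustrative) of how a
caller can flip the shared flag that `OpenTable` now receives, which pauses background merges via
`canBackgroundMerge`:

```go
package main

import "sync/atomic"

// isReadOnly is shared with the table via OpenTable(..., &isReadOnly).
var isReadOnly uint32

// setReadOnly pauses (1) or resumes (0) background merges for the table.
func setReadOnly(on bool) {
	var v uint32
	if on {
		v = 1
	}
	atomic.StoreUint32(&isReadOnly, v)
}
```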

@ -40,7 +40,8 @@ func TestTableSearchSerial(t *testing.T) {

	func() {
		// Re-open the table and verify the search works.
		tb, err := OpenTable(path, nil, nil)
		var isReadOnly uint32
		tb, err := OpenTable(path, nil, nil, &isReadOnly)
		if err != nil {
			t.Fatalf("cannot open table: %s", err)
		}

@ -75,7 +76,8 @@ func TestTableSearchConcurrent(t *testing.T) {

	// Re-open the table and verify the search works.
	func() {
		tb, err := OpenTable(path, nil, nil)
		var isReadOnly uint32
		tb, err := OpenTable(path, nil, nil, &isReadOnly)
		if err != nil {
			t.Fatalf("cannot open table: %s", err)
		}

@ -151,7 +153,8 @@ func newTestTable(path string, itemsCount int) (*Table, []string, error) {
	flushCallback := func() {
		atomic.AddUint64(&flushes, 1)
	}
	tb, err := OpenTable(path, flushCallback, nil)
	var isReadOnly uint32
	tb, err := OpenTable(path, flushCallback, nil, &isReadOnly)
	if err != nil {
		return nil, nil, fmt.Errorf("cannot open table: %w", err)
	}

@ -32,7 +32,8 @@ func benchmarkTableSearch(b *testing.B, itemsCount int) {

	// Force finishing pending merges
	tb.MustClose()
	tb, err = OpenTable(path, nil, nil)
	var isReadOnly uint32
	tb, err = OpenTable(path, nil, nil, &isReadOnly)
	if err != nil {
		b.Fatalf("unexpected error when re-opening table %q: %s", path, err)
	}

@ -21,7 +21,8 @@ func TestTableOpenClose(t *testing.T) {
	}()

	// Create a new table
	tb, err := OpenTable(path, nil, nil)
	var isReadOnly uint32
	tb, err := OpenTable(path, nil, nil, &isReadOnly)
	if err != nil {
		t.Fatalf("cannot create new table: %s", err)
	}

@ -31,7 +32,7 @@ func TestTableOpenClose(t *testing.T) {

	// Re-open created table multiple times.
	for i := 0; i < 10; i++ {
		tb, err := OpenTable(path, nil, nil)
		tb, err := OpenTable(path, nil, nil, &isReadOnly)
		if err != nil {
			t.Fatalf("cannot open created table: %s", err)
		}

@ -45,14 +46,15 @@ func TestTableOpenMultipleTimes(t *testing.T) {
		_ = os.RemoveAll(path)
	}()

	tb1, err := OpenTable(path, nil, nil)
	var isReadOnly uint32
	tb1, err := OpenTable(path, nil, nil, &isReadOnly)
	if err != nil {
		t.Fatalf("cannot open table: %s", err)
	}
	defer tb1.MustClose()

	for i := 0; i < 10; i++ {
		tb2, err := OpenTable(path, nil, nil)
		tb2, err := OpenTable(path, nil, nil, &isReadOnly)
		if err == nil {
			tb2.MustClose()
			t.Fatalf("expecting non-nil error when opening already opened table")

@ -73,7 +75,8 @@ func TestTableAddItemSerial(t *testing.T) {
	flushCallback := func() {
		atomic.AddUint64(&flushes, 1)
	}
	tb, err := OpenTable(path, flushCallback, nil)
	var isReadOnly uint32
	tb, err := OpenTable(path, flushCallback, nil, &isReadOnly)
	if err != nil {
		t.Fatalf("cannot open %q: %s", path, err)
	}

@ -99,7 +102,7 @@ func TestTableAddItemSerial(t *testing.T) {
	testReopenTable(t, path, itemsCount)

	// Add more items in order to verify merge between inmemory parts and file-based parts.
	tb, err = OpenTable(path, nil, nil)
	tb, err = OpenTable(path, nil, nil, &isReadOnly)
	if err != nil {
		t.Fatalf("cannot open %q: %s", path, err)
	}

@ -132,7 +135,8 @@ func TestTableCreateSnapshotAt(t *testing.T) {
		_ = os.RemoveAll(path)
	}()

	tb, err := OpenTable(path, nil, nil)
	var isReadOnly uint32
	tb, err := OpenTable(path, nil, nil, &isReadOnly)
	if err != nil {
		t.Fatalf("cannot open %q: %s", path, err)
	}

@ -163,13 +167,13 @@ func TestTableCreateSnapshotAt(t *testing.T) {
	}()

	// Verify snapshots contain all the data.
	tb1, err := OpenTable(snapshot1, nil, nil)
	tb1, err := OpenTable(snapshot1, nil, nil, &isReadOnly)
	if err != nil {
		t.Fatalf("cannot open %q: %s", path, err)
	}
	defer tb1.MustClose()

	tb2, err := OpenTable(snapshot2, nil, nil)
	tb2, err := OpenTable(snapshot2, nil, nil, &isReadOnly)
	if err != nil {
		t.Fatalf("cannot open %q: %s", path, err)
	}

@ -222,7 +226,8 @@ func TestTableAddItemsConcurrent(t *testing.T) {
		atomic.AddUint64(&itemsMerged, uint64(len(items)))
		return data, items
	}
	tb, err := OpenTable(path, flushCallback, prepareBlock)
	var isReadOnly uint32
	tb, err := OpenTable(path, flushCallback, prepareBlock, &isReadOnly)
	if err != nil {
		t.Fatalf("cannot open %q: %s", path, err)
	}

@ -252,7 +257,7 @@ func TestTableAddItemsConcurrent(t *testing.T) {
	testReopenTable(t, path, itemsCount)

	// Add more items in order to verify merge between inmemory parts and file-based parts.
	tb, err = OpenTable(path, nil, nil)
	tb, err = OpenTable(path, nil, nil, &isReadOnly)
	if err != nil {
		t.Fatalf("cannot open %q: %s", path, err)
	}

@ -294,7 +299,8 @@ func testReopenTable(t *testing.T, path string, itemsCount int) {
	t.Helper()

	for i := 0; i < 10; i++ {
		tb, err := OpenTable(path, nil, nil)
		var isReadOnly uint32
		tb, err := OpenTable(path, nil, nil, &isReadOnly)
		if err != nil {
			t.Fatalf("cannot re-open %q: %s", path, err)
		}

@ -15,6 +15,7 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	xxhash "github.com/cespare/xxhash/v2"
	"golang.org/x/oauth2"
	"golang.org/x/oauth2/clientcredentials"
)

@ -68,8 +69,11 @@ func (s *Secret) String() string {
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#tls_config
type TLSConfig struct {
	CA                 []byte `yaml:"ca,omitempty"`
	CAFile             string `yaml:"ca_file,omitempty"`
	Cert               []byte `yaml:"cert,omitempty"`
	CertFile           string `yaml:"cert_file,omitempty"`
	Key                []byte `yaml:"key,omitempty"`
	KeyFile            string `yaml:"key_file,omitempty"`
	ServerName         string `yaml:"server_name,omitempty"`
	InsecureSkipVerify bool   `yaml:"insecure_skip_verify,omitempty"`

@ -81,8 +85,11 @@ func (tlsConfig *TLSConfig) String() string {
	if tlsConfig == nil {
		return ""
	}
	return fmt.Sprintf("ca_file=%q, cert_file=%q, key_file=%q, server_name=%q, insecure_skip_verify=%v, min_version=%q",
		tlsConfig.CAFile, tlsConfig.CertFile, tlsConfig.KeyFile, tlsConfig.ServerName, tlsConfig.InsecureSkipVerify, tlsConfig.MinVersion)
	caHash := xxhash.Sum64(tlsConfig.CA)
	certHash := xxhash.Sum64(tlsConfig.Cert)
	keyHash := xxhash.Sum64(tlsConfig.Key)
	return fmt.Sprintf("hash(ca)=%d, ca_file=%q, hash(cert)=%d, cert_file=%q, hash(key)=%d, key_file=%q, server_name=%q, insecure_skip_verify=%v, min_version=%q",
		caHash, tlsConfig.CAFile, certHash, tlsConfig.CertFile, keyHash, tlsConfig.KeyFile, tlsConfig.ServerName, tlsConfig.InsecureSkipVerify, tlsConfig.MinVersion)
}

// Authorization represents generic authorization config.

@ -270,6 +277,9 @@ func (ac *Config) GetAuthHeader() string {
}

// String returns human-readable representation for ac.
//
// It is also used for comparing Config objects for equality. If two Config
// objects have the same string representation, then they are considered equal.
func (ac *Config) String() string {
	return fmt.Sprintf("AuthDigest=%s, TLSRootCA=%s, TLSCertificate=%s, TLSServerName=%s, TLSInsecureSkipVerify=%v, TLSMinVersion=%d",
		ac.authDigest, ac.tlsRootCAString(), ac.tlsCertDigest, ac.TLSServerName, ac.TLSInsecureSkipVerify, ac.TLSMinVersion)

@ -456,7 +466,17 @@ func NewConfig(baseDir string, az *Authorization, basicAuth *BasicAuthConfig, be
	if tlsConfig != nil {
		tlsServerName = tlsConfig.ServerName
		tlsInsecureSkipVerify = tlsConfig.InsecureSkipVerify
		if tlsConfig.CertFile != "" || tlsConfig.KeyFile != "" {
		if len(tlsConfig.Key) != 0 || len(tlsConfig.Cert) != 0 {
			cert, err := tls.X509KeyPair(tlsConfig.Cert, tlsConfig.Key)
			if err != nil {
				return nil, fmt.Errorf("cannot load TLS certificate from the provided `cert` and `key` values: %w", err)
			}
			getTLSCert = func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
				return &cert, nil
			}
			h := xxhash.Sum64(tlsConfig.Key) ^ xxhash.Sum64(tlsConfig.Cert)
			tlsCertDigest = fmt.Sprintf("digest(key+cert)=%d", h)
		} else if tlsConfig.CertFile != "" || tlsConfig.KeyFile != "" {
			getTLSCert = func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
				// Re-read TLS certificate from disk. This is needed for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1420
				certPath := fs.GetFilepath(baseDir, tlsConfig.CertFile)

@ -473,7 +493,12 @@ func NewConfig(baseDir string, az *Authorization, basicAuth *BasicAuthConfig, be
			}
			tlsCertDigest = fmt.Sprintf("certFile=%q, keyFile=%q", tlsConfig.CertFile, tlsConfig.KeyFile)
		}
		if tlsConfig.CAFile != "" {
		if len(tlsConfig.CA) != 0 {
			tlsRootCA = x509.NewCertPool()
			if !tlsRootCA.AppendCertsFromPEM(tlsConfig.CA) {
				return nil, fmt.Errorf("cannot parse data from `ca` value")
			}
		} else if tlsConfig.CAFile != "" {
			path := fs.GetFilepath(baseDir, tlsConfig.CAFile)
			data, err := fs.ReadFileOrHTTP(path)
			if err != nil {
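
Given the new inline fields above, a hypothetical `tls_config` fragment (PEM payload elided) can embed the CA directly
instead of pointing at `ca_file`:

```yaml
tls_config:
  ca: |
    -----BEGIN CERTIFICATE-----
    ...PEM data...
    -----END CERTIFICATE-----
  server_name: example.com
  insecure_skip_verify: false
```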

@ -260,6 +260,13 @@ func parseRelabelConfig(rc *RelabelConfig) (*parsedRelabelConfig, error) {
		}
		sourceLabels = []string{"__name__"}
		action = "drop"
	case "uppercase", "lowercase":
		if len(sourceLabels) == 0 {
			return nil, fmt.Errorf("missing `source_labels` for `action=%s`", action)
		}
		if targetLabel == "" {
			return nil, fmt.Errorf("missing `target_label` for `action=%s`", action)
		}
	case "labelmap":
	case "labelmap_all":
	case "labeldrop":

@ -53,8 +53,8 @@ func TestLoadRelabelConfigsSuccess(t *testing.T) {
	if err != nil {
		t.Fatalf("cannot load relabel configs from %q: %s", path, err)
	}
	if n := pcs.Len(); n != 12 {
		t.Fatalf("unexpected number of relabel configs loaded from %q; got %d; want %d", path, n, 12)
	if n := pcs.Len(); n != 14 {
		t.Fatalf("unexpected number of relabel configs loaded from %q; got %d; want %d", path, n, 14)
	}
}

@ -184,7 +184,7 @@ func (prc *parsedRelabelConfig) apply(labels []prompbmarshal.Label, labelsOffset
		relabelBufPool.Put(bb)
		return setLabelValue(labels, labelsOffset, nameStr, valueStr)
	case "replace_all":
		// Replace all the occurences of `regex` at `source_labels` joined with `separator` with the `replacement`
		// Replace all the occurrences of `regex` at `source_labels` joined with `separator` with the `replacement`
		// and store the result at `target_label`
		bb := relabelBufPool.Get()
		bb.B = concatLabelValues(bb.B[:0], src, prc.SourceLabels, prc.Separator)

@ -300,6 +300,22 @@ func (prc *parsedRelabelConfig) apply(labels []prompbmarshal.Label, labelsOffset
			}
		}
		return dst
	case "uppercase":
		bb := relabelBufPool.Get()
		bb.B = concatLabelValues(bb.B[:0], src, prc.SourceLabels, prc.Separator)
		valueStr := string(bb.B)
		relabelBufPool.Put(bb)
		valueStr = strings.ToUpper(valueStr)
		labels = setLabelValue(labels, labelsOffset, prc.TargetLabel, valueStr)
		return labels
	case "lowercase":
		bb := relabelBufPool.Get()
		bb.B = concatLabelValues(bb.B[:0], src, prc.SourceLabels, prc.Separator)
		valueStr := string(bb.B)
		relabelBufPool.Put(bb)
		valueStr = strings.ToLower(valueStr)
		labels = setLabelValue(labels, labelsOffset, prc.TargetLabel, valueStr)
		return labels
	default:
		logger.Panicf("BUG: unknown `action`: %q", prc.Action)
		return labels

@ -1580,6 +1580,63 @@ func TestApplyRelabelConfigs(t *testing.T) {
			},
		})
	})

	t.Run("upper-lower-case", func(t *testing.T) {
		f(`
- action: uppercase
  source_labels: ["foo"]
  target_label: foo
`, []prompbmarshal.Label{
			{
				Name:  "foo",
				Value: "bar",
			},
		}, true, []prompbmarshal.Label{
			{
				Name:  "foo",
				Value: "BAR",
			},
		})
		f(`
- action: lowercase
  source_labels: ["foo", "bar"]
  target_label: baz
- action: labeldrop
  regex: foo|bar
`, []prompbmarshal.Label{
			{
				Name:  "foo",
				Value: "BaR",
			},
			{
				Name:  "bar",
				Value: "fOO",
			},
		}, true, []prompbmarshal.Label{
			{
				Name:  "baz",
				Value: "bar;foo",
			},
		})
	})
	f(`
- action: lowercase
  source_labels: ["foo"]
  target_label: baz
- action: uppercase
  source_labels: ["bar"]
  target_label: baz
`, []prompbmarshal.Label{
		{
			Name:  "qux",
			Value: "quux",
		},
	}, true, []prompbmarshal.Label{
		{
			Name:  "qux",
			Value: "quux",
		},
	})
}

func TestFinalizeLabels(t *testing.T) {

@ -32,3 +32,10 @@
  regex: [foo bar baz]
- action: drop_metrics
  regex: "foo|bar|baz"
- source_labels: [foo, bar]
  separator: "-"
  target_label: __tmp_uppercase
  action: uppercase
- source_labels: [__tmp_uppercase]
  target_label: lower_aaa
  action: lowercase

@ -56,6 +56,9 @@ var (
		"Can be specified as pod name of Kubernetes StatefulSet - pod-name-Num, where Num is a numeric part of pod name")
	clusterReplicationFactor = flag.Int("promscrape.cluster.replicationFactor", 1, "The number of members in the cluster, which scrape the same targets. "+
		"If the replication factor is greater than 1, then the deduplication must be enabled at remote storage side. See https://docs.victoriametrics.com/#deduplication")
	clusterName = flag.String("promscrape.cluster.name", "", "Optional name of the cluster. If multiple vmagent clusters scrape the same targets, "+
		"then each cluster must have unique name in order to properly de-duplicate samples received from these clusters. "+
		"See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2679")
)

var clusterMemberID int

@ -68,11 +71,11 @@ func mustInitClusterMemberID() {
	if idx := strings.LastIndexByte(s, '-'); idx >= 0 {
		s = s[idx+1:]
	}
	n, err := strconv.ParseInt(s, 10, 64)
	n, err := strconv.Atoi(s)
	if err != nil {
		logger.Fatalf("cannot parse -promscrape.cluster.memberNum=%q: %s", *clusterMemberNum, err)
	}
	clusterMemberID = int(n)
	clusterMemberID = n
}
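
To make the suffix parsing concrete, a self-contained sketch (the pod name is hypothetical):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// memberID extracts the numeric suffix of a StatefulSet pod name,
// e.g. "vmagent-12" -> 12, mirroring mustInitClusterMemberID above.
func memberID(s string) (int, error) {
	if idx := strings.LastIndexByte(s, '-'); idx >= 0 {
		s = s[idx+1:]
	}
	return strconv.Atoi(s)
}

func main() {
	fmt.Println(memberID("vmagent-12")) // 12 <nil>
}
```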

// Config represents essential parts from Prometheus config defined at https://prometheus.io/docs/prometheus/latest/configuration/configuration/

@ -9,11 +9,6 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
)

// apiConfig contains config for API server
type apiConfig struct {
	aw *apiWatcher
}

func newAPIConfig(sdc *SDConfig, baseDir string, swcFunc ScrapeWorkConstructorFunc) (*apiConfig, error) {
	role := sdc.role()
	switch role {

@ -26,6 +21,21 @@ func newAPIConfig(sdc *SDConfig, baseDir string, swcFunc ScrapeWorkConstructorFu
		return nil, fmt.Errorf("cannot parse auth config: %w", err)
	}
	apiServer := sdc.APIServer

	if len(sdc.KubeConfig) > 0 {
		kc, err := buildConfig(sdc)
		if err != nil {
			return nil, fmt.Errorf("cannot build kube config: %w", err)
		}
		ac, err = promauth.NewConfig(".", nil, kc.basicAuth, kc.token, kc.tokenFile, nil, kc.tlsConfig)
		if err != nil {
			return nil, fmt.Errorf("cannot initialize service account auth: %w; probably, `kubernetes_sd_config->api_server` is missing in Prometheus configs?", err)
		}
		apiServer = kc.server
		sdc.ProxyURL = kc.proxyURL
	}

	if len(apiServer) == 0 {
		// Assume we run at k8s pod.
		// Discover apiServer and auth config according to k8s docs.
261
lib/promscrape/discovery/kubernetes/kubeconfig.go
Normal file
261
lib/promscrape/discovery/kubernetes/kubeconfig.go
Normal file
|
@ -0,0 +1,261 @@
|
|||
package kubernetes
|
||||
|
||||
import (
|
||||
"encoding/base64"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"gopkg.in/yaml.v2"
|
||||
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/proxy"
|
||||
)
|
||||
|
||||
// apiConfig contains config for API server
|
||||
type apiConfig struct {
|
||||
aw *apiWatcher
|
||||
}
|
||||
|
||||
// Config represent configuration file for kubernetes API server connection
|
||||
// https://github.com/kubernetes/client-go/blob/master/tools/clientcmd/api/v1/types.go#L28
|
||||
type Config struct {
|
||||
Kind string `yaml:"kind,omitempty"`
|
||||
APIVersion string `yaml:"apiVersion,omitempty"`
|
||||
Clusters []struct {
|
||||
Name string `yaml:"name"`
|
||||
Cluster *Cluster `yaml:"cluster"`
|
||||
} `yaml:"clusters"`
|
||||
AuthInfos []struct {
|
||||
Name string `yaml:"name"`
|
||||
AuthInfo *AuthInfo `yaml:"user"`
|
||||
} `yaml:"users"`
|
||||
Contexts []struct {
|
||||
Name string `yaml:"name"`
|
||||
Context *Context `yaml:"context"`
|
||||
} `yaml:"contexts"`
|
||||
CurrentContext string `yaml:"current-context"`
|
||||
}
|
||||
|
||||
// Cluster contains information about how to communicate with a kubernetes cluster
|
||||
type Cluster struct {
|
||||
Server string `yaml:"server"`
|
||||
TLSServerName string `yaml:"tls-server-name,omitempty"`
|
||||
InsecureSkipTLSVerify bool `yaml:"insecure-skip-tls-verify,omitempty"`
|
||||
CertificateAuthority string `yaml:"certificate-authority,omitempty"`
|
||||
CertificateAuthorityData string `yaml:"certificate-authority-data,omitempty"`
|
||||
ProxyURL *proxy.URL `yaml:"proxy-url,omitempty"`
|
||||
}
|
||||
|
||||
// AuthInfo contains information that describes identity information. This is use to tell the kubernetes cluster who you are.
|
||||
type AuthInfo struct {
|
||||
ClientCertificate string `yaml:"client-certificate,omitempty"`
|
||||
ClientCertificateData string `yaml:"client-certificate-data,omitempty"`
|
||||
ClientKey string `yaml:"client-key,omitempty"`
|
||||
ClientKeyData string `yaml:"client-key-data,omitempty"`
|
||||
// TODO add support for it
|
||||
Exec *ExecConfig `yaml:"exec,omitempty"`
|
||||
Token string `yaml:"token,omitempty"`
|
||||
TokenFile string `yaml:"tokenFile,omitempty"`
|
||||
Impersonate string `yaml:"act-as,omitempty"`
|
||||
ImpersonateUID string `yaml:"act-as-uid,omitempty"`
|
||||
ImpersonateGroups []string `yaml:"act-as-groups,omitempty"`
|
||||
ImpersonateUserExtra []string `yaml:"act-as-user-extra,omitempty"`
|
||||
Username string `yaml:"username,omitempty"`
|
||||
Password string `yaml:"password,omitempty"`
|
||||
}
|
||||
|
||||
func (au *AuthInfo) validate() error {
|
||||
errContext := "field: %s is not supported currently, open an issue with feature request for it"
|
||||
if au.Exec != nil {
|
||||
return fmt.Errorf(errContext, "exec")
|
||||
}
|
||||
if len(au.ImpersonateUID) > 0 {
|
||||
return fmt.Errorf(errContext, "act-as-uid")
|
||||
}
|
||||
if len(au.Impersonate) > 0 {
|
||||
return fmt.Errorf(errContext, "act-as")
|
||||
}
|
||||
if len(au.ImpersonateGroups) > 0 {
|
||||
return fmt.Errorf(errContext, "act-as-groups")
|
||||
}
|
||||
if len(au.ImpersonateUserExtra) > 0 {
|
||||
return fmt.Errorf(errContext, "act-as-user-extra")
|
||||
}
|
||||
if len(au.Password) > 0 && len(au.Username) == 0 {
|
||||
return fmt.Errorf("username cannot be empty, if password defined")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// ExecConfig contains information about os.command, that returns auth token for kubernetes cluster connection
|
||||
type ExecConfig struct {
|
||||
// Command to execute.
|
||||
Command string `json:"command"`
|
||||
// Arguments to pass to the command when executing it.
|
||||
Args []string `json:"args"`
|
||||
// Env defines additional environment variables to expose to the process. These
|
||||
// are unioned with the host's environment, as well as variables client-go uses
|
||||
// to pass argument to the plugin.
|
||||
Env []ExecEnvVar `json:"env"`
|
||||
|
||||
// Preferred input version of the ExecInfo. The returned ExecCredentials MUST use
|
||||
// the same encoding version as the input.
|
||||
APIVersion string `json:"apiVersion,omitempty"`
|
||||
|
||||
// This text is shown to the user when the executable doesn't seem to be
|
||||
// present. For example, `brew install foo-cli` might be a good InstallHint for
|
||||
// foo-cli on Mac OS systems.
|
||||
InstallHint string `json:"installHint,omitempty"`
|
||||
|
||||
// ProvideClusterInfo determines whether or not to provide cluster information,
|
||||
// which could potentially contain very large CA data, to this exec plugin as a
|
||||
// part of the KUBERNETES_EXEC_INFO environment variable. By default, it is set
|
||||
// to false. Package k8s.io/client-go/tools/auth/exec provides helper methods for
|
||||
// reading this environment variable.
|
||||
ProvideClusterInfo bool `json:"provideClusterInfo"`
|
||||
|
||||
// InteractiveMode determines this plugin's relationship with standard input. Valid
|
||||
// values are "Never" (this exec plugin never uses standard input), "IfAvailable" (this
|
||||
// exec plugin wants to use standard input if it is available), or "Always" (this exec
|
||||
// plugin requires standard input to function). See ExecInteractiveMode values for more
|
||||
// details.
|
||||
//
|
||||
// If APIVersion is client.authentication.k8s.io/v1alpha1 or
|
||||
// client.authentication.k8s.io/v1beta1, then this field is optional and defaults
|
||||
// to "IfAvailable" when unset. Otherwise, this field is required.
|
||||
//+optional
|
||||
InteractiveMode string `json:"interactiveMode,omitempty"`
|
||||
}
|
||||
|
||||
// ExecEnvVar is used for setting environment variables when executing an exec-based
|
||||
// credential plugin.
|
||||
type ExecEnvVar struct {
|
||||
Name string `json:"name"`
|
||||
Value string `json:"value"`
|
||||
}
|
||||
|
||||
// Context is a tuple of references to a cluster and AuthInfo
|
||||
type Context struct {
|
||||
Cluster string `yaml:"cluster"`
|
||||
AuthInfo string `yaml:"user"`
|
||||
}
|
||||
|
||||
type kubeConfig struct {
|
||||
basicAuth *promauth.BasicAuthConfig
|
||||
server string
|
||||
token string
|
||||
tokenFile string
|
||||
tlsConfig *promauth.TLSConfig
|
||||
proxyURL *proxy.URL
|
||||
}
|
||||
|
||||
func buildConfig(sdc *SDConfig) (*kubeConfig, error) {
	data, err := fs.ReadFileOrHTTP(sdc.KubeConfig)
	if err != nil {
		return nil, fmt.Errorf("cannot read kubeConfig from %q: %w", sdc.KubeConfig, err)
	}
	var config Config
	if err = yaml.Unmarshal(data, &config); err != nil {
		return nil, fmt.Errorf("cannot parse %q: %w", sdc.KubeConfig, err)
	}

	authInfos := make(map[string]*AuthInfo)
	for _, obj := range config.AuthInfos {
		authInfos[obj.Name] = obj.AuthInfo
	}
	clusterInfos := make(map[string]*Cluster)
	for _, obj := range config.Clusters {
		clusterInfos[obj.Name] = obj.Cluster
	}
	contexts := make(map[string]*Context)
	for _, obj := range config.Contexts {
		contexts[obj.Name] = obj.Context
	}

	contextName := config.CurrentContext
	configContext := contexts[contextName]
	if configContext == nil {
		return nil, fmt.Errorf("context %q does not exist", contextName)
	}

	clusterInfoName := configContext.Cluster
	configClusterInfo := clusterInfos[clusterInfoName]
	if configClusterInfo == nil {
		return nil, fmt.Errorf("cluster %q does not exist", clusterInfoName)
	}

	if len(configClusterInfo.Server) == 0 {
		return nil, fmt.Errorf("kubernetes server address cannot be empty, define it for context: %s", contextName)
	}

	authInfoName := configContext.AuthInfo
	configAuthInfo := authInfos[authInfoName]
	if authInfoName != "" && configAuthInfo == nil {
		return nil, fmt.Errorf("auth info %q does not exist", authInfoName)
	}

	var tlsConfig *promauth.TLSConfig
	var basicAuth *promauth.BasicAuthConfig
	var token, tokenFile string
	isHTTPS := strings.HasPrefix(configClusterInfo.Server, "https://")

	if isHTTPS {
		tlsConfig = &promauth.TLSConfig{
			CAFile:             configClusterInfo.CertificateAuthority,
			ServerName:         configClusterInfo.TLSServerName,
			InsecureSkipVerify: configClusterInfo.InsecureSkipTLSVerify,
		}
	}

	if len(configClusterInfo.CertificateAuthorityData) > 0 && isHTTPS {
		tlsConfig.CA, err = base64.StdEncoding.DecodeString(configClusterInfo.CertificateAuthorityData)
		if err != nil {
			return nil, fmt.Errorf("cannot base64-decode configClusterInfo.CertificateAuthorityData %q: %w", configClusterInfo.CertificateAuthorityData, err)
		}
	}

	if configAuthInfo != nil {
		if err := configAuthInfo.validate(); err != nil {
			return nil, fmt.Errorf("invalid user auth configuration for context: %s, err: %w", contextName, err)
		}
		if isHTTPS {
			tlsConfig.CertFile = configAuthInfo.ClientCertificate
			tlsConfig.KeyFile = configAuthInfo.ClientKey

			if len(configAuthInfo.ClientCertificateData) > 0 {
				tlsConfig.Cert, err = base64.StdEncoding.DecodeString(configAuthInfo.ClientCertificateData)
				if err != nil {
					return nil, fmt.Errorf("cannot base64-decode configAuthInfo.ClientCertificateData %q: %w", configAuthInfo.ClientCertificateData, err)
				}
			}
			if len(configAuthInfo.ClientKeyData) > 0 {
				tlsConfig.Key, err = base64.StdEncoding.DecodeString(configAuthInfo.ClientKeyData)
				if err != nil {
					return nil, fmt.Errorf("cannot base64-decode configAuthInfo.ClientKeyData %q: %w", configAuthInfo.ClientKeyData, err)
				}
			}
		}

		if len(configAuthInfo.Username) > 0 || len(configAuthInfo.Password) > 0 {
			basicAuth = &promauth.BasicAuthConfig{
				Username: configAuthInfo.Username,
				Password: promauth.NewSecret(configAuthInfo.Password),
			}
		}
		token = configAuthInfo.Token
		tokenFile = configAuthInfo.TokenFile
	}

	kc := kubeConfig{
		basicAuth: basicAuth,
		server:    configClusterInfo.Server,
		token:     token,
		tokenFile: tokenFile,
		tlsConfig: tlsConfig,
		proxyURL:  configClusterInfo.ProxyURL,
	}

	return &kc, nil
}
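
To make the contract above concrete, here is a minimal sketch of how a caller might consume the parsed kubeconfig; the helper name is hypothetical, while `SDConfig`, `buildConfig` and the `kubeConfig` fields are the ones defined above:

```go
// newConnSettings is a hypothetical wrapper illustrating buildConfig's
// contract: the kubeconfig file alone determines the API server address,
// TLS settings and credentials.
func newConnSettings(sdc *SDConfig) (string, *promauth.TLSConfig, error) {
	kc, err := buildConfig(sdc)
	if err != nil {
		return "", nil, err
	}
	// kc.tlsConfig is nil for plain-HTTP servers; credentials, if any,
	// arrive via kc.token, kc.tokenFile or kc.basicAuth.
	return kc.server, kc.tlsConfig, nil
}
```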

84 lib/promscrape/discovery/kubernetes/kubeconfig_test.go Normal file
@ -0,0 +1,84 @@
package kubernetes

import (
	"reflect"
	"testing"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
)

func TestParseKubeConfigSuccess(t *testing.T) {
	type testCase struct {
		name           string
		sdc            *SDConfig
		expectedConfig *kubeConfig
	}
	var cases = []testCase{
		{
			name: "token",
			sdc: &SDConfig{
				KubeConfig: "testdata/good_kubeconfig/with_token.yaml",
			},
			expectedConfig: &kubeConfig{
				server: "http://some-server:8080",
				token:  "abc",
			},
		},
		{
			name: "cert",
			sdc: &SDConfig{
				KubeConfig: "testdata/good_kubeconfig/with_tls.yaml",
			},
			expectedConfig: &kubeConfig{
				server: "https://localhost:6443",
				tlsConfig: &promauth.TLSConfig{
					CA:   []byte("authority"),
					Cert: []byte("certificate"),
					Key:  []byte("key"),
				},
			},
		},
		{
			name: "basic",
			sdc: &SDConfig{
				KubeConfig: "testdata/good_kubeconfig/with_basic.yaml",
			},
			expectedConfig: &kubeConfig{
				server: "http://some-server:8080",
				basicAuth: &promauth.BasicAuthConfig{
					Password: promauth.NewSecret("secret"),
					Username: "user1",
				},
			},
		},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			ac, err := buildConfig(tc.sdc)
			if err != nil {
				t.Fatalf("unexpected error: %v", err)
			}
			if !reflect.DeepEqual(ac, tc.expectedConfig) {
				t.Fatalf("unexpected result, got: %v, want: %v", ac, tc.expectedConfig)
			}
		})
	}
}

func TestParseKubeConfigFail(t *testing.T) {
	f := func(name, kubeConfigPath string) {
		t.Helper()
		t.Run(name, func(t *testing.T) {
			sdc := &SDConfig{
				KubeConfig: kubeConfigPath,
			}
			if _, err := buildConfig(sdc); err == nil {
				t.Fatalf("unexpected result for config file: %s, must return error", kubeConfigPath)
			}
		})
	}
	f("unsupported options", "testdata/bad_kubeconfig/unsupported_fields")
	f("missing server address", "testdata/bad_kubeconfig/missing_server.yaml")
}

@ -22,6 +22,8 @@ type SDConfig struct {

	// Use role() function for accessing the Role field
	Role string `yaml:"role"`
	// If KubeConfig is defined, any cluster connection information from HTTPClientConfig is ignored.
	KubeConfig string `yaml:"kubeconfig_file"`

	HTTPClientConfig promauth.HTTPClientConfig `yaml:",inline"`
	ProxyURL         *proxy.URL                `yaml:"proxy_url,omitempty"`
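
For illustration, a scrape config relying on the new `kubeconfig_file` option might look like this (the job name and file path are hypothetical):

```yaml
scrape_configs:
- job_name: kubernetes-pods        # hypothetical job name
  kubernetes_sd_configs:
  - role: pod
    # Cluster connection settings (server, TLS, auth) are taken from this
    # file; inline HTTP client settings are ignored, per the comment above.
    kubeconfig_file: /etc/vmagent/kubeconfig.yaml
```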

11 lib/promscrape/discovery/kubernetes/testdata/bad_kubeconfig/missing_server.yaml vendored Normal file
@ -0,0 +1,11 @@
apiVersion: v1
clusters:
- cluster:
  name: k8s
contexts:
- context:
    cluster: k8s
  name: user1@k8s
current-context: user1@k8s
kind: Config
preferences: {}

30 lib/promscrape/discovery/kubernetes/testdata/bad_kubeconfig/unsupported_fields.yaml vendored Normal file
@ -0,0 +1,30 @@
apiVersion: v1
clusters:
- cluster:
    server: "http://some-server:8080"
  name: k8s
contexts:
- context:
    cluster: k8s
    user: user1
  name: user1@k8s
current-context: user1@k8s
kind: Config
preferences: {}
users:
- name: user1
  exec:
    apiVersion: client.authentication.k8s.io/v1alpha1
    args:
    - eks
    - get-token
    - --cluster-name
    - some-cluster
    - --region
    - us-east-2
    command: aws
    env:
    - name: AWS_STS_REGIONAL_ENDPOINTS
      value: regional
    interactiveMode: IfAvailable
    provideClusterInfo: false

18 lib/promscrape/discovery/kubernetes/testdata/good_kubeconfig/with_basic.yaml vendored Normal file
@ -0,0 +1,18 @@
apiVersion: v1
clusters:
- cluster:
    server: "http://some-server:8080"
  name: k8s
contexts:
- context:
    cluster: k8s
    user: user1
  name: user1@k8s
current-context: user1@k8s
kind: Config
preferences: {}
users:
- name: user1
  user:
    username: user1
    password: secret

19 lib/promscrape/discovery/kubernetes/testdata/good_kubeconfig/with_tls.yaml vendored Normal file
@ -0,0 +1,19 @@
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: YXV0aG9yaXR5
    server: https://localhost:6443
  name: k8s
contexts:
- context:
    cluster: k8s
    user: user1
  name: user1@k8s
current-context: user1@k8s
kind: Config
preferences: {}
users:
- name: user1
  user:
    client-certificate-data: Y2VydGlmaWNhdGU=
    client-key-data: a2V5

17 lib/promscrape/discovery/kubernetes/testdata/good_kubeconfig/with_token.yaml vendored Normal file
@ -0,0 +1,17 @@
apiVersion: v1
clusters:
- cluster:
    server: "http://some-server:8080"
  name: k8s
contexts:
- context:
    cluster: k8s
    user: user1
  name: user1@k8s
current-context: user1@k8s
kind: Config
preferences: {}
users:
- name: user1
  user:
    token: abc

@ -15,6 +15,7 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/leveledbytebufferpool"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"

@ -25,12 +26,15 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/proxy"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
	"github.com/VictoriaMetrics/metrics"
	xxhash "github.com/cespare/xxhash/v2"
	"github.com/cespare/xxhash/v2"
)

var (
	suppressScrapeErrors = flag.Bool("promscrape.suppressScrapeErrors", false, "Whether to suppress scrape errors logging. "+
		"The last error for each target is always available at '/targets' page even if scrape errors logging is suppressed")
		"The last error for each target is always available at '/targets' page even if scrape errors logging is suppressed. "+
		"See also -promscrape.suppressScrapeErrorsDelay")
	suppressScrapeErrorsDelay = flag.Duration("promscrape.suppressScrapeErrorsDelay", 0, "The delay for suppressing repeated scrape errors logging per each scrape target. "+
		"This may be used for reducing the number of log lines related to scrape errors. See also -promscrape.suppressScrapeErrors")
	noStaleMarkers = flag.Bool("promscrape.noStaleMarkers", false, "Whether to disable sending Prometheus stale markers for metrics when scrape target disappears. This option may reduce memory usage if stale markers aren't needed for your setup. This option also disables populating the scrape_series_added metric. See https://prometheus.io/docs/concepts/jobs_instances/#automatically-generated-labels-and-time-series")
	seriesLimitPerTarget = flag.Int("promscrape.seriesLimitPerTarget", 0, "Optional limit on the number of unique time series a single scrape target can expose. See https://docs.victoriametrics.com/vmagent.html#cardinality-limiter for more info")
	minResponseSizeForStreamParse = flagutil.NewBytes("promscrape.minResponseSizeForStreamParse", 1e6, "The minimum target response size for automatic switching to stream parsing mode, which can reduce memory usage. See https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode")
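
As an illustration, the new flag collapses repeated scrape errors for a target into at most one log line per window; the value below is arbitrary:

```
-promscrape.suppressScrapeErrorsDelay=10m
```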
@ -221,6 +225,12 @@ type scrapeWork struct {
	// in stream parsing mode in order to reduce memory usage when the lastScrape size
	// equals to or exceeds -promscrape.minResponseSizeForStreamParse
	lastScrapeCompressed []byte

	// lastErrLogTimestamp is the timestamp in unix seconds of the last logged scrape error
	lastErrLogTimestamp uint64

	// errsSuppressedCount is the number of suppressed scrape errors since lastErrLogTimestamp
	errsSuppressedCount int
}

func (sw *scrapeWork) loadLastScrape() string {
@ -272,11 +282,16 @@ func (sw *scrapeWork) run(stopCh <-chan struct{}, globalStopCh <-chan struct{})
	// This also makes consistent scrape times across restarts
	// for a target with the same ScrapeURL and labels.
	//
	// Include clusterMemberNum to the key in order to guarantee that each member in vmagent cluster
	// Include clusterName to the key in order to guarantee that the same
	// scrape target is scraped at different offsets per each cluster.
	// This guarantees that the deduplication consistently leaves samples received from the same vmagent.
	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2679
	//
	// Include clusterMemberID to the key in order to guarantee that each member in vmagent cluster
	// scrapes replicated targets at different time offsets. This guarantees that the deduplication consistently leaves samples
	// received from the same vmagent replica.
	// See https://docs.victoriametrics.com/vmagent.html#scraping-big-number-of-targets
	key := fmt.Sprintf("ClusterMemberNum=%d, ScrapeURL=%s, Labels=%s", clusterMemberID, sw.Config.ScrapeURL, sw.Config.LabelsString())
	key := fmt.Sprintf("clusterName=%s, clusterMemberID=%d, ScrapeURL=%s, Labels=%s", *clusterName, clusterMemberID, sw.Config.ScrapeURL, sw.Config.LabelsString())
	h := xxhash.Sum64(bytesutil.ToUnsafeBytes(key))
	randSleep = uint64(float64(scrapeInterval) * (float64(h) / (1 << 64)))
	sleepOffset := uint64(time.Now().UnixNano()) % uint64(scrapeInterval)
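
The offset computation maps a 64-bit hash of the key uniformly onto `[0, scrapeInterval)`, so the same target always starts at the same phase while distinct keys spread across the interval. A standalone sketch of the same idea (function and variable names are illustrative, not from the diff):

```go
package main

import (
	"fmt"
	"time"

	"github.com/cespare/xxhash/v2"
)

// deterministicOffset spreads scrape start times across the interval:
// the same key always yields the same phase, while different keys
// (different targets, cluster names or member IDs) land elsewhere.
func deterministicOffset(key string, interval time.Duration) time.Duration {
	h := xxhash.Sum64String(key)
	return time.Duration(float64(interval) * (float64(h) / (1 << 64)))
}

func main() {
	key := "clusterName=c1, clusterMemberID=0, ScrapeURL=http://host/metrics, Labels={}"
	fmt.Println(deterministicOffset(key, 30*time.Second))
}
```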
@ -351,9 +366,22 @@ func (sw *scrapeWork) logError(s string) {
}

func (sw *scrapeWork) scrapeAndLogError(scrapeTimestamp, realTimestamp int64) {
	if err := sw.scrapeInternal(scrapeTimestamp, realTimestamp); err != nil && !*suppressScrapeErrors {
		logger.Errorf("error when scraping %q from job %q with labels %s: %s", sw.Config.ScrapeURL, sw.Config.Job(), sw.Config.LabelsString(), err)
	err := sw.scrapeInternal(scrapeTimestamp, realTimestamp)
	if err == nil {
		return
	}
	d := time.Duration(fasttime.UnixTimestamp()-sw.lastErrLogTimestamp) * time.Second
	if *suppressScrapeErrors || d < *suppressScrapeErrorsDelay {
		sw.errsSuppressedCount++
		return
	}
	err = fmt.Errorf("cannot scrape %q (job %q, labels %s): %w", sw.Config.ScrapeURL, sw.Config.Job(), sw.Config.LabelsString(), err)
	if sw.errsSuppressedCount > 0 {
		err = fmt.Errorf("%w; %d similar errors suppressed during the last %.1f seconds", err, sw.errsSuppressedCount, d.Seconds())
	}
	logger.Warnf("%s", err)
	sw.lastErrLogTimestamp = fasttime.UnixTimestamp()
	sw.errsSuppressedCount = 0
}

var (

82 lib/promscrape/service_discovery.qtpl Normal file
@ -0,0 +1,82 @@
{% import (
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
) %}

{% func ServiceDiscovery(jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool, droppedKeyStatuses []droppedKeyStatus) %}
<div class="row mt-4">
    <div class="col-12">
        {% for i, js := range jts %}
        {% if showOnlyUnhealthy && js.upCount == js.targetsTotal %}{% continue %}{% endif %}
        <h4>
            <span class="me-2">{%s js.job %}{% space %}({%d js.upCount %}/{%d js.targetsTotal %}{% space %}up)</span>
            <button type="button" class="btn btn-primary btn-sm me-1"
                    onclick="document.querySelector('.table-discovery-{%d i %}').style.display='none'">collapse
            </button>
            <button type="button" class="btn btn-secondary btn-sm me-1"
                    onclick="document.querySelector('.table-discovery-{%d i %}').style.display='block'">expand
            </button>
        </h4>
        <div id="table-discovery-{%d i %}" class="table-responsive table-discovery-{%d i %}">
            <table class="table table-striped table-hover table-bordered table-sm">
                <thead>
                <tr>
                    <th scope="col" style="width: 50%">Discovered Labels</th>
                    <th scope="col" style="width: 50%">Target Labels</th>
                </tr>
                </thead>
                <tbody class="list-{%d i %}">
                {% for _, ts := range js.targetsStatus %}
                {% if showOnlyUnhealthy && ts.up %}{% continue %}{% endif %}
                <tr {% if !ts.up %}{%space%}class="alert alert-danger" role="alert" {% endif %}>
                    <td class="labels">
                        {%= formatLabel(ts.sw.Config.OriginalLabels) %}
                    </td>
                    <td class="labels">
                        {%= formatLabel(promrelabel.FinalizeLabels(nil, ts.sw.Config.Labels)) %}
                    </td>
                </tr>
                {% endfor %}
                </tbody>
            </table>
        </div>
        {% endfor %}
    </div>
</div>
{% for i,jobName := range emptyJobs %}
<div>
    <h4>
        <a>{%s jobName %} (0/0 up)</a>
        <button type="button" class="btn btn-primary btn-sm me-1"
                onclick="document.querySelector('.table-empty-{%d i %}').style.display='none'">collapse
        </button>
        <button type="button" class="btn btn-secondary btn-sm me-1"
                onclick="document.querySelector('.table-empty-{%d i %}').style.display='block'">expand
        </button>
    </h4>
    <table id="table-empty-{%d i %}" class="table table-striped table-hover table-bordered table-sm table-empty-{%d i %}">
        <thead>
        <tr>
            <th scope="col" style="width: 50%">Discovered Labels</th>
            <th scope="col" style="width: 50%">Target Labels</th>
        </tr>
        </thead>
        <tbody class="list-{%d i %}">
        {% for _, status := range droppedKeyStatuses %}
        {% for _, label := range status.originalLabels %}
        {% if label.Value == jobName %}
        <tr>
            <td class="labels">
                {%= formatLabel(status.originalLabels) %}
            </td>
            <td class="labels">
                <span class="badge bg-danger">DROPPED</span>
            </td>
        </tr>
        {% endif %}
        {% endfor %}
        {% endfor %}
        </tbody>
    </table>
</div>
{% endfor %}
{% endfunc %}
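
The `service_discovery.qtpl.go` file that follows is generated from this template with quicktemplate's `qtc` compiler; regeneration would look roughly like this (the exact invocation is an assumption about the local setup):

```
go install github.com/valyala/quicktemplate/qtc@latest
qtc -dir=lib/promscrape
```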

279 lib/promscrape/service_discovery.qtpl.go Normal file
@ -0,0 +1,279 @@
// Code generated by qtc from "service_discovery.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.

//line lib/promscrape/service_discovery.qtpl:1
package promscrape

//line lib/promscrape/service_discovery.qtpl:1
import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
)

//line lib/promscrape/service_discovery.qtpl:5
import (
	qtio422016 "io"

	qt422016 "github.com/valyala/quicktemplate"
)

//line lib/promscrape/service_discovery.qtpl:5
var (
	_ = qtio422016.Copy
	_ = qt422016.AcquireByteBuffer
)

//line lib/promscrape/service_discovery.qtpl:5
func StreamServiceDiscovery(qw422016 *qt422016.Writer, jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool, droppedKeyStatuses []droppedKeyStatus) {
//line lib/promscrape/service_discovery.qtpl:5
	qw422016.N().S(`
<div class="row mt-4">
    <div class="col-12">
        `)
//line lib/promscrape/service_discovery.qtpl:8
	for i, js := range jts {
//line lib/promscrape/service_discovery.qtpl:8
		qw422016.N().S(`
        `)
//line lib/promscrape/service_discovery.qtpl:9
		if showOnlyUnhealthy && js.upCount == js.targetsTotal {
//line lib/promscrape/service_discovery.qtpl:9
			continue
//line lib/promscrape/service_discovery.qtpl:9
		}
//line lib/promscrape/service_discovery.qtpl:9
		qw422016.N().S(`
        <h4>
            <span class="me-2">`)
//line lib/promscrape/service_discovery.qtpl:11
		qw422016.E().S(js.job)
//line lib/promscrape/service_discovery.qtpl:11
		qw422016.N().S(` `)
//line lib/promscrape/service_discovery.qtpl:11
		qw422016.N().S(`(`)
//line lib/promscrape/service_discovery.qtpl:11
		qw422016.N().D(js.upCount)
//line lib/promscrape/service_discovery.qtpl:11
		qw422016.N().S(`/`)
//line lib/promscrape/service_discovery.qtpl:11
		qw422016.N().D(js.targetsTotal)
//line lib/promscrape/service_discovery.qtpl:11
		qw422016.N().S(` `)
//line lib/promscrape/service_discovery.qtpl:11
		qw422016.N().S(`up)</span>
            <button type="button" class="btn btn-primary btn-sm me-1"
                    onclick="document.querySelector('.table-discovery-`)
//line lib/promscrape/service_discovery.qtpl:13
		qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:13
		qw422016.N().S(`').style.display='none'">collapse
            </button>
            <button type="button" class="btn btn-secondary btn-sm me-1"
                    onclick="document.querySelector('.table-discovery-`)
//line lib/promscrape/service_discovery.qtpl:16
		qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:16
		qw422016.N().S(`').style.display='block'">expand
            </button>
        </h4>
        <div id="table-discovery-`)
//line lib/promscrape/service_discovery.qtpl:19
		qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:19
		qw422016.N().S(`" class="table-responsive table-discovery-`)
//line lib/promscrape/service_discovery.qtpl:19
		qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:19
		qw422016.N().S(`">
            <table class="table table-striped table-hover table-bordered table-sm">
                <thead>
                <tr>
                    <th scope="col" style="width: 50%">Discovered Labels</th>
                    <th scope="col" style="width: 50%">Target Labels</th>
                </tr>
                </thead>
                <tbody class="list-`)
//line lib/promscrape/service_discovery.qtpl:27
		qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:27
		qw422016.N().S(`">
                `)
//line lib/promscrape/service_discovery.qtpl:28
		for _, ts := range js.targetsStatus {
//line lib/promscrape/service_discovery.qtpl:28
			qw422016.N().S(`
                `)
//line lib/promscrape/service_discovery.qtpl:29
			if showOnlyUnhealthy && ts.up {
//line lib/promscrape/service_discovery.qtpl:29
				continue
//line lib/promscrape/service_discovery.qtpl:29
			}
//line lib/promscrape/service_discovery.qtpl:29
			qw422016.N().S(`
                <tr `)
//line lib/promscrape/service_discovery.qtpl:30
			if !ts.up {
//line lib/promscrape/service_discovery.qtpl:30
				qw422016.N().S(` `)
//line lib/promscrape/service_discovery.qtpl:30
				qw422016.N().S(`class="alert alert-danger" role="alert" `)
//line lib/promscrape/service_discovery.qtpl:30
			}
//line lib/promscrape/service_discovery.qtpl:30
			qw422016.N().S(`>
                    <td class="labels">
                        `)
//line lib/promscrape/service_discovery.qtpl:32
			streamformatLabel(qw422016, ts.sw.Config.OriginalLabels)
//line lib/promscrape/service_discovery.qtpl:32
			qw422016.N().S(`
                    </td>
                    <td class="labels">
                        `)
//line lib/promscrape/service_discovery.qtpl:35
			streamformatLabel(qw422016, promrelabel.FinalizeLabels(nil, ts.sw.Config.Labels))
//line lib/promscrape/service_discovery.qtpl:35
			qw422016.N().S(`
                    </td>
                </tr>
                `)
//line lib/promscrape/service_discovery.qtpl:38
		}
//line lib/promscrape/service_discovery.qtpl:38
		qw422016.N().S(`
                </tbody>
            </table>
        </div>
        `)
//line lib/promscrape/service_discovery.qtpl:42
	}
//line lib/promscrape/service_discovery.qtpl:42
	qw422016.N().S(`
    </div>
</div>
`)
//line lib/promscrape/service_discovery.qtpl:45
	for i, jobName := range emptyJobs {
//line lib/promscrape/service_discovery.qtpl:45
		qw422016.N().S(`
<div>
    <h4>
        <a>`)
//line lib/promscrape/service_discovery.qtpl:48
		qw422016.E().S(jobName)
//line lib/promscrape/service_discovery.qtpl:48
		qw422016.N().S(` (0/0 up)</a>
        <button type="button" class="btn btn-primary btn-sm me-1"
                onclick="document.querySelector('.table-empty-`)
//line lib/promscrape/service_discovery.qtpl:50
		qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:50
		qw422016.N().S(`').style.display='none'">collapse
        </button>
        <button type="button" class="btn btn-secondary btn-sm me-1"
                onclick="document.querySelector('.table-empty-`)
//line lib/promscrape/service_discovery.qtpl:53
		qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:53
		qw422016.N().S(`').style.display='block'">expand
        </button>
    </h4>
    <table id="table-empty-`)
//line lib/promscrape/service_discovery.qtpl:56
		qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:56
		qw422016.N().S(`" class="table table-striped table-hover table-bordered table-sm table-empty-`)
//line lib/promscrape/service_discovery.qtpl:56
		qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:56
		qw422016.N().S(`">
        <thead>
        <tr>
            <th scope="col" style="width: 50%">Discovered Labels</th>
            <th scope="col" style="width: 50%">Target Labels</th>
        </tr>
        </thead>
        <tbody class="list-`)
//line lib/promscrape/service_discovery.qtpl:63
		qw422016.N().D(i)
//line lib/promscrape/service_discovery.qtpl:63
		qw422016.N().S(`">
        `)
//line lib/promscrape/service_discovery.qtpl:64
		for _, status := range droppedKeyStatuses {
//line lib/promscrape/service_discovery.qtpl:64
			qw422016.N().S(`
        `)
//line lib/promscrape/service_discovery.qtpl:65
			for _, label := range status.originalLabels {
//line lib/promscrape/service_discovery.qtpl:65
				qw422016.N().S(`
        `)
//line lib/promscrape/service_discovery.qtpl:66
				if label.Value == jobName {
//line lib/promscrape/service_discovery.qtpl:66
					qw422016.N().S(`
        <tr>
            <td class="labels">
                `)
//line lib/promscrape/service_discovery.qtpl:69
					streamformatLabel(qw422016, status.originalLabels)
//line lib/promscrape/service_discovery.qtpl:69
					qw422016.N().S(`
            </td>
            <td class="labels">
                <span class="badge bg-danger">DROPPED</span>
            </td>
        </tr>
        `)
//line lib/promscrape/service_discovery.qtpl:75
				}
//line lib/promscrape/service_discovery.qtpl:75
				qw422016.N().S(`
        `)
//line lib/promscrape/service_discovery.qtpl:76
			}
//line lib/promscrape/service_discovery.qtpl:76
			qw422016.N().S(`
        `)
//line lib/promscrape/service_discovery.qtpl:77
		}
//line lib/promscrape/service_discovery.qtpl:77
		qw422016.N().S(`
        </tbody>
    </table>
</div>
`)
//line lib/promscrape/service_discovery.qtpl:81
	}
//line lib/promscrape/service_discovery.qtpl:81
	qw422016.N().S(`
`)
//line lib/promscrape/service_discovery.qtpl:82
}

//line lib/promscrape/service_discovery.qtpl:82
func WriteServiceDiscovery(qq422016 qtio422016.Writer, jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool, droppedKeyStatuses []droppedKeyStatus) {
//line lib/promscrape/service_discovery.qtpl:82
	qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/service_discovery.qtpl:82
	StreamServiceDiscovery(qw422016, jts, emptyJobs, showOnlyUnhealthy, droppedKeyStatuses)
//line lib/promscrape/service_discovery.qtpl:82
	qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/service_discovery.qtpl:82
}

//line lib/promscrape/service_discovery.qtpl:82
func ServiceDiscovery(jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool, droppedKeyStatuses []droppedKeyStatus) string {
//line lib/promscrape/service_discovery.qtpl:82
	qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/service_discovery.qtpl:82
	WriteServiceDiscovery(qb422016, jts, emptyJobs, showOnlyUnhealthy, droppedKeyStatuses)
//line lib/promscrape/service_discovery.qtpl:82
	qs422016 := string(qb422016.B)
//line lib/promscrape/service_discovery.qtpl:82
	qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/service_discovery.qtpl:82
	return qs422016
//line lib/promscrape/service_discovery.qtpl:82
}
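
qtc emits a Stream/Write/String trio for every template function: `StreamServiceDiscovery` targets a `*quicktemplate.Writer`, `WriteServiceDiscovery` adapts any `io.Writer`, and `ServiceDiscovery` buffers the page into a string. A hedged sketch of handler wiring (an illustration, not part of this diff; assumes `net/http` is imported):

```go
// serveSDPage renders the service-discovery page into an HTTP response.
// The jobTargetsStatuses slice would come from the scraper's internal state.
func serveSDPage(w http.ResponseWriter, jts []jobTargetsStatuses) {
	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	WriteServiceDiscovery(w, jts, nil, false, nil)
}
```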

107 lib/promscrape/targets.qtpl Normal file
@ -0,0 +1,107 @@
{% import (
    "time"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
) %}

{% func Targets(jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool) %}
<div class="row mt-4">
    <div class="col-12">
        {% for i, js := range jts %}
        {% if showOnlyUnhealthy && js.upCount == js.targetsTotal %}{% continue %}{% endif %}
        <div class="row mb-4">
            <div class="col-12">
                <h4>
                    <span class="me-2">{%s js.job %}{% space %}({%d js.upCount %}/{%d js.targetsTotal %}{% space %}up)</span>
                    <button type="button" class="btn btn-primary btn-sm me-1"
                            onclick="document.getElementById('table-{%d i %}').style.display='none'">collapse
                    </button>
                    <button type="button" class="btn btn-secondary btn-sm me-1"
                            onclick="document.getElementById('table-{%d i %}').style.display='block'">expand
                    </button>
                </h4>
                <div id="table-{%d i %}" class="table-responsive">
                    <table class="table table-striped table-hover table-bordered table-sm">
                        <thead>
                        <tr>
                            <th scope="col">Endpoint</th>
                            <th scope="col">State</th>
                            <th scope="col" title="scrape target labels">Labels</th>
                            <th scope="col" title="total scrapes">Scrapes</th>
                            <th scope="col" title="total scrape errors">Errors</th>
                            <th scope="col" title="the time of the last scrape">Last Scrape</th>
                            <th scope="col" title="the duration of the last scrape">Duration</th>
                            <th scope="col" title="the number of metrics scraped during the last scrape">Samples</th>
                            <th scope="col" title="error from the last scrape (if any)">Last error</th>
                        </tr>
                        </thead>
                        <tbody class="list-{%d i %}">
                        {% for _, ts := range js.targetsStatus %}
                        {% code
                            endpoint := ts.sw.Config.ScrapeURL
                            targetID := getTargetID(ts.sw)
                            lastScrapeTime := ts.getDurationFromLastScrape()
                        %}
                        {% if showOnlyUnhealthy && ts.up %}{% continue %}{% endif %}
                        <tr {% if !ts.up %}{%space%}class="alert alert-danger" role="alert" {% endif %}>
                            <td class="endpoint"><a href="{%s endpoint %}" target="_blank">{%s endpoint %}</a> (
                                <a href="target_response?id={%s targetID %}" target="_blank"
                                   title="click to fetch target response on behalf of the scraper">response</a>
                                )
                            </td>
                            <td>
                                {% if ts.up %}
                                <span class="badge bg-success">UP</span>
                                {% else %}
                                <span class="badge bg-danger">DOWN</span>
                                {% endif %}
                            </td>
                            <td class="labels">
                                <div title="click to show original labels"
                                     onclick="document.getElementById('original_labels_{%s targetID %}').style.display='block'">
                                    {%= formatLabel(promrelabel.FinalizeLabels(nil, ts.sw.Config.Labels)) %}
                                </div>
                                <div style="display:none" id="original_labels_{%s targetID %}">
                                    {%= formatLabel(ts.sw.Config.OriginalLabels) %}
                                </div>
                            </td>
                            <td>{%d ts.scrapesTotal %}</td>
                            <td>{%d ts.scrapesFailed %}</td>
                            <td>
                                {% if lastScrapeTime < 365*24*time.Hour %}
                                {%f.3 lastScrapeTime.Seconds() %}s ago
                                {% else %}
                                none
                                {% endif %}
                            </td>
                            <td>{%d int(ts.scrapeDuration) %}ms</td>
                            <td>{%d ts.samplesScraped %}</td>
                            <td>{% if ts.err != nil %}{%s ts.err.Error() %}{% endif %}</td>
                        </tr>
                        {% endfor %}
                        </tbody>
                    </table>
                </div>
            </div>
        </div>
        {% endfor %}
    </div>
</div>

{% for _, jobName := range emptyJobs %}
<div>
    <h4><a>{%s jobName %} (0/0 up)</a></h4>
    <table class="table table-striped table-hover table-bordered table-sm">
        <thead>
        <tr>
            <th scope="col">Endpoint</th>
            <th scope="col">State</th>
            <th scope="col">Labels</th>
            <th scope="col">Last Scrape</th>
            <th scope="col">Scrape Duration</th>
            <th scope="col">Samples Scraped</th>
            <th scope="col">Error</th>
        </tr>
        </thead>
    </table>
</div>
{% endfor %}
{% endfunc %}
Some files were not shown because too many files have changed in this diff.