:2003`. This protects from unexpected requests from untrusted network interfaces.
## Tuning
-* There is no need for VictoriaMetrics tuning since it uses reasonable defaults for command-line flags,
+* No need to tune VictoriaMetrics - it uses reasonable defaults for command-line flags,
which are automatically adjusted for the available CPU and RAM resources.
-* There is no need for Operating System tuning since VictoriaMetrics is optimized for default OS settings.
+* No need to tune the Operating System - VictoriaMetrics is optimized for default OS settings.
The only option is increasing the limit on [the number of open files in the OS](https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a).
The recommendation is not specific for VictoriaMetrics only but also for any service which handles many HTTP connections and stores data on disk.
* VictoriaMetrics is a write-heavy application and its performance depends on disk performance. So be careful with other
@@ -1376,19 +1406,23 @@ mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
## Monitoring
-VictoriaMetrics exports internal metrics in Prometheus format at `/metrics` page.
-These metrics may be collected by [vmagent](https://docs.victoriametrics.com/vmagent.html)
-or Prometheus by adding the corresponding scrape config to it.
-Alternatively they can be self-scraped by setting `-selfScrapeInterval` command-line flag to duration greater than 0.
-For example, `-selfScrapeInterval=10s` would enable self-scraping of `/metrics` page with 10 seconds interval.
+VictoriaMetrics exports internal metrics in Prometheus exposition format at `/metrics` page.
+These metrics can be scraped via [vmagent](https://docs.victoriametrics.com/vmagent.html) or Prometheus.
+Alternatively, single-node VictoriaMetrics can self-scrape the metrics when the `-selfScrapeInterval` command-line flag is
+set to a duration greater than 0. For example, `-selfScrapeInterval=10s` enables self-scraping of the `/metrics` page
+at 10-second intervals.
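Assuming the single-node binary is started from the shell, enabling self-scraping could look like the following sketch (the storage path is an arbitrary example, not a required value):

```bash
# Start single-node VictoriaMetrics and let it scrape its own /metrics page every 10 seconds
./victoria-metrics-prod -storageDataPath=/var/lib/victoria-metrics -selfScrapeInterval=10s
```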
-There are officials Grafana dashboards for [single-node VictoriaMetrics](https://grafana.com/dashboards/10229) and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176). There is also an [alternative dashboard for clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11831).
+Official Grafana dashboards are available for [single-node](https://grafana.com/dashboards/10229)
+and [clustered](https://grafana.com/grafana/dashboards/11176) VictoriaMetrics.
+See also an [alternative dashboard for clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11831)
+created by the community.
-Graphs on these dashboard contain useful hints - hover the `i` icon at the top left corner of each graph in order to read it.
+Graphs on the dashboards contain useful hints - hover over the `i` icon in the top left corner of each graph to read it.
-It is recommended setting up alerts in [vmalert](https://docs.victoriametrics.com/vmalert.html) or in Prometheus from [this config](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts.yml).
+We recommend setting up [alerts](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts.yml)
+via [vmalert](https://docs.victoriametrics.com/vmalert.html) or via Prometheus.
-The most interesting metrics are:
+The most interesting health metrics are the following:
* `vm_cache_entries{type="storage/hour_metric_ids"}` - the number of time series with new data points during the last hour
aka [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).
@@ -1404,16 +1438,14 @@ The most interesting metrics are:
If this number remains high during extended periods of time, then it is likely more RAM is needed for optimal handling
of the current number of [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).
-VictoriaMetrics also exposes currently running queries with their execution times at `/api/v1/status/active_queries` page.
-
-See the example of alerting rules for VM components [here](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts.yml).
+VictoriaMetrics exposes currently running queries and their execution times at `/api/v1/status/active_queries` page.
## TSDB stats
VictoriaMetrics returns TSDB stats at `/api/v1/status/tsdb` page in the way similar to Prometheus - see [these Prometheus docs](https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats). VictoriaMetrics accepts the following optional query args at `/api/v1/status/tsdb` page:
* `topN=N` where `N` is the number of top entries to return in the response. By default top 10 entries are returned.
-* `date=YYYY-MM-DD` where `YYYY-MM-DD` is the date for collecting the stats. By default the stats is collected for the current day.
+* `date=YYYY-MM-DD` where `YYYY-MM-DD` is the date for collecting the stats. By default the stats are collected for the current day. Pass `date=1970-01-01` in order to collect global stats across all days.
* `match[]=SELECTOR` where `SELECTOR` is an arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) for series to take into account during stats calculation. By default all the series are taken into account.
* `extra_label=LABEL=VALUE`. See [these docs](#prometheus-querying-api-enhancements) for more details.
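The query args above can be combined. For example, a hedged sketch of fetching the top 5 entries for a given day, restricted to a series selector (the host, port and selector below are illustrative assumptions; `8428` is the default single-node HTTP port):

```bash
curl 'http://localhost:8428/api/v1/status/tsdb?topN=5&date=2022-06-01&match[]={__name__=~"vm_.*"}'
```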
@@ -1791,9 +1823,10 @@ Files included in each folder:
### We kindly ask
* Please don't use any other font instead of suggested.
-* There should be sufficient clear space around the logo.
+* To keep enough clear space around the logo.
* Do not change spacing, alignment, or relative locations of the design elements.
-* Do not change the proportions of any of the design elements or the design itself. You may resize as needed but must retain all proportions.
+* Do not change the proportions for any of the design elements or the design itself.
+ You may resize as needed but must retain all proportions.
## List of command-line flags
diff --git a/docs/guides/k8s-monitoring-via-vm-single.md b/docs/guides/k8s-monitoring-via-vm-single.md
index 2ae3832b8..973df57ee 100644
--- a/docs/guides/k8s-monitoring-via-vm-single.md
+++ b/docs/guides/k8s-monitoring-via-vm-single.md
@@ -74,7 +74,7 @@ Run this command in your terminal:
.html
-```yaml
+```bash
helm install vmsingle vm/victoria-metrics-single -f https://docs.victoriametrics.com/guides/guide-vmsingle-values.yaml
```
diff --git a/docs/keyConcepts.md b/docs/keyConcepts.md
index 61746e597..11ffef800 100644
--- a/docs/keyConcepts.md
+++ b/docs/keyConcepts.md
@@ -145,13 +145,13 @@ vm_per_query_rows_processed_count_count 11
In practice, histogram `vm_per_query_rows_processed_count` may be used in the following way:
-```Go
+```go
// define the histogram
perQueryRowsProcessed := metrics.NewHistogram(`vm_per_query_rows_processed_count`)
// use the histogram during processing
for _, query := range queries {
-perQueryRowsProcessed.Update(len(query.Rows))
+ perQueryRowsProcessed.Update(len(query.Rows))
}
```
diff --git a/go.mod b/go.mod
index addfc4ae2..358690712 100644
--- a/go.mod
+++ b/go.mod
@@ -11,7 +11,7 @@ require (
github.com/VictoriaMetrics/fasthttp v1.1.0
github.com/VictoriaMetrics/metrics v1.18.1
github.com/VictoriaMetrics/metricsql v0.43.0
- github.com/aws/aws-sdk-go v1.44.27
+ github.com/aws/aws-sdk-go v1.44.32
github.com/cespare/xxhash/v2 v2.1.2
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
@@ -35,10 +35,10 @@ require (
github.com/valyala/fasttemplate v1.2.1
github.com/valyala/gozstd v1.17.0
github.com/valyala/quicktemplate v1.7.0
- golang.org/x/net v0.0.0-20220531201128-c960675eff93
- golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401
- golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a
- google.golang.org/api v0.82.0
+ golang.org/x/net v0.0.0-20220607020251-c690dde0001d
+ golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb
+ golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d
+ google.golang.org/api v0.83.0
gopkg.in/yaml.v2 v2.4.0
)
@@ -73,9 +73,9 @@ require (
go.uber.org/goleak v1.1.11-0.20210813005559-691160354723 // indirect
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f // indirect
golang.org/x/text v0.3.7 // indirect
- golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df // indirect
+ golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f // indirect
google.golang.org/appengine v1.6.7 // indirect
- google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8 // indirect
+ google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac // indirect
google.golang.org/grpc v1.47.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
diff --git a/go.sum b/go.sum
index dc9a5a603..0a609514d 100644
--- a/go.sum
+++ b/go.sum
@@ -142,8 +142,8 @@ github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQ
github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
-github.com/aws/aws-sdk-go v1.44.27 h1:8CMspeZSrewnbvAwgl8qo5R7orDLwQnTGBf/OKPiHxI=
-github.com/aws/aws-sdk-go v1.44.27/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
+github.com/aws/aws-sdk-go v1.44.32 h1:x5hBtpY/02sgRL158zzTclcCLwh3dx3YlSl1rAH4Op0=
+github.com/aws/aws-sdk-go v1.44.32/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@@ -992,9 +992,8 @@ golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su
golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220412020605-290c469a71a5/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
-golang.org/x/net v0.0.0-20220526153639-5463443f8c37/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
-golang.org/x/net v0.0.0-20220531201128-c960675eff93 h1:MYimHLfoXEpOhqd/zgoA/uoXzHB86AEky4LAx5ij9xA=
-golang.org/x/net v0.0.0-20220531201128-c960675eff93/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.0.0-20220607020251-c690dde0001d h1:4SFsTMi4UahlKoloni7L4eYzhFRifURQLw+yv0QDCx8=
+golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1014,8 +1013,9 @@ golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
-golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401 h1:zwrSfklXn0gxyLRX/aR+q6cgHbV/ItVyzbPlbA+dkAw=
golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
+golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb h1:8tDJ3aechhddbdPAxpycgXHJRMLpk/Ab+aa4OgdN5/g=
+golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb/go.mod h1:jaDAt6Dkxork7LmZnYtzbRWj0W47D86a3TGe0YHBvmE=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -1028,7 +1028,6 @@ golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.0.0-20220513210516-0976fa681c29/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f h1:Ax0t5p6N38Ga0dThY21weqDEyz2oklo4IvDkpigvkD8=
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1123,8 +1122,9 @@ golang.org/x/sys v0.0.0-20220405052023-b1e9470b6e64/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220502124256-b6088ccd6cba/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a h1:dGzPydgVsqGcTRVwiLJ1jVbufYwmzD3LfVPLKsKg+0k=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d h1:Zu/JngovGLVi6t2J3nmAf3AoTDwuzw85YZ3b9o4yU7s=
+golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1221,8 +1221,9 @@ golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
-golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df h1:5Pf6pFKu98ODmgnpvkJ3kFUOQGGLIzLIkbzUHp47618=
golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
+golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f h1:uF6paiQQebLeSXkrTqHqz0MXhXXS1KgF41eUdBNvxK0=
+golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
gonum.org/v1/gonum v0.0.0-20180816165407-929014505bf4/go.mod h1:Y+Yx5eoAFn32cQvJDxZx5Dpnq+c3wtXuadVZAcxbbBo=
gonum.org/v1/gonum v0.0.0-20181121035319-3f7ecaa7e8ca/go.mod h1:Y+Yx5eoAFn32cQvJDxZx5Dpnq+c3wtXuadVZAcxbbBo=
gonum.org/v1/gonum v0.6.0/go.mod h1:9mxDZsDKxgMAuccQkewq682L+0eCu4dCN2yonUJTCLU=
@@ -1268,8 +1269,8 @@ google.golang.org/api v0.74.0/go.mod h1:ZpfMZOVRMywNyvJFeqL9HRWBgAuRfSjJFpe9QtRR
google.golang.org/api v0.75.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA=
google.golang.org/api v0.78.0/go.mod h1:1Sg78yoMLOhlQTeF+ARBoytAcH1NNyyl390YMy6rKmw=
google.golang.org/api v0.80.0/go.mod h1:xY3nI94gbvBrE0J6NHXhxOmW97HG7Khjkku6AFB3Hyg=
-google.golang.org/api v0.82.0 h1:h6EGeZuzhoKSS7BUznzkW+2wHZ+4Ubd6rsVvvh3dRkw=
-google.golang.org/api v0.82.0/go.mod h1:Ld58BeTlL9DIYr2M2ajvoSqmGLei0BMn+kVBmkam1os=
+google.golang.org/api v0.83.0 h1:pMvST+6v+46Gabac4zlJlalxZjCeRcepwg2EdBU+nCc=
+google.golang.org/api v0.83.0/go.mod h1:CNywQoj/AfhTw26ZWAa6LwOv+6WFxHmeLPZq2uncLZk=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -1358,9 +1359,9 @@ google.golang.org/genproto v0.0.0-20220429170224-98d788798c3e/go.mod h1:8w6bsBMX
google.golang.org/genproto v0.0.0-20220505152158-f39f71e6c8f3/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220518221133-4f43b3371335/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220523171625-347a074981d8/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
-google.golang.org/genproto v0.0.0-20220527130721-00d5c0f3be58/go.mod h1:yKyY4AMRwFiC8yMMNaMi+RkCnjZJt9LoWuvhXjMs+To=
-google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8 h1:qRu95HZ148xXw+XeZ3dvqe85PxH4X8+jIo0iRPKcEnM=
google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8/go.mod h1:yKyY4AMRwFiC8yMMNaMi+RkCnjZJt9LoWuvhXjMs+To=
+google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac h1:ByeiW1F67iV9o8ipGskA+HWzSkMbRJuKLlwCdPxzn7A=
+google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
diff --git a/lib/storage/index_db.go b/lib/storage/index_db.go
index c08f7e782..b729dad77 100644
--- a/lib/storage/index_db.go
+++ b/lib/storage/index_db.go
@@ -312,13 +312,17 @@ func (db *indexDB) decRef() {
logger.Infof("indexDB %q has been dropped", tbPath)
}
-func (db *indexDB) getFromTagFiltersCache(key []byte) ([]TSID, bool) {
+func (db *indexDB) getFromTagFiltersCache(qt *querytracer.Tracer, key []byte) ([]TSID, bool) {
+ qt = qt.NewChild("search for tsids in tag filters cache")
+ defer qt.Done()
compressedBuf := tagBufPool.Get()
defer tagBufPool.Put(compressedBuf)
compressedBuf.B = db.tagFiltersCache.GetBig(compressedBuf.B[:0], key)
if len(compressedBuf.B) == 0 {
+ qt.Printf("cache miss")
return nil, false
}
+ qt.Printf("found tsids with compressed size: %d bytes", len(compressedBuf.B))
buf := tagBufPool.Get()
defer tagBufPool.Put(buf)
var err error
@@ -326,22 +330,29 @@ func (db *indexDB) getFromTagFiltersCache(key []byte) ([]TSID, bool) {
if err != nil {
logger.Panicf("FATAL: cannot decompress tsids from tagFiltersCache: %s", err)
}
+ qt.Printf("decompressed tsids to %d bytes", len(buf.B))
tsids, err := unmarshalTSIDs(nil, buf.B)
if err != nil {
logger.Panicf("FATAL: cannot unmarshal tsids from tagFiltersCache: %s", err)
}
+ qt.Printf("unmarshaled %d tsids", len(tsids))
return tsids, true
}
var tagBufPool bytesutil.ByteBufferPool
-func (db *indexDB) putToTagFiltersCache(tsids []TSID, key []byte) {
+func (db *indexDB) putToTagFiltersCache(qt *querytracer.Tracer, tsids []TSID, key []byte) {
+ qt = qt.NewChild("put %d tsids in cache", len(tsids))
+ defer qt.Done()
buf := tagBufPool.Get()
buf.B = marshalTSIDs(buf.B[:0], tsids)
+ qt.Printf("marshaled %d tsids into %d bytes", len(tsids), len(buf.B))
compressedBuf := tagBufPool.Get()
compressedBuf.B = encoding.CompressZSTDLevel(compressedBuf.B[:0], buf.B, 1)
+ qt.Printf("compressed %d tsids into %d bytes", len(tsids), len(compressedBuf.B))
tagBufPool.Put(buf)
db.tagFiltersCache.SetBig(key, compressedBuf.B)
+ qt.Printf("stored %d compressed tsids into cache", len(tsids))
tagBufPool.Put(compressedBuf)
}
@@ -414,7 +425,7 @@ func marshalTagFiltersKey(dst []byte, tfss []*TagFilters, tr TimeRange, versione
}
// Round start and end times to per-day granularity according to per-day inverted index.
startDate := uint64(tr.MinTimestamp) / msecPerDay
- endDate := uint64(tr.MaxTimestamp) / msecPerDay
+ endDate := uint64(tr.MaxTimestamp-1) / msecPerDay
dst = encoding.MarshalUint64(dst, prefix)
dst = encoding.MarshalUint64(dst, startDate)
dst = encoding.MarshalUint64(dst, endDate)
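The `endDate := uint64(tr.MaxTimestamp-1) / msecPerDay` change above treats `MaxTimestamp` as an exclusive bound, so a time range ending exactly at midnight no longer rounds into the following day. A minimal standalone sketch of the rounding (the function names are illustrative, not taken from the codebase):

```go
package main

import "fmt"

// msecPerDay is the number of milliseconds in a day, matching the constant
// used by the per-day inverted index.
const msecPerDay = 24 * 3600 * 1000

// dateForTimestamp returns the per-day index date for an inclusive
// timestamp in milliseconds (the old behavior for endDate).
func dateForTimestamp(tsMsec uint64) uint64 {
	return tsMsec / msecPerDay
}

// endDateExclusive mirrors the fix: MaxTimestamp is an exclusive bound,
// so subtract 1 before rounding down to a day.
func endDateExclusive(maxTimestampMsec uint64) uint64 {
	return (maxTimestampMsec - 1) / msecPerDay
}

func main() {
	// A range ending exactly at midnight after day 0:
	fmt.Println(dateForTimestamp(msecPerDay))  // old rounding spills into day 1
	fmt.Println(endDateExclusive(msecPerDay))  // fixed rounding stays within day 0
}
```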
@@ -704,50 +715,65 @@ func putIndexItems(ii *indexItems) {
var indexItemsPool sync.Pool
-// SearchTagKeysOnTimeRange returns all the tag keys on the given tr.
-func (db *indexDB) SearchTagKeysOnTimeRange(tr TimeRange, maxTagKeys int, deadline uint64) ([]string, error) {
- tks := make(map[string]struct{})
+// SearchLabelNamesWithFiltersOnTimeRange returns all the label names, which match the given tfss on the given tr.
+func (db *indexDB) SearchLabelNamesWithFiltersOnTimeRange(qt *querytracer.Tracer, tfss []*TagFilters, tr TimeRange, maxLabelNames, maxMetrics int, deadline uint64) ([]string, error) {
+ qt = qt.NewChild("search for label names: filters=%s, timeRange=%s, maxLabelNames=%d, maxMetrics=%d", tfss, &tr, maxLabelNames, maxMetrics)
+ defer qt.Done()
+ lns := make(map[string]struct{})
+ qtChild := qt.NewChild("search for label names in the current indexdb")
is := db.getIndexSearch(deadline)
- err := is.searchTagKeysOnTimeRange(tks, tr, maxTagKeys)
+ err := is.searchLabelNamesWithFiltersOnTimeRange(qtChild, lns, tfss, tr, maxLabelNames, maxMetrics)
db.putIndexSearch(is)
+ qtChild.Donef("found %d label names", len(lns))
if err != nil {
return nil, err
}
ok := db.doExtDB(func(extDB *indexDB) {
+ qtChild := qt.NewChild("search for label names in the previous indexdb")
+ lnsLen := len(lns)
is := extDB.getIndexSearch(deadline)
- err = is.searchTagKeysOnTimeRange(tks, tr, maxTagKeys)
+ err = is.searchLabelNamesWithFiltersOnTimeRange(qtChild, lns, tfss, tr, maxLabelNames, maxMetrics)
extDB.putIndexSearch(is)
+ qtChild.Donef("found %d additional label names", len(lns)-lnsLen)
})
if ok && err != nil {
return nil, err
}
- keys := make([]string, 0, len(tks))
- for key := range tks {
- // Do not skip empty keys, since they are converted to __name__
- keys = append(keys, key)
+ labelNames := make([]string, 0, len(lns))
+ for labelName := range lns {
+ labelNames = append(labelNames, labelName)
}
- // Do not sort keys, since they must be sorted by vmselect.
- return keys, nil
+ // Do not sort label names, since they must be sorted by vmselect.
+ qt.Printf("found %d label names in the current and the previous indexdb", len(labelNames))
+ return labelNames, nil
}
-func (is *indexSearch) searchTagKeysOnTimeRange(tks map[string]struct{}, tr TimeRange, maxTagKeys int) error {
+func (is *indexSearch) searchLabelNamesWithFiltersOnTimeRange(qt *querytracer.Tracer, lns map[string]struct{}, tfss []*TagFilters, tr TimeRange, maxLabelNames, maxMetrics int) error {
minDate := uint64(tr.MinTimestamp) / msecPerDay
- maxDate := uint64(tr.MaxTimestamp) / msecPerDay
- if minDate > maxDate || maxDate-minDate > maxDaysForPerDaySearch {
- return is.searchTagKeys(tks, maxTagKeys)
+ maxDate := uint64(tr.MaxTimestamp-1) / msecPerDay
+ if maxDate == 0 || minDate > maxDate || maxDate-minDate > maxDaysForPerDaySearch {
+ qtChild := qt.NewChild("search for label names in global index: filters=%s", tfss)
+ err := is.searchLabelNamesWithFiltersOnDate(qtChild, lns, tfss, 0, maxLabelNames, maxMetrics)
+ qtChild.Done()
+ return err
}
var mu sync.Mutex
wg := getWaitGroup()
var errGlobal error
+ qt = qt.NewChild("parallel search for label names: filters=%s, timeRange=%s", tfss, &tr)
for date := minDate; date <= maxDate; date++ {
wg.Add(1)
+ qtChild := qt.NewChild("search for label names: filters=%s, date=%d", tfss, date)
go func(date uint64) {
- defer wg.Done()
- tksLocal := make(map[string]struct{})
+ defer func() {
+ qtChild.Done()
+ wg.Done()
+ }()
+ lnsLocal := make(map[string]struct{})
isLocal := is.db.getIndexSearch(is.deadline)
- err := isLocal.searchTagKeysOnDate(tksLocal, date, maxTagKeys)
+ err := isLocal.searchLabelNamesWithFiltersOnDate(qtChild, lnsLocal, tfss, date, maxLabelNames, maxMetrics)
is.db.putIndexSearch(isLocal)
mu.Lock()
defer mu.Unlock()
@@ -758,31 +784,43 @@ func (is *indexSearch) searchTagKeysOnTimeRange(tks map[string]struct{}, tr Time
errGlobal = err
return
}
- if len(tks) >= maxTagKeys {
+ if len(lns) >= maxLabelNames {
return
}
- for k := range tksLocal {
- tks[k] = struct{}{}
+ for k := range lnsLocal {
+ lns[k] = struct{}{}
}
}(date)
}
wg.Wait()
putWaitGroup(wg)
+ qt.Done()
return errGlobal
}
-func (is *indexSearch) searchTagKeysOnDate(tks map[string]struct{}, date uint64, maxTagKeys int) error {
+func (is *indexSearch) searchLabelNamesWithFiltersOnDate(qt *querytracer.Tracer, lns map[string]struct{}, tfss []*TagFilters, date uint64, maxLabelNames, maxMetrics int) error {
+ filter, err := is.searchMetricIDsWithFiltersOnDate(qt, tfss, date, maxMetrics)
+ if err != nil {
+ return err
+ }
+ if filter != nil && filter.Len() == 0 {
+ qt.Printf("found zero label names for filter=%s", tfss)
+ return nil
+ }
+ var prevLabelName []byte
ts := &is.ts
kb := &is.kb
mp := &is.mp
- mp.Reset()
dmis := is.db.s.getDeletedMetricIDs()
loopsPaceLimiter := 0
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixDateTagToMetricIDs)
- kb.B = encoding.MarshalUint64(kb.B, date)
+ nsPrefixExpected := byte(nsPrefixDateTagToMetricIDs)
+ if date == 0 {
+ nsPrefixExpected = nsPrefixTagToMetricIDs
+ }
+ kb.B = is.marshalCommonPrefixForDate(kb.B[:0], date)
prefix := kb.B
ts.Seek(prefix)
- for len(tks) < maxTagKeys && ts.NextItem() {
+ for len(lns) < maxLabelNames && ts.NextItem() {
if loopsPaceLimiter&paceLimiterFastIterationsMask == 0 {
if err := checkSearchDeadlineAndPace(is.deadline); err != nil {
return err
@@ -793,110 +831,36 @@ func (is *indexSearch) searchTagKeysOnDate(tks map[string]struct{}, date uint64,
if !bytes.HasPrefix(item, prefix) {
break
}
- if err := mp.Init(item, nsPrefixDateTagToMetricIDs); err != nil {
+ if err := mp.Init(item, nsPrefixExpected); err != nil {
return err
}
if mp.IsDeletedTag(dmis) {
continue
}
- key := mp.Tag.Key
- if !isArtificialTagKey(key) {
- tks[string(key)] = struct{}{}
+ if mp.GetMatchingSeriesCount(filter) == 0 {
+ continue
}
-
- // Search for the next tag key.
- // The last char in kb.B must be tagSeparatorChar.
- // Just increment it in order to jump to the next tag key.
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixDateTagToMetricIDs)
- kb.B = encoding.MarshalUint64(kb.B, date)
- if len(key) > 0 && key[0] == compositeTagKeyPrefix {
- // skip composite tag entries
- kb.B = append(kb.B, compositeTagKeyPrefix)
- } else {
- kb.B = marshalTagValue(kb.B, key)
+ labelName := mp.Tag.Key
+ if len(labelName) == 0 {
+ labelName = []byte("__name__")
}
- kb.B[len(kb.B)-1]++
- ts.Seek(kb.B)
- }
- if err := ts.Error(); err != nil {
- return fmt.Errorf("error during search for prefix %q: %w", prefix, err)
- }
- return nil
-}
-
-// SearchTagKeys returns all the tag keys.
-func (db *indexDB) SearchTagKeys(maxTagKeys int, deadline uint64) ([]string, error) {
- tks := make(map[string]struct{})
-
- is := db.getIndexSearch(deadline)
- err := is.searchTagKeys(tks, maxTagKeys)
- db.putIndexSearch(is)
- if err != nil {
- return nil, err
- }
-
- ok := db.doExtDB(func(extDB *indexDB) {
- is := extDB.getIndexSearch(deadline)
- err = is.searchTagKeys(tks, maxTagKeys)
- extDB.putIndexSearch(is)
- })
- if ok && err != nil {
- return nil, err
- }
-
- keys := make([]string, 0, len(tks))
- for key := range tks {
- // Do not skip empty keys, since they are converted to __name__
- keys = append(keys, key)
- }
- // Do not sort keys, since they must be sorted by vmselect.
- return keys, nil
-}
-
-func (is *indexSearch) searchTagKeys(tks map[string]struct{}, maxTagKeys int) error {
- ts := &is.ts
- kb := &is.kb
- mp := &is.mp
- mp.Reset()
- dmis := is.db.s.getDeletedMetricIDs()
- loopsPaceLimiter := 0
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixTagToMetricIDs)
- prefix := kb.B
- ts.Seek(prefix)
- for len(tks) < maxTagKeys && ts.NextItem() {
- if loopsPaceLimiter&paceLimiterFastIterationsMask == 0 {
- if err := checkSearchDeadlineAndPace(is.deadline); err != nil {
- return err
+ if isArtificialTagKey(labelName) || string(labelName) == string(prevLabelName) {
+ // Search for the next tag key.
+ // The last char in kb.B must be tagSeparatorChar.
+ // Just increment it in order to jump to the next tag key.
+ kb.B = is.marshalCommonPrefixForDate(kb.B[:0], date)
+ if len(labelName) > 0 && labelName[0] == compositeTagKeyPrefix {
+ // skip composite tag entries
+ kb.B = append(kb.B, compositeTagKeyPrefix)
+ } else {
+ kb.B = marshalTagValue(kb.B, labelName)
}
- }
- loopsPaceLimiter++
- item := ts.Item
- if !bytes.HasPrefix(item, prefix) {
- break
- }
- if err := mp.Init(item, nsPrefixTagToMetricIDs); err != nil {
- return err
- }
- if mp.IsDeletedTag(dmis) {
+ kb.B[len(kb.B)-1]++
+ ts.Seek(kb.B)
continue
}
- key := mp.Tag.Key
- if !isArtificialTagKey(key) {
- tks[string(key)] = struct{}{}
- }
-
- // Search for the next tag key.
- // The last char in kb.B must be tagSeparatorChar.
- // Just increment it in order to jump to the next tag key.
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixTagToMetricIDs)
- if len(key) > 0 && key[0] == compositeTagKeyPrefix {
- // skip composite tag entries
- kb.B = append(kb.B, compositeTagKeyPrefix)
- } else {
- kb.B = marshalTagValue(kb.B, key)
- }
- kb.B[len(kb.B)-1]++
- ts.Seek(kb.B)
+ lns[string(labelName)] = struct{}{}
+ prevLabelName = append(prevLabelName[:0], labelName...)
}
if err := ts.Error(); err != nil {
return fmt.Errorf("error during search for prefix %q: %w", prefix, err)
@@ -904,53 +868,71 @@ func (is *indexSearch) searchTagKeys(tks map[string]struct{}, maxTagKeys int) er
return nil
}
-// SearchTagValuesOnTimeRange returns all the tag values for the given tagKey on tr.
-func (db *indexDB) SearchTagValuesOnTimeRange(tagKey []byte, tr TimeRange, maxTagValues int, deadline uint64) ([]string, error) {
- tvs := make(map[string]struct{})
+// SearchLabelValuesWithFiltersOnTimeRange returns label values for the given labelName, tfss and tr.
+func (db *indexDB) SearchLabelValuesWithFiltersOnTimeRange(qt *querytracer.Tracer, labelName string, tfss []*TagFilters, tr TimeRange,
+ maxLabelValues, maxMetrics int, deadline uint64) ([]string, error) {
+ qt = qt.NewChild("search for label values: labelName=%q, filters=%s, timeRange=%s, maxLabelValues=%d, maxMetrics=%d", labelName, tfss, &tr, maxLabelValues, maxMetrics)
+ defer qt.Done()
+ lvs := make(map[string]struct{})
+ qtChild := qt.NewChild("search for label values in the current indexdb")
is := db.getIndexSearch(deadline)
- err := is.searchTagValuesOnTimeRange(tvs, tagKey, tr, maxTagValues)
+ err := is.searchLabelValuesWithFiltersOnTimeRange(qtChild, lvs, labelName, tfss, tr, maxLabelValues, maxMetrics)
db.putIndexSearch(is)
+ qtChild.Donef("found %d label values", len(lvs))
if err != nil {
return nil, err
}
ok := db.doExtDB(func(extDB *indexDB) {
+ qtChild := qt.NewChild("search for label values in the previous indexdb")
+ lvsLen := len(lvs)
is := extDB.getIndexSearch(deadline)
- err = is.searchTagValuesOnTimeRange(tvs, tagKey, tr, maxTagValues)
+ err = is.searchLabelValuesWithFiltersOnTimeRange(qtChild, lvs, labelName, tfss, tr, maxLabelValues, maxMetrics)
extDB.putIndexSearch(is)
+ qtChild.Donef("found %d additional label values", len(lvs)-lvsLen)
})
if ok && err != nil {
return nil, err
}
- tagValues := make([]string, 0, len(tvs))
- for tv := range tvs {
- if len(tv) == 0 {
+ labelValues := make([]string, 0, len(lvs))
+ for labelValue := range lvs {
+ if len(labelValue) == 0 {
// Skip empty values, since they have no any meaning.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/600
continue
}
- tagValues = append(tagValues, tv)
+ labelValues = append(labelValues, labelValue)
}
- // Do not sort tagValues, since they must be sorted by vmselect.
- return tagValues, nil
+ // Do not sort labelValues, since they must be sorted by vmselect.
+ qt.Printf("found %d label values in the current and the previous indexdb", len(labelValues))
+ return labelValues, nil
}
-func (is *indexSearch) searchTagValuesOnTimeRange(tvs map[string]struct{}, tagKey []byte, tr TimeRange, maxTagValues int) error {
+func (is *indexSearch) searchLabelValuesWithFiltersOnTimeRange(qt *querytracer.Tracer, lvs map[string]struct{}, labelName string, tfss []*TagFilters,
+ tr TimeRange, maxLabelValues, maxMetrics int) error {
minDate := uint64(tr.MinTimestamp) / msecPerDay
- maxDate := uint64(tr.MaxTimestamp) / msecPerDay
- if minDate > maxDate || maxDate-minDate > maxDaysForPerDaySearch {
- return is.searchTagValues(tvs, tagKey, maxTagValues)
+ maxDate := uint64(tr.MaxTimestamp-1) / msecPerDay
+ if maxDate == 0 || minDate > maxDate || maxDate-minDate > maxDaysForPerDaySearch {
+ qtChild := qt.NewChild("search for label values in global index: labelName=%q, filters=%s", labelName, tfss)
+ err := is.searchLabelValuesWithFiltersOnDate(qtChild, lvs, labelName, tfss, 0, maxLabelValues, maxMetrics)
+ qtChild.Done()
+ return err
}
var mu sync.Mutex
wg := getWaitGroup()
var errGlobal error
+ qt = qt.NewChild("parallel search for label values: labelName=%q, filters=%s, timeRange=%s", labelName, tfss, &tr)
for date := minDate; date <= maxDate; date++ {
wg.Add(1)
+		qtChild := qt.NewChild("search for label values: filters=%s, date=%d", tfss, date)
go func(date uint64) {
- defer wg.Done()
- tvsLocal := make(map[string]struct{})
+ defer func() {
+ qtChild.Done()
+ wg.Done()
+ }()
+ lvsLocal := make(map[string]struct{})
isLocal := is.db.getIndexSearch(is.deadline)
- err := isLocal.searchTagValuesOnDate(tvsLocal, tagKey, date, maxTagValues)
+ err := isLocal.searchLabelValuesWithFiltersOnDate(qtChild, lvsLocal, labelName, tfss, date, maxLabelValues, maxMetrics)
is.db.putIndexSearch(isLocal)
mu.Lock()
defer mu.Unlock()
@@ -961,117 +943,50 @@ func (is *indexSearch) searchTagValuesOnTimeRange(tvs map[string]struct{}, tagKe
errGlobal = err
return
}
- if len(tvs) >= maxTagValues {
+ if len(lvs) >= maxLabelValues {
return
}
- for v := range tvsLocal {
- tvs[v] = struct{}{}
+ for v := range lvsLocal {
+ lvs[v] = struct{}{}
}
}(date)
}
wg.Wait()
putWaitGroup(wg)
+ qt.Done()
return errGlobal
}
-func (is *indexSearch) searchTagValuesOnDate(tvs map[string]struct{}, tagKey []byte, date uint64, maxTagValues int) error {
- ts := &is.ts
- kb := &is.kb
- mp := &is.mp
- mp.Reset()
- dmis := is.db.s.getDeletedMetricIDs()
- loopsPaceLimiter := 0
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixDateTagToMetricIDs)
- kb.B = encoding.MarshalUint64(kb.B, date)
- kb.B = marshalTagValue(kb.B, tagKey)
- prefix := kb.B
- ts.Seek(prefix)
- for len(tvs) < maxTagValues && ts.NextItem() {
- if loopsPaceLimiter&paceLimiterFastIterationsMask == 0 {
- if err := checkSearchDeadlineAndPace(is.deadline); err != nil {
- return err
- }
- }
- loopsPaceLimiter++
- item := ts.Item
- if !bytes.HasPrefix(item, prefix) {
- break
- }
- if err := mp.Init(item, nsPrefixDateTagToMetricIDs); err != nil {
- return err
- }
- if mp.IsDeletedTag(dmis) {
- continue
- }
- if string(mp.Tag.Key) != string(tagKey) {
- break
- }
- tvs[string(mp.Tag.Value)] = struct{}{}
- if mp.MetricIDsLen() < maxMetricIDsPerRow/2 {
- // There is no need in searching for the next tag value,
- // since it is likely it is located in the next row,
- // because the current row contains incomplete metricIDs set.
- continue
- }
- // Search for the next tag value.
- // The last char in kb.B must be tagSeparatorChar.
- // Just increment it in order to jump to the next tag value.
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixDateTagToMetricIDs)
- kb.B = encoding.MarshalUint64(kb.B, date)
- kb.B = marshalTagValue(kb.B, mp.Tag.Key)
- kb.B = marshalTagValue(kb.B, mp.Tag.Value)
- kb.B[len(kb.B)-1]++
- ts.Seek(kb.B)
- }
- if err := ts.Error(); err != nil {
- return fmt.Errorf("error when searching for tag name prefix %q: %w", prefix, err)
- }
- return nil
-}
-
-// SearchTagValues returns all the tag values for the given tagKey
-func (db *indexDB) SearchTagValues(tagKey []byte, maxTagValues int, deadline uint64) ([]string, error) {
- tvs := make(map[string]struct{})
- is := db.getIndexSearch(deadline)
- err := is.searchTagValues(tvs, tagKey, maxTagValues)
- db.putIndexSearch(is)
+func (is *indexSearch) searchLabelValuesWithFiltersOnDate(qt *querytracer.Tracer, lvs map[string]struct{}, labelName string, tfss []*TagFilters,
+ date uint64, maxLabelValues, maxMetrics int) error {
+ filter, err := is.searchMetricIDsWithFiltersOnDate(qt, tfss, date, maxMetrics)
if err != nil {
- return nil, err
+ return err
}
- ok := db.doExtDB(func(extDB *indexDB) {
- is := extDB.getIndexSearch(deadline)
- err = is.searchTagValues(tvs, tagKey, maxTagValues)
- extDB.putIndexSearch(is)
- })
- if ok && err != nil {
- return nil, err
+ if filter != nil && filter.Len() == 0 {
+		qt.Printf("found zero label values for filters=%s", tfss)
+ return nil
}
-
- tagValues := make([]string, 0, len(tvs))
- for tv := range tvs {
- if len(tv) == 0 {
- // Skip empty values, since they have no any meaning.
- // See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/600
- continue
- }
- tagValues = append(tagValues, tv)
+ if labelName == "__name__" {
+		// The __name__ label is encoded as an empty string in indexdb.
+ labelName = ""
}
- // Do not sort tagValues, since they must be sorted by vmselect.
- return tagValues, nil
-}
-
-func (is *indexSearch) searchTagValues(tvs map[string]struct{}, tagKey []byte, maxTagValues int) error {
+ labelNameBytes := bytesutil.ToUnsafeBytes(labelName)
+ var prevLabelValue []byte
ts := &is.ts
kb := &is.kb
mp := &is.mp
- mp.Reset()
dmis := is.db.s.getDeletedMetricIDs()
loopsPaceLimiter := 0
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixTagToMetricIDs)
- kb.B = marshalTagValue(kb.B, tagKey)
+ nsPrefixExpected := byte(nsPrefixDateTagToMetricIDs)
+ if date == 0 {
+ nsPrefixExpected = nsPrefixTagToMetricIDs
+ }
+ kb.B = is.marshalCommonPrefixForDate(kb.B[:0], date)
+ kb.B = marshalTagValue(kb.B, labelNameBytes)
prefix := kb.B
ts.Seek(prefix)
- for len(tvs) < maxTagValues && ts.NextItem() {
+ for len(lvs) < maxLabelValues && ts.NextItem() {
if loopsPaceLimiter&paceLimiterFastIterationsMask == 0 {
if err := checkSearchDeadlineAndPace(is.deadline); err != nil {
return err
@@ -1082,30 +997,29 @@ func (is *indexSearch) searchTagValues(tvs map[string]struct{}, tagKey []byte, m
if !bytes.HasPrefix(item, prefix) {
break
}
- if err := mp.Init(item, nsPrefixTagToMetricIDs); err != nil {
+ if err := mp.Init(item, nsPrefixExpected); err != nil {
return err
}
if mp.IsDeletedTag(dmis) {
continue
}
- if string(mp.Tag.Key) != string(tagKey) {
- break
- }
- tvs[string(mp.Tag.Value)] = struct{}{}
- if mp.MetricIDsLen() < maxMetricIDsPerRow/2 {
- // There is no need in searching for the next tag value,
- // since it is likely it is located in the next row,
- // because the current row contains incomplete metricIDs set.
+ if mp.GetMatchingSeriesCount(filter) == 0 {
continue
}
- // Search for the next tag value.
- // The last char in kb.B must be tagSeparatorChar.
- // Just increment it in order to jump to the next tag value.
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixTagToMetricIDs)
- kb.B = marshalTagValue(kb.B, mp.Tag.Key)
- kb.B = marshalTagValue(kb.B, mp.Tag.Value)
- kb.B[len(kb.B)-1]++
- ts.Seek(kb.B)
+ labelValue := mp.Tag.Value
+ if string(labelValue) == string(prevLabelValue) {
+ // Search for the next tag value.
+ // The last char in kb.B must be tagSeparatorChar.
+ // Just increment it in order to jump to the next tag value.
+ kb.B = is.marshalCommonPrefixForDate(kb.B[:0], date)
+ kb.B = marshalTagValue(kb.B, labelNameBytes)
+ kb.B = marshalTagValue(kb.B, labelValue)
+ kb.B[len(kb.B)-1]++
+ ts.Seek(kb.B)
+ continue
+ }
+ lvs[string(labelValue)] = struct{}{}
+ prevLabelValue = append(prevLabelValue[:0], labelValue...)
}
if err := ts.Error(); err != nil {
return fmt.Errorf("error when searching for tag name prefix %q: %w", prefix, err)
@@ -1153,7 +1067,7 @@ func (db *indexDB) SearchTagValueSuffixes(tr TimeRange, tagKey, tagValuePrefix [
func (is *indexSearch) searchTagValueSuffixesForTimeRange(tvss map[string]struct{}, tr TimeRange, tagKey, tagValuePrefix []byte, delimiter byte, maxTagValueSuffixes int) error {
minDate := uint64(tr.MinTimestamp) / msecPerDay
- maxDate := uint64(tr.MaxTimestamp) / msecPerDay
+ maxDate := uint64(tr.MaxTimestamp-1) / msecPerDay
if minDate > maxDate || maxDate-minDate > maxDaysForPerDaySearch {
return is.searchTagValueSuffixesAll(tvss, tagKey, tagValuePrefix, delimiter, maxTagValueSuffixes)
}
@@ -1219,7 +1133,6 @@ func (is *indexSearch) searchTagValueSuffixesForPrefix(tvss map[string]struct{},
kb := &is.kb
ts := &is.ts
mp := &is.mp
- mp.Reset()
dmis := is.db.s.getDeletedMetricIDs()
loopsPaceLimiter := 0
ts.Seek(prefix)
@@ -1333,9 +1246,11 @@ func (is *indexSearch) getSeriesCount() (uint64, error) {
}
// GetTSDBStatusWithFiltersForDate returns topN entries for tsdb status for the given tfss and the given date.
-func (db *indexDB) GetTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint64, topN, maxMetrics int, deadline uint64) (*TSDBStatus, error) {
+func (db *indexDB) GetTSDBStatusWithFiltersForDate(qt *querytracer.Tracer, tfss []*TagFilters, date uint64, topN, maxMetrics int, deadline uint64) (*TSDBStatus, error) {
+ qtChild := qt.NewChild("collect tsdb stats in the current indexdb")
is := db.getIndexSearch(deadline)
- status, err := is.getTSDBStatusWithFiltersForDate(tfss, date, topN, maxMetrics)
+ status, err := is.getTSDBStatusWithFiltersForDate(qtChild, tfss, date, topN, maxMetrics)
+ qtChild.Done()
db.putIndexSearch(is)
if err != nil {
return nil, err
@@ -1344,8 +1259,10 @@ func (db *indexDB) GetTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint
return status, nil
}
ok := db.doExtDB(func(extDB *indexDB) {
+ qtChild := qt.NewChild("collect tsdb stats in the previous indexdb")
is := extDB.getIndexSearch(deadline)
- status, err = is.getTSDBStatusWithFiltersForDate(tfss, date, topN, maxMetrics)
+ status, err = is.getTSDBStatusWithFiltersForDate(qtChild, tfss, date, topN, maxMetrics)
+ qtChild.Done()
extDB.putIndexSearch(is)
})
if ok && err != nil {
@@ -1355,24 +1272,15 @@ func (db *indexDB) GetTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint
}
// getTSDBStatusWithFiltersForDate returns topN entries for tsdb status for the given tfss and the given date.
-func (is *indexSearch) getTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint64, topN, maxMetrics int) (*TSDBStatus, error) {
- var filter *uint64set.Set
- if len(tfss) > 0 {
- tr := TimeRange{
- MinTimestamp: int64(date) * msecPerDay,
- MaxTimestamp: int64(date+1)*msecPerDay - 1,
- }
- metricIDs, err := is.searchMetricIDsInternal(tfss, tr, maxMetrics)
- if err != nil {
- return nil, err
- }
- if metricIDs.Len() == 0 {
- // Nothing found.
- return &TSDBStatus{}, nil
- }
- filter = metricIDs
+func (is *indexSearch) getTSDBStatusWithFiltersForDate(qt *querytracer.Tracer, tfss []*TagFilters, date uint64, topN, maxMetrics int) (*TSDBStatus, error) {
+ filter, err := is.searchMetricIDsWithFiltersOnDate(qt, tfss, date, maxMetrics)
+ if err != nil {
+ return nil, err
+ }
+ if filter != nil && filter.Len() == 0 {
+		qt.Printf("no matching series for filters=%s", tfss)
+ return &TSDBStatus{}, nil
}
-
ts := &is.ts
kb := &is.kb
mp := &is.mp
@@ -1385,8 +1293,11 @@ func (is *indexSearch) getTSDBStatusWithFiltersForDate(tfss []*TagFilters, date
nameEqualBytes := []byte("__name__=")
loopsPaceLimiter := 0
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixDateTagToMetricIDs)
- kb.B = encoding.MarshalUint64(kb.B, date)
+ nsPrefixExpected := byte(nsPrefixDateTagToMetricIDs)
+ if date == 0 {
+ nsPrefixExpected = nsPrefixTagToMetricIDs
+ }
+ kb.B = is.marshalCommonPrefixForDate(kb.B[:0], date)
prefix := kb.B
ts.Seek(prefix)
for ts.NextItem() {
@@ -1400,28 +1311,15 @@ func (is *indexSearch) getTSDBStatusWithFiltersForDate(tfss []*TagFilters, date
if !bytes.HasPrefix(item, prefix) {
break
}
- matchingSeriesCount := 0
- if filter != nil {
- if err := mp.Init(item, nsPrefixDateTagToMetricIDs); err != nil {
- return nil, err
- }
- mp.ParseMetricIDs()
- for _, metricID := range mp.MetricIDs {
- if filter.Has(metricID) {
- matchingSeriesCount++
- }
- }
- if matchingSeriesCount == 0 {
- // Skip rows without matching metricIDs.
- continue
- }
+ if err := mp.Init(item, nsPrefixExpected); err != nil {
+ return nil, err
}
- tail := item[len(prefix):]
- var err error
- tail, tmp, err = unmarshalTagValue(tmp[:0], tail)
- if err != nil {
- return nil, fmt.Errorf("cannot unmarshal tag key from line %q: %w", item, err)
+ matchingSeriesCount := mp.GetMatchingSeriesCount(filter)
+ if matchingSeriesCount == 0 {
+ // Skip rows without matching metricIDs.
+ continue
}
+ tmp = append(tmp[:0], mp.Tag.Key...)
tagKey := tmp
if isArtificialTagKey(tagKey) {
// Skip artificially created tag keys.
@@ -1440,17 +1338,8 @@ func (is *indexSearch) getTSDBStatusWithFiltersForDate(tfss []*TagFilters, date
tmp = tagKey
}
tmp = append(tmp, '=')
- tail, tmp, err = unmarshalTagValue(tmp, tail)
- if err != nil {
- return nil, fmt.Errorf("cannot unmarshal tag value from line %q: %w", item, err)
- }
+ tmp = append(tmp, mp.Tag.Value...)
tagKeyValue := tmp
- if filter == nil {
- if err := mp.InitOnlyTail(item, tail); err != nil {
- return nil, err
- }
- matchingSeriesCount = mp.MetricIDsLen()
- }
if string(tagKey) == "__name__" {
totalSeries += uint64(matchingSeriesCount)
}
@@ -1748,43 +1637,49 @@ func (db *indexDB) searchTSIDs(qt *querytracer.Tracer, tfss []*TagFilters, tr Ti
tfss = convertToCompositeTagFilterss(tfss)
}
+ qtChild := qt.NewChild("search for tsids in the current indexdb")
+
tfKeyBuf := tagFiltersKeyBufPool.Get()
defer tagFiltersKeyBufPool.Put(tfKeyBuf)
tfKeyBuf.B = marshalTagFiltersKey(tfKeyBuf.B[:0], tfss, tr, true)
- tsids, ok := db.getFromTagFiltersCache(tfKeyBuf.B)
+ tsids, ok := db.getFromTagFiltersCache(qtChild, tfKeyBuf.B)
if ok {
// Fast path - tsids found in the cache
- qt.Printf("found %d matching series ids in the cache; they occupy %d bytes of memory", len(tsids), memorySizeForTSIDs(tsids))
+ qtChild.Done()
return tsids, nil
}
// Slow path - search for tsids in the db and extDB.
is := db.getIndexSearch(deadline)
- localTSIDs, err := is.searchTSIDs(qt, tfss, tr, maxMetrics)
+ localTSIDs, err := is.searchTSIDs(qtChild, tfss, tr, maxMetrics)
db.putIndexSearch(is)
if err != nil {
return nil, err
}
+ qtChild.Done()
var extTSIDs []TSID
if db.doExtDB(func(extDB *indexDB) {
+ qtChild := qt.NewChild("search for tsids in the previous indexdb")
+ defer qtChild.Done()
+
tfKeyExtBuf := tagFiltersKeyBufPool.Get()
defer tagFiltersKeyBufPool.Put(tfKeyExtBuf)
// Data in extDB cannot be changed, so use unversioned keys for tag cache.
tfKeyExtBuf.B = marshalTagFiltersKey(tfKeyExtBuf.B[:0], tfss, tr, false)
- tsids, ok := extDB.getFromTagFiltersCache(tfKeyExtBuf.B)
+ tsids, ok := extDB.getFromTagFiltersCache(qtChild, tfKeyExtBuf.B)
if ok {
extTSIDs = tsids
return
}
is := extDB.getIndexSearch(deadline)
- extTSIDs, err = is.searchTSIDs(qt, tfss, tr, maxMetrics)
+ extTSIDs, err = is.searchTSIDs(qtChild, tfss, tr, maxMetrics)
extDB.putIndexSearch(is)
sort.Slice(extTSIDs, func(i, j int) bool { return extTSIDs[i].Less(&extTSIDs[j]) })
- extDB.putToTagFiltersCache(extTSIDs, tfKeyExtBuf.B)
+ extDB.putToTagFiltersCache(qtChild, extTSIDs, tfKeyExtBuf.B)
}) {
if err != nil {
return nil, err
@@ -1793,23 +1688,19 @@ func (db *indexDB) searchTSIDs(qt *querytracer.Tracer, tfss []*TagFilters, tr Ti
// Merge localTSIDs with extTSIDs.
tsids = mergeTSIDs(localTSIDs, extTSIDs)
+ qt.Printf("merge %d tsids from the current indexdb with %d tsids from the previous indexdb; result: %d tsids", len(localTSIDs), len(extTSIDs), len(tsids))
// Sort the found tsids, since they must be passed to TSID search
// in the sorted order.
sort.Slice(tsids, func(i, j int) bool { return tsids[i].Less(&tsids[j]) })
- qt.Printf("sort the found %d series ids", len(tsids))
+ qt.Printf("sort %d tsids", len(tsids))
// Store TSIDs in the cache.
- db.putToTagFiltersCache(tsids, tfKeyBuf.B)
- qt.Printf("store the found %d series ids in cache; they occupy %d bytes of memory", len(tsids), memorySizeForTSIDs(tsids))
+ db.putToTagFiltersCache(qt, tsids, tfKeyBuf.B)
return tsids, err
}
-func memorySizeForTSIDs(tsids []TSID) int {
- return len(tsids) * int(unsafe.Sizeof(TSID{}))
-}
-
var tagFiltersKeyBufPool bytesutil.ByteBufferPool
func (is *indexSearch) getTSIDByMetricName(dst *TSID, metricName []byte) error {
@@ -1988,7 +1879,7 @@ func (is *indexSearch) searchTSIDs(qt *querytracer.Tracer, tfss []*TagFilters, t
i++
}
tsids = tsids[:i]
- qt.Printf("load %d series ids from %d metric ids", len(tsids), len(metricIDs))
+ qt.Printf("load %d tsids from %d metric ids", len(tsids), len(metricIDs))
// Do not sort the found tsids, since they will be sorted later.
return tsids, nil
@@ -2020,9 +1911,13 @@ func (is *indexSearch) getTSIDByMetricID(dst *TSID, metricID uint64) error {
// updateMetricIDsByMetricNameMatch matches metricName values for the given srcMetricIDs against tfs
// and adds matching metrics to metricIDs.
-func (is *indexSearch) updateMetricIDsByMetricNameMatch(metricIDs, srcMetricIDs *uint64set.Set, tfs []*tagFilter) error {
+func (is *indexSearch) updateMetricIDsByMetricNameMatch(qt *querytracer.Tracer, metricIDs, srcMetricIDs *uint64set.Set, tfs []*tagFilter) error {
+ qt = qt.NewChild("filter out %d metric ids with filters=%s", srcMetricIDs.Len(), tfs)
+ defer qt.Done()
+
// sort srcMetricIDs in order to speed up Seek below.
sortedMetricIDs := srcMetricIDs.AppendTo(nil)
+ qt.Printf("sort %d metric ids", len(sortedMetricIDs))
kb := &is.kb
kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixTagToMetricIDs)
@@ -2062,6 +1957,7 @@ func (is *indexSearch) updateMetricIDsByMetricNameMatch(metricIDs, srcMetricIDs
}
metricIDs.Add(metricID)
}
+ qt.Printf("apply filters %s; resulting metric ids: %d", tfs, metricIDs.Len())
return nil
}
@@ -2221,12 +2117,30 @@ func matchTagFilters(mn *MetricName, tfs []*tagFilter, kb *bytesutil.ByteBuffer)
return true, nil
}
-func (is *indexSearch) searchMetricIDs(qt *querytracer.Tracer, tfss []*TagFilters, tr TimeRange, maxMetrics int) ([]uint64, error) {
- metricIDs, err := is.searchMetricIDsInternal(tfss, tr, maxMetrics)
+func (is *indexSearch) searchMetricIDsWithFiltersOnDate(qt *querytracer.Tracer, tfss []*TagFilters, date uint64, maxMetrics int) (*uint64set.Set, error) {
+ if len(tfss) == 0 {
+ return nil, nil
+ }
+ tr := TimeRange{
+ MinTimestamp: int64(date) * msecPerDay,
+ MaxTimestamp: int64(date+1)*msecPerDay - 1,
+ }
+ if date == 0 {
+ // Search for metricIDs on the whole time range.
+ tr.MaxTimestamp = timestampFromTime(time.Now())
+ }
+ metricIDs, err := is.searchMetricIDsInternal(qt, tfss, tr, maxMetrics)
+ if err != nil {
+ return nil, err
+ }
+ return metricIDs, nil
+}
+
+func (is *indexSearch) searchMetricIDs(qt *querytracer.Tracer, tfss []*TagFilters, tr TimeRange, maxMetrics int) ([]uint64, error) {
+ metricIDs, err := is.searchMetricIDsInternal(qt, tfss, tr, maxMetrics)
if err != nil {
return nil, err
}
- qt.Printf("found %d matching metric ids", metricIDs.Len())
if metricIDs.Len() == 0 {
// Nothing found
return nil, nil
@@ -2244,14 +2158,16 @@ func (is *indexSearch) searchMetricIDs(qt *querytracer.Tracer, tfss []*TagFilter
metricIDsFiltered = append(metricIDsFiltered, metricID)
}
}
- qt.Printf("%d metric ids after removing deleted metric ids", len(metricIDsFiltered))
+		qt.Printf("%d metric ids left after removing deleted metric ids", len(metricIDsFiltered))
sortedMetricIDs = metricIDsFiltered
}
return sortedMetricIDs, nil
}
-func (is *indexSearch) searchMetricIDsInternal(tfss []*TagFilters, tr TimeRange, maxMetrics int) (*uint64set.Set, error) {
+func (is *indexSearch) searchMetricIDsInternal(qt *querytracer.Tracer, tfss []*TagFilters, tr TimeRange, maxMetrics int) (*uint64set.Set, error) {
+ qt = qt.NewChild("search for metric ids: filters=%s, timeRange=%s, maxMetrics=%d", tfss, &tr, maxMetrics)
+ defer qt.Done()
metricIDs := &uint64set.Set{}
for _, tfs := range tfss {
if len(tfs.tfs) == 0 {
@@ -2261,7 +2177,11 @@ func (is *indexSearch) searchMetricIDsInternal(tfss []*TagFilters, tr TimeRange,
logger.Panicf(`BUG: cannot add {__name__!=""} filter: %s`, err)
}
}
- if err := is.updateMetricIDsForTagFilters(metricIDs, tfs, tr, maxMetrics+1); err != nil {
+ qtChild := qt.NewChild("update metric ids: filters=%s, timeRange=%s", tfs, &tr)
+ prevMetricIDsLen := metricIDs.Len()
+ err := is.updateMetricIDsForTagFilters(qtChild, metricIDs, tfs, tr, maxMetrics+1)
+ qtChild.Donef("updated %d metric ids", metricIDs.Len()-prevMetricIDsLen)
+ if err != nil {
return nil, err
}
if metricIDs.Len() > maxMetrics {
@@ -2272,8 +2192,8 @@ func (is *indexSearch) searchMetricIDsInternal(tfss []*TagFilters, tr TimeRange,
return metricIDs, nil
}
-func (is *indexSearch) updateMetricIDsForTagFilters(metricIDs *uint64set.Set, tfs *TagFilters, tr TimeRange, maxMetrics int) error {
- err := is.tryUpdatingMetricIDsForDateRange(metricIDs, tfs, tr, maxMetrics)
+func (is *indexSearch) updateMetricIDsForTagFilters(qt *querytracer.Tracer, metricIDs *uint64set.Set, tfs *TagFilters, tr TimeRange, maxMetrics int) error {
+ err := is.tryUpdatingMetricIDsForDateRange(qt, metricIDs, tfs, tr, maxMetrics)
if err == nil {
// Fast path: found metricIDs by date range.
return nil
@@ -2283,8 +2203,9 @@ func (is *indexSearch) updateMetricIDsForTagFilters(metricIDs *uint64set.Set, tf
}
// Slow path - fall back to search in the global inverted index.
+ qt.Printf("cannot find metric ids in per-day index; fall back to global index")
atomic.AddUint64(&is.db.globalSearchCalls, 1)
- m, err := is.getMetricIDsForDateAndFilters(0, tfs, maxMetrics)
+ m, err := is.getMetricIDsForDateAndFilters(qt, 0, tfs, maxMetrics)
if err != nil {
if errors.Is(err, errFallbackToGlobalSearch) {
return fmt.Errorf("the number of matching timeseries exceeds %d; either narrow down the search "+
@@ -2296,7 +2217,7 @@ func (is *indexSearch) updateMetricIDsForTagFilters(metricIDs *uint64set.Set, tf
return nil
}
-func (is *indexSearch) getMetricIDsForTagFilter(tf *tagFilter, maxMetrics int, maxLoopsCount int64) (*uint64set.Set, int64, error) {
+func (is *indexSearch) getMetricIDsForTagFilter(qt *querytracer.Tracer, tf *tagFilter, maxMetrics int, maxLoopsCount int64) (*uint64set.Set, int64, error) {
if tf.isNegative {
logger.Panicf("BUG: isNegative must be false")
}
@@ -2304,6 +2225,7 @@ func (is *indexSearch) getMetricIDsForTagFilter(tf *tagFilter, maxMetrics int, m
if len(tf.orSuffixes) > 0 {
// Fast path for orSuffixes - seek for rows for each value from orSuffixes.
loopsCount, err := is.updateMetricIDsForOrSuffixes(tf, metricIDs, maxMetrics, maxLoopsCount)
+ qt.Printf("found %d metric ids for filter={%s} using exact search; spent %d loops", metricIDs.Len(), tf, loopsCount)
if err != nil {
return nil, loopsCount, fmt.Errorf("error when searching for metricIDs for tagFilter in fast path: %w; tagFilter=%s", err, tf)
}
@@ -2312,6 +2234,7 @@ func (is *indexSearch) getMetricIDsForTagFilter(tf *tagFilter, maxMetrics int, m
// Slow path - scan for all the rows with the given prefix.
loopsCount, err := is.getMetricIDsForTagFilterSlow(tf, metricIDs.Add, maxLoopsCount)
+ qt.Printf("found %d metric ids for filter={%s} using prefix search; spent %d loops", metricIDs.Len(), tf, loopsCount)
if err != nil {
return nil, loopsCount, fmt.Errorf("error when searching for metricIDs for tagFilter in slow path: %w; tagFilter=%s", err, tf)
}
@@ -2329,7 +2252,6 @@ func (is *indexSearch) getMetricIDsForTagFilterSlow(tf *tagFilter, f func(metric
ts := &is.ts
kb := &is.kb
mp := &is.mp
- mp.Reset()
var prevMatchingSuffix []byte
var prevMatch bool
var loopsCount int64
@@ -2437,7 +2359,6 @@ func (is *indexSearch) updateMetricIDsForOrSuffixes(tf *tagFilter, metricIDs *ui
func (is *indexSearch) updateMetricIDsForOrSuffix(prefix []byte, metricIDs *uint64set.Set, maxMetrics int, maxLoopsCount int64) (int64, error) {
ts := &is.ts
mp := &is.mp
- mp.Reset()
var loopsCount int64
loopsPaceLimiter := 0
ts.Seek(prefix)
@@ -2472,17 +2393,17 @@ var errFallbackToGlobalSearch = errors.New("fall back from per-day index search
const maxDaysForPerDaySearch = 40
-func (is *indexSearch) tryUpdatingMetricIDsForDateRange(metricIDs *uint64set.Set, tfs *TagFilters, tr TimeRange, maxMetrics int) error {
+func (is *indexSearch) tryUpdatingMetricIDsForDateRange(qt *querytracer.Tracer, metricIDs *uint64set.Set, tfs *TagFilters, tr TimeRange, maxMetrics int) error {
atomic.AddUint64(&is.db.dateRangeSearchCalls, 1)
minDate := uint64(tr.MinTimestamp) / msecPerDay
- maxDate := uint64(tr.MaxTimestamp) / msecPerDay
+ maxDate := uint64(tr.MaxTimestamp-1) / msecPerDay
if minDate > maxDate || maxDate-minDate > maxDaysForPerDaySearch {
		// Too many dates need to be covered. Give up, since it may be slow.
return errFallbackToGlobalSearch
}
if minDate == maxDate {
// Fast path - query only a single date.
- m, err := is.getMetricIDsForDateAndFilters(minDate, tfs, maxMetrics)
+ m, err := is.getMetricIDsForDateAndFilters(qt, minDate, tfs, maxMetrics)
if err != nil {
return err
}
@@ -2492,15 +2413,21 @@ func (is *indexSearch) tryUpdatingMetricIDsForDateRange(metricIDs *uint64set.Set
}
// Slower path - search for metricIDs for each day in parallel.
+ qt = qt.NewChild("parallel search for metric ids in per-day index: filters=%s, dayRange=[%d..%d]", tfs, minDate, maxDate)
+ defer qt.Done()
wg := getWaitGroup()
var errGlobal error
var mu sync.Mutex // protects metricIDs + errGlobal vars from concurrent access below
for minDate <= maxDate {
+ qtChild := qt.NewChild("parallel thread for date=%d", minDate)
wg.Add(1)
go func(date uint64) {
- defer wg.Done()
+ defer func() {
+ qtChild.Done()
+ wg.Done()
+ }()
isLocal := is.db.getIndexSearch(is.deadline)
- m, err := isLocal.getMetricIDsForDateAndFilters(date, tfs, maxMetrics)
+ m, err := isLocal.getMetricIDsForDateAndFilters(qtChild, date, tfs, maxMetrics)
is.db.putIndexSearch(isLocal)
mu.Lock()
defer mu.Unlock()
@@ -2527,7 +2454,9 @@ func (is *indexSearch) tryUpdatingMetricIDsForDateRange(metricIDs *uint64set.Set
return nil
}
-func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilters, maxMetrics int) (*uint64set.Set, error) {
+func (is *indexSearch) getMetricIDsForDateAndFilters(qt *querytracer.Tracer, date uint64, tfs *TagFilters, maxMetrics int) (*uint64set.Set, error) {
+ qt = qt.NewChild("search for metric ids on a particular day: filters=%s, date=%d, maxMetrics=%d", tfs, date, maxMetrics)
+ defer qt.Done()
// Sort tfs by loopsCount needed for performing each filter.
	// These stats are usually collected from previous queries.
	// This way we limit the amount of work below by applying the fastest filters first.
@@ -2579,7 +2508,8 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
}
}
- // Populate metricIDs for the first non-negative filter with the cost smaller than maxLoopsCount.
+ // Populate metricIDs for the first non-negative filter with the smallest cost.
+ qtChild := qt.NewChild("search for the first non-negative filter with the smallest cost")
var metricIDs *uint64set.Set
tfwsRemaining := tfws[:0]
maxDateMetrics := intMax
@@ -2593,10 +2523,11 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
continue
}
maxLoopsCount := getFirstPositiveLoopsCount(tfws[i+1:])
- m, loopsCount, err := is.getMetricIDsForDateTagFilter(tf, date, tfs.commonPrefix, maxDateMetrics, maxLoopsCount)
+ m, loopsCount, err := is.getMetricIDsForDateTagFilter(qtChild, tf, date, tfs.commonPrefix, maxDateMetrics, maxLoopsCount)
if err != nil {
if errors.Is(err, errTooManyLoops) {
// The tf took too many loops compared to the next filter. Postpone applying this filter.
+ qtChild.Printf("the filter={%s} took more than %d loops; postpone it", tf, maxLoopsCount)
storeLoopsCount(&tfw, 2*loopsCount)
tfwsRemaining = append(tfwsRemaining, tfw)
continue
@@ -2607,6 +2538,7 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
}
if m.Len() >= maxDateMetrics {
// Too many time series found by a single tag filter. Move the filter to the end of list.
+ qtChild.Printf("the filter={%s} matches at least %d series; postpone it", tf, maxDateMetrics)
storeLoopsCount(&tfw, int64Max-1)
tfwsRemaining = append(tfwsRemaining, tfw)
continue
@@ -2614,14 +2546,17 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
storeLoopsCount(&tfw, loopsCount)
metricIDs = m
tfwsRemaining = append(tfwsRemaining, tfws[i+1:]...)
+		qtChild.Printf("the filter={%s} matches fewer than %d series (actually %d series); use it", tf, maxDateMetrics, metricIDs.Len())
break
}
+ qtChild.Done()
tfws = tfwsRemaining
if metricIDs == nil {
// All the filters in tfs are negative or match too many time series.
// Populate all the metricIDs for the given (date),
// so later they can be filtered out with negative filters.
+ qt.Printf("all the filters are negative or match more than %d time series; fall back to searching for all the metric ids", maxDateMetrics)
m, err := is.getMetricIDsForDate(date, maxDateMetrics)
if err != nil {
return nil, fmt.Errorf("cannot obtain all the metricIDs: %w", err)
@@ -2631,6 +2566,7 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
return nil, errFallbackToGlobalSearch
}
metricIDs = m
+ qt.Printf("found %d metric ids", metricIDs.Len())
}
sort.Slice(tfws, func(i, j int) bool {
@@ -2660,6 +2596,7 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
	// when the initial tag filters significantly reduce the number of found metricIDs,
// so the remaining filters could be performed via much faster metricName matching instead
// of slow selecting of matching metricIDs.
+ qtChild = qt.NewChild("intersect the remaining %d filters with the found %d metric ids", len(tfws), metricIDs.Len())
var tfsPostponed []*tagFilter
for i, tfw := range tfws {
tf := tfw.tf
@@ -2680,10 +2617,11 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
if maxLoopsCount == int64Max {
maxLoopsCount = int64(metricIDsLen) * loopsCountPerMetricNameMatch
}
- m, filterLoopsCount, err := is.getMetricIDsForDateTagFilter(tf, date, tfs.commonPrefix, intMax, maxLoopsCount)
+ m, filterLoopsCount, err := is.getMetricIDsForDateTagFilter(qtChild, tf, date, tfs.commonPrefix, intMax, maxLoopsCount)
if err != nil {
if errors.Is(err, errTooManyLoops) {
// Postpone tf, since it took more loops than the next filter may need.
+ qtChild.Printf("postpone filter={%s}, since it took more than %d loops", tf, maxLoopsCount)
storeFilterLoopsCount(&tfw, 2*filterLoopsCount)
tfsPostponed = append(tfsPostponed, tf)
continue
@@ -2695,22 +2633,28 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
storeFilterLoopsCount(&tfw, filterLoopsCount)
if tf.isNegative || tf.isEmptyMatch {
metricIDs.Subtract(m)
+ qtChild.Printf("subtract %d metric ids from the found %d metric ids for filter={%s}; resulting metric ids: %d", m.Len(), metricIDsLen, tf, metricIDs.Len())
} else {
metricIDs.Intersect(m)
+ qtChild.Printf("intersect %d metric ids with the found %d metric ids for filter={%s}; resulting metric ids: %d", m.Len(), metricIDsLen, tf, metricIDs.Len())
}
}
+ qtChild.Done()
if metricIDs.Len() == 0 {
		// There is no need to apply tfsPostponed, since the result is empty.
+ qt.Printf("found zero metric ids")
return nil, nil
}
if len(tfsPostponed) > 0 {
// Apply the postponed filters via metricName match.
+		qt.Printf("apply postponed filters=%s to %d metric ids", tfsPostponed, metricIDs.Len())
var m uint64set.Set
- if err := is.updateMetricIDsByMetricNameMatch(&m, metricIDs, tfsPostponed); err != nil {
+ if err := is.updateMetricIDsByMetricNameMatch(qt, &m, metricIDs, tfsPostponed); err != nil {
return nil, err
}
return &m, nil
}
+ qt.Printf("found %d metric ids", metricIDs.Len())
return metricIDs, nil
}
@@ -2819,6 +2763,27 @@ func marshalCompositeTagKey(dst, name, key []byte) []byte {
return dst
}
+func unmarshalCompositeTagKey(src []byte) ([]byte, []byte, error) {
+ if len(src) == 0 {
+ return nil, nil, fmt.Errorf("composite tag key cannot be empty")
+ }
+ if src[0] != compositeTagKeyPrefix {
+ return nil, nil, fmt.Errorf("missing composite tag key prefix in %q", src)
+ }
+ src = src[1:]
+ tail, n, err := encoding.UnmarshalVarUint64(src)
+ if err != nil {
+ return nil, nil, fmt.Errorf("cannot unmarshal metric name length from composite tag key: %w", err)
+ }
+ src = tail
+ if uint64(len(src)) < n {
+ return nil, nil, fmt.Errorf("missing metric name with length %d in composite tag key %q", n, src)
+ }
+ name := src[:n]
+ key := src[n:]
+ return name, key, nil
+}
+
func reverseBytes(dst, src []byte) []byte {
for i := len(src) - 1; i >= 0; i-- {
dst = append(dst, src[i])
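The new `unmarshalCompositeTagKey` implies the on-disk layout written by `marshalCompositeTagKey`: a prefix byte, a varint-encoded metric name length, the metric name, then the tag key. A standalone round-trip sketch using the standard library (the prefix value and helper signatures here are assumptions for illustration, not the package's real constants):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const compositeTagKeyPrefix = byte(1) // assumed value; the real constant lives in the package

// marshalCompositeTagKey mirrors the layout implied by unmarshalCompositeTagKey:
// prefix byte, varint metric name length, metric name, then the tag key.
func marshalCompositeTagKey(dst, name, key []byte) []byte {
	dst = append(dst, compositeTagKeyPrefix)
	dst = binary.AppendUvarint(dst, uint64(len(name)))
	dst = append(dst, name...)
	dst = append(dst, key...)
	return dst
}

func unmarshalCompositeTagKey(src []byte) (name, key []byte, err error) {
	if len(src) == 0 || src[0] != compositeTagKeyPrefix {
		return nil, nil, fmt.Errorf("missing composite tag key prefix in %q", src)
	}
	src = src[1:]
	n, size := binary.Uvarint(src)
	if size <= 0 {
		return nil, nil, fmt.Errorf("cannot unmarshal metric name length from composite tag key")
	}
	src = src[size:]
	if uint64(len(src)) < n {
		return nil, nil, fmt.Errorf("missing metric name with length %d in composite tag key %q", n, src)
	}
	return src[:n], src[n:], nil
}

func main() {
	b := marshalCompositeTagKey(nil, []byte("http_requests_total"), []byte("path"))
	name, key, err := unmarshalCompositeTagKey(b)
	fmt.Println(string(name), string(key), err)
}
```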
@@ -2844,26 +2809,22 @@ func (is *indexSearch) hasDateMetricID(date, metricID uint64) (bool, error) {
return true, nil
}
-func (is *indexSearch) getMetricIDsForDateTagFilter(tf *tagFilter, date uint64, commonPrefix []byte, maxMetrics int, maxLoopsCount int64) (*uint64set.Set, int64, error) {
+func (is *indexSearch) getMetricIDsForDateTagFilter(qt *querytracer.Tracer, tf *tagFilter, date uint64, commonPrefix []byte,
+ maxMetrics int, maxLoopsCount int64) (*uint64set.Set, int64, error) {
+ qt = qt.NewChild("get metric ids for filter and date: filter={%s}, date=%d, maxMetrics=%d, maxLoopsCount=%d", tf, date, maxMetrics, maxLoopsCount)
+ defer qt.Done()
if !bytes.HasPrefix(tf.prefix, commonPrefix) {
logger.Panicf("BUG: unexpected tf.prefix %q; must start with commonPrefix %q", tf.prefix, commonPrefix)
}
kb := kbPool.Get()
defer kbPool.Put(kb)
- if date != 0 {
- // Use per-date search.
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixDateTagToMetricIDs)
- kb.B = encoding.MarshalUint64(kb.B, date)
- } else {
- // Use global search if date isn't set.
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixTagToMetricIDs)
- }
+ kb.B = is.marshalCommonPrefixForDate(kb.B[:0], date)
prefix := kb.B
kb.B = append(kb.B, tf.prefix[len(commonPrefix):]...)
tfNew := *tf
tfNew.isNegative = false // isNegative for the original tf is handled by the caller.
tfNew.prefix = kb.B
- metricIDs, loopsCount, err := is.getMetricIDsForTagFilter(&tfNew, maxMetrics, maxLoopsCount)
+ metricIDs, loopsCount, err := is.getMetricIDsForTagFilter(qt, &tfNew, maxMetrics, maxLoopsCount)
if err != nil {
return nil, loopsCount, err
}
@@ -2875,16 +2836,19 @@ func (is *indexSearch) getMetricIDsForDateTagFilter(tf *tagFilter, date uint64,
// This fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1601
// See also https://github.com/VictoriaMetrics/VictoriaMetrics/issues/395
maxLoopsCount -= loopsCount
- tfNew = tagFilter{}
- if err := tfNew.Init(prefix, tf.key, []byte(".+"), false, true); err != nil {
+ var tfGross tagFilter
+ if err := tfGross.Init(prefix, tf.key, []byte(".+"), false, true); err != nil {
logger.Panicf(`BUG: cannot init tag filter: {%q=~".+"}: %s`, tf.key, err)
}
- m, lc, err := is.getMetricIDsForTagFilter(&tfNew, maxMetrics, maxLoopsCount)
+ m, lc, err := is.getMetricIDsForTagFilter(qt, &tfGross, maxMetrics, maxLoopsCount)
loopsCount += lc
if err != nil {
return nil, loopsCount, err
}
+ mLen := m.Len()
m.Subtract(metricIDs)
+ qt.Printf("subtract %d metric ids for filter={%s} from %d metric ids for filter={%s}", metricIDs.Len(), &tfNew, mLen, &tfGross)
+ qt.Printf("found %d metric ids, spent %d loops", m.Len(), loopsCount)
return m, loopsCount, nil
}
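The tail of `getMetricIDsForDateTagFilter` resolves negative and empty-match filters by subtraction: take the "gross" set of all series that have the label at all (the `.+` filter) and subtract the series matching the positive form of the filter. A minimal sketch of that set algebra with plain Go maps (the real code uses `uint64set.Set`):

```go
package main

import "fmt"

type set map[uint64]struct{}

// subtract returns a \ b.
func subtract(a, b set) set {
	out := set{}
	for id := range a {
		if _, ok := b[id]; !ok {
			out[id] = struct{}{}
		}
	}
	return out
}

func main() {
	// All series that carry the label at all (the ".+" "gross" filter) ...
	withLabel := set{1: {}, 2: {}, 3: {}, 4: {}}
	// ... and the series matching the positive form of the filter.
	matched := set{2: {}, 4: {}}
	// Series matching the negative filter = withLabel minus matched.
	result := subtract(withLabel, matched)
	fmt.Println(len(result)) // 2
}
```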
@@ -2924,14 +2888,7 @@ func (is *indexSearch) getMetricIDsForDate(date uint64, maxMetrics int) (*uint64
// Extract all the metricIDs from (date, __name__=value)->metricIDs entries.
kb := kbPool.Get()
defer kbPool.Put(kb)
- if date != 0 {
- // Use per-date search
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixDateTagToMetricIDs)
- kb.B = encoding.MarshalUint64(kb.B, date)
- } else {
- // Use global search
- kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixTagToMetricIDs)
- }
+ kb.B = is.marshalCommonPrefixForDate(kb.B[:0], date)
kb.B = marshalTagValue(kb.B, nil)
var metricIDs uint64set.Set
if err := is.updateMetricIDsForPrefix(kb.B, &metricIDs, maxMetrics); err != nil {
@@ -3005,6 +2962,16 @@ func (is *indexSearch) marshalCommonPrefix(dst []byte, nsPrefix byte) []byte {
return marshalCommonPrefix(dst, nsPrefix)
}
+func (is *indexSearch) marshalCommonPrefixForDate(dst []byte, date uint64) []byte {
+ if date == 0 {
+ // Global index
+ return is.marshalCommonPrefix(dst, nsPrefixTagToMetricIDs)
+ }
+ // Per-day index
+ dst = is.marshalCommonPrefix(dst, nsPrefixDateTagToMetricIDs)
+ return encoding.MarshalUint64(dst, date)
+}
+
func unmarshalCommonPrefix(src []byte) ([]byte, byte, error) {
if len(src) < commonPrefixLen {
return nil, 0, fmt.Errorf("cannot unmarshal common prefix from %d bytes; need at least %d bytes; data=%X", len(src), commonPrefixLen, src)
@@ -3027,6 +2994,9 @@ type tagToMetricIDsRowParser struct {
// MetricIDs contains parsed MetricIDs after ParseMetricIDs call
MetricIDs []uint64
+ // metricIDsParsed is set to true after ParseMetricIDs call
+ metricIDsParsed bool
+
// Tag contains parsed tag after Init call
Tag Tag
@@ -3038,6 +3008,7 @@ func (mp *tagToMetricIDsRowParser) Reset() {
mp.NSPrefix = 0
mp.Date = 0
mp.MetricIDs = mp.MetricIDs[:0]
+ mp.metricIDsParsed = false
mp.Tag.Reset()
mp.tail = nil
}
@@ -3091,6 +3062,7 @@ func (mp *tagToMetricIDsRowParser) InitOnlyTail(b, tail []byte) error {
return fmt.Errorf("invalid tail length in the tag->metricIDs row; got %d bytes; must be multiple of 8 bytes", len(tail))
}
mp.tail = tail
+ mp.metricIDsParsed = false
return nil
}
@@ -3111,6 +3083,9 @@ func (mp *tagToMetricIDsRowParser) MetricIDsLen() int {
// ParseMetricIDs parses MetricIDs from mp.tail into mp.MetricIDs.
func (mp *tagToMetricIDsRowParser) ParseMetricIDs() {
+ if mp.metricIDsParsed {
+ return
+ }
tail := mp.tail
mp.MetricIDs = mp.MetricIDs[:0]
n := len(tail) / 8
@@ -3130,6 +3105,24 @@ func (mp *tagToMetricIDsRowParser) ParseMetricIDs() {
metricIDs[i] = metricID
tail = tail[8:]
}
+ mp.metricIDsParsed = true
+}
+
+// GetMatchingSeriesCount returns the number of series in mp matching metricIDs from the given filter.
+//
+// If the filter is nil, then all series in mp are taken into account.
+func (mp *tagToMetricIDsRowParser) GetMatchingSeriesCount(filter *uint64set.Set) int {
+ if filter == nil {
+ return mp.MetricIDsLen()
+ }
+ mp.ParseMetricIDs()
+ n := 0
+ for _, metricID := range mp.MetricIDs {
+ if filter.Has(metricID) {
+ n++
+ }
+ }
+ return n
}
// IsDeletedTag verifies whether the tag from mp is deleted according to dmis.
diff --git a/lib/storage/index_db_test.go b/lib/storage/index_db_test.go
index a943c174f..8d29c0793 100644
--- a/lib/storage/index_db_test.go
+++ b/lib/storage/index_db_test.go
@@ -703,9 +703,9 @@ func testIndexDBGetOrCreateTSIDByName(db *indexDB, metricGroups int) ([]MetricNa
}
func testIndexDBCheckTSIDByName(db *indexDB, mns []MetricName, tsids []TSID, isConcurrent bool) error {
- hasValue := func(tvs []string, v []byte) bool {
- for _, tv := range tvs {
- if string(v) == tv {
+ hasValue := func(lvs []string, v []byte) bool {
+ for _, lv := range lvs {
+ if string(v) == lv {
return true
}
}
@@ -715,7 +715,7 @@ func testIndexDBCheckTSIDByName(db *indexDB, mns []MetricName, tsids []TSID, isC
timeseriesCounters := make(map[uint64]bool)
var tsidCopy TSID
var metricNameCopy []byte
- allKeys := make(map[string]bool)
+ allLabelNames := make(map[string]bool)
for i := range mns {
mn := &mns[i]
tsid := &tsids[i]
@@ -757,38 +757,38 @@ func testIndexDBCheckTSIDByName(db *indexDB, mns []MetricName, tsids []TSID, isC
return fmt.Errorf("expecting empty buf when searching for non-existent metricID; got %X", buf)
}
- // Test SearchTagValues
- tvs, err := db.SearchTagValues(nil, 1e5, noDeadline)
+ // Test SearchLabelValuesWithFiltersOnTimeRange
+ lvs, err := db.SearchLabelValuesWithFiltersOnTimeRange(nil, "__name__", nil, TimeRange{}, 1e5, 1e9, noDeadline)
if err != nil {
- return fmt.Errorf("error in SearchTagValues for __name__: %w", err)
+ return fmt.Errorf("error in SearchLabelValuesWithFiltersOnTimeRange(labelName=%q): %w", "__name__", err)
}
- if !hasValue(tvs, mn.MetricGroup) {
- return fmt.Errorf("SearchTagValues couldn't find %q; found %q", mn.MetricGroup, tvs)
+ if !hasValue(lvs, mn.MetricGroup) {
+ return fmt.Errorf("SearchLabelValuesWithFiltersOnTimeRange(labelName=%q): couldn't find %q; found %q", "__name__", mn.MetricGroup, lvs)
}
for i := range mn.Tags {
tag := &mn.Tags[i]
- tvs, err := db.SearchTagValues(tag.Key, 1e5, noDeadline)
+ lvs, err := db.SearchLabelValuesWithFiltersOnTimeRange(nil, string(tag.Key), nil, TimeRange{}, 1e5, 1e9, noDeadline)
if err != nil {
- return fmt.Errorf("error in SearchTagValues for __name__: %w", err)
+ return fmt.Errorf("error in SearchLabelValuesWithFiltersOnTimeRange(labelName=%q): %w", tag.Key, err)
}
- if !hasValue(tvs, tag.Value) {
- return fmt.Errorf("SearchTagValues couldn't find %q=%q; found %q", tag.Key, tag.Value, tvs)
+ if !hasValue(lvs, tag.Value) {
+ return fmt.Errorf("SearchLabelValuesWithFiltersOnTimeRange(labelName=%q): couldn't find %q; found %q", tag.Key, tag.Value, lvs)
}
- allKeys[string(tag.Key)] = true
+ allLabelNames[string(tag.Key)] = true
}
}
- // Test SearchTagKeys
- tks, err := db.SearchTagKeys(1e5, noDeadline)
+ // Test SearchLabelNamesWithFiltersOnTimeRange (empty filters, global time range)
+ lns, err := db.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, TimeRange{}, 1e5, 1e9, noDeadline)
if err != nil {
- return fmt.Errorf("error in SearchTagKeys: %w", err)
+ return fmt.Errorf("error in SearchLabelNamesWithFiltersOnTimeRange(empty filter, global time range): %w", err)
}
- if !hasValue(tks, nil) {
- return fmt.Errorf("cannot find __name__ in %q", tks)
+ if !hasValue(lns, []byte("__name__")) {
+ return fmt.Errorf("cannot find __name__ in %q", lns)
}
- for key := range allKeys {
- if !hasValue(tks, []byte(key)) {
- return fmt.Errorf("cannot find %q in %q", key, tks)
+ for labelName := range allLabelNames {
+ if !hasValue(lns, []byte(labelName)) {
+ return fmt.Errorf("cannot find %q in %q", labelName, lns)
}
}
@@ -1633,13 +1633,13 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
var metricNameBuf []byte
perDayMetricIDs := make(map[uint64]*uint64set.Set)
var allMetricIDs uint64set.Set
- tagKeys := []string{
- "", "constant", "day", "uniqueid",
+ labelNames := []string{
+ "__name__", "constant", "day", "uniqueid",
}
- tagValues := []string{
+ labelValues := []string{
"testMetric",
}
- sort.Strings(tagKeys)
+ sort.Strings(labelNames)
for day := 0; day < days; day++ {
var tsids []TSID
var mns []MetricName
@@ -1709,30 +1709,28 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
t.Fatalf("unexpected metricIDs found;\ngot\n%d\nwant\n%d", metricIDs.AppendTo(nil), allMetricIDs.AppendTo(nil))
}
- // Check SearchTagKeysOnTimeRange.
- tks, err := db.SearchTagKeysOnTimeRange(TimeRange{
+ // Check SearchLabelNamesWithFiltersOnTimeRange with the specified time range.
+ tr := TimeRange{
MinTimestamp: int64(now) - msecPerDay,
MaxTimestamp: int64(now),
- }, 10000, noDeadline)
- if err != nil {
- t.Fatalf("unexpected error in SearchTagKeysOnTimeRange: %s", err)
}
- sort.Strings(tks)
- if !reflect.DeepEqual(tks, tagKeys) {
- t.Fatalf("unexpected tagKeys; got\n%s\nwant\n%s", tks, tagKeys)
+ lns, err := db.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, tr, 10000, 1e9, noDeadline)
+ if err != nil {
+ t.Fatalf("unexpected error in SearchLabelNamesWithFiltersOnTimeRange(timeRange=%s): %s", &tr, err)
+ }
+ sort.Strings(lns)
+ if !reflect.DeepEqual(lns, labelNames) {
+ t.Fatalf("unexpected labelNames; got\n%s\nwant\n%s", lns, labelNames)
}
- // Check SearchTagValuesOnTimeRange.
- tvs, err := db.SearchTagValuesOnTimeRange([]byte(""), TimeRange{
- MinTimestamp: int64(now) - msecPerDay,
- MaxTimestamp: int64(now),
- }, 10000, noDeadline)
+ // Check SearchLabelValuesWithFiltersOnTimeRange with the specified time range.
+ lvs, err := db.SearchLabelValuesWithFiltersOnTimeRange(nil, "", nil, tr, 10000, 1e9, noDeadline)
if err != nil {
- t.Fatalf("unexpected error in SearchTagValuesOnTimeRange: %s", err)
+ t.Fatalf("unexpected error in SearchLabelValuesWithFiltersOnTimeRange(timeRange=%s): %s", &tr, err)
}
- sort.Strings(tvs)
- if !reflect.DeepEqual(tvs, tagValues) {
- t.Fatalf("unexpected tagValues; got\n%s\nwant\n%s", tvs, tagValues)
+ sort.Strings(lvs)
+ if !reflect.DeepEqual(lvs, labelValues) {
+ t.Fatalf("unexpected labelValues; got\n%s\nwant\n%s", lvs, labelValues)
}
// Create a filter that will match series that occur across multiple days
@@ -1743,7 +1741,7 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
// Perform a search within a day.
// This should return the metrics for the day
- tr := TimeRange{
+ tr = TimeRange{
MinTimestamp: int64(now - 2*msecPerHour - 1),
MaxTimestamp: int64(now),
}
@@ -1755,6 +1753,46 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
t.Fatalf("expected %d time series for current day, got %d time series", metricsPerDay, len(matchedTSIDs))
}
+ // Check SearchLabelNamesWithFiltersOnTimeRange with the specified filter.
+ lns, err = db.SearchLabelNamesWithFiltersOnTimeRange(nil, []*TagFilters{tfs}, TimeRange{}, 10000, 1e9, noDeadline)
+ if err != nil {
+ t.Fatalf("unexpected error in SearchLabelNamesWithFiltersOnTimeRange(filters=%s): %s", tfs, err)
+ }
+ sort.Strings(lns)
+ if !reflect.DeepEqual(lns, labelNames) {
+ t.Fatalf("unexpected labelNames; got\n%s\nwant\n%s", lns, labelNames)
+ }
+
+ // Check SearchLabelNamesWithFiltersOnTimeRange with the specified filter and time range.
+ lns, err = db.SearchLabelNamesWithFiltersOnTimeRange(nil, []*TagFilters{tfs}, tr, 10000, 1e9, noDeadline)
+ if err != nil {
+ t.Fatalf("unexpected error in SearchLabelNamesWithFiltersOnTimeRange(filters=%s, timeRange=%s): %s", tfs, &tr, err)
+ }
+ sort.Strings(lns)
+ if !reflect.DeepEqual(lns, labelNames) {
+ t.Fatalf("unexpected labelNames; got\n%s\nwant\n%s", lns, labelNames)
+ }
+
+ // Check SearchLabelValuesWithFiltersOnTimeRange with the specified filter.
+ lvs, err = db.SearchLabelValuesWithFiltersOnTimeRange(nil, "", []*TagFilters{tfs}, TimeRange{}, 10000, 1e9, noDeadline)
+ if err != nil {
+ t.Fatalf("unexpected error in SearchLabelValuesWithFiltersOnTimeRange(filters=%s): %s", tfs, err)
+ }
+ sort.Strings(lvs)
+ if !reflect.DeepEqual(lvs, labelValues) {
+ t.Fatalf("unexpected labelValues; got\n%s\nwant\n%s", lvs, labelValues)
+ }
+
+ // Check SearchLabelValuesWithFiltersOnTimeRange with the specified filter and time range.
+ lvs, err = db.SearchLabelValuesWithFiltersOnTimeRange(nil, "", []*TagFilters{tfs}, tr, 10000, 1e9, noDeadline)
+ if err != nil {
+ t.Fatalf("unexpected error in SearchLabelValuesWithFiltersOnTimeRange(filters=%s, timeRange=%s): %s", tfs, &tr, err)
+ }
+ sort.Strings(lvs)
+ if !reflect.DeepEqual(lvs, labelValues) {
+ t.Fatalf("unexpected labelValues; got\n%s\nwant\n%s", lvs, labelValues)
+ }
+
// Perform a search across all the days, should match all metrics
tr = TimeRange{
MinTimestamp: int64(now - msecPerDay*days),
@@ -1770,7 +1808,7 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
}
// Check GetTSDBStatusWithFiltersForDate with nil filters.
- status, err := db.GetTSDBStatusWithFiltersForDate(nil, baseDate, 5, 1e6, noDeadline)
+ status, err := db.GetTSDBStatusWithFiltersForDate(nil, nil, baseDate, 5, 1e6, noDeadline)
if err != nil {
t.Fatalf("error in GetTSDBStatusWithFiltersForDate with nil filters: %s", err)
}
@@ -1841,12 +1879,12 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
t.Fatalf("unexpected TotalLabelValuePairs; got %d; want %d", status.TotalLabelValuePairs, expectedLabelValuePairs)
}
- // Check GetTSDBStatusWithFiltersForDate
+ // Check GetTSDBStatusWithFiltersForDate with non-nil filter, which matches all the series
tfs = NewTagFilters()
if err := tfs.Add([]byte("day"), []byte("0"), false, false); err != nil {
t.Fatalf("cannot add filter: %s", err)
}
- status, err = db.GetTSDBStatusWithFiltersForDate([]*TagFilters{tfs}, baseDate, 5, 1e6, noDeadline)
+ status, err = db.GetTSDBStatusWithFiltersForDate(nil, []*TagFilters{tfs}, baseDate, 5, 1e6, noDeadline)
if err != nil {
t.Fatalf("error in GetTSDBStatusWithFiltersForDate: %s", err)
}
@@ -1870,12 +1908,39 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
if status.TotalLabelValuePairs != expectedLabelValuePairs {
t.Fatalf("unexpected TotalLabelValuePairs; got %d; want %d", status.TotalLabelValuePairs, expectedLabelValuePairs)
}
- // Check GetTSDBStatusWithFiltersForDate
+
+	// Check GetTSDBStatusWithFiltersForDate with nil filters, which matches all the series on the global time range
+ status, err = db.GetTSDBStatusWithFiltersForDate(nil, nil, 0, 5, 1e6, noDeadline)
+ if err != nil {
+ t.Fatalf("error in GetTSDBStatusWithFiltersForDate: %s", err)
+ }
+ if !status.hasEntries() {
+ t.Fatalf("expecting non-empty TSDB status")
+ }
+ expectedSeriesCountByMetricName = []TopHeapEntry{
+ {
+ Name: "testMetric",
+ Count: 5000,
+ },
+ }
+ if !reflect.DeepEqual(status.SeriesCountByMetricName, expectedSeriesCountByMetricName) {
+ t.Fatalf("unexpected SeriesCountByMetricName;\ngot\n%v\nwant\n%v", status.SeriesCountByMetricName, expectedSeriesCountByMetricName)
+ }
+ expectedTotalSeries = 5000
+ if status.TotalSeries != expectedTotalSeries {
+ t.Fatalf("unexpected TotalSeries; got %d; want %d", status.TotalSeries, expectedTotalSeries)
+ }
+ expectedLabelValuePairs = 20000
+ if status.TotalLabelValuePairs != expectedLabelValuePairs {
+ t.Fatalf("unexpected TotalLabelValuePairs; got %d; want %d", status.TotalLabelValuePairs, expectedLabelValuePairs)
+ }
+
+ // Check GetTSDBStatusWithFiltersForDate with non-nil filter, which matches only 3 series
tfs = NewTagFilters()
if err := tfs.Add([]byte("uniqueid"), []byte("0|1|3"), false, true); err != nil {
t.Fatalf("cannot add filter: %s", err)
}
- status, err = db.GetTSDBStatusWithFiltersForDate([]*TagFilters{tfs}, baseDate, 5, 1e6, noDeadline)
+ status, err = db.GetTSDBStatusWithFiltersForDate(nil, []*TagFilters{tfs}, baseDate, 5, 1e6, noDeadline)
if err != nil {
t.Fatalf("error in GetTSDBStatusWithFiltersForDate: %s", err)
}
@@ -1899,6 +1964,32 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
if status.TotalLabelValuePairs != expectedLabelValuePairs {
t.Fatalf("unexpected TotalLabelValuePairs; got %d; want %d", status.TotalLabelValuePairs, expectedLabelValuePairs)
}
+
+ // Check GetTSDBStatusWithFiltersForDate with non-nil filter on global time range, which matches only 15 series
+ status, err = db.GetTSDBStatusWithFiltersForDate(nil, []*TagFilters{tfs}, 0, 5, 1e6, noDeadline)
+ if err != nil {
+ t.Fatalf("error in GetTSDBStatusWithFiltersForDate: %s", err)
+ }
+ if !status.hasEntries() {
+ t.Fatalf("expecting non-empty TSDB status")
+ }
+ expectedSeriesCountByMetricName = []TopHeapEntry{
+ {
+ Name: "testMetric",
+ Count: 15,
+ },
+ }
+ if !reflect.DeepEqual(status.SeriesCountByMetricName, expectedSeriesCountByMetricName) {
+ t.Fatalf("unexpected SeriesCountByMetricName;\ngot\n%v\nwant\n%v", status.SeriesCountByMetricName, expectedSeriesCountByMetricName)
+ }
+ expectedTotalSeries = 15
+ if status.TotalSeries != expectedTotalSeries {
+ t.Fatalf("unexpected TotalSeries; got %d; want %d", status.TotalSeries, expectedTotalSeries)
+ }
+ expectedLabelValuePairs = 60
+ if status.TotalLabelValuePairs != expectedLabelValuePairs {
+ t.Fatalf("unexpected TotalLabelValuePairs; got %d; want %d", status.TotalLabelValuePairs, expectedLabelValuePairs)
+ }
}
func toTFPointers(tfs []tagFilter) []*tagFilter {
diff --git a/lib/storage/storage.go b/lib/storage/storage.go
index f968d0663..11d99b56a 100644
--- a/lib/storage/storage.go
+++ b/lib/storage/storage.go
@@ -1061,7 +1061,7 @@ func nextRetentionDuration(retentionMsecs int64) time.Duration {
// SearchMetricNames returns metric names matching the given tfss on the given tr.
func (s *Storage) SearchMetricNames(qt *querytracer.Tracer, tfss []*TagFilters, tr TimeRange, maxMetrics int, deadline uint64) ([]MetricName, error) {
- qt = qt.NewChild("search for matching metric names")
+ qt = qt.NewChild("search for matching metric names: filters=%s, timeRange=%s", tfss, &tr)
defer qt.Done()
tsids, err := s.searchTSIDs(qt, tfss, tr, maxMetrics, deadline)
if err != nil {
@@ -1104,7 +1104,7 @@ func (s *Storage) SearchMetricNames(qt *querytracer.Tracer, tfss []*TagFilters,
// searchTSIDs returns sorted TSIDs for the given tfss and the given tr.
func (s *Storage) searchTSIDs(qt *querytracer.Tracer, tfss []*TagFilters, tr TimeRange, maxMetrics int, deadline uint64) ([]TSID, error) {
- qt = qt.NewChild("search for matching series ids")
+ qt = qt.NewChild("search for matching tsids: filters=%s, timeRange=%s", tfss, &tr)
defer qt.Done()
// Do not cache tfss -> tsids here, since the caching is performed
// on idb level.
@@ -1154,7 +1154,7 @@ var (
//
// This should speed-up further searchMetricNameWithCache calls for metricIDs from tsids.
func (s *Storage) prefetchMetricNames(qt *querytracer.Tracer, tsids []TSID, deadline uint64) error {
- qt = qt.NewChild("prefetch metric names for %d series ids", len(tsids))
+ qt = qt.NewChild("prefetch metric names for %d tsids", len(tsids))
defer qt.Done()
if len(tsids) == 0 {
qt.Printf("nothing to prefetch")
@@ -1263,24 +1263,15 @@ func (s *Storage) DeleteMetrics(tfss []*TagFilters) (int, error) {
return deletedCount, nil
}
-// SearchTagKeysOnTimeRange searches for tag keys on tr.
-func (s *Storage) SearchTagKeysOnTimeRange(tr TimeRange, maxTagKeys int, deadline uint64) ([]string, error) {
- return s.idb().SearchTagKeysOnTimeRange(tr, maxTagKeys, deadline)
+// SearchLabelNamesWithFiltersOnTimeRange searches for label names matching the given tfss on tr.
+func (s *Storage) SearchLabelNamesWithFiltersOnTimeRange(qt *querytracer.Tracer, tfss []*TagFilters, tr TimeRange, maxLabelNames, maxMetrics int, deadline uint64) ([]string, error) {
+ return s.idb().SearchLabelNamesWithFiltersOnTimeRange(qt, tfss, tr, maxLabelNames, maxMetrics, deadline)
}
-// SearchTagKeys searches for tag keys
-func (s *Storage) SearchTagKeys(maxTagKeys int, deadline uint64) ([]string, error) {
- return s.idb().SearchTagKeys(maxTagKeys, deadline)
-}
-
-// SearchTagValuesOnTimeRange searches for tag values for the given tagKey on tr.
-func (s *Storage) SearchTagValuesOnTimeRange(tagKey []byte, tr TimeRange, maxTagValues int, deadline uint64) ([]string, error) {
- return s.idb().SearchTagValuesOnTimeRange(tagKey, tr, maxTagValues, deadline)
-}
-
-// SearchTagValues searches for tag values for the given tagKey
-func (s *Storage) SearchTagValues(tagKey []byte, maxTagValues int, deadline uint64) ([]string, error) {
- return s.idb().SearchTagValues(tagKey, maxTagValues, deadline)
+// SearchLabelValuesWithFiltersOnTimeRange searches for label values for the given labelName, filters and tr.
+func (s *Storage) SearchLabelValuesWithFiltersOnTimeRange(qt *querytracer.Tracer, labelName string, tfss []*TagFilters,
+ tr TimeRange, maxLabelValues, maxMetrics int, deadline uint64) ([]string, error) {
+ return s.idb().SearchLabelValuesWithFiltersOnTimeRange(qt, labelName, tfss, tr, maxLabelValues, maxMetrics, deadline)
}
// SearchTagValueSuffixes returns all the tag value suffixes for the given tagKey and tagValuePrefix on the given tr.
@@ -1468,39 +1459,6 @@ func getRegexpPartsForGraphiteQuery(q string) ([]string, string) {
}
}
-// SearchTagEntries returns a list of (tagName -> tagValues)
-func (s *Storage) SearchTagEntries(maxTagKeys, maxTagValues int, deadline uint64) ([]TagEntry, error) {
- idb := s.idb()
- keys, err := idb.SearchTagKeys(maxTagKeys, deadline)
- if err != nil {
- return nil, fmt.Errorf("cannot search tag keys: %w", err)
- }
-
- // Sort keys for faster seeks below
- sort.Strings(keys)
-
- tes := make([]TagEntry, len(keys))
- for i, key := range keys {
- values, err := idb.SearchTagValues([]byte(key), maxTagValues, deadline)
- if err != nil {
- return nil, fmt.Errorf("cannot search values for tag %q: %w", key, err)
- }
- te := &tes[i]
- te.Key = key
- te.Values = values
- }
- return tes, nil
-}
-
-// TagEntry contains (tagName -> tagValues) mapping
-type TagEntry struct {
- // Key is tagName
- Key string
-
- // Values contains all the values for Key.
- Values []string
-}
-
// GetSeriesCount returns the approximate number of unique time series.
//
// It includes the deleted series too and may count the same series
@@ -1510,8 +1468,8 @@ func (s *Storage) GetSeriesCount(deadline uint64) (uint64, error) {
}
// GetTSDBStatusWithFiltersForDate returns TSDB status data for /api/v1/status/tsdb with match[] filters.
-func (s *Storage) GetTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint64, topN, maxMetrics int, deadline uint64) (*TSDBStatus, error) {
- return s.idb().GetTSDBStatusWithFiltersForDate(tfss, date, topN, maxMetrics, deadline)
+func (s *Storage) GetTSDBStatusWithFiltersForDate(qt *querytracer.Tracer, tfss []*TagFilters, date uint64, topN, maxMetrics int, deadline uint64) (*TSDBStatus, error) {
+ return s.idb().GetTSDBStatusWithFiltersForDate(qt, tfss, date, topN, maxMetrics, deadline)
}
// MetricRow is a metric to insert into storage.
diff --git a/lib/storage/storage_test.go b/lib/storage/storage_test.go
index 8c350cde4..7460eca3a 100644
--- a/lib/storage/storage_test.go
+++ b/lib/storage/storage_test.go
@@ -502,13 +502,13 @@ func TestStorageDeleteMetrics(t *testing.T) {
t.Fatalf("cannot open storage: %s", err)
}
- // Verify no tag keys exist
- tks, err := s.SearchTagKeys(1e5, noDeadline)
+ // Verify no label names exist
+ lns, err := s.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, TimeRange{}, 1e5, 1e9, noDeadline)
if err != nil {
- t.Fatalf("error in SearchTagKeys at the start: %s", err)
+ t.Fatalf("error in SearchLabelNamesWithFiltersOnTimeRange() at the start: %s", err)
}
- if len(tks) != 0 {
- t.Fatalf("found non-empty tag keys at the start: %q", tks)
+ if len(lns) != 0 {
+ t.Fatalf("found non-empty tag keys at the start: %q", lns)
}
t.Run("serial", func(t *testing.T) {
@@ -554,12 +554,12 @@ func TestStorageDeleteMetrics(t *testing.T) {
})
// Verify no more tag keys exist
- tks, err = s.SearchTagKeys(1e5, noDeadline)
+ lns, err = s.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, TimeRange{}, 1e5, 1e9, noDeadline)
if err != nil {
- t.Fatalf("error in SearchTagKeys after the test: %s", err)
+ t.Fatalf("error in SearchLabelNamesWithFiltersOnTimeRange after the test: %s", err)
}
- if len(tks) != 0 {
- t.Fatalf("found non-empty tag keys after the test: %q", tks)
+ if len(lns) != 0 {
+ t.Fatalf("found non-empty tag keys after the test: %q", lns)
}
s.MustClose()
@@ -574,8 +574,8 @@ func testStorageDeleteMetrics(s *Storage, workerNum int) error {
workerTag := []byte(fmt.Sprintf("workerTag_%d", workerNum))
- tksAll := make(map[string]bool)
- tksAll[""] = true // __name__
+ lnsAll := make(map[string]bool)
+ lnsAll["__name__"] = true
for i := 0; i < metricsCount; i++ {
var mrs []MetricRow
var mn MetricName
@@ -587,7 +587,7 @@ func testStorageDeleteMetrics(s *Storage, workerNum int) error {
{workerTag, []byte("foobar")},
}
for i := range mn.Tags {
- tksAll[string(mn.Tags[i].Key)] = true
+ lnsAll[string(mn.Tags[i].Key)] = true
}
mn.MetricGroup = []byte(fmt.Sprintf("metric_%d_%d", i, workerNum))
metricNameRaw := mn.marshalRaw(nil)
@@ -610,21 +610,21 @@ func testStorageDeleteMetrics(s *Storage, workerNum int) error {
s.DebugFlush()
// Verify tag values exist
- tvs, err := s.SearchTagValues(workerTag, 1e5, noDeadline)
+ tvs, err := s.SearchLabelValuesWithFiltersOnTimeRange(nil, string(workerTag), nil, TimeRange{}, 1e5, 1e9, noDeadline)
if err != nil {
- return fmt.Errorf("error in SearchTagValues before metrics removal: %w", err)
+ return fmt.Errorf("error in SearchLabelValuesWithFiltersOnTimeRange before metrics removal: %w", err)
}
if len(tvs) == 0 {
return fmt.Errorf("unexpected empty number of tag values for workerTag")
}
// Verify tag keys exist
- tks, err := s.SearchTagKeys(1e5, noDeadline)
+ lns, err := s.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, TimeRange{}, 1e5, 1e9, noDeadline)
if err != nil {
- return fmt.Errorf("error in SearchTagKeys before metrics removal: %w", err)
+ return fmt.Errorf("error in SearchLabelNamesWithFiltersOnTimeRange before metrics removal: %w", err)
}
- if err := checkTagKeys(tks, tksAll); err != nil {
- return fmt.Errorf("unexpected tag keys before metrics removal: %w", err)
+ if err := checkLabelNames(lns, lnsAll); err != nil {
+ return fmt.Errorf("unexpected label names before metrics removal: %w", err)
}
var sr Search
@@ -683,9 +683,9 @@ func testStorageDeleteMetrics(s *Storage, workerNum int) error {
if n := metricBlocksCount(tfs); n != 0 {
return fmt.Errorf("expecting zero metric blocks after deleting all the metrics; got %d blocks", n)
}
- tvs, err = s.SearchTagValues(workerTag, 1e5, noDeadline)
+ tvs, err = s.SearchLabelValuesWithFiltersOnTimeRange(nil, string(workerTag), nil, TimeRange{}, 1e5, 1e9, noDeadline)
if err != nil {
- return fmt.Errorf("error in SearchTagValues after all the metrics are removed: %w", err)
+ return fmt.Errorf("error in SearchLabelValuesWithFiltersOnTimeRange after all the metrics are removed: %w", err)
}
if len(tvs) != 0 {
return fmt.Errorf("found non-empty tag values for %q after metrics removal: %q", workerTag, tvs)
@@ -694,21 +694,21 @@ func testStorageDeleteMetrics(s *Storage, workerNum int) error {
return nil
}
-func checkTagKeys(tks []string, tksExpected map[string]bool) error {
- if len(tks) < len(tksExpected) {
- return fmt.Errorf("unexpected number of tag keys found; got %d; want at least %d; tks=%q, tksExpected=%v", len(tks), len(tksExpected), tks, tksExpected)
+func checkLabelNames(lns []string, lnsExpected map[string]bool) error {
+ if len(lns) < len(lnsExpected) {
+ return fmt.Errorf("unexpected number of label names found; got %d; want at least %d; lns=%q, lnsExpected=%v", len(lns), len(lnsExpected), lns, lnsExpected)
}
- hasItem := func(k string, tks []string) bool {
- for _, kk := range tks {
- if k == kk {
+ hasItem := func(s string, lns []string) bool {
+ for _, labelName := range lns {
+ if s == labelName {
return true
}
}
return false
}
- for k := range tksExpected {
- if !hasItem(k, tks) {
- return fmt.Errorf("cannot find %q in tag keys %q", k, tks)
+ for labelName := range lnsExpected {
+ if !hasItem(labelName, lns) {
+ return fmt.Errorf("cannot find %q in label names %q", labelName, lns)
}
}
return nil
@@ -796,23 +796,23 @@ func testStorageRegisterMetricNames(s *Storage) error {
// Verify the storage contains the added metric names.
s.DebugFlush()
- // Verify that SearchTagKeys returns correct result.
- tksExpected := []string{
- "",
+ // Verify that SearchLabelNamesWithFiltersOnTimeRange returns correct result.
+ lnsExpected := []string{
+ "__name__",
"add_id",
"instance",
"job",
}
- tks, err := s.SearchTagKeys(100, noDeadline)
+ lns, err := s.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, TimeRange{}, 100, 1e9, noDeadline)
if err != nil {
- return fmt.Errorf("error in SearchTagKeys: %w", err)
+ return fmt.Errorf("error in SearchLabelNamesWithFiltersOnTimeRange: %w", err)
}
- sort.Strings(tks)
- if !reflect.DeepEqual(tks, tksExpected) {
- return fmt.Errorf("unexpected tag keys returned from SearchTagKeys;\ngot\n%q\nwant\n%q", tks, tksExpected)
+ sort.Strings(lns)
+ if !reflect.DeepEqual(lns, lnsExpected) {
+ return fmt.Errorf("unexpected label names returned from SearchLabelNamesWithFiltersOnTimeRange;\ngot\n%q\nwant\n%q", lns, lnsExpected)
}
- // Verify that SearchTagKeysOnTimeRange returns correct result.
+ // Verify that SearchLabelNamesWithFiltersOnTimeRange with the specified time range returns correct result.
now := timestampFromTime(time.Now())
start := now - msecPerDay
end := now + 60*1000
@@ -820,33 +820,33 @@ func testStorageRegisterMetricNames(s *Storage) error {
MinTimestamp: start,
MaxTimestamp: end,
}
- tks, err = s.SearchTagKeysOnTimeRange(tr, 100, noDeadline)
+ lns, err = s.SearchLabelNamesWithFiltersOnTimeRange(nil, nil, tr, 100, 1e9, noDeadline)
if err != nil {
- return fmt.Errorf("error in SearchTagKeysOnTimeRange: %w", err)
+ return fmt.Errorf("error in SearchLabelNamesWithFiltersOnTimeRange: %w", err)
}
- sort.Strings(tks)
- if !reflect.DeepEqual(tks, tksExpected) {
- return fmt.Errorf("unexpected tag keys returned from SearchTagKeysOnTimeRange;\ngot\n%q\nwant\n%q", tks, tksExpected)
+ sort.Strings(lns)
+ if !reflect.DeepEqual(lns, lnsExpected) {
+ return fmt.Errorf("unexpected label names returned from SearchLabelNamesWithFiltersOnTimeRange;\ngot\n%q\nwant\n%q", lns, lnsExpected)
}
- // Verify that SearchTagValues returns correct result.
- addIDs, err := s.SearchTagValues([]byte("add_id"), addsCount+100, noDeadline)
+ // Verify that SearchLabelValuesWithFiltersOnTimeRange returns correct result.
+ addIDs, err := s.SearchLabelValuesWithFiltersOnTimeRange(nil, "add_id", nil, TimeRange{}, addsCount+100, 1e9, noDeadline)
if err != nil {
- return fmt.Errorf("error in SearchTagValues: %w", err)
+ return fmt.Errorf("error in SearchLabelValuesWithFiltersOnTimeRange: %w", err)
}
sort.Strings(addIDs)
if !reflect.DeepEqual(addIDs, addIDsExpected) {
- return fmt.Errorf("unexpected tag values returned from SearchTagValues;\ngot\n%q\nwant\n%q", addIDs, addIDsExpected)
+ return fmt.Errorf("unexpected tag values returned from SearchLabelValuesWithFiltersOnTimeRange;\ngot\n%q\nwant\n%q", addIDs, addIDsExpected)
}
- // Verify that SearchTagValuesOnTimeRange returns correct result.
- addIDs, err = s.SearchTagValuesOnTimeRange([]byte("add_id"), tr, addsCount+100, noDeadline)
+ // Verify that SearchLabelValuesWithFiltersOnTimeRange with the specified time range returns correct result.
+ addIDs, err = s.SearchLabelValuesWithFiltersOnTimeRange(nil, "add_id", nil, tr, addsCount+100, 1e9, noDeadline)
if err != nil {
- return fmt.Errorf("error in SearchTagValuesOnTimeRange: %w", err)
+ return fmt.Errorf("error in SearchLabelValuesWithFiltersOnTimeRange: %w", err)
}
sort.Strings(addIDs)
if !reflect.DeepEqual(addIDs, addIDsExpected) {
- return fmt.Errorf("unexpected tag values returned from SearchTagValuesOnTimeRange;\ngot\n%q\nwant\n%q", addIDs, addIDsExpected)
+ return fmt.Errorf("unexpected tag values returned from SearchLabelValuesWithFiltersOnTimeRange;\ngot\n%q\nwant\n%q", addIDs, addIDsExpected)
}
// Verify that SearchMetricNames returns correct result.
diff --git a/lib/storage/tag_filters.go b/lib/storage/tag_filters.go
index c22a67f30..1a79fa5fb 100644
--- a/lib/storage/tag_filters.go
+++ b/lib/storage/tag_filters.go
@@ -211,16 +211,11 @@ func (tfs *TagFilters) addTagFilter() *tagFilter {
// String returns human-readable value for tfs.
func (tfs *TagFilters) String() string {
- if len(tfs.tfs) == 0 {
- return "{}"
+ a := make([]string, 0, len(tfs.tfs))
+ for _, tf := range tfs.tfs {
+ a = append(a, tf.String())
}
- var bb bytes.Buffer
- fmt.Fprintf(&bb, "{%s", tfs.tfs[0].String())
- for i := range tfs.tfs[1:] {
- fmt.Fprintf(&bb, ", %s", tfs.tfs[i+1].String())
- }
- fmt.Fprintf(&bb, "}")
- return bb.String()
+ return fmt.Sprintf("{%s}", strings.Join(a, ","))
}
// Reset resets the tf
@@ -305,6 +300,16 @@ func (tf *tagFilter) String() string {
} else if tf.isRegexp {
op = "=~"
}
+ if bytes.Equal(tf.key, graphiteReverseTagKey) {
+ return fmt.Sprintf("__graphite_reverse__%s%q", op, tf.value)
+ }
+ if tf.isComposite() {
+ metricName, key, err := unmarshalCompositeTagKey(tf.key)
+ if err != nil {
+ logger.Panicf("BUG: cannot unmarshal composite tag key: %s", err)
+ }
+ return fmt.Sprintf("composite(%s,%s)%s%q", metricName, key, op, tf.value)
+ }
key := tf.key
if len(key) == 0 {
key = []byte("__name__")
diff --git a/lib/storage/tag_filters_test.go b/lib/storage/tag_filters_test.go
index d2f6c0351..71476e5a2 100644
--- a/lib/storage/tag_filters_test.go
+++ b/lib/storage/tag_filters_test.go
@@ -1263,7 +1263,7 @@ func TestTagFiltersString(t *testing.T) {
mustAdd("tag_n", "n_value", true, false)
mustAdd("tag_re_graphite", "foo\\.bar", false, true)
s := tfs.String()
- sExpected := `{__name__="metric_name", tag_re=~"re.value", tag_nre!~"nre.value", tag_n!="n_value", tag_re_graphite="foo.bar"}`
+ sExpected := `{__name__="metric_name",tag_re=~"re.value",tag_nre!~"nre.value",tag_n!="n_value",tag_re_graphite="foo.bar"}`
if s != sExpected {
t.Fatalf("unexpected TagFilters.String(); got %q; want %q", s, sExpected)
}
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go
index beccfc7f6..dca53fbc4 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go
@@ -3301,6 +3301,13 @@ var awsPartition = partition{
},
},
},
+ "catalog.marketplace": service{
+ Endpoints: serviceEndpoints{
+ endpointKey{
+ Region: "us-east-1",
+ }: endpoint{},
+ },
+ },
"ce": service{
PartitionEndpoint: "aws-global",
IsRegionalized: boxedFalse,
@@ -7568,6 +7575,9 @@ var awsPartition = partition{
endpointKey{
Region: "ap-southeast-2",
}: endpoint{},
+ endpointKey{
+ Region: "ap-southeast-3",
+ }: endpoint{},
endpointKey{
Region: "ca-central-1",
}: endpoint{},
@@ -11213,6 +11223,9 @@ var awsPartition = partition{
endpointKey{
Region: "ap-southeast-2",
}: endpoint{},
+ endpointKey{
+ Region: "ca-central-1",
+ }: endpoint{},
endpointKey{
Region: "eu-central-1",
}: endpoint{},
@@ -11275,6 +11288,14 @@ var awsPartition = partition{
Region: "ap-southeast-2",
},
},
+ endpointKey{
+ Region: "ca-central-1",
+ }: endpoint{
+ Hostname: "data.iotevents.ca-central-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "ca-central-1",
+ },
+ },
endpointKey{
Region: "eu-central-1",
}: endpoint{
@@ -11483,12 +11504,30 @@ var awsPartition = partition{
endpointKey{
Region: "ap-southeast-2",
}: endpoint{},
+ endpointKey{
+ Region: "ca-central-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "ca-central-1",
+ Variant: fipsVariant,
+ }: endpoint{
+ Hostname: "iotsitewise-fips.ca-central-1.amazonaws.com",
+ },
endpointKey{
Region: "eu-central-1",
}: endpoint{},
endpointKey{
Region: "eu-west-1",
}: endpoint{},
+ endpointKey{
+ Region: "fips-ca-central-1",
+ }: endpoint{
+ Hostname: "iotsitewise-fips.ca-central-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "ca-central-1",
+ },
+ Deprecated: boxedTrue,
+ },
endpointKey{
Region: "fips-us-east-1",
}: endpoint{
@@ -11498,6 +11537,15 @@ var awsPartition = partition{
},
Deprecated: boxedTrue,
},
+ endpointKey{
+ Region: "fips-us-east-2",
+ }: endpoint{
+ Hostname: "iotsitewise-fips.us-east-2.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "us-east-2",
+ },
+ Deprecated: boxedTrue,
+ },
endpointKey{
Region: "fips-us-west-2",
}: endpoint{
@@ -11516,6 +11564,15 @@ var awsPartition = partition{
}: endpoint{
Hostname: "iotsitewise-fips.us-east-1.amazonaws.com",
},
+ endpointKey{
+ Region: "us-east-2",
+ }: endpoint{},
+ endpointKey{
+ Region: "us-east-2",
+ Variant: fipsVariant,
+ }: endpoint{
+ Hostname: "iotsitewise-fips.us-east-2.amazonaws.com",
+ },
endpointKey{
Region: "us-west-2",
}: endpoint{},
@@ -13259,6 +13316,31 @@ var awsPartition = partition{
}: endpoint{},
},
},
+ "m2": service{
+ Endpoints: serviceEndpoints{
+ endpointKey{
+ Region: "ap-southeast-2",
+ }: endpoint{},
+ endpointKey{
+ Region: "ca-central-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "eu-central-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "eu-west-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "sa-east-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "us-east-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "us-west-2",
+ }: endpoint{},
+ },
+ },
"machinelearning": service{
Endpoints: serviceEndpoints{
endpointKey{
@@ -13956,6 +14038,66 @@ var awsPartition = partition{
},
},
},
+ "memory-db": service{
+ Endpoints: serviceEndpoints{
+ endpointKey{
+ Region: "ap-east-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "ap-northeast-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "ap-northeast-2",
+ }: endpoint{},
+ endpointKey{
+ Region: "ap-south-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "ap-southeast-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "ap-southeast-2",
+ }: endpoint{},
+ endpointKey{
+ Region: "ca-central-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "eu-central-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "eu-north-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "eu-west-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "eu-west-2",
+ }: endpoint{},
+ endpointKey{
+ Region: "fips",
+ }: endpoint{
+ Hostname: "memory-db-fips.us-west-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "us-west-1",
+ },
+ },
+ endpointKey{
+ Region: "sa-east-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "us-east-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "us-east-2",
+ }: endpoint{},
+ endpointKey{
+ Region: "us-west-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "us-west-2",
+ }: endpoint{},
+ },
+ },
"messaging-chime": service{
Endpoints: serviceEndpoints{
endpointKey{
@@ -18746,6 +18888,9 @@ var awsPartition = partition{
endpointKey{
Region: "ap-southeast-2",
}: endpoint{},
+ endpointKey{
+ Region: "ap-southeast-3",
+ }: endpoint{},
endpointKey{
Region: "ca-central-1",
}: endpoint{},
@@ -18972,6 +19117,9 @@ var awsPartition = partition{
endpointKey{
Region: "ap-southeast-2",
}: endpoint{},
+ endpointKey{
+ Region: "ap-southeast-3",
+ }: endpoint{},
endpointKey{
Region: "ca-central-1",
}: endpoint{},
@@ -24030,6 +24178,16 @@ var awscnPartition = partition{
},
},
},
+ "memory-db": service{
+ Endpoints: serviceEndpoints{
+ endpointKey{
+ Region: "cn-north-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "cn-northwest-1",
+ }: endpoint{},
+ },
+ },
"monitoring": service{
Defaults: endpointDefaults{
defaultKey{}: endpoint{
@@ -28582,42 +28740,12 @@ var awsusgovPartition = partition{
},
},
Endpoints: serviceEndpoints{
- endpointKey{
- Region: "fips-us-gov-east-1",
- }: endpoint{
- Hostname: "servicecatalog-appregistry.us-gov-east-1.amazonaws.com",
- CredentialScope: credentialScope{
- Region: "us-gov-east-1",
- },
- Deprecated: boxedTrue,
- },
- endpointKey{
- Region: "fips-us-gov-west-1",
- }: endpoint{
- Hostname: "servicecatalog-appregistry.us-gov-west-1.amazonaws.com",
- CredentialScope: credentialScope{
- Region: "us-gov-west-1",
- },
- Deprecated: boxedTrue,
- },
endpointKey{
Region: "us-gov-east-1",
}: endpoint{},
- endpointKey{
- Region: "us-gov-east-1",
- Variant: fipsVariant,
- }: endpoint{
- Hostname: "servicecatalog-appregistry.us-gov-east-1.amazonaws.com",
- },
endpointKey{
Region: "us-gov-west-1",
}: endpoint{},
- endpointKey{
- Region: "us-gov-west-1",
- Variant: fipsVariant,
- }: endpoint{
- Hostname: "servicecatalog-appregistry.us-gov-west-1.amazonaws.com",
- },
},
},
"servicediscovery": service{
@@ -29339,6 +29467,16 @@ var awsusgovPartition = partition{
},
},
},
+ "transcribestreaming": service{
+ Endpoints: serviceEndpoints{
+ endpointKey{
+ Region: "us-gov-east-1",
+ }: endpoint{},
+ endpointKey{
+ Region: "us-gov-west-1",
+ }: endpoint{},
+ },
+ },
"transfer": service{
Endpoints: serviceEndpoints{
endpointKey{
@@ -30820,5 +30958,12 @@ var awsisobPartition = partition{
}: endpoint{},
},
},
+ "workspaces": service{
+ Endpoints: serviceEndpoints{
+ endpointKey{
+ Region: "us-isob-east-1",
+ }: endpoint{},
+ },
+ },
},
}
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/version.go b/vendor/github.com/aws/aws-sdk-go/aws/version.go
index 6fac339ed..8436203a9 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/version.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/version.go
@@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go"
// SDKVersion is the version of this SDK
-const SDKVersion = "1.44.27"
+const SDKVersion = "1.44.32"
diff --git a/vendor/golang.org/x/oauth2/google/default.go b/vendor/golang.org/x/oauth2/google/default.go
index dd0042016..024a104b0 100644
--- a/vendor/golang.org/x/oauth2/google/default.go
+++ b/vendor/golang.org/x/oauth2/google/default.go
@@ -190,6 +190,7 @@ func CredentialsFromJSONWithParams(ctx context.Context, jsonData []byte, params
if err != nil {
return nil, err
}
+ ts = newErrWrappingTokenSource(ts)
return &DefaultCredentials{
ProjectID: f.ProjectID,
TokenSource: ts,
diff --git a/vendor/golang.org/x/oauth2/google/error.go b/vendor/golang.org/x/oauth2/google/error.go
new file mode 100644
index 000000000..d84dd0047
--- /dev/null
+++ b/vendor/golang.org/x/oauth2/google/error.go
@@ -0,0 +1,64 @@
+// Copyright 2022 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package google
+
+import (
+ "errors"
+
+ "golang.org/x/oauth2"
+)
+
+// AuthenticationError indicates there was an error in the authentication flow.
+//
+// Use (*AuthenticationError).Temporary to check if the error can be retried.
+type AuthenticationError struct {
+ err *oauth2.RetrieveError
+}
+
+func newAuthenticationError(err error) error {
+ re := &oauth2.RetrieveError{}
+ if !errors.As(err, &re) {
+ return err
+ }
+ return &AuthenticationError{
+ err: re,
+ }
+}
+
+// Temporary indicates that the network error has one of the following status codes and may be retried: 500, 503, 408, or 429.
+func (e *AuthenticationError) Temporary() bool {
+ if e.err.Response == nil {
+ return false
+ }
+ sc := e.err.Response.StatusCode
+ return sc == 500 || sc == 503 || sc == 408 || sc == 429
+}
+
+func (e *AuthenticationError) Error() string {
+ return e.err.Error()
+}
+
+func (e *AuthenticationError) Unwrap() error {
+ return e.err
+}
+
+type errWrappingTokenSource struct {
+ src oauth2.TokenSource
+}
+
+func newErrWrappingTokenSource(ts oauth2.TokenSource) oauth2.TokenSource {
+ return &errWrappingTokenSource{src: ts}
+}
+
+// Token returns the current token if it's still valid, else will
+// refresh the current token (using r.Context for HTTP client
+// information) and return the new one.
+func (s *errWrappingTokenSource) Token() (*oauth2.Token, error) {
+ t, err := s.src.Token()
+ if err != nil {
+ return nil, newAuthenticationError(err)
+ }
+ return t, nil
+}
diff --git a/vendor/golang.org/x/oauth2/google/jwt.go b/vendor/golang.org/x/oauth2/google/jwt.go
index 67d97b990..e89e6ae17 100644
--- a/vendor/golang.org/x/oauth2/google/jwt.go
+++ b/vendor/golang.org/x/oauth2/google/jwt.go
@@ -66,7 +66,8 @@ func newJWTSource(jsonKey []byte, audience string, scopes []string) (oauth2.Toke
if err != nil {
return nil, err
}
- return oauth2.ReuseTokenSource(tok, ts), nil
+ rts := newErrWrappingTokenSource(oauth2.ReuseTokenSource(tok, ts))
+ return rts, nil
}
type jwtAccessTokenSource struct {
diff --git a/vendor/golang.org/x/sys/unix/syscall_solaris.go b/vendor/golang.org/x/sys/unix/syscall_solaris.go
index 5c2003cec..932996c75 100644
--- a/vendor/golang.org/x/sys/unix/syscall_solaris.go
+++ b/vendor/golang.org/x/sys/unix/syscall_solaris.go
@@ -618,6 +618,7 @@ func Sendfile(outfd int, infd int, offset *int64, count int) (written int, err e
//sys Getpriority(which int, who int) (n int, err error)
//sysnb Getrlimit(which int, lim *Rlimit) (err error)
//sysnb Getrusage(who int, rusage *Rusage) (err error)
+//sysnb Getsid(pid int) (sid int, err error)
//sysnb Gettimeofday(tv *Timeval) (err error)
//sysnb Getuid() (uid int)
//sys Kill(pid int, signum syscall.Signal) (err error)
diff --git a/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go b/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go
index d12f4fbfe..fdf53f8da 100644
--- a/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go
+++ b/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go
@@ -66,6 +66,7 @@ import (
//go:cgo_import_dynamic libc_getpriority getpriority "libc.so"
//go:cgo_import_dynamic libc_getrlimit getrlimit "libc.so"
//go:cgo_import_dynamic libc_getrusage getrusage "libc.so"
+//go:cgo_import_dynamic libc_getsid getsid "libc.so"
//go:cgo_import_dynamic libc_gettimeofday gettimeofday "libc.so"
//go:cgo_import_dynamic libc_getuid getuid "libc.so"
//go:cgo_import_dynamic libc_kill kill "libc.so"
@@ -202,6 +203,7 @@ import (
//go:linkname procGetpriority libc_getpriority
//go:linkname procGetrlimit libc_getrlimit
//go:linkname procGetrusage libc_getrusage
+//go:linkname procGetsid libc_getsid
//go:linkname procGettimeofday libc_gettimeofday
//go:linkname procGetuid libc_getuid
//go:linkname procKill libc_kill
@@ -339,6 +341,7 @@ var (
procGetpriority,
procGetrlimit,
procGetrusage,
+ procGetsid,
procGettimeofday,
procGetuid,
procKill,
@@ -1044,6 +1047,17 @@ func Getrusage(who int, rusage *Rusage) (err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+func Getsid(pid int) (sid int, err error) {
+ r0, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procGetsid)), 1, uintptr(pid), 0, 0, 0, 0, 0)
+ sid = int(r0)
+ if e1 != 0 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
func Gettimeofday(tv *Timeval) (err error) {
_, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procGettimeofday)), 1, uintptr(unsafe.Pointer(tv)), 0, 0, 0, 0, 0)
if e1 != 0 {
diff --git a/vendor/golang.org/x/xerrors/fmt.go b/vendor/golang.org/x/xerrors/fmt.go
index 6df18669f..27a5d70bd 100644
--- a/vendor/golang.org/x/xerrors/fmt.go
+++ b/vendor/golang.org/x/xerrors/fmt.go
@@ -34,7 +34,8 @@ const percentBangString = "%!"
// operand that does not implement the error interface. The %w verb is otherwise
// a synonym for %v.
//
-// Deprecated: As of Go 1.13, use fmt.Errorf instead.
+// Note that as of Go 1.13, the fmt.Errorf function will do error formatting,
+// but it will not capture a stack backtrace.
func Errorf(format string, a ...interface{}) error {
format = formatPlusW(format)
// Support a ": %[wsv]" suffix, which works well with xerrors.Formatter.
diff --git a/vendor/google.golang.org/api/internal/version.go b/vendor/google.golang.org/api/internal/version.go
index a382d160a..6b00f4d47 100644
--- a/vendor/google.golang.org/api/internal/version.go
+++ b/vendor/google.golang.org/api/internal/version.go
@@ -5,4 +5,4 @@
package internal
// Version is the current tagged release of the library.
-const Version = "0.82.0"
+const Version = "0.83.0"
diff --git a/vendor/modules.txt b/vendor/modules.txt
index e19d8e5f9..8087d4b3a 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -34,7 +34,7 @@ github.com/VictoriaMetrics/metricsql/binaryop
# github.com/VividCortex/ewma v1.2.0
## explicit; go 1.12
github.com/VividCortex/ewma
-# github.com/aws/aws-sdk-go v1.44.27
+# github.com/aws/aws-sdk-go v1.44.32
## explicit; go 1.11
github.com/aws/aws-sdk-go/aws
github.com/aws/aws-sdk-go/aws/arn
@@ -277,7 +277,7 @@ go.opencensus.io/trace/tracestate
go.uber.org/atomic
# go.uber.org/goleak v1.1.11-0.20210813005559-691160354723
## explicit; go 1.13
-# golang.org/x/net v0.0.0-20220531201128-c960675eff93
+# golang.org/x/net v0.0.0-20220607020251-c690dde0001d
## explicit; go 1.17
golang.org/x/net/context
golang.org/x/net/context/ctxhttp
@@ -289,8 +289,8 @@ golang.org/x/net/internal/socks
golang.org/x/net/internal/timeseries
golang.org/x/net/proxy
golang.org/x/net/trace
-# golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401
-## explicit; go 1.11
+# golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb
+## explicit; go 1.15
golang.org/x/oauth2
golang.org/x/oauth2/authhandler
golang.org/x/oauth2/clientcredentials
@@ -302,7 +302,7 @@ golang.org/x/oauth2/jwt
# golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f
## explicit
golang.org/x/sync/errgroup
-# golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a
+# golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d
## explicit; go 1.17
golang.org/x/sys/internal/unsafeheader
golang.org/x/sys/unix
@@ -313,11 +313,11 @@ golang.org/x/text/secure/bidirule
golang.org/x/text/transform
golang.org/x/text/unicode/bidi
golang.org/x/text/unicode/norm
-# golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df
+# golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f
## explicit; go 1.17
golang.org/x/xerrors
golang.org/x/xerrors/internal
-# google.golang.org/api v0.82.0
+# google.golang.org/api v0.83.0
## explicit; go 1.15
google.golang.org/api/googleapi
google.golang.org/api/googleapi/transport
@@ -350,7 +350,7 @@ google.golang.org/appengine/internal/socket
google.golang.org/appengine/internal/urlfetch
google.golang.org/appengine/socket
google.golang.org/appengine/urlfetch
-# google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8
+# google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac
## explicit; go 1.15
google.golang.org/genproto/googleapis/api/annotations
google.golang.org/genproto/googleapis/iam/v1