Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

commit 1b24afec36
Author: Aliaksandr Valialkin
Date: 2022-06-20 17:42:27 +03:00
GPG key ID: A72BEC6CD3D0DED1 (no known key found for this signature in the database)
91 changed files with 3703 additions and 1284 deletions


@@ -936,7 +936,7 @@ for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time
On large databases you may experience problems with the limit on the number of time series that can be exported. In this case, adjust the `-search.maxExportSeries` command-line flag:
```console
-# count unique timeseries in database
+# count unique time series in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
# relaunch victoriametrics with search.maxExportSeries more than value from previous command
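# e.g. if the command above printed 5000000, restart with a limit above that
# value; the binary path and the value below are illustrative:
./victoria-metrics -search.maxExportSeries=6000000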
@@ -1392,6 +1392,7 @@ VictoriaMetrics provides the following security-related command-line flags:
* `-forceMergeAuthKey` for protecting `/internal/force_merge` endpoint. See [force merge docs](#forced-merge).
* `-search.resetCacheAuthKey` for protecting `/internal/resetRollupResultCache` endpoint. See [backfilling](#backfilling) for more details.
* `-configAuthKey` for protecting `/config` endpoint, since it may contain sensitive information such as passwords.
+* `-flagsAuthKey` for protecting `/flags` endpoint.
* `-pprofAuthKey` for protecting `/debug/pprof/*` endpoints, which can be used for [profiling](#profiling).
* `-denyQueryTracing` for disallowing [query tracing](#query-tracing).
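The endpoints above accept the key via the `authKey` query arg. A minimal sketch of setting and using such a key; the key value and address are placeholders:

```console
# start single-node VictoriaMetrics with the /flags endpoint protected
./victoria-metrics -flagsAuthKey=my-secret

# requests without the correct key are rejected; with it they succeed
curl 'http://localhost:8428/flags?authKey=my-secret'
```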


@@ -510,7 +510,7 @@ max range per request: 8h20m0s
In `replay` mode all groups are executed sequentially, one by one. Rules within the group are
executed sequentially as well (the `concurrency` setting is ignored). vmalert sends the rule's expression
-to [/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries) endpoint
+to [/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query) endpoint
of the configured `-datasource.url`. The returned data is then processed according to the rule type and
backfilled to `-remoteWrite.url` via [remote write protocol](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations).
vmalert respects the `evaluationInterval` value set by flag or per-group during the replay.
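A sketch of a replay invocation under these settings; the rule file, URLs and time range below are placeholders:

```console
./vmalert -rule=alerts.yml \
  -datasource.url=http://victoriametrics:8428 \
  -remoteWrite.url=http://victoriametrics:8428 \
  -replay.timeFrom=2022-06-01T00:00:00Z \
  -replay.timeTo=2022-06-07T00:00:00Z
```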


@@ -140,7 +140,11 @@ Do not transfer Basic Auth headers in plaintext over untrusted networks. Enable
Alternatively, [https termination proxy](https://en.wikipedia.org/wiki/TLS_termination_proxy) may be put in front of `vmauth`.
-It is recommended protecting `/-/reload` endpoint with `-reloadAuthKey` command-line flag, so external users couldn't trigger config reload.
+It is recommended to protect the following endpoints with authKeys (see the sketch below):
+* `/-/reload` with `-reloadAuthKey` command-line flag, so that external users can't trigger config reload.
+* `/flags` with `-flagsAuthKey` command-line flag, so that unauthorized users can't read the application's command-line flags.
+* `/metrics` with `-metricsAuthKey` command-line flag, so that unauthorized users can't access [vmauth metrics](#monitoring).
+* `/debug/pprof` with `-pprofAuthKey` command-line flag, so that unauthorized users can't access [profiling information](#profiling).
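A minimal launch sketch combining these keys; the config path and key values are placeholders:

```console
./vmauth -auth.config=config.yml \
  -reloadAuthKey=reload-secret \
  -flagsAuthKey=flags-secret \
  -metricsAuthKey=metrics-secret \
  -pprofAuthKey=pprof-secret

# a protected endpoint is then queried with the matching authKey query arg
curl 'http://localhost:8427/metrics?authKey=metrics-secret'
```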
## Monitoring


@@ -1,4 +1,4 @@
-FROM golang:1.18.2 as build-web-stage
+FROM golang:1.18.3 as build-web-stage
COPY build /build
WORKDIR /build


@@ -4,7 +4,7 @@ DOCKER_NAMESPACE := victoriametrics
ROOT_IMAGE ?= alpine:3.16.0
CERTS_IMAGE := alpine:3.16.0
-GO_BUILDER_IMAGE := golang:1.18.2-alpine
+GO_BUILDER_IMAGE := golang:1.18.3-alpine
BUILDER_IMAGE := local/builder:2.0.0-$(shell echo $(GO_BUILDER_IMAGE) | tr :/ __)-1
BASE_IMAGE := local/base:1.1.3-$(shell echo $(ROOT_IMAGE) | tr :/ __)-$(shell echo $(CERTS_IMAGE) | tr :/ __)


@@ -50,6 +50,7 @@ See also [case studies](https://docs.victoriametrics.com/CaseStudies.html).
* [Push Prometheus metrics to VictoriaMetrics or other exporters](https://pythonawesome.com/push-prometheus-metrics-to-victoriametrics-or-other-exporters/)
* [Install and configure VictoriaMetrics on Debian](https://www.vultr.com/docs/install-and-configure-victoriametrics-on-debian)
* [Superset BI with Victoria Metrics](https://cer6erus.medium.com/superset-bi-with-victoria-metrics-a109d3e91bc6)
+* [VictoriaMetrics Source Code Analysis - Bloom filter](https://www.sobyte.net/post/2022-05/victoriametrics-bloomfilter/)
## Our articles


@@ -17,6 +17,8 @@ The following tip changes can be tested by building VictoriaMetrics components f
**Update notes:** this release introduces backwards-incompatible changes to the communication protocol between `vmselect` and `vmstorage` nodes in the cluster version of VictoriaMetrics because of the added [query tracing](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#query-tracing), so `vmselect` and `vmstorage` nodes may log communication errors during the upgrade. These errors should stop after all the `vmselect` and `vmstorage` nodes are updated to the new release. It is safe to downgrade to previous releases.
+* SECURITY: add `-flagsAuthKey` command-line flag for protecting `/flags` endpoint from unauthorized access. Though this endpoint already hides values for command-line flags with `key` and `password` substrings in their names, other sensitive information could be exposed there. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2753).
* FEATURE: support query tracing, which allows determining bottlenecks during query processing. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#query-tracing) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1403).
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): add `cardinality` tab, which can help identify the source of [high cardinality](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) and [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) issues. See [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2233) and [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2730) feature requests and [these docs](https://docs.victoriametrics.com/#cardinality-explorer).
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): small UX enhancements according to [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2638).
@@ -37,6 +39,8 @@ The following tip changes can be tested by building VictoriaMetrics components f
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add `-promscrape.cluster.name` command-line flag, which allows proper data de-duplication when the same target is scraped from multiple [vmagent clusters](https://docs.victoriametrics.com/vmagent.html#scraping-big-number-of-targets). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2679).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add `action: graphite` relabeling rules optimized for extracting labels from Graphite-style metric names. See [these docs](https://docs.victoriametrics.com/vmagent.html#graphite-relabeling) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2737).
* FEATURE: [VictoriaMetrics enterprise](https://victoriametrics.com/products/enterprise/): expose `vm_downsampling_partitions_scheduled` and `vm_downsampling_partitions_scheduled_size_bytes` metrics, which can be used for tracking the progress of initial [downsampling](https://docs.victoriametrics.com/#downsampling) for historical data. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2612).
+* FEATURE: [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html): do not spend up to 5 seconds when trying to connect to unavailable `vmstorage` nodes. This should improve query latency when some of the `vmstorage` nodes aren't available. Expose `vm_tcpdialer_addr_available{addr="..."}` metric at `http://vmselect:8481/metrics` for determining whether the given `addr` is available for establishing new connections. See [this comment](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/711#issuecomment-1160363187).
+* FEATURE: [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html): add `-vmstorageDialTimeout` command-line flags to `vmselect` and `vminsert` for tuning the maximum duration for establishing connections to `vmstorage` nodes. This should help resolve [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/711).
* BUGFIX: support data ingestion in [DataDog format](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-datadog-agent) from legacy clients / agents. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2670). Thanks to @elProxy for the fix.
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent.html): do not expose `vm_promscrape_service_discovery_duration_seconds_bucket` metric for unused service discovery types. This reduces the number of metrics exported at `http://vmagent:8429/metrics`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2671).


@@ -32,7 +32,7 @@ MetricsQL implements [PromQL](https://medium.com/@valyala/promql-tutorial-for-be
This functionality can be evaluated at [an editable Grafana dashboard](https://play-grafana.victoriametrics.com/d/4ome8yJmz/node-exporter-on-victoriametrics-demo) or at your own [VictoriaMetrics instance](https://docs.victoriametrics.com/#how-to-start-victoriametrics).
* Graphite-compatible filters can be passed via `{__graphite__="foo.*.bar"}` syntax. See [these docs](https://docs.victoriametrics.com/#selecting-graphite-metrics). VictoriaMetrics also can be used as Graphite datasource in Grafana. See [these docs](https://docs.victoriametrics.com/#graphite-api-usage) for details. See also [label_graphite_group](#label_graphite_group) function, which can be used for extracting the given groups from Graphite metric name.
-* Lookbehind window in square brackets may be omitted. VictoriaMetrics automatically selects the lookbehind window depending on the current step used for building the graph (e.g. `step` query arg passed to [/api/v1/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries)). For instance, the following query is valid in VictoriaMetrics: `rate(node_network_receive_bytes_total)`. It is equivalent to `rate(node_network_receive_bytes_total[$__interval])` when used in Grafana.
+* Lookbehind window in square brackets may be omitted. VictoriaMetrics automatically selects the lookbehind window depending on the current step used for building the graph (e.g. `step` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query)). For instance, the following query is valid in VictoriaMetrics: `rate(node_network_receive_bytes_total)`. It is equivalent to `rate(node_network_receive_bytes_total[$__interval])` when used in Grafana.
* [Aggregate functions](#aggregate-functions) accept arbitrary number of args. For example, `avg(q1, q2, q3)` would return the average values for every point across time series returned by `q1`, `q2` and `q3`.
* [@ modifier](https://prometheus.io/docs/prometheus/latest/querying/basics/#modifier) can be put anywhere in the query. For example, `sum(foo) @ end()` calculates `sum(foo)` at the `end` timestamp of the selected time range `[start ... end]`.
* Arbitrary subexpression can be used as [@ modifier](https://prometheus.io/docs/prometheus/latest/querying/basics/#modifier). For example, `foo @ (end() - 1h)` calculates `foo` at the `end - 1 hour` timestamp on the selected time range `[start ... end]`.
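A quick sketch of two of these extensions in action, assuming a single-node instance at `localhost:8428`; the metric names and time range are placeholders:

```console
# omitted lookbehind window: the step (5m) is applied automatically
curl 'http://localhost:8428/api/v1/query_range' \
  --data-urlencode 'query=rate(node_network_receive_bytes_total)' \
  --data-urlencode 'start=2022-06-20T00:00:00Z' \
  --data-urlencode 'end=2022-06-20T06:00:00Z' \
  --data-urlencode 'step=5m'

# @ modifier with an arbitrary subexpression
curl 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=foo @ (end() - 1h)'
```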
@@ -69,13 +69,13 @@ MetricsQL provides the following functions:
### Rollup functions
-**Rollup functions** (aka range functions or window functions) calculate rollups over **raw samples** on the given lookbehind window for the [selected time series](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). For example, `avg_over_time(temperature[24h])` calculates the average temperature over raw samples for the last 24 hours. Additional details:
+**Rollup functions** (aka range functions or window functions) calculate rollups over **raw samples** on the given lookbehind window for the [selected time series](https://docs.victoriametrics.com/keyConcepts.html#filtering). For example, `avg_over_time(temperature[24h])` calculates the average temperature over raw samples for the last 24 hours. Additional details:
-* If rollup functions are used for building graphs in Grafana, then the rollup is calculated independently per each point on the graph. For example, every point for `avg_over_time(temperature[24h])` graph shows the average temperature for the last 24 hours ending at this point. The interval between points is set as `step` query arg passed by Grafana to [/api/v1/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries).
-* If the given [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) returns multiple time series, then rollups are calculated individually per each returned series.
-* If lookbehind window in square brackets is missing, then MetricsQL automatically sets the lookbehind window to the interval between points on the graph (aka `step` query arg at [/api/v1/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries), `$__interval` value from Grafana or `1i` duration in MetricsQL). For example, `rate(http_requests_total)` is equivalent to `rate(http_requests_total[$__interval])` in Grafana. It is also equivalent to `rate(http_requests_total[1i])`.
-* Every [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) in MetricsQL must be wrapped into a rollup function. Otherwise it is automatically wrapped into [default_rollup](#default_rollup). For example, `foo{bar="baz"}` is automatically converted to `default_rollup(foo{bar="baz"}[1i])` before performing the calculations.
-* If something other than [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) is passed to rollup function, then the inner arg is automatically converted to a [subquery](#subqueries).
+* If rollup functions are used for building graphs in Grafana, then the rollup is calculated independently per each point on the graph. For example, every point for `avg_over_time(temperature[24h])` graph shows the average temperature for the last 24 hours ending at this point. The interval between points is set as `step` query arg passed by Grafana to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query).
+* If the given [series selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) returns multiple time series, then rollups are calculated individually per each returned series.
+* If lookbehind window in square brackets is missing, then MetricsQL automatically sets the lookbehind window to the interval between points on the graph (aka `step` query arg at [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query), `$__interval` value from Grafana or `1i` duration in MetricsQL). For example, `rate(http_requests_total)` is equivalent to `rate(http_requests_total[$__interval])` in Grafana. It is also equivalent to `rate(http_requests_total[1i])`.
+* Every [series selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) in MetricsQL must be wrapped into a rollup function. Otherwise it is automatically wrapped into [default_rollup](#default_rollup). For example, `foo{bar="baz"}` is automatically converted to `default_rollup(foo{bar="baz"}[1i])` before performing the calculations.
+* If something other than [series selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) is passed to rollup function, then the inner arg is automatically converted to a [subquery](#subqueries).
* All the rollup functions accept optional `keep_metric_names` modifier. If it is set, then the function keeps metric names in results. See [these docs](#keep_metric_names).
See also [implicit query conversions](#implicit-query-conversions).
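As a small illustration of the implicit wrapping described above (the instance address is a placeholder), these two queries are equivalent:

```console
curl 'http://localhost:8428/api/v1/query' --data-urlencode 'query=foo{bar="baz"}'
curl 'http://localhost:8428/api/v1/query' --data-urlencode 'query=default_rollup(foo{bar="baz"}[1i])'
```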
@@ -86,91 +86,91 @@ See also [implicit query conversions](#implicit-query-conversions).
#### aggr_over_time
-`aggr_over_time(("rollup_func1", "rollup_func2", ...), series_selector[d])` calculates all the listed `rollup_func*` for raw samples on the given lookbehind window `d`. The calculations are performed individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). `rollup_func*` can contain any rollup function. For instance, `aggr_over_time(("min_over_time", "max_over_time", "rate"), m[d])` would calculate [min_over_time](#min_over_time), [max_over_time](#max_over_time) and [rate](#rate) for `m[d]`.
+`aggr_over_time(("rollup_func1", "rollup_func2", ...), series_selector[d])` calculates all the listed `rollup_func*` for raw samples on the given lookbehind window `d`. The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). `rollup_func*` can contain any rollup function. For instance, `aggr_over_time(("min_over_time", "max_over_time", "rate"), m[d])` would calculate [min_over_time](#min_over_time), [max_over_time](#max_over_time) and [rate](#rate) for `m[d]`.
#### ascent_over_time
-`ascent_over_time(series_selector[d])` calculates ascent of raw sample values on the given lookbehind window `d`. The calculations are performed individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Useful for tracking height gains in GPS tracking. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [descent_over_time](#descent_over_time).
+`ascent_over_time(series_selector[d])` calculates ascent of raw sample values on the given lookbehind window `d`. The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Useful for tracking height gains in GPS tracking. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [descent_over_time](#descent_over_time).
#### avg_over_time
-`avg_over_time(series_selector[d])` calculates the average value over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). This function is supported by PromQL. See also [median_over_time](#median_over_time).
+`avg_over_time(series_selector[d])` calculates the average value over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). This function is supported by PromQL. See also [median_over_time](#median_over_time).
#### changes
-`changes(series_selector[d])` calculates the number of times the raw samples changed on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Unlike `changes()` in Prometheus it takes into account the change from the last sample before the given lookbehind window `d`. See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [changes_prometheus](#changes_prometheus).
+`changes(series_selector[d])` calculates the number of times the raw samples changed on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Unlike `changes()` in Prometheus it takes into account the change from the last sample before the given lookbehind window `d`. See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [changes_prometheus](#changes_prometheus).
#### changes_prometheus
-`changes_prometheus(series_selector[d])` calculates the number of times the raw samples changed on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). It doesn't take into account the change from the last sample before the given lookbehind window `d` in the same way as Prometheus does. See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [changes](#changes).
+`changes_prometheus(series_selector[d])` calculates the number of times the raw samples changed on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). It doesn't take into account the change from the last sample before the given lookbehind window `d` in the same way as Prometheus does. See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [changes](#changes).
#### count_eq_over_time
-`count_eq_over_time(series_selector[d], eq)` calculates the number of raw samples on the given lookbehind window `d`, which are equal to `eq`. It is calculated independently per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [count_over_time](#count_over_time).
+`count_eq_over_time(series_selector[d], eq)` calculates the number of raw samples on the given lookbehind window `d`, which are equal to `eq`. It is calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [count_over_time](#count_over_time).
#### count_gt_over_time
-`count_gt_over_time(series_selector[d], gt)` calculates the number of raw samples on the given lookbehind window `d`, which are bigger than `gt`. It is calculated independently per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [count_over_time](#count_over_time).
+`count_gt_over_time(series_selector[d], gt)` calculates the number of raw samples on the given lookbehind window `d`, which are bigger than `gt`. It is calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [count_over_time](#count_over_time).
#### count_le_over_time
-`count_le_over_time(series_selector[d], le)` calculates the number of raw samples on the given lookbehind window `d`, which don't exceed `le`. It is calculated independently per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [count_over_time](#count_over_time).
+`count_le_over_time(series_selector[d], le)` calculates the number of raw samples on the given lookbehind window `d`, which don't exceed `le`. It is calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [count_over_time](#count_over_time).
#### count_ne_over_time
-`count_ne_over_time(series_selector[d], ne)` calculates the number of raw samples on the given lookbehind window `d`, which aren't equal to `ne`. It is calculated independently per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [count_over_time](#count_over_time).
+`count_ne_over_time(series_selector[d], ne)` calculates the number of raw samples on the given lookbehind window `d`, which aren't equal to `ne`. It is calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [count_over_time](#count_over_time).
#### count_over_time
-`count_over_time(series_selector[d])` calculates the number of raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [count_le_over_time](#count_le_over_time), [count_gt_over_time](#count_gt_over_time), [count_eq_over_time](#count_eq_over_time) and [count_ne_over_time](#count_ne_over_time).
+`count_over_time(series_selector[d])` calculates the number of raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [count_le_over_time](#count_le_over_time), [count_gt_over_time](#count_gt_over_time), [count_eq_over_time](#count_eq_over_time) and [count_ne_over_time](#count_ne_over_time).
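The `count_*` family composes naturally; a hedged sketch, where the metric name and threshold are placeholders:

```console
# fraction of raw samples over the last hour with temperature above 30
curl 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=count_gt_over_time(temperature[1h], 30) / count_over_time(temperature[1h])'
```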
#### decreases_over_time
-`decreases_over_time(series_selector[d])` calculates the number of raw sample value decreases over the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [increases_over_time](#increases_over_time).
+`decreases_over_time(series_selector[d])` calculates the number of raw sample value decreases over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [increases_over_time](#increases_over_time).
#### default_rollup
-`default_rollup(series_selector[d])` returns the last raw sample value on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors).
+`default_rollup(series_selector[d])` returns the last raw sample value on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
#### delta
-`delta(series_selector[d])` calculates the difference between the last sample before the given lookbehind window `d` and the last sample at the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). The behaviour of `delta()` function in MetricsQL is slightly different from the behaviour of `delta()` function in Prometheus. See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [increase](#increase) and [delta_prometheus](#delta_prometheus).
+`delta(series_selector[d])` calculates the difference between the last sample before the given lookbehind window `d` and the last sample at the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). The behaviour of `delta()` function in MetricsQL is slightly different from the behaviour of `delta()` function in Prometheus. See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [increase](#increase) and [delta_prometheus](#delta_prometheus).
#### delta_prometheus
-`delta_prometheus(series_selector[d])` calculates the difference between the first and the last samples at the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). The behaviour of `delta_prometheus()` is close to the behaviour of `delta()` function in Prometheus. See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [delta](#delta).
+`delta_prometheus(series_selector[d])` calculates the difference between the first and the last samples at the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). The behaviour of `delta_prometheus()` is close to the behaviour of `delta()` function in Prometheus. See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [delta](#delta).
#### deriv
-`deriv(series_selector[d])` calculates per-second derivative over the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). The derivative is calculated using linear regression. Metric names are stripped from the resulting rollups. This function is supported by PromQL. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [deriv_fast](#deriv_fast) and [ideriv](#ideriv).
+`deriv(series_selector[d])` calculates per-second derivative over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). The derivative is calculated using linear regression. Metric names are stripped from the resulting rollups. This function is supported by PromQL. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [deriv_fast](#deriv_fast) and [ideriv](#ideriv).
#### deriv_fast
-`deriv_fast(series_selector[d])` calculates per-second derivative using the first and the last raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [deriv](#deriv) and [ideriv](#ideriv).
+`deriv_fast(series_selector[d])` calculates per-second derivative using the first and the last raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [deriv](#deriv) and [ideriv](#ideriv).
#### descent_over_time
-`descent_over_time(series_selector[d])` calculates descent of raw sample values on the given lookbehind window `d`. The calculations are performed individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Useful for tracking height loss in GPS tracking. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [ascent_over_time](#ascent_over_time).
+`descent_over_time(series_selector[d])` calculates descent of raw sample values on the given lookbehind window `d`. The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Useful for tracking height loss in GPS tracking. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [ascent_over_time](#ascent_over_time).
#### distinct_over_time
-`distinct_over_time(series_selector[d])` returns the number of distinct raw sample values on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
+`distinct_over_time(series_selector[d])` returns the number of distinct raw sample values on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
#### duration_over_time
-`duration_over_time(series_selector[d], max_interval)` returns the duration in seconds when time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) were present over the given lookbehind window `d`. It is expected that intervals between adjacent samples per each series don't exceed the `max_interval`. Otherwise such intervals are considered as gaps and aren't counted. See also [lifetime](#lifetime) and [lag](#lag).
+`duration_over_time(series_selector[d], max_interval)` returns the duration in seconds when time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) were present over the given lookbehind window `d`. It is expected that intervals between adjacent samples per each series don't exceed the `max_interval`. Otherwise such intervals are considered as gaps and aren't counted. See also [lifetime](#lifetime) and [lag](#lag).
#### first_over_time
-`first_over_time(series_selector[d])` returns the first raw sample value on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). See also [last_over_time](#last_over_time) and [tfirst_over_time](#tfirst_over_time).
+`first_over_time(series_selector[d])` returns the first raw sample value on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). See also [last_over_time](#last_over_time) and [tfirst_over_time](#tfirst_over_time).
#### geomean_over_time
-`geomean_over_time(series_selector[d])` calculates [geometric mean](https://en.wikipedia.org/wiki/Geometric_mean) over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
+`geomean_over_time(series_selector[d])` calculates [geometric mean](https://en.wikipedia.org/wiki/Geometric_mean) over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
#### histogram_over_time
-`histogram_over_time(series_selector[d])` calculates [VictoriaMetrics histogram](https://godoc.org/github.com/VictoriaMetrics/metrics#Histogram) over raw samples on the given lookbehind window `d`. It is calculated individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). The resulting histograms are useful to pass to [histogram_quantile](#histogram_quantile) for calculating quantiles over multiple gauges. For example, the following query calculates median temperature by country over the last 24 hours: `histogram_quantile(0.5, sum(histogram_over_time(temperature[24h])) by (vmrange,country))`.
+`histogram_over_time(series_selector[d])` calculates [VictoriaMetrics histogram](https://godoc.org/github.com/VictoriaMetrics/metrics#Histogram) over raw samples on the given lookbehind window `d`. It is calculated individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). The resulting histograms are useful to pass to [histogram_quantile](#histogram_quantile) for calculating quantiles over multiple [gauges](https://docs.victoriametrics.com/keyConcepts.html#gauge). For example, the following query calculates median temperature by country over the last 24 hours: `histogram_quantile(0.5, sum(histogram_over_time(temperature[24h])) by (vmrange,country))`.
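The median-by-country query mentioned above, issued as an instant query (the instance address is a placeholder):

```console
curl 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=histogram_quantile(0.5, sum(histogram_over_time(temperature[24h])) by (vmrange,country))'
```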
#### hoeffding_bound_lower
@@ -182,71 +182,71 @@ See also [implicit query conversions](#implicit-query-conversions).
#### holt_winters
-`holt_winters(series_selector[d], sf, tf)` calculates Holt-Winters value (aka [double exponential smoothing](https://en.wikipedia.org/wiki/Exponential_smoothing#Double_exponential_smoothing)) for raw samples over the given lookbehind window `d` using the given smoothing factor `sf` and the given trend factor `tf`. Both `sf` and `tf` must be in the range `[0...1]`. It is expected that the [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) returns time series of [gauge type](https://prometheus.io/docs/concepts/metric_types/#gauge). This function is supported by PromQL.
+`holt_winters(series_selector[d], sf, tf)` calculates Holt-Winters value (aka [double exponential smoothing](https://en.wikipedia.org/wiki/Exponential_smoothing#Double_exponential_smoothing)) for raw samples over the given lookbehind window `d` using the given smoothing factor `sf` and the given trend factor `tf`. Both `sf` and `tf` must be in the range `[0...1]`. It is expected that the [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) returns time series of [gauge type](https://docs.victoriametrics.com/keyConcepts.html#gauge). This function is supported by PromQL.
#### idelta
-`idelta(series_selector[d])` calculates the difference between the last two raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL.
+`idelta(series_selector[d])` calculates the difference between the last two raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL.
#### ideriv
-`ideriv(series_selector[d])` calculates the per-second derivative based on the last two raw samples over the given lookbehind window `d`. The derivative is calculated independently per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [deriv](#deriv).
+`ideriv(series_selector[d])` calculates the per-second derivative based on the last two raw samples over the given lookbehind window `d`. The derivative is calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [deriv](#deriv).
#### increase
-`increase(series_selector[d])` calculates the increase over the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). It is expected that the `series_selector` returns time series of [counter type](https://prometheus.io/docs/concepts/metric_types/#counter). Unlike Prometheus it takes into account the last sample before the given lookbehind window `d` when calculating the result. See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [increase_pure](#increase_pure), [increase_prometheus](#increase_prometheus) and [delta](#delta).
+`increase(series_selector[d])` calculates the increase over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyConcepts.html#counter). Unlike Prometheus it takes into account the last sample before the given lookbehind window `d` when calculating the result. See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [increase_pure](#increase_pure), [increase_prometheus](#increase_prometheus) and [delta](#delta).
#### increase_prometheus
-`increase_prometheus(series_selector[d])` calculates the increase over the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). It is expected that the `series_selector` returns time series of [counter type](https://prometheus.io/docs/concepts/metric_types/#counter). It doesn't take into account the last sample before the given lookbehind window `d` when calculating the result in the same way as Prometheus does. See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [increase_pure](#increase_pure) and [increase](#increase).
+`increase_prometheus(series_selector[d])` calculates the increase over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyConcepts.html#counter). It doesn't take into account the last sample before the given lookbehind window `d` when calculating the result in the same way as Prometheus does. See [this article](https://medium.com/@romanhavronenko/victoriametrics-promql-compliance-d4318203f51e) for details. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [increase_pure](#increase_pure) and [increase](#increase).
#### increase_pure
-`increase_pure(series_selector[d])` works the same as [increase](#increase) except for the following corner case: it assumes that [counters](https://prometheus.io/docs/concepts/metric_types/#counter) always start from 0, while [increase](#increase) ignores the first value in a series if it is too big.
+`increase_pure(series_selector[d])` works the same as [increase](#increase) except for the following corner case: it assumes that [counters](https://docs.victoriametrics.com/keyConcepts.html#counter) always start from 0, while [increase](#increase) ignores the first value in a series if it is too big.
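A short sketch of the counter semantics described above, with the `keep_metric_names` modifier applied; the metric name is a placeholder:

```console
# increase over the last hour, keeping metric names in the result
curl 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=increase(http_requests_total[1h]) keep_metric_names'
```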
#### increases_over_time
-`increases_over_time(series_selector[d])` calculates the number of raw sample value increases over the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [decreases_over_time](#decreases_over_time).
+`increases_over_time(series_selector[d])` calculates the number of raw sample value increases over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [decreases_over_time](#decreases_over_time).
#### integrate
-`integrate(series_selector[d])` calculates the integral over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
+`integrate(series_selector[d])` calculates the integral over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
#### irate
-`irate(series_selector[d])` calculates the "instant" per-second increase rate over the last two raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). It is expected that the `series_selector` returns time series of [counter type](https://prometheus.io/docs/concepts/metric_types/#counter). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [rate](#rate).
+`irate(series_selector[d])` calculates the "instant" per-second increase rate over the last two raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyConcepts.html#counter). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [rate](#rate).
#### lag
-`lag(series_selector[d])` returns the duration in seconds between the last sample on the given lookbehind window `d` and the timestamp of the current point. It is calculated independently per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [lifetime](#lifetime) and [duration_over_time](#duration_over_time).
+`lag(series_selector[d])` returns the duration in seconds between the last sample on the given lookbehind window `d` and the timestamp of the current point. It is calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [lifetime](#lifetime) and [duration_over_time](#duration_over_time).
#### last_over_time
-`last_over_time(series_selector[d])` returns the last raw sample value on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). This function is supported by PromQL. See also [first_over_time](#first_over_time) and [tlast_over_time](#tlast_over_time).
+`last_over_time(series_selector[d])` returns the last raw sample value on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). This function is supported by PromQL. See also [first_over_time](#first_over_time) and [tlast_over_time](#tlast_over_time).
#### lifetime
-`lifetime(series_selector[d])` returns the duration in seconds between the last and the first sample on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [duration_over_time](#duration_over_time) and [lag](#lag).
+`lifetime(series_selector[d])` returns the duration in seconds between the last and the first sample on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [duration_over_time](#duration_over_time) and [lag](#lag).
#### max_over_time
-`max_over_time(series_selector[d])` calculates the maximum value over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). This function is supported by PromQL. See also [tmax_over_time](#tmax_over_time).
+`max_over_time(series_selector[d])` calculates the maximum value over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). This function is supported by PromQL. See also [tmax_over_time](#tmax_over_time).
#### median_over_time
-`median_over_time(series_selector[d])` calculates median value over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). See also [avg_over_time](#avg_over_time).
+`median_over_time(series_selector[d])` calculates median value over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). See also [avg_over_time](#avg_over_time).
#### min_over_time
-`min_over_time(series_selector[d])` calculates the minimum value over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). This function is supported by PromQL. See also [tmin_over_time](#tmin_over_time).
+`min_over_time(series_selector[d])` calculates the minimum value over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). This function is supported by PromQL. See also [tmin_over_time](#tmin_over_time).
#### mode_over_time
-`mode_over_time(series_selector[d])` calculates [mode](https://en.wikipedia.org/wiki/Mode_(statistics)) for raw samples on the given lookbehind window `d`. It is calculated individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). It is expected that raw sample values are discrete.
+`mode_over_time(series_selector[d])` calculates [mode](https://en.wikipedia.org/wiki/Mode_(statistics)) for raw samples on the given lookbehind window `d`. It is calculated individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). It is expected that raw sample values are discrete.
#### predict_linear
-`predict_linear(series_selector[d], t)` calculates the value `t` seconds in the future using linear interpolation over raw samples on the given lookbehind window `d`. The predicted value is calculated individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). This function is supported by PromQL.
+`predict_linear(series_selector[d], t)` calculates the value `t` seconds in the future using linear interpolation over raw samples on the given lookbehind window `d`. The predicted value is calculated individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). This function is supported by PromQL.
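A common capacity-planning use of `predict_linear`; the metric name is a placeholder:

```console
# predicted free disk space one hour (3600s) from now, from the last 6h trend
curl 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=predict_linear(node_filesystem_free_bytes[6h], 3600)'
```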
#### present_over_time
@@ -254,103 +254,103 @@ See also [implicit query conversions](#implicit-query-conversions).
#### quantile_over_time
`quantile_over_time(phi, series_selector[d])` calculates `phi`-quantile over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). The `phi` value must be in the range `[0...1]`. This function is supported by PromQL. See also [quantiles_over_time](#quantiles_over_time).
`quantile_over_time(phi, series_selector[d])` calculates `phi`-quantile over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). The `phi` value must be in the range `[0...1]`. This function is supported by PromQL. See also [quantiles_over_time](#quantiles_over_time).
#### quantiles_over_time
`quantiles_over_time("phiLabel", phi1, ..., phiN, series_selector[d])` calculates `phi*`-quantiles over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). The function returns individual series per each `phi*` with `{phiLabel="phi*"}` label. `phi*` values must be in the range `[0...1]`. See also [quantile_over_time](#quantile_over_time).
`quantiles_over_time("phiLabel", phi1, ..., phiN, series_selector[d])` calculates `phi*`-quantiles over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). The function returns individual series per each `phi*` with `{phiLabel="phi*"}` label. `phi*` values must be in the range `[0...1]`. See also [quantile_over_time](#quantile_over_time).
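For example, a single query can return several percentiles at once. A sketch, assuming a hypothetical `request_duration_seconds` series and an instance at `localhost:8428`:

```console
# returns three series per input series, labeled {phi="0.5"}, {phi="0.9"} and {phi="0.99"}
curl -G 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=quantiles_over_time("phi", 0.5, 0.9, 0.99, request_duration_seconds[1h])'
```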
#### range_over_time
`range_over_time(series_selector[d])` calculates value range over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). E.g. it calculates `max_over_time(series_selector[d]) - min_over_time(series_selector[d])`. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
`range_over_time(series_selector[d])` calculates value range over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). E.g. it calculates `max_over_time(series_selector[d]) - min_over_time(series_selector[d])`. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
#### rate
`rate(series_selector[d])` calculates the average per-second increase rate over the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). It is expected that the `series_selector` returns time series of [counter type](https://prometheus.io/docs/concepts/metric_types/#counter). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL.
`rate(series_selector[d])` calculates the average per-second increase rate over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyConcepts.html#counter). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL.
#### rate_over_sum
`rate_over_sum(series_selector[d])` calculates per-second rate over the sum of raw samples on the given lookbehind window `d`. The calculations are performed individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
`rate_over_sum(series_selector[d])` calculates per-second rate over the sum of raw samples on the given lookbehind window `d`. The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
#### resets
`resets(series_selector[d])` returns the number of [counter](https://prometheus.io/docs/concepts/metric_types/#counter) resets over the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). It is expected that the `series_selector` returns time series of [counter type](https://prometheus.io/docs/concepts/metric_types/#counter). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL.
`resets(series_selector[d])` returns the number of [counter](https://docs.victoriametrics.com/keyConcepts.html#counter) resets over the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). It is expected that the `series_selector` returns time series of [counter type](https://docs.victoriametrics.com/keyConcepts.html#counter). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL.
#### rollup
`rollup(series_selector[d])` calculates `min`, `max` and `avg` values for raw samples on the given lookbehind window `d`. These values are calculated individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors).
`rollup(series_selector[d])` calculates `min`, `max` and `avg` values for raw samples on the given lookbehind window `d`. These values are calculated individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering).
#### rollup_candlestick
`rollup_candlestick(series_selector[d])` calculates `open`, `high`, `low` and `close` values (aka OHLC) over raw samples on the given lookbehind window `d`. The calculations are performed individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). This function is useful for financial applications.
`rollup_candlestick(series_selector[d])` calculates `open`, `high`, `low` and `close` values (aka OHLC) over raw samples on the given lookbehind window `d`. The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). This function is useful for financial applications.
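A sketch, assuming a hypothetical `stock_price` series and an instance at `localhost:8428`:

```console
# returns open, high, low and close values over the last day per series
curl -G 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=rollup_candlestick(stock_price[1d])'
```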
#### rollup_delta
`rollup_delta(series_selector[d])` calculates differences between adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated differences. The calculations are performed individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [rollup_increase](#rollup_increase).
`rollup_delta(series_selector[d])` calculates differences between adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated differences. The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [rollup_increase](#rollup_increase).
#### rollup_deriv
`rollup_deriv(series_selector[d])` calculates per-second derivatives for adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated per-second derivatives. The calculations are performed individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
`rollup_deriv(series_selector[d])` calculates per-second derivatives for adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated per-second derivatives. The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
#### rollup_increase
`rollup_increase(series_selector[d])` calculates increases for adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated increases. The calculations are performed individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [rollup_delta](#rollup_delta).
`rollup_increase(series_selector[d])` calculates increases for adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated increases. The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [rollup_delta](#rollup_delta).
#### rollup_rate
`rollup_rate(series_selector[d])` calculates per-second change rates for adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated per-second change rates. The calculations are performed individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
`rollup_rate(series_selector[d])` calculates per-second change rates for adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated per-second change rates. The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
#### rollup_scrape_interval
`rollup_scrape_interval(series_selector[d])` calculates the interval in seconds between adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated interval. The calculations are performed individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [scrape_interval](#scrape_interval).
`rollup_scrape_interval(series_selector[d])` calculates the interval in seconds between adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated interval. The calculations are performed individually per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [scrape_interval](#scrape_interval).
#### scrape_interval
`scrape_interval(series_selector[d])` calculates the average interval in seconds between raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [rollup_scrape_interval](#rollup_scrape_interval).
`scrape_interval(series_selector[d])` calculates the average interval in seconds between raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [rollup_scrape_interval](#rollup_scrape_interval).
#### share_gt_over_time
`share_gt_over_time(series_selector[d], gt)` returns share (in the range `[0...1]`) of raw samples on the given lookbehind window `d`, which are bigger than `gt`. It is calculated independently per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. Useful for calculating SLI and SLO. Example: `share_gt_over_time(up[24h], 0)` - returns service availability for the last 24 hours. See also [share_le_over_time](#share_le_over_time).
`share_gt_over_time(series_selector[d], gt)` returns share (in the range `[0...1]`) of raw samples on the given lookbehind window `d`, which are bigger than `gt`. It is calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. Useful for calculating SLI and SLO. Example: `share_gt_over_time(up[24h], 0)` - returns service availability for the last 24 hours. See also [share_le_over_time](#share_le_over_time).
#### share_le_over_time
`share_le_over_time(series_selector[d], le)` returns share (in the range `[0...1]`) of raw samples on the given lookbehind window `d`, which are smaller or equal to `le`. It is calculated independently per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. Useful for calculating SLI and SLO. Example: `share_le_over_time(memory_usage_bytes[24h], 100*1024*1024)` returns the share of time series values for the last 24 hours when memory usage was below or equal to 100MB. See also [share_gt_over_time](#share_gt_over_time).
`share_le_over_time(series_selector[d], le)` returns share (in the range `[0...1]`) of raw samples on the given lookbehind window `d`, which are smaller or equal to `le`. It is calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. Useful for calculating SLI and SLO. Example: `share_le_over_time(memory_usage_bytes[24h], 100*1024*1024)` returns the share of time series values for the last 24 hours when memory usage was below or equal to 100MB. See also [share_gt_over_time](#share_gt_over_time).
#### stale_samples_over_time
`stale_samples_over_time(series_selector[d])` calculates the number of [staleness markers](https://docs.victoriametrics.com/vmagent.html#prometheus-staleness-markers) on the given lookbehind window `d` per each time series matching the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
`stale_samples_over_time(series_selector[d])` calculates the number of [staleness markers](https://docs.victoriametrics.com/vmagent.html#prometheus-staleness-markers) on the given lookbehind window `d` per each time series matching the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
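For example, this can help spot targets whose series disappeared during the window. A sketch, assuming the standard `up` metric and an instance at `localhost:8428`:

```console
# series with a non-zero result received staleness markers over the last hour
curl -G 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=stale_samples_over_time(up[1h]) > 0'
```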
#### stddev_over_time
`stddev_over_time(series_selector[d])` calculates standard deviation over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [stdvar_over_time](#stdvar_over_time).
`stddev_over_time(series_selector[d])` calculates standard deviation over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [stdvar_over_time](#stdvar_over_time).
#### stdvar_over_time
`stdvar_over_time(series_selector[d])` calculates standard variance over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [stddev_over_time](#stddev_over_time).
`stdvar_over_time(series_selector[d])` calculates standard variance over raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [stddev_over_time](#stddev_over_time).
#### sum_over_time
`sum_over_time(series_selector[d])` calculates the sum of raw sample values on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL.
`sum_over_time(series_selector[d])` calculates the sum of raw sample values on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL.
#### sum2_over_time
`sum2_over_time(series_selector[d])` calculates the sum of squares for raw sample values on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
`sum2_over_time(series_selector[d])` calculates the sum of squares for raw sample values on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
#### timestamp
`timestamp(series_selector[d])` returns the timestamp in seconds for the last raw sample on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [timestamp_with_name](#timestamp_with_name).
`timestamp(series_selector[d])` returns the timestamp in seconds for the last raw sample on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. This function is supported by PromQL. See also [timestamp_with_name](#timestamp_with_name).
#### timestamp_with_name
`timestamp_with_name(series_selector[d])` returns the timestamp in seconds for the last raw sample on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are preserved in the resulting rollups. See also [timestamp](#timestamp).
`timestamp_with_name(series_selector[d])` returns the timestamp in seconds for the last raw sample on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are preserved in the resulting rollups. See also [timestamp](#timestamp).
#### tfirst_over_time
`tfirst_over_time(series_selector[d])` returns the timestamp in seconds for the first raw sample on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [first_over_time](#first_over_time).
`tfirst_over_time(series_selector[d])` returns the timestamp in seconds for the first raw sample on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [first_over_time](#first_over_time).
#### tlast_change_over_time
`tlast_change_over_time(series_selector[d])` returns the timestamp in seconds for the last change per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) on the given lookbehind window `d`. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [last_over_time](#last_over_time).
`tlast_change_over_time(series_selector[d])` returns the timestamp in seconds for the last change per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) on the given lookbehind window `d`. Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [last_over_time](#last_over_time).
#### tlast_over_time
@ -358,21 +358,21 @@ See also [implicit query conversions](#implicit-query-conversions).
#### tmax_over_time
`tmax_over_time(series_selector[d])` returns the timestamp in seconds for the raw sample with the maximum value on the given lookbehind window `d`. It is calculated independently per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [max_over_time](#max_over_time).
`tmax_over_time(series_selector[d])` returns the timestamp in seconds for the raw sample with the maximum value on the given lookbehind window `d`. It is calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [max_over_time](#max_over_time).
#### tmin_over_time
`tmin_over_time(series_selector[d])` returns the timestamp in seconds for the raw sample with the minimum value on the given lookbehind window `d`. It is calculated independently per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [min_over_time](#min_over_time).
`tmin_over_time(series_selector[d])` returns the timestamp in seconds for the raw sample with the minimum value on the given lookbehind window `d`. It is calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names. See also [min_over_time](#min_over_time).
#### zscore_over_time
`zscore_over_time(series_selector[d])` returns [z-score](https://en.wikipedia.org/wiki/Standard_score) for raw samples on the given lookbehind window `d`. It is calculated independently per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
`zscore_over_time(series_selector[d])` returns [z-score](https://en.wikipedia.org/wiki/Standard_score) for raw samples on the given lookbehind window `d`. It is calculated independently per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering). Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.
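A sketch of basic anomaly detection, assuming a hypothetical `requests_per_second` series and an instance at `localhost:8428`:

```console
# flag series whose z-score over the last hour exceeds 3 in absolute value
curl -G 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=abs(zscore_over_time(requests_per_second[1h])) > 3'
```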
### Transform functions
**Transform functions** calculate transformations over rollup results. For example, `abs(delta(temperature[24h]))` calculates the absolute value for every point of every time series returned from the rollup `delta(temperature[24h])`. Additional details:
* If a transform function is applied directly to a [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors), then the [default_rollup()](#default_rollup) function is automatically applied before calculating the transformations. For example, `abs(temperature)` is implicitly transformed to `abs(default_rollup(temperature[1i]))`.
* If a transform function is applied directly to a [series selector](https://docs.victoriametrics.com/keyConcepts.html#filtering), then the [default_rollup()](#default_rollup) function is automatically applied before calculating the transformations. For example, `abs(temperature)` is implicitly transformed to `abs(default_rollup(temperature[1i]))`.
* All the transform functions accept optional `keep_metric_names` modifier. If it is set, then the function doesn't drop metric names from the resulting time series. See [these docs](#keep_metric_names).
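A sketch of the `keep_metric_names` modifier, assuming a hypothetical `temperature` gauge and an instance at `localhost:8428`:

```console
# without `keep_metric_names` the metric name would be dropped from the result
curl -G 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=round(temperature) keep_metric_names'
```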
See also [implicit query conversions](#implicit-query-conversions).
@ -467,7 +467,7 @@ See also [implicit query conversions](#implicit-query-conversions).
#### end
`end()` returns the unix timestamp in seconds for the last point. See also [start](#start). It is known as `end` query arg passed to [/api/v1/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries).
`end()` returns the unix timestamp in seconds for the last point. See also [start](#start). It is known as `end` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query).
#### exp
@ -679,11 +679,11 @@ See also [implicit query conversions](#implicit-query-conversions).
#### start
`start()` returns unix timestamp in seconds for the first point. See also [end](#end). It is known as `start` query arg passed to [/api/v1/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries).
`start()` returns unix timestamp in seconds for the first point. See also [end](#end). It is known as `start` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query).
#### step
`step()` returns the step in seconds (aka interval) between the returned points. It is known as `step` query arg passed to [/api/v1/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries).
`step()` returns the step in seconds (aka interval) between the returned points. It is known as `step` query arg passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query).
#### time
@ -713,7 +713,7 @@ See also [implicit query conversions](#implicit-query-conversions).
**Label manipulation functions** perform manipulations with labels on the selected rollup results. Additional details:
* If a label manipulation function is applied directly to a [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors), then the [default_rollup()](#default_rollup) function is automatically applied before performing the label transformation. For example, `alias(temperature, "foo")` is implicitly transformed to `alias(default_rollup(temperature[1i]), "foo")`.
* If a label manipulation function is applied directly to a [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering), then the [default_rollup()](#default_rollup) function is automatically applied before performing the label transformation. For example, `alias(temperature, "foo")` is implicitly transformed to `alias(default_rollup(temperature[1i]), "foo")`.
See also [implicit query conversions](#implicit-query-conversions).
@ -797,7 +797,7 @@ sum by (__name__) (
**Aggregate functions** calculate aggregates over groups of rollup results. Additional details:
* By default a single group is used for aggregation. Multiple independent groups can be set up by specifying grouping labels in `by` and `without` modifiers. For example, `count(up) by (job)` would group rollup results by `job` label value and calculate the [count](#count) aggregate function independently per each group, while `count(up) without (instance)` would group rollup results by all the labels except `instance` before calculating [count](#count) aggregate function independently per each group. Multiple labels can be put in `by` and `without` modifiers.
* If the aggregate function is applied directly to a [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors), then the [default_rollup()](#default_rollup) function is automatically applied before calculating the aggregate. For example, `count(up)` is implicitly transformed to `count(default_rollup(up[1i]))`.
* If the aggregate function is applied directly to a [series_selector](https://docs.victoriametrics.com/keyConcepts.html#filtering), then the [default_rollup()](#default_rollup) function is automatically applied before calculating the aggregate. For example, `count(up)` is implicitly transformed to `count(default_rollup(up[1i]))`.
* Aggregate functions accept an arbitrary number of args. For example, `avg(q1, q2, q3)` would return the average values for every point across time series returned by `q1`, `q2` and `q3`.
* Aggregate functions support optional `limit N` suffix, which can be used for limiting the number of output groups. For example, `sum(x) by (y) limit 3` limits the number of groups for the aggregation to 3. All the other groups are ignored.
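A sketch of grouping with the `limit` suffix, assuming the standard `up` metric and an instance at `localhost:8428`:

```console
# count scrape targets per job, keeping at most 3 output groups
curl -G 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=count(up) by (job) limit 3'
```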
@ -945,22 +945,22 @@ See also [implicit query conversions](#implicit-query-conversions).
## Subqueries
MetricsQL supports and extends PromQL subqueries. See [this article](https://valyala.medium.com/prometheus-subqueries-in-victoriametrics-9b1492b720b3) for details. Any [rollup function](#rollup-functions) applied to something other than a [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) forms a subquery. Nested rollup functions can be implicit thanks to the [implicit query conversions](#implicit-query-conversions). For example, `delta(sum(m))` is implicitly converted to `delta(sum(default_rollup(m[1i]))[1i:1i])`, so it becomes a subquery, since it contains [default_rollup](#default_rollup) nested into [delta](#delta).
MetricsQL supports and extends PromQL subqueries. See [this article](https://valyala.medium.com/prometheus-subqueries-in-victoriametrics-9b1492b720b3) for details. Any [rollup function](#rollup-functions) applied to something other than a [series selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) forms a subquery. Nested rollup functions can be implicit thanks to the [implicit query conversions](#implicit-query-conversions). For example, `delta(sum(m))` is implicitly converted to `delta(sum(default_rollup(m[1i]))[1i:1i])`, so it becomes a subquery, since it contains [default_rollup](#default_rollup) nested into [delta](#delta).
VictoriaMetrics performs subqueries in the following way:
* It calculates the inner rollup function using the `step` value from the outer rollup function. For example, for expression `max_over_time(rate(http_requests_total[5m])[1h:30s])` the inner function `rate(http_requests_total[5m])` is calculated with `step=30s`. The resulting data points are aligned by the `step`.
* It calculates the outer rollup function over the results of the inner rollup function using the `step` value passed by Grafana to [/api/v1/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries).
* It calculates the outer rollup function over the results of the inner rollup function using the `step` value passed by Grafana to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query).
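A sketch of the subquery above issued as a range query (host and time range are illustrative):

```console
# the inner rate() is evaluated at 30s resolution; the outer max_over_time()
# is evaluated at the `step` passed in the request (5m here)
curl -G 'http://localhost:8428/api/v1/query_range' \
  --data-urlencode 'query=max_over_time(rate(http_requests_total[5m])[1h:30s])' \
  --data-urlencode 'start=2022-06-20T00:00:00Z' \
  --data-urlencode 'end=2022-06-20T06:00:00Z' \
  --data-urlencode 'step=5m'
```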
## Implicit query conversions
VictoriaMetrics performs the following implicit conversions for incoming queries before starting the calculations:
* If the lookbehind window in square brackets is missing inside a [rollup function](#rollup-functions), then `[1i]` is automatically added there. The `[1i]` means one `step` value, which is passed to [/api/v1/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries). It is also known as `$__interval` in Grafana. For example, `rate(http_requests_count)` is automatically transformed to `rate(http_requests_count[1i])`.
* All the [series selectors](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors), which aren't wrapped into [rollup functions](#rollup-functions), are automatically wrapped into the [default_rollup](#default_rollup) function. Examples:
* If the lookbehind window in square brackets is missing inside a [rollup function](#rollup-functions), then `[1i]` is automatically added there. The `[1i]` means one `step` value, which is passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query). It is also known as `$__interval` in Grafana. For example, `rate(http_requests_count)` is automatically transformed to `rate(http_requests_count[1i])`.
* All the [series selectors](https://docs.victoriametrics.com/keyConcepts.html#filtering), which aren't wrapped into [rollup functions](#rollup-functions), are automatically wrapped into the [default_rollup](#default_rollup) function. Examples:
* `foo` is transformed to `default_rollup(foo[1i])`
* `foo + bar` is transformed to `default_rollup(foo[1i]) + default_rollup(bar[1i])`
* `count(up)` is transformed to `count(default_rollup(up[1i]))`, because [count](#count) isn't a [rollup function](#rollup-functions) - it is an [aggregate function](#aggregate-functions)
* `abs(temperature)` is transformed to `abs(default_rollup(temperature[1i]))`, because [abs](#abs) isn't a [rollup function](#rollup-functions) - it is a [transform function](#transform-functions)
* If `step` in square brackets is missing inside [subquery](#subqueries), then `1i` step is automatically added there. For example, `avg_over_time(rate(http_requests_total[5m])[1h])` is automatically converted to `avg_over_time(rate(http_requests_total[5m])[1h:1i])`.
* If something other than [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) is passed to [rollup function](#rollup-functions), then a [subquery](#subqueries) with `1i` lookbehind window and `1i` step is automatically formed. For example, `rate(sum(up))` is automatically converted to `rate((sum(default_rollup(up[1i])))[1i:1i])`.
* If something other than [series selector](https://docs.victoriametrics.com/keyConcepts.html#filtering) is passed to [rollup function](#rollup-functions), then a [subquery](#subqueries) with `1i` lookbehind window and `1i` step is automatically formed. For example, `rate(sum(up))` is automatically converted to `rate((sum(default_rollup(up[1i])))[1i:1i])`.

View file

@ -936,7 +936,7 @@ for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time
On large databases you may hit the limit on the number of time series that can be exported. In this case you need to adjust the `-search.maxExportSeries` command-line flag:
```console
# count unique timeseries in database
# count unique time series in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
# relaunch victoriametrics with search.maxExportSeries more than value from previous command
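# e.g., if the command above printed ~5 million series, restart with headroom
# (binary name and value are illustrative):
# /path/to/victoria-metrics-prod -search.maxExportSeries=10000000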
@ -1392,6 +1392,7 @@ VictoriaMetrics provides the following security-related command-line flags:
* `-forceMergeAuthKey` for protecting `/internal/force_merge` endpoint. See [force merge docs](#forced-merge).
* `-search.resetCacheAuthKey` for protecting `/internal/resetRollupResultCache` endpoint. See [backfilling](#backfilling) for more details.
* `-configAuthKey` for protecting `/config` endpoint, since it may contain sensitive information such as passwords.
* `-flagsAuthKey` for protecting `/flags` endpoint.
* `-pprofAuthKey` for protecting `/debug/pprof/*` endpoints, which can be used for [profiling](#profiling).
* `-denyQueryTracing` for disallowing [query tracing](#query-tracing).
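Auth keys are passed via the `authKey` query arg. A sketch, assuming an instance at `localhost:8428` started with `-flagsAuthKey=secret`:

```console
# requests without the correct authKey are rejected
curl 'http://localhost:8428/flags?authKey=secret'
```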

View file

@ -940,7 +940,7 @@ for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time
On large databases you may hit the limit on the number of time series that can be exported. In this case you need to adjust the `-search.maxExportSeries` command-line flag:
```console
# count unique timeseries in database
# count unique time series in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
# relaunch victoriametrics with search.maxExportSeries more than value from previous command
@ -1396,6 +1396,7 @@ VictoriaMetrics provides the following security-related command-line flags:
* `-forceMergeAuthKey` for protecting `/internal/force_merge` endpoint. See [force merge docs](#forced-merge).
* `-search.resetCacheAuthKey` for protecting `/internal/resetRollupResultCache` endpoint. See [backfilling](#backfilling) for more details.
* `-configAuthKey` for protecting `/config` endpoint, since it may contain sensitive information such as passwords.
* `-flagsAuthKey` for protecting `/flags` endpoint.
* `-pprofAuthKey` for protecting `/debug/pprof/*` endpoints, which can be used for [profiling](#profiling).
* `-denyQueryTracing` for disallowing [query tracing](#query-tracing).

View file

@ -221,8 +221,8 @@ curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/query?query=vm_http_r
Additional information:
* [Prometheus querying API usage](https://docs.victoriametrics.com/#prometheus-querying-api-usage)
* [Instant queries](https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries)
* [Instant vector selectors](https://prometheus.io/docs/prometheus/latest/querying/basics/#instant-vector-selectors)
* [Instant queries](https://docs.victoriametrics.com/keyConcepts.html#instant-query)
* [Query language](https://docs.victoriametrics.com/keyConcepts.html#metricsql)
## /api/v1/query_range
@ -260,8 +260,8 @@ curl -G http://<vmselect>:8481/select/0/prometheus/api/v1/query_range --data-url
Additional information:
* [Prometheus querying API usage](https://docs.victoriametrics.com/#prometheus-querying-api-usage)
* [Range queries](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries)
* [Range Vector Selectors](https://prometheus.io/docs/prometheus/latest/querying/basics/#range-vector-selectors)
* [Range queries](https://docs.victoriametrics.com/keyConcepts.html#range-query)
* [Query language](https://docs.victoriametrics.com/keyConcepts.html#metricsql)
## /api/v1/series

View file

@ -514,7 +514,7 @@ max range per request: 8h20m0s
In `replay` mode all groups are executed sequentially one-by-one. Rules within the group are
executed sequentially as well (the `concurrency` setting is ignored). Vmalert sends the rule's expression
to [/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries) endpoint
to [/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query) endpoint
of the configured `-datasource.url`. Returned data is then processed according to the rule type and
backfilled to `-remoteWrite.url` via [remote Write protocol](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations).
Vmalert respects `evaluationInterval` value set by flag or per-group during the replay.
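A sketch of a replay run with hypothetical URLs and rules file; `-replay.timeFrom` and `-replay.timeTo` bound the window to backfill:

```console
./vmalert -rule=rules.yml \
  -datasource.url=http://victoriametrics:8428 \
  -remoteWrite.url=http://victoriametrics:8428 \
  -replay.timeFrom=2022-06-01T00:00:00Z \
  -replay.timeTo=2022-06-07T00:00:00Z
```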

View file

@ -144,7 +144,11 @@ Do not transfer Basic Auth headers in plaintext over untrusted networks. Enable
Alternatively, [https termination proxy](https://en.wikipedia.org/wiki/TLS_termination_proxy) may be put in front of `vmauth`.
It is recommended protecting `/-/reload` endpoint with `-reloadAuthKey` command-line flag, so external users couldn't trigger config reload.
It is recommended to protect the following endpoints with auth keys:
* `/-/reload` with the `-reloadAuthKey` command-line flag, so external users can't trigger config reload.
* `/flags` with the `-flagsAuthKey` command-line flag, so unauthorized users can't read the application's command-line flags.
* `/metrics` with the `-metricsAuthKey` command-line flag, so unauthorized users can't access [vmauth metrics](#monitoring).
* `/debug/pprof` with the `-pprofAuthKey` command-line flag, so unauthorized users can't access [profiling information](#profiling).
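A sketch, assuming `vmauth` listens on the default `localhost:8427` and was started with `-metricsAuthKey=secret`:

```console
# requests without the correct authKey are rejected
curl 'http://localhost:8427/metrics?authKey=secret'
```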
## Monitoring

20
go.mod
View file

@ -11,7 +11,7 @@ require (
github.com/VictoriaMetrics/fasthttp v1.1.0
github.com/VictoriaMetrics/metrics v1.18.1
github.com/VictoriaMetrics/metricsql v0.43.0
github.com/aws/aws-sdk-go v1.44.32
github.com/aws/aws-sdk-go v1.44.37
github.com/cespare/xxhash/v2 v2.1.2
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
@ -27,24 +27,24 @@ require (
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/oklog/ulid v1.3.1
github.com/prometheus/common v0.34.0 // indirect
github.com/prometheus/common v0.35.0 // indirect
github.com/prometheus/prometheus v1.8.2-0.20201119142752-3ad25a6dc3d9
github.com/urfave/cli/v2 v2.8.1
github.com/urfave/cli/v2 v2.10.1
github.com/valyala/fastjson v1.6.3
github.com/valyala/fastrand v1.1.0
github.com/valyala/fasttemplate v1.2.1
github.com/valyala/gozstd v1.17.0
github.com/valyala/quicktemplate v1.7.0
golang.org/x/net v0.0.0-20220607020251-c690dde0001d
golang.org/x/net v0.0.0-20220617184016-355a448f1bc9
golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb
golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d
google.golang.org/api v0.83.0
golang.org/x/sys v0.0.0-20220615213510-4f61da869c0c
google.golang.org/api v0.84.0
gopkg.in/yaml.v2 v2.4.0
)
require (
cloud.google.com/go v0.102.0 // indirect
cloud.google.com/go/compute v1.6.1 // indirect
cloud.google.com/go v0.102.1 // indirect
cloud.google.com/go/compute v1.7.0 // indirect
cloud.google.com/go/iam v0.3.0 // indirect
github.com/VividCortex/ewma v1.2.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
@ -54,6 +54,7 @@ require (
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/go-cmp v0.5.8 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.1.0 // indirect
github.com/googleapis/gax-go/v2 v2.4.0 // indirect
github.com/googleapis/go-type-adapters v1.0.0 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
@ -75,8 +76,7 @@ require (
golang.org/x/text v0.3.7 // indirect
golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac // indirect
google.golang.org/genproto v0.0.0-20220617124728-180714bec0ad // indirect
google.golang.org/grpc v1.47.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
)

40
go.sum
View file

@ -29,8 +29,9 @@ cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW
cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc=
cloud.google.com/go v0.99.0/go.mod h1:w0Xx2nLzqWJPuozYQX+hFfCSI8WioryfRDzkoI/Y2ZA=
cloud.google.com/go v0.100.2/go.mod h1:4Xra9TjzAeYHrl5+oeLlzbM2k3mjVhZh4UqTZ//w99A=
cloud.google.com/go v0.102.0 h1:DAq3r8y4mDgyB/ZPJ9v/5VJNqjgJAxTn6ZYLlUywOu8=
cloud.google.com/go v0.102.0/go.mod h1:oWcCzKlqJ5zgHQt9YsaeTY9KzIvjyy0ArmiBUgpQ+nc=
cloud.google.com/go v0.102.1 h1:vpK6iQWv/2uUeFJth4/cBHsQAGjn1iIE6AAlxipRaA0=
cloud.google.com/go v0.102.1/go.mod h1:XZ77E9qnTEnrgEOvr4xzfdX5TRo7fB4T2F4O6+34hIU=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
@ -42,8 +43,9 @@ cloud.google.com/go/compute v0.1.0/go.mod h1:GAesmwr110a34z04OlxYkATPBEfVhkymfTB
cloud.google.com/go/compute v1.3.0/go.mod h1:cCZiE1NHEtai4wiufUhW8I8S1JKkAnhnQJWM7YD99wM=
cloud.google.com/go/compute v1.5.0/go.mod h1:9SMHyhJlzhlkJqrPAc839t2BZFTSk6Jdj6mkzQJeu0M=
cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz/FMzPu0s=
cloud.google.com/go/compute v1.6.1 h1:2sMmt8prCn7DPaG4Pmh0N3Inmc8cT8ae5k1M6VJ9Wqc=
cloud.google.com/go/compute v1.6.1/go.mod h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU=
cloud.google.com/go/compute v1.7.0 h1:v/k9Eueb8aAJ0vZuxKMrgm6kPhCLZU9HxFU+AFDs9Uk=
cloud.google.com/go/compute v1.7.0/go.mod h1:435lt8av5oL9P3fv1OEzSbSUe+ybHXGMPQHHZWZxy9U=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/iam v0.3.0 h1:exkAomrVUuzx9kWFI1wm3KI0uoDeUFPB4kKGzx6x+Gc=
@ -142,8 +144,8 @@ github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQ
github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.44.32 h1:x5hBtpY/02sgRL158zzTclcCLwh3dx3YlSl1rAH4Op0=
github.com/aws/aws-sdk-go v1.44.32/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/aws/aws-sdk-go v1.44.37 h1:KvDxCX6dfJeEDC77U5GPGSP0ErecmNnhDHFxw+NIvlI=
github.com/aws/aws-sdk-go v1.44.37/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@ -458,6 +460,9 @@ github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.0.0-20220520183353-fd19c99a87aa/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
github.com/googleapis/enterprise-certificate-proxy v0.1.0 h1:zO8WHNx/MYiAKJ3d5spxZXZE6KHmIQGQcAzwUzV7qQw=
github.com/googleapis/enterprise-certificate-proxy v0.1.0/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
@ -736,8 +741,8 @@ github.com/prometheus/common v0.14.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16
github.com/prometheus/common v0.15.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.34.0 h1:RBmGO9d/FVjqHT0yUGQwBJhkwKV+wPCn7KGpvfab0uE=
github.com/prometheus/common v0.34.0/go.mod h1:gB3sOl7P0TvJabZpLY5uQMpUqRCPPCyRLCZYc7JZTNE=
github.com/prometheus/common v0.35.0 h1:Eyr+Pw2VymWejHqCugNaQXkAi6KayVNxaHeu6khmFBE=
github.com/prometheus/common v0.35.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
@ -815,8 +820,8 @@ github.com/uber/jaeger-client-go v2.25.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMW
github.com/uber/jaeger-lib v2.4.0+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli/v2 v2.8.1 h1:CGuYNZF9IKZY/rfBe3lJpccSoIY1ytfvmgQT90cNOl4=
github.com/urfave/cli/v2 v2.8.1/go.mod h1:Z41J9TPoffeoqP0Iza0YbAhGvymRdZAd2uPmZ5JxRdY=
github.com/urfave/cli/v2 v2.10.1 h1:34qJSQxqF/4fqJ7oiAV5WoXaTFlGG9QNM+qxpY3W3gs=
github.com/urfave/cli/v2 v2.10.1/go.mod h1:MaQ2eKodtz1fFzu2U0jL+tVjoWmG134POMRjyXJK6+8=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.30.0/go.mod h1:2rsYD01CKFrjjsvFxx75KlEUNpWNBY9JWD3K/7o2Cus=
@ -992,8 +997,9 @@ golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su
golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220412020605-290c469a71a5/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220607020251-c690dde0001d h1:4SFsTMi4UahlKoloni7L4eYzhFRifURQLw+yv0QDCx8=
golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220617184016-355a448f1bc9 h1:Yqz/iviulwKwAREEeUd3nbBFn0XuyJqkoft2IlrvOhc=
golang.org/x/net v0.0.0-20220617184016-355a448f1bc9/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@ -1013,7 +1019,6 @@ golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb h1:8tDJ3aechhddbdPAxpycgXHJRMLpk/Ab+aa4OgdN5/g=
golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb/go.mod h1:jaDAt6Dkxork7LmZnYtzbRWj0W47D86a3TGe0YHBvmE=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -1123,8 +1128,9 @@ golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220502124256-b6088ccd6cba/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d h1:Zu/JngovGLVi6t2J3nmAf3AoTDwuzw85YZ3b9o4yU7s=
golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220615213510-4f61da869c0c h1:aFV+BgZ4svzjfabn8ERpuB4JI4N6/rdy1iusx77G3oU=
golang.org/x/sys v0.0.0-20220615213510-4f61da869c0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@ -1269,8 +1275,8 @@ google.golang.org/api v0.74.0/go.mod h1:ZpfMZOVRMywNyvJFeqL9HRWBgAuRfSjJFpe9QtRR
google.golang.org/api v0.75.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA=
google.golang.org/api v0.78.0/go.mod h1:1Sg78yoMLOhlQTeF+ARBoytAcH1NNyyl390YMy6rKmw=
google.golang.org/api v0.80.0/go.mod h1:xY3nI94gbvBrE0J6NHXhxOmW97HG7Khjkku6AFB3Hyg=
google.golang.org/api v0.83.0 h1:pMvST+6v+46Gabac4zlJlalxZjCeRcepwg2EdBU+nCc=
google.golang.org/api v0.83.0/go.mod h1:CNywQoj/AfhTw26ZWAa6LwOv+6WFxHmeLPZq2uncLZk=
google.golang.org/api v0.84.0 h1:NMB9J4cCxs9xEm+1Z9QiO3eFvn7EnQj3Eo3hN6ugVlg=
google.golang.org/api v0.84.0/go.mod h1:NTsGnUFJMYROtiquksZHBWtHfeMC7iYthki7Eq3pa8o=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@ -1359,9 +1365,10 @@ google.golang.org/genproto v0.0.0-20220429170224-98d788798c3e/go.mod h1:8w6bsBMX
google.golang.org/genproto v0.0.0-20220505152158-f39f71e6c8f3/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220518221133-4f43b3371335/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220523171625-347a074981d8/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8/go.mod h1:yKyY4AMRwFiC8yMMNaMi+RkCnjZJt9LoWuvhXjMs+To=
google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac h1:ByeiW1F67iV9o8ipGskA+HWzSkMbRJuKLlwCdPxzn7A=
google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
google.golang.org/genproto v0.0.0-20220616135557-88e70c0c3a90/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
google.golang.org/genproto v0.0.0-20220617124728-180714bec0ad h1:kqrS+lhvaMHCxul6sKQvKJ8nAAhlVItmZV822hYFH/U=
google.golang.org/genproto v0.0.0-20220617124728-180714bec0ad/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
@ -1443,8 +1450,7 @@ gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200605160147-a5ece683394c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

View file

@ -40,8 +40,9 @@ var (
"See https://www.robustperception.io/using-external-urls-and-proxies-with-prometheus")
httpAuthUsername = flag.String("httpAuth.username", "", "Username for HTTP Basic Auth. The authentication is disabled if empty. See also -httpAuth.password")
httpAuthPassword = flag.String("httpAuth.password", "", "Password for HTTP Basic Auth. The authentication is disabled if -httpAuth.username is empty")
metricsAuthKey = flag.String("metricsAuthKey", "", "Auth key for /metrics. It must be passed via authKey query arg. It overrides httpAuth.* settings")
pprofAuthKey = flag.String("pprofAuthKey", "", "Auth key for /debug/pprof. It must be passed via authKey query arg. It overrides httpAuth.* settings")
metricsAuthKey = flag.String("metricsAuthKey", "", "Auth key for /metrics endpoint. It must be passed via authKey query arg. It overrides httpAuth.* settings")
flagsAuthKey = flag.String("flagsAuthKey", "", "Auth key for /flags endpoint. It must be passed via authKey query arg. It overrides httpAuth.* settings")
pprofAuthKey = flag.String("pprofAuthKey", "", "Auth key for /debug/pprof/* endpoints. It must be passed via authKey query arg. It overrides httpAuth.* settings")
disableResponseCompression = flag.Bool("http.disableResponseCompression", false, "Disable compression of HTTP responses to save CPU resources. By default compression is enabled to save network bandwidth")
maxGracefulShutdownDuration = flag.Duration("http.maxGracefulShutdownDuration", 7*time.Second, `The maximum duration for a graceful shutdown of the HTTP server. A highly loaded server may require increased value for a graceful shutdown`)
@ -284,6 +285,10 @@ func handlerWrapper(s *server, w http.ResponseWriter, r *http.Request, rh Reques
metricsHandlerDuration.UpdateDuration(startTime)
return
case "/flags":
if len(*flagsAuthKey) > 0 && r.FormValue("authKey") != *flagsAuthKey {
http.Error(w, "The provided authKey doesn't match -flagsAuthKey", http.StatusUnauthorized)
return
}
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
flagutil.WriteFlags(w)
return
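
The new `/flags` guard above follows the same authKey pattern as the other protected endpoints. A minimal, self-contained sketch of that gate (flag name and error text mirror the diff; the handler body is a placeholder, not VictoriaMetrics' actual flag dump):

```go
package main

import (
	"flag"
	"fmt"
	"net/http"
)

// flagsAuthKey mirrors the -flagsAuthKey command-line flag added above.
var flagsAuthKey = flag.String("flagsAuthKey", "", "Auth key for /flags endpoint")

func main() {
	flag.Parse()
	http.HandleFunc("/flags", func(w http.ResponseWriter, r *http.Request) {
		// An empty key disables the check; otherwise the authKey query arg must match.
		if len(*flagsAuthKey) > 0 && r.FormValue("authKey") != *flagsAuthKey {
			http.Error(w, "The provided authKey doesn't match -flagsAuthKey", http.StatusUnauthorized)
			return
		}
		w.Header().Set("Content-Type", "text/plain; charset=utf-8")
		fmt.Fprintln(w, "flag dump placeholder")
	})
	http.ListenAndServe(":8428", nil)
}
```

With `-flagsAuthKey=secret` set, `/flags?authKey=secret` succeeds and any other key gets 401 Unauthorized.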

View file

@ -521,7 +521,7 @@ type indexSearch struct {
//
// It also registers the metricName in global and per-day indexes
// for the given date if the metricName->TSID entry is missing in the index.
func (is *indexSearch) GetOrCreateTSIDByName(dst *TSID, metricName []byte, date uint64) error {
func (is *indexSearch) GetOrCreateTSIDByName(dst *TSID, metricName, metricNameRaw []byte, date uint64) error {
// A hack: skip searching for the TSID after many serial misses.
// This should improve insertion performance for big batches
// of new time series.
@ -529,7 +529,7 @@ func (is *indexSearch) GetOrCreateTSIDByName(dst *TSID, metricName []byte, date
err := is.getTSIDByMetricName(dst, metricName)
if err == nil {
is.tsidByNameMisses = 0
return nil
return is.db.s.registerSeriesCardinality(dst.MetricID, metricNameRaw)
}
if err != io.EOF {
return fmt.Errorf("cannot search TSID by MetricName %q: %w", metricName, err)
@ -546,7 +546,7 @@ func (is *indexSearch) GetOrCreateTSIDByName(dst *TSID, metricName []byte, date
// TSID for the given name wasn't found. Create it.
// It is OK if duplicate TSID for mn is created by concurrent goroutines.
// Metric results will be merged by mn after TableSearch.
if err := is.createTSIDByName(dst, metricName, date); err != nil {
if err := is.createTSIDByName(dst, metricName, metricNameRaw, date); err != nil {
return fmt.Errorf("cannot create TSID by MetricName %q: %w", metricName, err)
}
return nil
@ -577,7 +577,7 @@ func (db *indexDB) putIndexSearch(is *indexSearch) {
db.indexSearchPool.Put(is)
}
func (is *indexSearch) createTSIDByName(dst *TSID, metricName []byte, date uint64) error {
func (is *indexSearch) createTSIDByName(dst *TSID, metricName, metricNameRaw []byte, date uint64) error {
mn := GetMetricName()
defer PutMetricName(mn)
if err := mn.Unmarshal(metricName); err != nil {
@ -588,8 +588,8 @@ func (is *indexSearch) createTSIDByName(dst *TSID, metricName []byte, date uint6
if err != nil {
return fmt.Errorf("cannot generate TSID: %w", err)
}
if !is.db.s.registerSeriesCardinality(dst.MetricID, mn) {
return errSeriesCardinalityExceeded
if err := is.db.s.registerSeriesCardinality(dst.MetricID, metricNameRaw); err != nil {
return err
}
if err := is.createGlobalIndexes(dst, mn); err != nil {
return fmt.Errorf("cannot create global indexes: %w", err)
@ -611,8 +611,6 @@ func (is *indexSearch) createTSIDByName(dst *TSID, metricName []byte, date uint6
return nil
}
var errSeriesCardinalityExceeded = fmt.Errorf("cannot create series because series cardinality limit exceeded")
// SetLogNewSeries updates new series logging.
//
// This function must be called before calling any storage functions.
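
The "serial misses" hack noted above trades a few redundant TSID creations for faster bulk inserts of new series. A simplified sketch of that heuristic, with an illustrative threshold (the real counter and limit live on `indexSearch`):

```go
package main

import "fmt"

// maxSerialMisses is an illustrative threshold, not the storage package's.
const maxSerialMisses = 100

type tsidLookup struct {
	serialMisses int
}

// getOrCreate models the split above: search the index only while recent
// lookups keep hitting; after enough consecutive misses, skip straight to
// creating the TSID (and, per this commit, register cardinality either way).
func (l *tsidLookup) getOrCreate(foundInIndex bool) (searchedIndex bool) {
	if l.serialMisses < maxSerialMisses {
		searchedIndex = true
		if foundInIndex {
			l.serialMisses = 0
			return searchedIndex
		}
		l.serialMisses++
	}
	// createTSIDByName would run here.
	return searchedIndex
}

func main() {
	l := &tsidLookup{}
	fmt.Println(l.getOrCreate(false), l.getOrCreate(true)) // true true
}
```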

View file

@ -604,7 +604,8 @@ func testIndexDBBigMetricName(db *indexDB) error {
mn.MetricGroup = append(mn.MetricGroup[:0], bigBytes...)
mn.sortTags()
metricName := mn.Marshal(nil)
if err := is.GetOrCreateTSIDByName(&tsid, metricName, 0); err == nil {
metricNameRaw := mn.marshalRaw(nil)
if err := is.GetOrCreateTSIDByName(&tsid, metricName, metricNameRaw, 0); err == nil {
return fmt.Errorf("expecting non-nil error on an attempt to insert metric with too big MetricGroup")
}
@ -617,7 +618,8 @@ func testIndexDBBigMetricName(db *indexDB) error {
}}
mn.sortTags()
metricName = mn.Marshal(nil)
if err := is.GetOrCreateTSIDByName(&tsid, metricName, 0); err == nil {
metricNameRaw = mn.marshalRaw(nil)
if err := is.GetOrCreateTSIDByName(&tsid, metricName, metricNameRaw, 0); err == nil {
return fmt.Errorf("expecting non-nil error on an attempt to insert metric with too big tag key")
}
@ -630,7 +632,8 @@ func testIndexDBBigMetricName(db *indexDB) error {
}}
mn.sortTags()
metricName = mn.Marshal(nil)
if err := is.GetOrCreateTSIDByName(&tsid, metricName, 0); err == nil {
metricNameRaw = mn.marshalRaw(nil)
if err := is.GetOrCreateTSIDByName(&tsid, metricName, metricNameRaw, 0); err == nil {
return fmt.Errorf("expecting non-nil error on an attempt to insert metric with too big tag value")
}
@ -645,7 +648,8 @@ func testIndexDBBigMetricName(db *indexDB) error {
}
mn.sortTags()
metricName = mn.Marshal(nil)
if err := is.GetOrCreateTSIDByName(&tsid, metricName, 0); err == nil {
metricNameRaw = mn.marshalRaw(nil)
if err := is.GetOrCreateTSIDByName(&tsid, metricName, metricNameRaw, 0); err == nil {
return fmt.Errorf("expecting non-nil error on an attempt to insert metric with too many tags")
}
@ -661,6 +665,7 @@ func testIndexDBGetOrCreateTSIDByName(db *indexDB, metricGroups int) ([]MetricNa
defer db.putIndexSearch(is)
var metricNameBuf []byte
var metricNameRawBuf []byte
for i := 0; i < 4e2+1; i++ {
var mn MetricName
@ -676,10 +681,11 @@ func testIndexDBGetOrCreateTSIDByName(db *indexDB, metricGroups int) ([]MetricNa
}
mn.sortTags()
metricNameBuf = mn.Marshal(metricNameBuf[:0])
metricNameRawBuf = mn.marshalRaw(metricNameRawBuf[:0])
// Create tsid for the metricName.
var tsid TSID
if err := is.GetOrCreateTSIDByName(&tsid, metricNameBuf, 0); err != nil {
if err := is.GetOrCreateTSIDByName(&tsid, metricNameBuf, metricNameRawBuf, 0); err != nil {
return nil, nil, fmt.Errorf("unexpected error when creating tsid for mn:\n%s: %w", &mn, err)
}
@ -1631,6 +1637,7 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
now := uint64(timestampFromTime(theDay))
baseDate := now / msecPerDay
var metricNameBuf []byte
var metricNameRawBuf []byte
perDayMetricIDs := make(map[uint64]*uint64set.Set)
var allMetricIDs uint64set.Set
labelNames := []string{
@ -1661,8 +1668,9 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
mn.sortTags()
metricNameBuf = mn.Marshal(metricNameBuf[:0])
metricNameRawBuf = mn.marshalRaw(metricNameRawBuf[:0])
var tsid TSID
if err := is.GetOrCreateTSIDByName(&tsid, metricNameBuf, 0); err != nil {
if err := is.GetOrCreateTSIDByName(&tsid, metricNameBuf, metricNameRawBuf, 0); err != nil {
t.Fatalf("unexpected error when creating tsid for mn:\n%s: %s", &mn, err)
}
mns = append(mns, mn)

View file

@ -84,6 +84,7 @@ func BenchmarkIndexDBAddTSIDs(b *testing.B) {
func benchmarkIndexDBAddTSIDs(db *indexDB, tsid *TSID, mn *MetricName, startOffset, recordsPerLoop int) {
var metricName []byte
var metricNameRaw []byte
is := db.getIndexSearch(noDeadline)
defer db.putIndexSearch(is)
for i := 0; i < recordsPerLoop; i++ {
@ -93,7 +94,8 @@ func benchmarkIndexDBAddTSIDs(db *indexDB, tsid *TSID, mn *MetricName, startOffs
}
mn.sortTags()
metricName = mn.Marshal(metricName[:0])
if err := is.GetOrCreateTSIDByName(tsid, metricName, 0); err != nil {
metricNameRaw = mn.marshalRaw(metricNameRaw[:0])
if err := is.GetOrCreateTSIDByName(tsid, metricName, metricNameRaw, 0); err != nil {
panic(fmt.Errorf("cannot insert record: %w", err))
}
}
@ -121,6 +123,7 @@ func BenchmarkHeadPostingForMatchers(b *testing.B) {
// Fill the db with data as in https://github.com/prometheus/prometheus/blob/23c0299d85bfeb5d9b59e994861553a25ca578e5/tsdb/head_bench_test.go#L66
var mn MetricName
var metricName []byte
var metricNameRaw []byte
var tsid TSID
is := db.getIndexSearch(noDeadline)
defer db.putIndexSearch(is)
@ -131,7 +134,8 @@ func BenchmarkHeadPostingForMatchers(b *testing.B) {
}
mn.sortTags()
metricName = mn.Marshal(metricName[:0])
if err := is.createTSIDByName(&tsid, metricName, 0); err != nil {
metricNameRaw = mn.marshalRaw(metricNameRaw[:0])
if err := is.createTSIDByName(&tsid, metricName, metricNameRaw, 0); err != nil {
b.Fatalf("cannot insert record: %s", err)
}
}
@ -309,13 +313,15 @@ func BenchmarkIndexDBGetTSIDs(b *testing.B) {
}
var tsid TSID
var metricName []byte
var metricNameRaw []byte
is := db.getIndexSearch(noDeadline)
defer db.putIndexSearch(is)
for i := 0; i < recordsCount; i++ {
mn.sortTags()
metricName = mn.Marshal(metricName[:0])
if err := is.GetOrCreateTSIDByName(&tsid, metricName, 0); err != nil {
metricNameRaw = mn.marshalRaw(metricNameRaw[:0])
if err := is.GetOrCreateTSIDByName(&tsid, metricName, metricNameRaw, 0); err != nil {
b.Fatalf("cannot insert record: %s", err)
}
}
@ -326,6 +332,7 @@ func BenchmarkIndexDBGetTSIDs(b *testing.B) {
b.RunParallel(func(pb *testing.PB) {
var tsidLocal TSID
var metricNameLocal []byte
var metricNameLocalRaw []byte
mnLocal := mn
is := db.getIndexSearch(noDeadline)
defer db.putIndexSearch(is)
@ -333,7 +340,8 @@ func BenchmarkIndexDBGetTSIDs(b *testing.B) {
for i := 0; i < recordsPerLoop; i++ {
mnLocal.sortTags()
metricNameLocal = mnLocal.Marshal(metricNameLocal[:0])
if err := is.GetOrCreateTSIDByName(&tsidLocal, metricNameLocal, 0); err != nil {
metricNameLocalRaw = mnLocal.marshalRaw(metricNameLocalRaw[:0])
if err := is.GetOrCreateTSIDByName(&tsidLocal, metricNameLocal, metricNameLocalRaw, 0); err != nil {
panic(fmt.Errorf("cannot obtain tsid: %w", err))
}
}

View file

@ -1677,6 +1677,9 @@ func (s *Storage) RegisterMetricNames(mrs []MetricRow) error {
for i := range mrs {
mr := &mrs[i]
if s.getTSIDFromCache(&genTSID, mr.MetricNameRaw) {
if err := s.registerSeriesCardinality(genTSID.TSID.MetricID, mr.MetricNameRaw); err != nil {
continue
}
if genTSID.generation == idb.generation {
// Fast path - mr.MetricNameRaw has been already registered in the current idb.
continue
@ -1689,7 +1692,7 @@ func (s *Storage) RegisterMetricNames(mrs []MetricRow) error {
mn.sortTags()
metricName = mn.Marshal(metricName[:0])
date := uint64(mr.Timestamp) / msecPerDay
if err := is.GetOrCreateTSIDByName(&genTSID.TSID, metricName, date); err != nil {
if err := is.GetOrCreateTSIDByName(&genTSID.TSID, metricName, mr.MetricNameRaw, date); err != nil {
if errors.Is(err, errSeriesCardinalityExceeded) {
continue
}
@ -1762,6 +1765,10 @@ func (s *Storage) add(rows []rawRow, dstMrs []*MetricRow, mrs []MetricRow, preci
continue
}
if s.getTSIDFromCache(&genTSID, mr.MetricNameRaw) {
r.TSID = genTSID.TSID
if err := s.registerSeriesCardinality(r.TSID.MetricID, mr.MetricNameRaw); err != nil {
j--
continue
}
// Fast path - the TSID for the given MetricNameRaw has been found in cache and isn't deleted.
// There is no need in checking whether r.TSID.MetricID is deleted, since tsidCache doesn't
@ -1829,7 +1836,7 @@ func (s *Storage) add(rows []rawRow, dstMrs []*MetricRow, mrs []MetricRow, preci
}
slowInsertsCount++
date := uint64(r.Timestamp) / msecPerDay
if err := is.GetOrCreateTSIDByName(&r.TSID, pmr.MetricName, date); err != nil {
if err := is.GetOrCreateTSIDByName(&r.TSID, pmr.MetricName, mr.MetricNameRaw, date); err != nil {
j--
if errors.Is(err, errSeriesCardinalityExceeded) {
continue
@ -1874,26 +1881,29 @@ func (s *Storage) add(rows []rawRow, dstMrs []*MetricRow, mrs []MetricRow, preci
return nil
}
func (s *Storage) registerSeriesCardinality(metricID uint64, mn *MetricName) bool {
func (s *Storage) registerSeriesCardinality(metricID uint64, metricNameRaw []byte) error {
if sl := s.hourlySeriesLimiter; sl != nil && !sl.Add(metricID) {
atomic.AddUint64(&s.hourlySeriesLimitRowsDropped, 1)
logSkippedSeries(mn, "-storage.maxHourlySeries", sl.MaxItems())
return false
logSkippedSeries(metricNameRaw, "-storage.maxHourlySeries", sl.MaxItems())
return errSeriesCardinalityExceeded
}
if sl := s.dailySeriesLimiter; sl != nil && !sl.Add(metricID) {
atomic.AddUint64(&s.dailySeriesLimitRowsDropped, 1)
logSkippedSeries(mn, "-storage.maxDailySeries", sl.MaxItems())
return false
logSkippedSeries(metricNameRaw, "-storage.maxDailySeries", sl.MaxItems())
return errSeriesCardinalityExceeded
}
return true
return nil
}
func logSkippedSeries(mn *MetricName, flagName string, flagValue int) {
var errSeriesCardinalityExceeded = fmt.Errorf("cannot create series because series cardinality limit exceeded")
func logSkippedSeries(metricNameRaw []byte, flagName string, flagValue int) {
select {
case <-logSkippedSeriesTicker.C:
// Do not use logger.WithThrottler() here, since this will result in increased CPU load
// because of getUserReadableMetricName() calls per each logSkippedSeries call.
logger.Warnf("skip series %s because %s=%d reached", mn, flagName, flagValue)
userReadableMetricName := getUserReadableMetricName(metricNameRaw)
logger.Warnf("skip series %s because %s=%d reached", userReadableMetricName, flagName, flagValue)
default:
}
}
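
`registerSeriesCardinality` above delegates to the hourly and daily limiters configured via `-storage.maxHourlySeries` and `-storage.maxDailySeries`. A simplified stand-in showing the contract (the real limiter is an approximate, time-windowed structure, not a plain map):

```go
package main

import (
	"errors"
	"fmt"
)

var errSeriesCardinalityExceeded = errors.New("cannot create series because series cardinality limit exceeded")

// limiter is a simplified stand-in: it tracks unique metricIDs in a map,
// with no time window or approximation.
type limiter struct {
	seen     map[uint64]struct{}
	maxItems int
}

// Add registers metricID and reports whether it fits under the limit.
func (l *limiter) Add(metricID uint64) bool {
	if _, ok := l.seen[metricID]; ok {
		return true // already-registered series are always accepted
	}
	if len(l.seen) >= l.maxItems {
		return false
	}
	l.seen[metricID] = struct{}{}
	return true
}

func registerSeriesCardinality(hourly, daily *limiter, metricID uint64) error {
	if hourly != nil && !hourly.Add(metricID) {
		return errSeriesCardinalityExceeded
	}
	if daily != nil && !daily.Add(metricID) {
		return errSeriesCardinalityExceeded
	}
	return nil
}

func main() {
	h := &limiter{seen: map[uint64]struct{}{}, maxItems: 2}
	for id := uint64(1); id <= 3; id++ {
		// series 1 and 2 are accepted; series 3 hits the limit
		fmt.Println(id, registerSeriesCardinality(h, nil, id))
	}
}
```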

View file

@ -1,8 +1,8 @@
{
"accessapproval": "1.3.0",
"accesscontextmanager": "1.2.0",
"aiplatform": "1.10.0",
"analytics": "0.6.1",
"aiplatform": "1.13.0",
"analytics": "0.7.0",
"apigateway": "1.2.0",
"apigeeconnect": "1.2.0",
"appengine": "1.3.0",
@ -11,14 +11,16 @@
"asset": "1.2.0",
"assuredworkloads": "0.6.0",
"automl": "1.3.0",
"baremetalsolution": "0.1.0",
"batch": "0.1.0",
"billing": "1.2.0",
"binaryauthorization": "0.5.0",
"binaryauthorization": "0.6.0",
"certificatemanager": "0.2.0",
"channel": "1.6.0",
"cloudbuild": "1.2.0",
"clouddms": "1.2.0",
"cloudtasks": "1.3.0",
"compute": "1.6.1",
"compute": "1.7.0",
"contactcenterinsights": "1.2.0",
"container": "1.2.0",
"containeranalysis": "0.3.0",
@ -31,18 +33,18 @@
"dataqna": "0.3.0",
"datastream": "0.5.0",
"deploy": "1.2.0",
"dialogflow": "1.9.0",
"dialogflow": "1.10.0",
"dlp": "1.4.0",
"documentai": "1.4.0",
"domains": "0.4.0",
"essentialcontacts": "1.2.0",
"eventarc": "1.6.0",
"filestore": "1.2.0",
"functions": "1.3.0",
"functions": "1.4.0",
"gaming": "1.2.0",
"gkebackup": "0.1.0",
"gkeconnect": "0.3.0",
"gkehub": "0.5.0",
"gkehub": "0.7.0",
"gkemulticloud": "0.2.0",
"grafeas": "0.2.0",
"gsuiteaddons": "1.2.0",
@ -70,18 +72,18 @@
"phishingprotection": "0.3.0",
"policytroubleshooter": "1.2.0",
"privatecatalog": "0.3.0",
"recaptchaenterprise/v2": "2.0.0",
"recaptchaenterprise/v2": "2.0.1",
"recommendationengine": "0.2.0",
"recommender": "1.3.0",
"redis": "1.5.0",
"resourcemanager": "1.2.0",
"resourcesettings": "1.2.0",
"retail": "1.3.0",
"retail": "1.4.0",
"run": "0.1.1",
"scheduler": "1.2.0",
"secretmanager": "1.4.0",
"security": "1.4.0",
"securitycenter": "1.7.0",
"securitycenter": "1.8.0",
"servicecontrol": "1.3.0",
"servicedirectory": "1.2.0",
"servicemanagement": "1.3.0",
@ -89,12 +91,12 @@
"shell": "1.2.0",
"speech": "1.4.0",
"storagetransfer": "1.3.0",
"talent": "0.5.0",
"talent": "0.8.0",
"texttospeech": "1.3.0",
"tpu": "1.2.0",
"trace": "1.2.0",
"translate": "1.2.0",
"video": "1.4.0",
"video": "1.6.0",
"videointelligence": "1.2.0",
"vision/v2": "2.0.0",
"vmmigration": "0.3.0",

View file

@ -1,3 +1,3 @@
{
".": "0.102.0"
".": "0.102.1"
}

View file

@ -1,5 +1,12 @@
# Changes
## [0.102.1](https://github.com/googleapis/google-cloud-go/compare/v0.102.0...v0.102.1) (2022-06-17)
### Bug Fixes
* **longrunning:** regapic remove path params duped as query params ([#6183](https://github.com/googleapis/google-cloud-go/issues/6183)) ([c963be3](https://github.com/googleapis/google-cloud-go/commit/c963be301f074779e6bb8c897d8064fa076e9e35))
## [0.102.0](https://github.com/googleapis/google-cloud-go/compare/v0.101.1...v0.102.0) (2022-05-24)

View file

@ -16,7 +16,7 @@
// metadata and API service accounts.
//
// This package is a wrapper around the GCE metadata service,
// as documented at https://developers.google.com/compute/docs/metadata.
// as documented at https://cloud.google.com/compute/docs/metadata/overview.
package metadata // import "cloud.google.com/go/compute/metadata"
import (

View file

@ -152,6 +152,24 @@
"release_level": "beta",
"library_type": "GAPIC_AUTO"
},
"cloud.google.com/go/baremetalsolution/apiv2": {
"distribution_name": "cloud.google.com/go/baremetalsolution/apiv2",
"description": "Bare Metal Solution API",
"language": "Go",
"client_library_type": "generated",
"docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/baremetalsolution/latest/apiv2",
"release_level": "beta",
"library_type": "GAPIC_AUTO"
},
"cloud.google.com/go/batch/apiv1": {
"distribution_name": "cloud.google.com/go/batch/apiv1",
"description": "Batch API",
"language": "Go",
"client_library_type": "generated",
"docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/batch/latest/apiv1",
"release_level": "beta",
"library_type": "GAPIC_AUTO"
},
"cloud.google.com/go/bigquery": {
"distribution_name": "cloud.google.com/go/bigquery",
"description": "BigQuery",
@ -1669,7 +1687,7 @@
"description": "Cloud Vision API",
"language": "Go",
"client_library_type": "generated",
"docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/vision/latest/v2/apiv1",
"docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/vision/v2/latest/apiv1",
"release_level": "ga",
"library_type": "GAPIC_AUTO"
},

View file

@ -39,6 +39,12 @@
"automl": {
"component": "automl"
},
"baremetalsolution": {
"component": "baremetalsolution"
},
"batch": {
"component": "batch"
},
"billing": {
"component": "billing"
},

View file

@ -5396,6 +5396,22 @@ var awsPartition = partition{
}: endpoint{},
},
},
"connect-campaigns": service{
Endpoints: serviceEndpoints{
endpointKey{
Region: "ap-southeast-2",
}: endpoint{},
endpointKey{
Region: "eu-west-2",
}: endpoint{},
endpointKey{
Region: "us-east-1",
}: endpoint{},
endpointKey{
Region: "us-west-2",
}: endpoint{},
},
},
"contact-lens": service{
Endpoints: serviceEndpoints{
endpointKey{
@ -6478,30 +6494,66 @@ var awsPartition = partition{
},
"drs": service{
Endpoints: serviceEndpoints{
endpointKey{
Region: "af-south-1",
}: endpoint{},
endpointKey{
Region: "ap-east-1",
}: endpoint{},
endpointKey{
Region: "ap-northeast-1",
}: endpoint{},
endpointKey{
Region: "ap-northeast-2",
}: endpoint{},
endpointKey{
Region: "ap-northeast-3",
}: endpoint{},
endpointKey{
Region: "ap-south-1",
}: endpoint{},
endpointKey{
Region: "ap-southeast-1",
}: endpoint{},
endpointKey{
Region: "ap-southeast-2",
}: endpoint{},
endpointKey{
Region: "ca-central-1",
}: endpoint{},
endpointKey{
Region: "eu-central-1",
}: endpoint{},
endpointKey{
Region: "eu-north-1",
}: endpoint{},
endpointKey{
Region: "eu-south-1",
}: endpoint{},
endpointKey{
Region: "eu-west-1",
}: endpoint{},
endpointKey{
Region: "eu-west-2",
}: endpoint{},
endpointKey{
Region: "eu-west-3",
}: endpoint{},
endpointKey{
Region: "me-south-1",
}: endpoint{},
endpointKey{
Region: "sa-east-1",
}: endpoint{},
endpointKey{
Region: "us-east-1",
}: endpoint{},
endpointKey{
Region: "us-east-2",
}: endpoint{},
endpointKey{
Region: "us-west-1",
}: endpoint{},
endpointKey{
Region: "us-west-2",
}: endpoint{},
@ -15813,6 +15865,14 @@ var awsPartition = partition{
Region: "eu-north-1",
},
},
endpointKey{
Region: "eu-south-1",
}: endpoint{
Hostname: "portal.sso.eu-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-south-1",
},
},
endpointKey{
Region: "eu-west-1",
}: endpoint{
@ -16732,6 +16792,43 @@ var awsPartition = partition{
},
},
},
"redshift-serverless": service{
Endpoints: serviceEndpoints{
endpointKey{
Region: "ap-northeast-1",
}: endpoint{},
endpointKey{
Region: "ap-northeast-2",
}: endpoint{},
endpointKey{
Region: "ap-southeast-1",
}: endpoint{},
endpointKey{
Region: "ap-southeast-2",
}: endpoint{},
endpointKey{
Region: "eu-central-1",
}: endpoint{},
endpointKey{
Region: "eu-north-1",
}: endpoint{},
endpointKey{
Region: "eu-west-1",
}: endpoint{},
endpointKey{
Region: "eu-west-2",
}: endpoint{},
endpointKey{
Region: "us-east-1",
}: endpoint{},
endpointKey{
Region: "us-east-2",
}: endpoint{},
endpointKey{
Region: "us-west-2",
}: endpoint{},
},
},
"rekognition": service{
Endpoints: serviceEndpoints{
endpointKey{
@ -22827,6 +22924,582 @@ var awsPartition = partition{
},
},
},
"wafv2": service{
Endpoints: serviceEndpoints{
endpointKey{
Region: "af-south-1",
}: endpoint{
Hostname: "wafv2.af-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "af-south-1",
},
},
endpointKey{
Region: "af-south-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.af-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "af-south-1",
},
},
endpointKey{
Region: "ap-east-1",
}: endpoint{
Hostname: "wafv2.ap-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-east-1",
},
},
endpointKey{
Region: "ap-east-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.ap-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-east-1",
},
},
endpointKey{
Region: "ap-northeast-1",
}: endpoint{
Hostname: "wafv2.ap-northeast-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-northeast-1",
},
},
endpointKey{
Region: "ap-northeast-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.ap-northeast-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-northeast-1",
},
},
endpointKey{
Region: "ap-northeast-2",
}: endpoint{
Hostname: "wafv2.ap-northeast-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-northeast-2",
},
},
endpointKey{
Region: "ap-northeast-2",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.ap-northeast-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-northeast-2",
},
},
endpointKey{
Region: "ap-northeast-3",
}: endpoint{
Hostname: "wafv2.ap-northeast-3.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-northeast-3",
},
},
endpointKey{
Region: "ap-northeast-3",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.ap-northeast-3.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-northeast-3",
},
},
endpointKey{
Region: "ap-south-1",
}: endpoint{
Hostname: "wafv2.ap-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-south-1",
},
},
endpointKey{
Region: "ap-south-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.ap-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-south-1",
},
},
endpointKey{
Region: "ap-southeast-1",
}: endpoint{
Hostname: "wafv2.ap-southeast-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-southeast-1",
},
},
endpointKey{
Region: "ap-southeast-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.ap-southeast-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-southeast-1",
},
},
endpointKey{
Region: "ap-southeast-2",
}: endpoint{
Hostname: "wafv2.ap-southeast-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-southeast-2",
},
},
endpointKey{
Region: "ap-southeast-2",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.ap-southeast-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-southeast-2",
},
},
endpointKey{
Region: "ap-southeast-3",
}: endpoint{
Hostname: "wafv2.ap-southeast-3.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-southeast-3",
},
},
endpointKey{
Region: "ap-southeast-3",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.ap-southeast-3.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-southeast-3",
},
},
endpointKey{
Region: "ca-central-1",
}: endpoint{
Hostname: "wafv2.ca-central-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ca-central-1",
},
},
endpointKey{
Region: "ca-central-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.ca-central-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ca-central-1",
},
},
endpointKey{
Region: "eu-central-1",
}: endpoint{
Hostname: "wafv2.eu-central-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-central-1",
},
},
endpointKey{
Region: "eu-central-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.eu-central-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-central-1",
},
},
endpointKey{
Region: "eu-north-1",
}: endpoint{
Hostname: "wafv2.eu-north-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-north-1",
},
},
endpointKey{
Region: "eu-north-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.eu-north-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-north-1",
},
},
endpointKey{
Region: "eu-south-1",
}: endpoint{
Hostname: "wafv2.eu-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-south-1",
},
},
endpointKey{
Region: "eu-south-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.eu-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-south-1",
},
},
endpointKey{
Region: "eu-west-1",
}: endpoint{
Hostname: "wafv2.eu-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-west-1",
},
},
endpointKey{
Region: "eu-west-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.eu-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-west-1",
},
},
endpointKey{
Region: "eu-west-2",
}: endpoint{
Hostname: "wafv2.eu-west-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-west-2",
},
},
endpointKey{
Region: "eu-west-2",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.eu-west-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-west-2",
},
},
endpointKey{
Region: "eu-west-3",
}: endpoint{
Hostname: "wafv2.eu-west-3.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-west-3",
},
},
endpointKey{
Region: "eu-west-3",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.eu-west-3.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-west-3",
},
},
endpointKey{
Region: "fips-af-south-1",
}: endpoint{
Hostname: "wafv2-fips.af-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "af-south-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-ap-east-1",
}: endpoint{
Hostname: "wafv2-fips.ap-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-east-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-ap-northeast-1",
}: endpoint{
Hostname: "wafv2-fips.ap-northeast-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-northeast-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-ap-northeast-2",
}: endpoint{
Hostname: "wafv2-fips.ap-northeast-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-northeast-2",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-ap-northeast-3",
}: endpoint{
Hostname: "wafv2-fips.ap-northeast-3.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-northeast-3",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-ap-south-1",
}: endpoint{
Hostname: "wafv2-fips.ap-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-south-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-ap-southeast-1",
}: endpoint{
Hostname: "wafv2-fips.ap-southeast-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-southeast-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-ap-southeast-2",
}: endpoint{
Hostname: "wafv2-fips.ap-southeast-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-southeast-2",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-ap-southeast-3",
}: endpoint{
Hostname: "wafv2-fips.ap-southeast-3.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-southeast-3",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-ca-central-1",
}: endpoint{
Hostname: "wafv2-fips.ca-central-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ca-central-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-eu-central-1",
}: endpoint{
Hostname: "wafv2-fips.eu-central-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-central-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-eu-north-1",
}: endpoint{
Hostname: "wafv2-fips.eu-north-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-north-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-eu-south-1",
}: endpoint{
Hostname: "wafv2-fips.eu-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-south-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-eu-west-1",
}: endpoint{
Hostname: "wafv2-fips.eu-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-west-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-eu-west-2",
}: endpoint{
Hostname: "wafv2-fips.eu-west-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-west-2",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-eu-west-3",
}: endpoint{
Hostname: "wafv2-fips.eu-west-3.amazonaws.com",
CredentialScope: credentialScope{
Region: "eu-west-3",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-me-south-1",
}: endpoint{
Hostname: "wafv2-fips.me-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "me-south-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-sa-east-1",
}: endpoint{
Hostname: "wafv2-fips.sa-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "sa-east-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-us-east-1",
}: endpoint{
Hostname: "wafv2-fips.us-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-us-east-2",
}: endpoint{
Hostname: "wafv2-fips.us-east-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-2",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-us-west-1",
}: endpoint{
Hostname: "wafv2-fips.us-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-us-west-2",
}: endpoint{
Hostname: "wafv2-fips.us-west-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-2",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "me-south-1",
}: endpoint{
Hostname: "wafv2.me-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "me-south-1",
},
},
endpointKey{
Region: "me-south-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.me-south-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "me-south-1",
},
},
endpointKey{
Region: "sa-east-1",
}: endpoint{
Hostname: "wafv2.sa-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "sa-east-1",
},
},
endpointKey{
Region: "sa-east-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.sa-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "sa-east-1",
},
},
endpointKey{
Region: "us-east-1",
}: endpoint{
Hostname: "wafv2.us-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-1",
},
},
endpointKey{
Region: "us-east-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.us-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-1",
},
},
endpointKey{
Region: "us-east-2",
}: endpoint{
Hostname: "wafv2.us-east-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-2",
},
},
endpointKey{
Region: "us-east-2",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.us-east-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-2",
},
},
endpointKey{
Region: "us-west-1",
}: endpoint{
Hostname: "wafv2.us-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-1",
},
},
endpointKey{
Region: "us-west-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.us-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-1",
},
},
endpointKey{
Region: "us-west-2",
}: endpoint{
Hostname: "wafv2.us-west-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-2",
},
},
endpointKey{
Region: "us-west-2",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.us-west-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-2",
},
},
},
},
"wellarchitected": service{
Endpoints: serviceEndpoints{
endpointKey{
@ -24784,6 +25457,62 @@ var awscnPartition = partition{
},
},
},
"wafv2": service{
Endpoints: serviceEndpoints{
endpointKey{
Region: "cn-north-1",
}: endpoint{
Hostname: "wafv2.cn-north-1.amazonaws.com.cn",
CredentialScope: credentialScope{
Region: "cn-north-1",
},
},
endpointKey{
Region: "cn-north-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.cn-north-1.amazonaws.com.cn",
CredentialScope: credentialScope{
Region: "cn-north-1",
},
},
endpointKey{
Region: "cn-northwest-1",
}: endpoint{
Hostname: "wafv2.cn-northwest-1.amazonaws.com.cn",
CredentialScope: credentialScope{
Region: "cn-northwest-1",
},
},
endpointKey{
Region: "cn-northwest-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.cn-northwest-1.amazonaws.com.cn",
CredentialScope: credentialScope{
Region: "cn-northwest-1",
},
},
endpointKey{
Region: "fips-cn-north-1",
}: endpoint{
Hostname: "wafv2-fips.cn-north-1.amazonaws.com.cn",
CredentialScope: credentialScope{
Region: "cn-north-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-cn-northwest-1",
}: endpoint{
Hostname: "wafv2-fips.cn-northwest-1.amazonaws.com.cn",
CredentialScope: credentialScope{
Region: "cn-northwest-1",
},
Deprecated: boxedTrue,
},
},
},
"workspaces": service{
Endpoints: serviceEndpoints{
endpointKey{
@ -29600,6 +30329,62 @@ var awsusgovPartition = partition{
},
},
},
"wafv2": service{
Endpoints: serviceEndpoints{
endpointKey{
Region: "fips-us-gov-east-1",
}: endpoint{
Hostname: "wafv2-fips.us-gov-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-east-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-us-gov-west-1",
}: endpoint{
Hostname: "wafv2-fips.us-gov-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-west-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "us-gov-east-1",
}: endpoint{
Hostname: "wafv2.us-gov-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-east-1",
},
},
endpointKey{
Region: "us-gov-east-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.us-gov-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-east-1",
},
},
endpointKey{
Region: "us-gov-west-1",
}: endpoint{
Hostname: "wafv2.us-gov-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-west-1",
},
},
endpointKey{
Region: "us-gov-west-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "wafv2-fips.us-gov-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-west-1",
},
},
},
},
"workspaces": service{
Endpoints: serviceEndpoints{
endpointKey{
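
The endpoint tables above key every service on a (region, variant) pair, with FIPS variants carrying their own hostnames. An illustrative sketch of that lookup shape (not the SDK's actual resolver):

```go
package main

import "fmt"

// endpointKey is a simplified stand-in for the SDK's key type.
type endpointKey struct {
	Region string
	FIPS   bool
}

func main() {
	wafv2 := map[endpointKey]string{
		{Region: "us-east-1"}:             "wafv2.us-east-1.amazonaws.com",
		{Region: "us-east-1", FIPS: true}: "wafv2-fips.us-east-1.amazonaws.com",
	}
	// The FIPS variant overrides the default hostname for the same region.
	fmt.Println(wafv2[endpointKey{Region: "us-east-1", FIPS: true}])
}
```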

View file

@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go"
// SDKVersion is the version of this SDK
const SDKVersion = "1.44.32"
const SDKVersion = "1.44.37"

View file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View file

@ -0,0 +1,151 @@
// Copyright 2022 Google LLC.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// Client is a cross-platform client for the signer binary (a.k.a. "EnterpriseCertSigner").
// The signer binary is OS-specific, but exposes a standard set of APIs for the client to use.
package client
import (
"crypto"
"crypto/rsa"
"crypto/x509"
"encoding/gob"
"fmt"
"io"
"net/rpc"
"os"
"os/exec"
"github.com/googleapis/enterprise-certificate-proxy/client/util"
)
const signAPI = "EnterpriseCertSigner.Sign"
const certificateChainAPI = "EnterpriseCertSigner.CertificateChain"
const publicKeyAPI = "EnterpriseCertSigner.Public"
// A Connection wraps a pair of unidirectional streams as an io.ReadWriteCloser.
type Connection struct {
io.ReadCloser
io.WriteCloser
}
// Close closes c's underlying ReadCloser and WriteCloser.
func (c *Connection) Close() error {
rerr := c.ReadCloser.Close()
werr := c.WriteCloser.Close()
if rerr != nil {
return rerr
}
return werr
}
func init() {
gob.Register(crypto.SHA256)
gob.Register(&rsa.PSSOptions{})
}
// SignArgs contains arguments to a crypto Signer.Sign method.
type SignArgs struct {
Digest []byte // The content to sign.
Opts crypto.SignerOpts // Options for signing, such as Hash identifier.
}
// Key implements credential.Credential by holding the executed signer subprocess.
type Key struct {
cmd *exec.Cmd // Pointer to the signer subprocess.
client *rpc.Client // Pointer to the rpc client that communicates with the signer subprocess.
publicKey crypto.PublicKey // Public key of loaded certificate.
chain [][]byte // Certificate chain of loaded certificate.
}
// CertificateChain returns the credential as a raw X509 cert chain. This contains the public key.
func (k *Key) CertificateChain() [][]byte {
return k.chain
}
// Close closes the RPC connection and kills the signer subprocess.
// Call this to free up resources when the Key object is no longer needed.
func (k *Key) Close() error {
if err := k.client.Close(); err != nil {
return fmt.Errorf("failed to close RPC connection: %w", err)
}
if err := k.cmd.Process.Kill(); err != nil {
return fmt.Errorf("failed to kill signer process: %w", err)
}
if err := k.cmd.Wait(); err.Error() != "signal: killed" {
return fmt.Errorf("signer process was not killed: %w", err)
}
return nil
}
// Public returns the public key for this Key.
func (k *Key) Public() crypto.PublicKey {
return k.publicKey
}
// Sign signs a message by encrypting a message digest, using the specified signer options.
func (k *Key) Sign(_ io.Reader, digest []byte, opts crypto.SignerOpts) (signed []byte, err error) {
err = k.client.Call(signAPI, SignArgs{Digest: digest, Opts: opts}, &signed)
return
}
// Cred spawns a signer subprocess that listens on stdin/stdout to perform certificate
// related operations, including signing messages with the private key.
//
// The signer binary path is read from the specified configFilePath, if provided.
// Otherwise, use the default config file path.
//
// The config file also specifies which certificate the signer should use.
func Cred(configFilePath string) (*Key, error) {
if configFilePath == "" {
configFilePath = util.GetDefaultConfigFilePath()
}
enterpriseCertSignerPath, err := util.LoadSignerBinaryPath(configFilePath)
if err != nil {
return nil, err
}
k := &Key{
cmd: exec.Command(enterpriseCertSignerPath, configFilePath),
}
// Redirect errors from subprocess to parent process.
k.cmd.Stderr = os.Stderr
// RPC client will communicate with subprocess over stdin/stdout.
kin, err := k.cmd.StdinPipe()
if err != nil {
return nil, err
}
kout, err := k.cmd.StdoutPipe()
if err != nil {
return nil, err
}
k.client = rpc.NewClient(&Connection{kout, kin})
if err := k.cmd.Start(); err != nil {
return nil, fmt.Errorf("starting enterprise cert signer subprocess: %w", err)
}
if err := k.client.Call(certificateChainAPI, struct{}{}, &k.chain); err != nil {
return nil, fmt.Errorf("failed to retrieve certificate chain: %w", err)
}
var publicKeyBytes []byte
if err := k.client.Call(publicKeyAPI, struct{}{}, &publicKeyBytes); err != nil {
return nil, fmt.Errorf("failed to retrieve public key: %w", err)
}
publicKey, err := x509.ParsePKIXPublicKey(publicKeyBytes)
if err != nil {
return nil, fmt.Errorf("failed to parse public key: %w", err)
}
var ok bool
k.publicKey, ok = publicKey.(crypto.PublicKey)
if !ok {
return nil, fmt.Errorf("invalid public key type: %T", publicKey)
}
return k, nil
}
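
`Cred` above wires an RPC client over the subprocess's stdin/stdout and exposes the result as a `crypto.Signer`. A hedged usage sketch, assuming a signer binary is configured at the default gcloud path (the payload is illustrative):

```go
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/sha256"
	"fmt"

	"github.com/googleapis/enterprise-certificate-proxy/client"
)

func main() {
	// "" falls back to the default config path resolved by the util package.
	key, err := client.Cred("")
	if err != nil {
		panic(err)
	}
	defer key.Close()

	// Sign a SHA-256 digest; the rand.Reader argument is ignored by Key.Sign.
	digest := sha256.Sum256([]byte("payload"))
	sig, err := key.Sign(rand.Reader, digest[:], crypto.SHA256)
	if err != nil {
		panic(err)
	}
	fmt.Printf("chain length: %d, signature: %x\n", len(key.CertificateChain()), sig)
}
```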

View file

@ -0,0 +1,72 @@
// Package util provides helper functions for the client.
package util
import (
"encoding/json"
"errors"
"io/ioutil"
"os"
"os/user"
"path/filepath"
"runtime"
)
const configFileName = "enterprise_certificate_config.json"
// EnterpriseCertificateConfig contains parameters for initializing signer.
type EnterpriseCertificateConfig struct {
Libs Libs `json:"libs"`
}
// Libs specifies the locations of helper libraries.
type Libs struct {
SignerBinary string `json:"signer_binary"`
}
// LoadSignerBinaryPath retrieves the path of the signer binary from the config file.
func LoadSignerBinaryPath(configFilePath string) (path string, err error) {
jsonFile, err := os.Open(configFilePath)
if err != nil {
return "", err
}
byteValue, err := ioutil.ReadAll(jsonFile)
if err != nil {
return "", err
}
var config EnterpriseCertificateConfig
err = json.Unmarshal(byteValue, &config)
if err != nil {
return "", err
}
signerBinaryPath := config.Libs.SignerBinary
if signerBinaryPath == "" {
return "", errors.New("Signer binary path is missing.")
}
return signerBinaryPath, nil
}
func guessHomeDir() string {
// Prefer $HOME over user.Current due to glibc bug: golang.org/issue/13470
if v := os.Getenv("HOME"); v != "" {
return v
}
// Else, fall back to user.Current:
if u, err := user.Current(); err == nil {
return u.HomeDir
}
return ""
}
func getDefaultConfigFileDirectory() (directory string) {
if runtime.GOOS == "windows" {
return filepath.Join(os.Getenv("APPDATA"), "gcloud")
} else {
return filepath.Join(guessHomeDir(), ".config/gcloud")
}
}
// GetDefaultConfigFilePath returns the default path of the enterprise certificate config file created by gCloud.
func GetDefaultConfigFilePath() (path string) {
return filepath.Join(getDefaultConfigFileDirectory(), configFileName)
}
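
`LoadSignerBinaryPath` above expects a small JSON config file. A sketch of the implied shape, decoded through mirror types (the path value is hypothetical; only the field names come from the struct tags above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Mirror types for the util package above.
type Libs struct {
	SignerBinary string `json:"signer_binary"`
}

type EnterpriseCertificateConfig struct {
	Libs Libs `json:"libs"`
}

func main() {
	// Hypothetical contents of enterprise_certificate_config.json.
	raw := []byte(`{"libs": {"signer_binary": "/usr/local/bin/ecp_signer"}}`)
	var cfg EnterpriseCertificateConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Libs.SignerBinary) // /usr/local/bin/ecp_signer
}
```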

View file

@ -56,7 +56,12 @@ func (f *Float64Slice) Set(value string) error {
// String returns a readable representation of this value (for usage defaults)
func (f *Float64Slice) String() string {
return fmt.Sprintf("%#v", f.slice)
v := f.slice
if v == nil {
// treat nil the same as zero length non-nil
v = make([]float64, 0)
}
return fmt.Sprintf("%#v", v)
}
// Serialize allows Float64Slice to fulfill Serializer
@ -120,29 +125,40 @@ func (f *Float64SliceFlag) GetEnvVars() []string {
// Apply populates the flag given the flag set and environment
func (f *Float64SliceFlag) Apply(set *flag.FlagSet) error {
// apply any default
if f.Destination != nil && f.Value != nil {
f.Destination.slice = make([]float64, len(f.Value.slice))
copy(f.Destination.slice, f.Value.slice)
}
// resolve setValue (what we will assign to the set)
var setValue *Float64Slice
switch {
case f.Destination != nil:
setValue = f.Destination
case f.Value != nil:
setValue = f.Value.clone()
default:
setValue = new(Float64Slice)
}
if val, source, found := flagFromEnvOrFile(f.EnvVars, f.FilePath); found {
if val != "" {
f.Value = &Float64Slice{}
for _, s := range flagSplitMultiValues(val) {
if err := f.Value.Set(strings.TrimSpace(s)); err != nil {
return fmt.Errorf("could not parse %q as float64 slice value from %s for flag %s: %s", f.Value, source, f.Name, err)
if err := setValue.Set(strings.TrimSpace(s)); err != nil {
return fmt.Errorf("could not parse %q as float64 slice value from %s for flag %s: %s", val, source, f.Name, err)
}
}
// Set this to false so that we reset the slice if we then set values from
// flags that have already been set by the environment.
f.Value.hasBeenSet = false
setValue.hasBeenSet = false
f.HasBeenSet = true
}
}
if f.Value == nil {
f.Value = &Float64Slice{}
}
copyValue := f.Value.clone()
for _, name := range f.Names() {
set.Var(copyValue, name, f.Usage)
set.Var(setValue, name, f.Usage)
}
return nil
@ -165,7 +181,7 @@ func (cCtx *Context) Float64Slice(name string) []float64 {
func lookupFloat64Slice(name string, set *flag.FlagSet) []float64 {
f := set.Lookup(name)
if f != nil {
if slice, ok := f.Value.(*Float64Slice); ok {
if slice, ok := unwrapFlagValue(f.Value).(*Float64Slice); ok {
return slice.Value()
}
}

View file

@ -57,7 +57,12 @@ func (i *Int64Slice) Set(value string) error {
// String returns a readable representation of this value (for usage defaults)
func (i *Int64Slice) String() string {
return fmt.Sprintf("%#v", i.slice)
v := i.slice
if v == nil {
// treat nil the same as zero length non-nil
v = make([]int64, 0)
}
return fmt.Sprintf("%#v", v)
}
// Serialize allows Int64Slice to fulfill Serializer
@ -121,27 +126,38 @@ func (f *Int64SliceFlag) GetEnvVars() []string {
// Apply populates the flag given the flag set and environment
func (f *Int64SliceFlag) Apply(set *flag.FlagSet) error {
if val, source, found := flagFromEnvOrFile(f.EnvVars, f.FilePath); found {
f.Value = &Int64Slice{}
// apply any default
if f.Destination != nil && f.Value != nil {
f.Destination.slice = make([]int64, len(f.Value.slice))
copy(f.Destination.slice, f.Value.slice)
}
// resolve setValue (what we will assign to the set)
var setValue *Int64Slice
switch {
case f.Destination != nil:
setValue = f.Destination
case f.Value != nil:
setValue = f.Value.clone()
default:
setValue = new(Int64Slice)
}
if val, source, ok := flagFromEnvOrFile(f.EnvVars, f.FilePath); ok && val != "" {
for _, s := range flagSplitMultiValues(val) {
if err := f.Value.Set(strings.TrimSpace(s)); err != nil {
if err := setValue.Set(strings.TrimSpace(s)); err != nil {
return fmt.Errorf("could not parse %q as int64 slice value from %s for flag %s: %s", val, source, f.Name, err)
}
}
// Set this to false so that we reset the slice if we then set values from
// flags that have already been set by the environment.
f.Value.hasBeenSet = false
setValue.hasBeenSet = false
f.HasBeenSet = true
}
if f.Value == nil {
f.Value = &Int64Slice{}
}
copyValue := f.Value.clone()
for _, name := range f.Names() {
set.Var(copyValue, name, f.Usage)
set.Var(setValue, name, f.Usage)
}
return nil
@ -164,7 +180,7 @@ func (cCtx *Context) Int64Slice(name string) []int64 {
func lookupInt64Slice(name string, set *flag.FlagSet) []int64 {
f := set.Lookup(name)
if f != nil {
if slice, ok := f.Value.(*Int64Slice); ok {
if slice, ok := unwrapFlagValue(f.Value).(*Int64Slice); ok {
return slice.Value()
}
}

View file

@ -68,7 +68,12 @@ func (i *IntSlice) Set(value string) error {
// String returns a readable representation of this value (for usage defaults)
func (i *IntSlice) String() string {
return fmt.Sprintf("%#v", i.slice)
v := i.slice
if v == nil {
// treat nil the same as zero length non-nil
v = make([]int, 0)
}
return fmt.Sprintf("%#v", v)
}
// Serialize allows IntSlice to fulfill Serializer
@ -132,27 +137,38 @@ func (f *IntSliceFlag) GetEnvVars() []string {
// Apply populates the flag given the flag set and environment
func (f *IntSliceFlag) Apply(set *flag.FlagSet) error {
if val, source, found := flagFromEnvOrFile(f.EnvVars, f.FilePath); found {
f.Value = &IntSlice{}
// apply any default
if f.Destination != nil && f.Value != nil {
f.Destination.slice = make([]int, len(f.Value.slice))
copy(f.Destination.slice, f.Value.slice)
}
// resolve setValue (what we will assign to the set)
var setValue *IntSlice
switch {
case f.Destination != nil:
setValue = f.Destination
case f.Value != nil:
setValue = f.Value.clone()
default:
setValue = new(IntSlice)
}
if val, source, ok := flagFromEnvOrFile(f.EnvVars, f.FilePath); ok && val != "" {
for _, s := range flagSplitMultiValues(val) {
if err := f.Value.Set(strings.TrimSpace(s)); err != nil {
if err := setValue.Set(strings.TrimSpace(s)); err != nil {
return fmt.Errorf("could not parse %q as int slice value from %s for flag %s: %s", val, source, f.Name, err)
}
}
// Set this to false so that we reset the slice if we then set values from
// flags that have already been set by the environment.
f.Value.hasBeenSet = false
setValue.hasBeenSet = false
f.HasBeenSet = true
}
if f.Value == nil {
f.Value = &IntSlice{}
}
copyValue := f.Value.clone()
for _, name := range f.Names() {
set.Var(copyValue, name, f.Usage)
set.Var(setValue, name, f.Usage)
}
return nil
@ -175,7 +191,7 @@ func (cCtx *Context) IntSlice(name string) []int {
func lookupIntSlice(name string, set *flag.FlagSet) []int {
f := set.Lookup(name)
if f != nil {
if slice, ok := f.Value.(*IntSlice); ok {
if slice, ok := unwrapFlagValue(f.Value).(*IntSlice); ok {
return slice.Value()
}
}

View file

@ -115,41 +115,36 @@ func (f *StringSliceFlag) GetEnvVars() []string {
// Apply populates the flag given the flag set and environment
func (f *StringSliceFlag) Apply(set *flag.FlagSet) error {
// apply any default
if f.Destination != nil && f.Value != nil {
f.Destination.slice = make([]string, len(f.Value.slice))
copy(f.Destination.slice, f.Value.slice)
}
// resolve setValue (what we will assign to the set)
var setValue *StringSlice
switch {
case f.Destination != nil:
setValue = f.Destination
case f.Value != nil:
setValue = f.Value.clone()
default:
setValue = new(StringSlice)
}
if val, source, found := flagFromEnvOrFile(f.EnvVars, f.FilePath); found {
if f.Value == nil {
f.Value = &StringSlice{}
}
destination := f.Value
if f.Destination != nil {
destination = f.Destination
}
for _, s := range flagSplitMultiValues(val) {
if err := destination.Set(strings.TrimSpace(s)); err != nil {
if err := setValue.Set(strings.TrimSpace(s)); err != nil {
return fmt.Errorf("could not parse %q as string value from %s for flag %s: %s", val, source, f.Name, err)
}
}
// Set this to false so that we reset the slice if we then set values from
// flags that have already been set by the environment.
destination.hasBeenSet = false
setValue.hasBeenSet = false
f.HasBeenSet = true
}
if f.Value == nil {
f.Value = &StringSlice{}
}
setValue := f.Destination
if f.Destination == nil {
setValue = f.Value.clone()
}
for _, name := range f.Names() {
set.Var(setValue, name, f.Usage)
}
@ -174,7 +169,7 @@ func (cCtx *Context) StringSlice(name string) []string {
func lookupStringSlice(name string, set *flag.FlagSet) []string {
f := set.Lookup(name)
if f != nil {
if slice, ok := f.Value.(*StringSlice); ok {
if slice, ok := unwrapFlagValue(f.Value).(*StringSlice); ok {
return slice.Value()
}
}

View file

@ -32,16 +32,16 @@ var (
SuggestDidYouMeanTemplate string = suggestDidYouMeanTemplate
)
var AppHelpTemplate = `NAME:
{{.Name}}{{if .Usage}} - {{.Usage}}{{end}}
{{$v := offset .Name 6}}{{wrap .Name 3}}{{if .Usage}} - {{wrap .Usage $v}}{{end}}
USAGE:
{{if .UsageText}}{{.UsageText | nindent 3 | trim}}{{else}}{{.HelpName}} {{if .VisibleFlags}}[global options]{{end}}{{if .Commands}} command [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Version}}{{if not .HideVersion}}
{{if .UsageText}}{{wrap .UsageText 3}}{{else}}{{.HelpName}} {{if .VisibleFlags}}[global options]{{end}}{{if .Commands}} command [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Version}}{{if not .HideVersion}}
VERSION:
{{.Version}}{{end}}{{end}}{{if .Description}}
DESCRIPTION:
{{.Description | nindent 3 | trim}}{{end}}{{if len .Authors}}
{{wrap .Description 3}}{{end}}{{if len .Authors}}
AUTHOR{{with $length := len .Authors}}{{if ne 1 $length}}S{{end}}{{end}}:
{{range $index, $author := .Authors}}{{if $index}}
@ -59,26 +59,26 @@ GLOBAL OPTIONS:{{range .VisibleFlagCategories}}
GLOBAL OPTIONS:
{{range $index, $option := .VisibleFlags}}{{if $index}}
{{end}}{{$option}}{{end}}{{end}}{{end}}{{if .Copyright}}
{{end}}{{wrap $option.String 6}}{{end}}{{end}}{{end}}{{if .Copyright}}
COPYRIGHT:
{{.Copyright}}{{end}}
{{wrap .Copyright 3}}{{end}}
`
AppHelpTemplate is the text template for the Default help topic. cli.go uses
text/template to render templates. You can render custom help text by
setting this variable.
var CommandHelpTemplate = `NAME:
{{.HelpName}} - {{.Usage}}
{{$v := offset .HelpName 6}}{{wrap .HelpName 3}}{{if .Usage}} - {{wrap .Usage $v}}{{end}}
USAGE:
{{if .UsageText}}{{.UsageText | nindent 3 | trim}}{{else}}{{.HelpName}}{{if .VisibleFlags}} [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Category}}
{{if .UsageText}}{{wrap .UsageText 3}}{{else}}{{.HelpName}}{{if .VisibleFlags}} [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Category}}
CATEGORY:
{{.Category}}{{end}}{{if .Description}}
DESCRIPTION:
{{.Description | nindent 3 | trim}}{{end}}{{if .VisibleFlagCategories}}
{{wrap .Description 3}}{{end}}{{if .VisibleFlagCategories}}
OPTIONS:{{range .VisibleFlagCategories}}
{{if .Name}}{{.Name}}
@ -150,10 +150,10 @@ var SubcommandHelpTemplate = `NAME:
{{.HelpName}} - {{.Usage}}
USAGE:
{{if .UsageText}}{{.UsageText | nindent 3 | trim}}{{else}}{{.HelpName}} command{{if .VisibleFlags}} [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Description}}
{{if .UsageText}}{{wrap .UsageText 3}}{{else}}{{.HelpName}} command{{if .VisibleFlags}} [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Description}}
DESCRIPTION:
{{.Description | nindent 3 | trim}}{{end}}
{{wrap .Description 3}}{{end}}
COMMANDS:{{range .VisibleCategories}}{{if .Name}}
{{.Name}}:{{range .VisibleCommands}}
@ -186,6 +186,11 @@ var HelpPrinterCustom helpPrinterCustom = printHelpCustom
the default implementation of HelpPrinter, and may be called directly if the
ExtraInfo field is set on an App.
In the default implementation, if the customFuncs argument contains a
"wrapAt" key whose value is a function taking no arguments and returning
an int, that int is used to produce a "wrap" function used by the
default template to wrap long lines.
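
A hedged sketch of driving this hook directly; the app contents and the 80-column width are illustrative:

```go
package main

import (
	"os"

	"github.com/urfave/cli/v2"
)

func main() {
	app := cli.NewApp()
	app.Name = "demo"
	// Cap help wrapping at 80 columns via the "wrapAt" custom func.
	cli.HelpPrinterCustom(os.Stdout, cli.AppHelpTemplate, app, map[string]interface{}{
		"wrapAt": func() int { return 80 },
	})
}
```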
FUNCTIONS
@ -1003,6 +1008,8 @@ func (f *Float64SliceFlag) GetCategory() string
func (f *Float64SliceFlag) GetDefaultText() string
GetDefaultText returns the default text for this flag
func (f *Float64SliceFlag) GetDestination() []float64
func (f *Float64SliceFlag) GetEnvVars() []string
GetEnvVars returns the env vars for this flag
@ -1025,6 +1032,10 @@ func (f *Float64SliceFlag) IsVisible() bool
func (f *Float64SliceFlag) Names() []string
Names returns the names of the flag
func (f *Float64SliceFlag) SetDestination(slice []float64)
func (f *Float64SliceFlag) SetValue(slice []float64)
func (f *Float64SliceFlag) String() string
String returns a readable representation of this value (for usage defaults)
@ -1215,6 +1226,8 @@ func (f *Int64SliceFlag) GetCategory() string
func (f *Int64SliceFlag) GetDefaultText() string
GetDefaultText returns the default text for this flag
func (f *Int64SliceFlag) GetDestination() []int64
func (f *Int64SliceFlag) GetEnvVars() []string
GetEnvVars returns the env vars for this flag
@ -1237,6 +1250,10 @@ func (f *Int64SliceFlag) IsVisible() bool
func (f *Int64SliceFlag) Names() []string
Names returns the names of the flag
func (f *Int64SliceFlag) SetDestination(slice []int64)
func (f *Int64SliceFlag) SetValue(slice []int64)
func (f *Int64SliceFlag) String() string
String returns a readable representation of this value (for usage defaults)
@ -1362,6 +1379,8 @@ func (f *IntSliceFlag) GetCategory() string
func (f *IntSliceFlag) GetDefaultText() string
GetDefaultText returns the default text for this flag
func (f *IntSliceFlag) GetDestination() []int
func (f *IntSliceFlag) GetEnvVars() []string
GetEnvVars returns the env vars for this flag
@ -1384,6 +1403,10 @@ func (f *IntSliceFlag) IsVisible() bool
func (f *IntSliceFlag) Names() []string
Names returns the names of the flag
func (f *IntSliceFlag) SetDestination(slice []int)
func (f *IntSliceFlag) SetValue(slice []int)
func (f *IntSliceFlag) String() string
String returns a readable representation of this value (for usage defaults)
@ -1396,6 +1419,22 @@ type MultiError interface {
}
MultiError is an error that wraps multiple errors.
type MultiFloat64Flag = SliceFlag[*Float64SliceFlag, []float64, float64]
MultiFloat64Flag extends Float64SliceFlag with support for using slices
directly, as Value and/or Destination. See also SliceFlag.
type MultiInt64Flag = SliceFlag[*Int64SliceFlag, []int64, int64]
MultiInt64Flag extends Int64SliceFlag with support for using slices
directly, as Value and/or Destination. See also SliceFlag.
type MultiIntFlag = SliceFlag[*IntSliceFlag, []int, int]
MultiIntFlag extends IntSliceFlag with support for using slices directly, as
Value and/or Destination. See also SliceFlag.
type MultiStringFlag = SliceFlag[*StringSliceFlag, []string, string]
MultiStringFlag extends StringSliceFlag with support for using slices
directly, as Value and/or Destination. See also SliceFlag.
type OnUsageErrorFunc func(cCtx *Context, err error, isSubcommand bool) error
OnUsageErrorFunc is executed if a usage error occurs. This is useful for
displaying customized usage error messages. This function is able to replace
@ -1480,6 +1519,67 @@ type Serializer interface {
}
Serializer is used to circumvent the limitations of flag.FlagSet.Set
type SliceFlag[T SliceFlagTarget[E], S ~[]E, E any] struct {
Target T
Value S
Destination *S
}
SliceFlag extends implementations like StringSliceFlag and IntSliceFlag with
support for using slices directly, as Value and/or Destination. See also
SliceFlagTarget, MultiStringFlag, MultiFloat64Flag, MultiInt64Flag,
MultiIntFlag.
func (x *SliceFlag[T, S, E]) Apply(set *flag.FlagSet) error
func (x *SliceFlag[T, S, E]) GetCategory() string
func (x *SliceFlag[T, S, E]) GetDefaultText() string
func (x *SliceFlag[T, S, E]) GetDestination() S
func (x *SliceFlag[T, S, E]) GetEnvVars() []string
func (x *SliceFlag[T, S, E]) GetUsage() string
func (x *SliceFlag[T, S, E]) GetValue() string
func (x *SliceFlag[T, S, E]) IsRequired() bool
func (x *SliceFlag[T, S, E]) IsSet() bool
func (x *SliceFlag[T, S, E]) IsVisible() bool
func (x *SliceFlag[T, S, E]) Names() []string
func (x *SliceFlag[T, S, E]) SetDestination(slice S)
func (x *SliceFlag[T, S, E]) SetValue(slice S)
func (x *SliceFlag[T, S, E]) String() string
func (x *SliceFlag[T, S, E]) TakesValue() bool
type SliceFlagTarget[E any] interface {
Flag
RequiredFlag
DocGenerationFlag
VisibleFlag
CategorizableFlag
// SetValue should propagate the given slice to the target, ideally as a new value.
// Note that a nil slice should nil/clear any existing value (modelled as ~[]E).
SetValue(slice []E)
// SetDestination should propagate the given slice to the target, ideally as a new value.
// Note that a nil slice should nil/clear any existing value (modelled as ~*[]E).
SetDestination(slice []E)
// GetDestination should return the current value referenced by any destination, or nil if nil/unset.
GetDestination() []E
}
SliceFlagTarget models a target implementation for use with SliceFlag. The
three methods, SetValue, SetDestination, and GetDestination, are necessary
to propagate Value and Destination, where Value is propagated inwards
(initially), and Destination is propagated outwards (on every update).
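
A hedged usage sketch for the new generic flag types; flag and variable names are illustrative:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/urfave/cli/v2"
)

func main() {
	var tags []string
	app := &cli.App{
		Flags: []cli.Flag{
			&cli.MultiStringFlag{
				Target:      &cli.StringSliceFlag{Name: "tag"},
				Value:       []string{"default"}, // propagated inwards on Apply
				Destination: &tags,               // updated outwards on every Set
			},
		},
		Action: func(*cli.Context) error {
			fmt.Println(tags)
			return nil
		},
	}
	if err := app.Run(os.Args); err != nil {
		log.Fatal(err)
	}
}
```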
type StringFlag struct {
Name string
@ -1599,6 +1699,8 @@ func (f *StringSliceFlag) GetCategory() string
func (f *StringSliceFlag) GetDefaultText() string
GetDefaultText returns the default text for this flag
func (f *StringSliceFlag) GetDestination() []string
func (f *StringSliceFlag) GetEnvVars() []string
GetEnvVars returns the env vars for this flag
@ -1621,6 +1723,10 @@ func (f *StringSliceFlag) IsVisible() bool
func (f *StringSliceFlag) Names() []string
Names returns the names of the flag
func (f *StringSliceFlag) SetDestination(slice []string)
func (f *StringSliceFlag) SetValue(slice []string)
func (f *StringSliceFlag) String() string
String returns a readable representation of this value (for usage defaults)

View file

@ -64,6 +64,11 @@ var HelpPrinter helpPrinter = printHelp
// HelpPrinterCustom is a function that writes the help output. It is used as
// the default implementation of HelpPrinter, and may be called directly if
// the ExtraInfo field is set on an App.
//
// In the default implementation, if the customFuncs argument contains
// a "wrapAt" key whose value is a function taking no arguments and
// returning an int, that int is used to produce a "wrap" function used
// by the default template to wrap long lines.
var HelpPrinterCustom helpPrinterCustom = printHelpCustom
// VersionPrinter prints the version for the App
@ -286,12 +291,29 @@ func ShowCommandCompletions(ctx *Context, command string) {
// The customFuncs map will be combined with a default template.FuncMap to
// allow using arbitrary functions in template rendering.
func printHelpCustom(out io.Writer, templ string, data interface{}, customFuncs map[string]interface{}) {
const maxLineLength = 10000
funcMap := template.FuncMap{
"join": strings.Join,
"indent": indent,
"nindent": nindent,
"trim": strings.TrimSpace,
"wrap": func(input string, offset int) string { return wrap(input, offset, maxLineLength) },
"offset": offset,
}
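// Allow callers to override the effectively unlimited default width above
// by supplying a "wrapAt" function via customFuncs.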
if customFuncs["wrapAt"] != nil {
if wa, ok := customFuncs["wrapAt"]; ok {
if waf, ok := wa.(func() int); ok {
wrapAt := waf()
customFuncs["wrap"] = func(input string, offset int) string {
return wrap(input, offset, wrapAt)
}
}
}
}
for key, value := range customFuncs {
funcMap[key] = value
}
@ -402,3 +424,55 @@ func indent(spaces int, v string) string {
func nindent(spaces int, v string) string {
return "\n" + indent(spaces, v)
}
func wrap(input string, offset int, wrapAt int) string {
var sb strings.Builder
lines := strings.Split(input, "\n")
padding := strings.Repeat(" ", offset)
for i, line := range lines {
if i != 0 {
sb.WriteString(padding)
}
sb.WriteString(wrapLine(line, offset, wrapAt, padding))
if i != len(lines)-1 {
sb.WriteString("\n")
}
}
return sb.String()
}
func wrapLine(input string, offset int, wrapAt int, padding string) string {
if wrapAt <= offset || len(input) <= wrapAt-offset {
return input
}
lineWidth := wrapAt - offset
words := strings.Fields(input)
if len(words) == 0 {
return input
}
wrapped := words[0]
spaceLeft := lineWidth - len(wrapped)
for _, word := range words[1:] {
if len(word)+1 > spaceLeft {
wrapped += "\n" + padding + word
spaceLeft = lineWidth - len(word)
} else {
wrapped += " " + word
spaceLeft -= 1 + len(word)
}
}
return wrapped
}
func offset(input string, fixed int) int {
return len(input) + fixed
}
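
The helpers above implement a simple greedy word wrap: wrapLine packs whole words until the effective width (wrapAt minus offset) is exceeded, then starts a padded continuation line. A hedged illustration of the expected behavior (written inside package cli, since the helpers are unexported; the input string and widths are illustrative):

```go
package cli

import "fmt"

// sketchWrap illustrates the greedy wrapping above. With offset 4 and
// wrapAt 24, each line holds at most 20 characters of words, and every
// continuation line is prefixed with 4 spaces of padding.
func sketchWrap() {
	fmt.Println(wrap("the quick brown fox jumps over the lazy dog", 4, 24))
}
```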

293
vendor/github.com/urfave/cli/v2/sliceflag.go generated vendored Normal file
View file

@ -0,0 +1,293 @@
//go:build go1.18
// +build go1.18
package cli
import (
"flag"
"reflect"
)
type (
// SliceFlag extends implementations like StringSliceFlag and IntSliceFlag with support for using slices directly,
// as Value and/or Destination.
// See also SliceFlagTarget, MultiStringFlag, MultiFloat64Flag, MultiInt64Flag, MultiIntFlag.
SliceFlag[T SliceFlagTarget[E], S ~[]E, E any] struct {
Target T
Value S
Destination *S
}
// SliceFlagTarget models a target implementation for use with SliceFlag.
// The three methods, SetValue, SetDestination, and GetDestination, are necessary to propagate Value and
// Destination, where Value is propagated inwards (initially), and Destination is propagated outwards (on every
// update).
SliceFlagTarget[E any] interface {
Flag
RequiredFlag
DocGenerationFlag
VisibleFlag
CategorizableFlag
// SetValue should propagate the given slice to the target, ideally as a new value.
// Note that a nil slice should nil/clear any existing value (modelled as ~[]E).
SetValue(slice []E)
// SetDestination should propagate the given slice to the target, ideally as a new value.
// Note that a nil slice should nil/clear any existing value (modelled as ~*[]E).
SetDestination(slice []E)
// GetDestination should return the current value referenced by any destination, or nil if nil/unset.
GetDestination() []E
}
// MultiStringFlag extends StringSliceFlag with support for using slices directly, as Value and/or Destination.
// See also SliceFlag.
MultiStringFlag = SliceFlag[*StringSliceFlag, []string, string]
// MultiFloat64Flag extends Float64SliceFlag with support for using slices directly, as Value and/or Destination.
// See also SliceFlag.
MultiFloat64Flag = SliceFlag[*Float64SliceFlag, []float64, float64]
// MultiInt64Flag extends Int64SliceFlag with support for using slices directly, as Value and/or Destination.
// See also SliceFlag.
MultiInt64Flag = SliceFlag[*Int64SliceFlag, []int64, int64]
// MultiIntFlag extends IntSliceFlag with support for using slices directly, as Value and/or Destination.
// See also SliceFlag.
MultiIntFlag = SliceFlag[*IntSliceFlag, []int, int]
flagValueHook struct {
value Generic
hook func()
}
)
var (
// compile time assertions
_ SliceFlagTarget[string] = (*StringSliceFlag)(nil)
_ SliceFlagTarget[string] = (*SliceFlag[*StringSliceFlag, []string, string])(nil)
_ SliceFlagTarget[string] = (*MultiStringFlag)(nil)
_ SliceFlagTarget[float64] = (*MultiFloat64Flag)(nil)
_ SliceFlagTarget[int64] = (*MultiInt64Flag)(nil)
_ SliceFlagTarget[int] = (*MultiIntFlag)(nil)
_ Generic = (*flagValueHook)(nil)
_ Serializer = (*flagValueHook)(nil)
)
func (x *SliceFlag[T, S, E]) Apply(set *flag.FlagSet) error {
x.Target.SetValue(x.convertSlice(x.Value))
destination := x.Destination
if destination == nil {
x.Target.SetDestination(nil)
return x.Target.Apply(set)
}
x.Target.SetDestination(x.convertSlice(*destination))
return applyFlagValueHook(set, x.Target.Apply, func() {
*destination = x.Target.GetDestination()
})
}
func (x *SliceFlag[T, S, E]) convertSlice(slice S) []E {
result := make([]E, len(slice))
copy(result, slice)
return result
}
func (x *SliceFlag[T, S, E]) SetValue(slice S) {
x.Value = slice
}
func (x *SliceFlag[T, S, E]) SetDestination(slice S) {
if slice != nil {
x.Destination = &slice
} else {
x.Destination = nil
}
}
func (x *SliceFlag[T, S, E]) GetDestination() S {
if destination := x.Destination; destination != nil {
return *destination
}
return nil
}
func (x *SliceFlag[T, S, E]) String() string { return x.Target.String() }
func (x *SliceFlag[T, S, E]) Names() []string { return x.Target.Names() }
func (x *SliceFlag[T, S, E]) IsSet() bool { return x.Target.IsSet() }
func (x *SliceFlag[T, S, E]) IsRequired() bool { return x.Target.IsRequired() }
func (x *SliceFlag[T, S, E]) TakesValue() bool { return x.Target.TakesValue() }
func (x *SliceFlag[T, S, E]) GetUsage() string { return x.Target.GetUsage() }
func (x *SliceFlag[T, S, E]) GetValue() string { return x.Target.GetValue() }
func (x *SliceFlag[T, S, E]) GetDefaultText() string { return x.Target.GetDefaultText() }
func (x *SliceFlag[T, S, E]) GetEnvVars() []string { return x.Target.GetEnvVars() }
func (x *SliceFlag[T, S, E]) IsVisible() bool { return x.Target.IsVisible() }
func (x *SliceFlag[T, S, E]) GetCategory() string { return x.Target.GetCategory() }
func (x *flagValueHook) Set(value string) error {
if err := x.value.Set(value); err != nil {
return err
}
x.hook()
return nil
}
func (x *flagValueHook) String() string {
// note: this is necessary due to the way Go's flag package handles defaults
isZeroValue := func(f flag.Value, v string) bool {
/*
https://cs.opensource.google/go/go/+/refs/tags/go1.18.3:src/flag/flag.go;drc=2580d0e08d5e9f979b943758d3c49877fb2324cb;l=453
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
// Build a zero value of the flag's Value type, and see if the
// result of calling its String method equals the value passed in.
// This works unless the Value type is itself an interface type.
typ := reflect.TypeOf(f)
var z reflect.Value
if typ.Kind() == reflect.Pointer {
z = reflect.New(typ.Elem())
} else {
z = reflect.Zero(typ)
}
return v == z.Interface().(flag.Value).String()
}
if x.value != nil {
// only return non-empty if not the same string as returned by the zero value
if s := x.value.String(); !isZeroValue(x.value, s) {
return s
}
}
return ``
}
func (x *flagValueHook) Serialize() string {
if value, ok := x.value.(Serializer); ok {
return value.Serialize()
}
return x.String()
}
// applyFlagValueHook calls apply on a temporary flag set, then re-registers the resulting flags wrapped so that hook runs on every update and once after the initial apply.
func applyFlagValueHook(set *flag.FlagSet, apply func(set *flag.FlagSet) error, hook func()) error {
if apply == nil || set == nil || hook == nil {
panic(`invalid input`)
}
var tmp flag.FlagSet
if err := apply(&tmp); err != nil {
return err
}
tmp.VisitAll(func(f *flag.Flag) { set.Var(&flagValueHook{value: f.Value, hook: hook}, f.Name, f.Usage) })
hook()
return nil
}
// newSliceFlagValue is for implementing SliceFlagTarget.SetValue and SliceFlagTarget.SetDestination.
// It's used e.g. as part of StringSliceFlag.SetValue, via the factory NewStringSlice.
func newSliceFlagValue[R any, S ~[]E, E any](factory func(defaults ...E) *R, defaults S) *R {
if defaults == nil {
return nil
}
return factory(defaults...)
}
// unwrapFlagValue strips any/all *flagValueHook wrappers.
func unwrapFlagValue(v flag.Value) flag.Value {
for {
h, ok := v.(*flagValueHook)
if !ok {
return v
}
v = h.value
}
}
// NOTE: the methods below are in this file to make use of the build constraint
func (f *Float64SliceFlag) SetValue(slice []float64) {
f.Value = newSliceFlagValue(NewFloat64Slice, slice)
}
func (f *Float64SliceFlag) SetDestination(slice []float64) {
f.Destination = newSliceFlagValue(NewFloat64Slice, slice)
}
func (f *Float64SliceFlag) GetDestination() []float64 {
if destination := f.Destination; destination != nil {
return destination.Value()
}
return nil
}
func (f *Int64SliceFlag) SetValue(slice []int64) {
f.Value = newSliceFlagValue(NewInt64Slice, slice)
}
func (f *Int64SliceFlag) SetDestination(slice []int64) {
f.Destination = newSliceFlagValue(NewInt64Slice, slice)
}
func (f *Int64SliceFlag) GetDestination() []int64 {
if destination := f.Destination; destination != nil {
return destination.Value()
}
return nil
}
func (f *IntSliceFlag) SetValue(slice []int) {
f.Value = newSliceFlagValue(NewIntSlice, slice)
}
func (f *IntSliceFlag) SetDestination(slice []int) {
f.Destination = newSliceFlagValue(NewIntSlice, slice)
}
func (f *IntSliceFlag) GetDestination() []int {
if destination := f.Destination; destination != nil {
return destination.Value()
}
return nil
}
func (f *StringSliceFlag) SetValue(slice []string) {
f.Value = newSliceFlagValue(NewStringSlice, slice)
}
func (f *StringSliceFlag) SetDestination(slice []string) {
f.Destination = newSliceFlagValue(NewStringSlice, slice)
}
func (f *StringSliceFlag) GetDestination() []string {
if destination := f.Destination; destination != nil {
return destination.Value()
}
return nil
}

10
vendor/github.com/urfave/cli/v2/sliceflag_pre18.go generated vendored Normal file
View file

@ -0,0 +1,10 @@
//go:build !go1.18
// +build !go1.18
package cli
import (
"flag"
)
func unwrapFlagValue(v flag.Value) flag.Value { return v }

View file

@ -4,16 +4,16 @@ package cli
// cli.go uses text/template to render templates. You can
// render custom help text by setting this variable.
var AppHelpTemplate = `NAME:
{{.Name}}{{if .Usage}} - {{.Usage}}{{end}}
{{$v := offset .Name 6}}{{wrap .Name 3}}{{if .Usage}} - {{wrap .Usage $v}}{{end}}
USAGE:
{{if .UsageText}}{{.UsageText | nindent 3 | trim}}{{else}}{{.HelpName}} {{if .VisibleFlags}}[global options]{{end}}{{if .Commands}} command [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Version}}{{if not .HideVersion}}
{{if .UsageText}}{{wrap .UsageText 3}}{{else}}{{.HelpName}} {{if .VisibleFlags}}[global options]{{end}}{{if .Commands}} command [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Version}}{{if not .HideVersion}}
VERSION:
{{.Version}}{{end}}{{end}}{{if .Description}}
DESCRIPTION:
{{.Description | nindent 3 | trim}}{{end}}{{if len .Authors}}
{{wrap .Description 3}}{{end}}{{if len .Authors}}
AUTHOR{{with $length := len .Authors}}{{if ne 1 $length}}S{{end}}{{end}}:
{{range $index, $author := .Authors}}{{if $index}}
@ -31,26 +31,26 @@ GLOBAL OPTIONS:{{range .VisibleFlagCategories}}
GLOBAL OPTIONS:
{{range $index, $option := .VisibleFlags}}{{if $index}}
{{end}}{{$option}}{{end}}{{end}}{{end}}{{if .Copyright}}
{{end}}{{wrap $option.String 6}}{{end}}{{end}}{{end}}{{if .Copyright}}
COPYRIGHT:
{{.Copyright}}{{end}}
{{wrap .Copyright 3}}{{end}}
`
// CommandHelpTemplate is the text template for the command help topic.
// cli.go uses text/template to render templates. You can
// render custom help text by setting this variable.
var CommandHelpTemplate = `NAME:
{{.HelpName}} - {{.Usage}}
{{$v := offset .HelpName 6}}{{wrap .HelpName 3}}{{if .Usage}} - {{wrap .Usage $v}}{{end}}
USAGE:
{{if .UsageText}}{{.UsageText | nindent 3 | trim}}{{else}}{{.HelpName}}{{if .VisibleFlags}} [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Category}}
{{if .UsageText}}{{wrap .UsageText 3}}{{else}}{{.HelpName}}{{if .VisibleFlags}} [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Category}}
CATEGORY:
{{.Category}}{{end}}{{if .Description}}
DESCRIPTION:
{{.Description | nindent 3 | trim}}{{end}}{{if .VisibleFlagCategories}}
{{wrap .Description 3}}{{end}}{{if .VisibleFlagCategories}}
OPTIONS:{{range .VisibleFlagCategories}}
{{if .Name}}{{.Name}}
@ -69,10 +69,10 @@ var SubcommandHelpTemplate = `NAME:
{{.HelpName}} - {{.Usage}}
USAGE:
{{if .UsageText}}{{.UsageText | nindent 3 | trim}}{{else}}{{.HelpName}} command{{if .VisibleFlags}} [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Description}}
{{if .UsageText}}{{wrap .UsageText 3}}{{else}}{{.HelpName}} command{{if .VisibleFlags}} [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Description}}
DESCRIPTION:
{{.Description | nindent 3 | trim}}{{end}}
{{wrap .Description 3}}{{end}}
COMMANDS:{{range .VisibleCategories}}{{if .Name}}
{{.Name}}:{{range .VisibleCommands}}

View file

@ -139,7 +139,6 @@ func (p *clientConnPool) getStartDialLocked(ctx context.Context, addr string) *d
func (c *dialCall) dial(ctx context.Context, addr string) {
const singleUse = false // shared conn
c.res, c.err = c.p.t.dialClientConn(ctx, addr, singleUse)
close(c.done)
c.p.mu.Lock()
delete(c.p.dialing, addr)
@ -147,6 +146,8 @@ func (c *dialCall) dial(ctx context.Context, addr string) {
c.p.addConnLocked(addr, c.res)
}
c.p.mu.Unlock()
close(c.done)
}
// addConnIfNeeded makes a NewClientConn out of c if a connection for key doesn't

View file

@ -30,7 +30,7 @@ TEXT ·SyscallNoError(SB),NOSPLIT,$0-48
MOVV trap+0(FP), R11 // syscall entry
SYSCALL
MOVV R4, r1+32(FP)
MOVV R5, r2+40(FP)
MOVV R0, r2+40(FP) // r2 is not used. Always set to 0
JAL runtime·exitsyscall(SB)
RET
@ -50,5 +50,5 @@ TEXT ·RawSyscallNoError(SB),NOSPLIT,$0-48
MOVV trap+0(FP), R11 // syscall entry
SYSCALL
MOVV R4, r1+32(FP)
MOVV R5, r2+40(FP)
MOVV R0, r2+40(FP) // r2 is not used. Always set to 0
RET

View file

@ -393,6 +393,13 @@ func GetsockoptXucred(fd, level, opt int) (*Xucred, error) {
return x, err
}
func GetsockoptTCPConnectionInfo(fd, level, opt int) (*TCPConnectionInfo, error) {
var value TCPConnectionInfo
vallen := _Socklen(SizeofTCPConnectionInfo)
err := getsockopt(fd, level, opt, unsafe.Pointer(&value), &vallen)
return &value, err
}
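
A hedged usage sketch for the new accessor (Darwin-only; the endpoint is illustrative). It exposes per-connection TCP counters for an already-connected socket:

```go
package main

import (
	"fmt"
	"log"
	"net"

	"golang.org/x/sys/unix"
)

func main() {
	conn, err := net.Dial("tcp", "example.com:80") // endpoint is illustrative
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	raw, err := conn.(*net.TCPConn).SyscallConn()
	if err != nil {
		log.Fatal(err)
	}
	raw.Control(func(fd uintptr) {
		info, err := unix.GetsockoptTCPConnectionInfo(int(fd), unix.IPPROTO_TCP, unix.TCP_CONNECTION_INFO)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("srtt:", info.Srtt, "rto:", info.Rto)
	})
}
```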
func SysctlKinfoProc(name string, args ...int) (*KinfoProc, error) {
mib, err := sysctlmib(name, args...)
if err != nil {

View file

@ -12,8 +12,6 @@ import "unsafe"
//sys EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) = SYS_EPOLL_PWAIT
//sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = SYS_FADVISE64
//sys Fchown(fd int, uid int, gid int) (err error)
//sys Fstat(fd int, stat *Stat_t) (err error)
//sys Fstatat(fd int, path string, stat *Stat_t, flags int) (err error)
//sys Fstatfs(fd int, buf *Statfs_t) (err error)
//sys Ftruncate(fd int, length int64) (err error)
//sysnb Getegid() (egid int)
@ -43,6 +41,43 @@ func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err
//sys Shutdown(fd int, how int) (err error)
//sys Splice(rfd int, roff *int64, wfd int, woff *int64, len int, flags int) (n int64, err error)
func timespecFromStatxTimestamp(x StatxTimestamp) Timespec {
return Timespec{
Sec: x.Sec,
Nsec: int64(x.Nsec),
}
}
func Fstatat(fd int, path string, stat *Stat_t, flags int) error {
var r Statx_t
// Do it the glibc way, add AT_NO_AUTOMOUNT.
if err := Statx(fd, path, AT_NO_AUTOMOUNT|flags, STATX_BASIC_STATS, &r); err != nil {
return err
}
stat.Dev = Mkdev(r.Dev_major, r.Dev_minor)
stat.Ino = r.Ino
stat.Mode = uint32(r.Mode)
stat.Nlink = r.Nlink
stat.Uid = r.Uid
stat.Gid = r.Gid
stat.Rdev = Mkdev(r.Rdev_major, r.Rdev_minor)
// hope we don't have to process files so large that these size
// fields overflow...
stat.Size = int64(r.Size)
stat.Blksize = int32(r.Blksize)
stat.Blocks = int64(r.Blocks)
stat.Atim = timespecFromStatxTimestamp(r.Atime)
stat.Mtim = timespecFromStatxTimestamp(r.Mtime)
stat.Ctim = timespecFromStatxTimestamp(r.Ctime)
return nil
}
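// Fstat reuses Fstatat with AT_EMPTY_PATH and an empty path, so the
// statx-based implementation above also serves plain fstat(2) callers.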
func Fstat(fd int, stat *Stat_t) (err error) {
return Fstatat(fd, "", stat, AT_EMPTY_PATH)
}
func Stat(path string, stat *Stat_t) (err error) {
return Fstatat(AT_FDCWD, path, stat, 0)
}

View file

@ -22,6 +22,7 @@ import "unsafe"
//sysnb Getrlimit(resource int, rlim *Rlimit) (err error)
//sysnb Getuid() (uid int)
//sys Listen(s int, n int) (err error)
//sys MemfdSecret(flags int) (fd int, err error)
//sys pread(fd int, p []byte, offset int64) (n int, err error) = SYS_PREAD64
//sys pwrite(fd int, p []byte, offset int64) (n int, err error) = SYS_PWRITE64
//sys Seek(fd int, offset int64, whence int) (off int64, err error) = SYS_LSEEK

View file

@ -184,6 +184,7 @@ const (
BPF_F_ALLOW_MULTI = 0x2
BPF_F_ALLOW_OVERRIDE = 0x1
BPF_F_ANY_ALIGNMENT = 0x2
BPF_F_KPROBE_MULTI_RETURN = 0x1
BPF_F_QUERY_EFFECTIVE = 0x1
BPF_F_REPLACE = 0x4
BPF_F_SLEEPABLE = 0x10
@ -191,6 +192,8 @@ const (
BPF_F_TEST_RND_HI32 = 0x4
BPF_F_TEST_RUN_ON_CPU = 0x1
BPF_F_TEST_STATE_FREQ = 0x8
BPF_F_TEST_XDP_LIVE_FRAMES = 0x2
BPF_F_XDP_HAS_FRAGS = 0x20
BPF_H = 0x8
BPF_IMM = 0x0
BPF_IND = 0x40
@ -517,9 +520,9 @@ const (
DM_UUID_FLAG = 0x4000
DM_UUID_LEN = 0x81
DM_VERSION = 0xc138fd00
DM_VERSION_EXTRA = "-ioctl (2021-03-22)"
DM_VERSION_EXTRA = "-ioctl (2022-02-22)"
DM_VERSION_MAJOR = 0x4
DM_VERSION_MINOR = 0x2d
DM_VERSION_MINOR = 0x2e
DM_VERSION_PATCHLEVEL = 0x0
DT_BLK = 0x6
DT_CHR = 0x2
@ -712,6 +715,7 @@ const (
ETH_P_EDSA = 0xdada
ETH_P_ERSPAN = 0x88be
ETH_P_ERSPAN2 = 0x22eb
ETH_P_ETHERCAT = 0x88a4
ETH_P_FCOE = 0x8906
ETH_P_FIP = 0x8914
ETH_P_HDLC = 0x19
@ -749,6 +753,7 @@ const (
ETH_P_PPP_MP = 0x8
ETH_P_PPP_SES = 0x8864
ETH_P_PREAUTH = 0x88c7
ETH_P_PROFINET = 0x8892
ETH_P_PRP = 0x88fb
ETH_P_PUP = 0x200
ETH_P_PUPAT = 0x201
@ -837,6 +842,7 @@ const (
FAN_FS_ERROR = 0x8000
FAN_MARK_ADD = 0x1
FAN_MARK_DONT_FOLLOW = 0x4
FAN_MARK_EVICTABLE = 0x200
FAN_MARK_FILESYSTEM = 0x100
FAN_MARK_FLUSH = 0x80
FAN_MARK_IGNORED_MASK = 0x20
@ -1055,7 +1061,7 @@ const (
IFA_F_STABLE_PRIVACY = 0x800
IFA_F_TEMPORARY = 0x1
IFA_F_TENTATIVE = 0x40
IFA_MAX = 0xa
IFA_MAX = 0xb
IFF_ALLMULTI = 0x200
IFF_ATTACH_QUEUE = 0x200
IFF_AUTOMEDIA = 0x4000
@ -1403,6 +1409,7 @@ const (
LANDLOCK_ACCESS_FS_MAKE_SYM = 0x1000
LANDLOCK_ACCESS_FS_READ_DIR = 0x8
LANDLOCK_ACCESS_FS_READ_FILE = 0x4
LANDLOCK_ACCESS_FS_REFER = 0x2000
LANDLOCK_ACCESS_FS_REMOVE_DIR = 0x10
LANDLOCK_ACCESS_FS_REMOVE_FILE = 0x20
LANDLOCK_ACCESS_FS_WRITE_FILE = 0x2
@ -1758,6 +1765,7 @@ const (
NLM_F_ACK_TLVS = 0x200
NLM_F_APPEND = 0x800
NLM_F_ATOMIC = 0x400
NLM_F_BULK = 0x200
NLM_F_CAPPED = 0x100
NLM_F_CREATE = 0x400
NLM_F_DUMP = 0x300
@ -2075,6 +2083,11 @@ const (
PR_SET_UNALIGN = 0x6
PR_SET_VMA = 0x53564d41
PR_SET_VMA_ANON_NAME = 0x0
PR_SME_GET_VL = 0x40
PR_SME_SET_VL = 0x3f
PR_SME_SET_VL_ONEXEC = 0x40000
PR_SME_VL_INHERIT = 0x20000
PR_SME_VL_LEN_MASK = 0xffff
PR_SPEC_DISABLE = 0x4
PR_SPEC_DISABLE_NOEXEC = 0x10
PR_SPEC_ENABLE = 0x2
@ -2227,8 +2240,9 @@ const (
RTC_FEATURE_ALARM = 0x0
RTC_FEATURE_ALARM_RES_2S = 0x3
RTC_FEATURE_ALARM_RES_MINUTE = 0x1
RTC_FEATURE_ALARM_WAKEUP_ONLY = 0x7
RTC_FEATURE_BACKUP_SWITCH_MODE = 0x6
RTC_FEATURE_CNT = 0x7
RTC_FEATURE_CNT = 0x8
RTC_FEATURE_CORRECTION = 0x5
RTC_FEATURE_NEED_WEEK_DAY = 0x2
RTC_FEATURE_UPDATE_INTERRUPT = 0x4
@ -2302,6 +2316,7 @@ const (
RTM_DELRULE = 0x21
RTM_DELTCLASS = 0x29
RTM_DELTFILTER = 0x2d
RTM_DELTUNNEL = 0x79
RTM_DELVLAN = 0x71
RTM_F_CLONED = 0x200
RTM_F_EQUALIZE = 0x400
@ -2334,8 +2349,9 @@ const (
RTM_GETSTATS = 0x5e
RTM_GETTCLASS = 0x2a
RTM_GETTFILTER = 0x2e
RTM_GETTUNNEL = 0x7a
RTM_GETVLAN = 0x72
RTM_MAX = 0x77
RTM_MAX = 0x7b
RTM_NEWACTION = 0x30
RTM_NEWADDR = 0x14
RTM_NEWADDRLABEL = 0x48
@ -2359,11 +2375,13 @@ const (
RTM_NEWSTATS = 0x5c
RTM_NEWTCLASS = 0x28
RTM_NEWTFILTER = 0x2c
RTM_NR_FAMILIES = 0x1a
RTM_NR_MSGTYPES = 0x68
RTM_NEWTUNNEL = 0x78
RTM_NR_FAMILIES = 0x1b
RTM_NR_MSGTYPES = 0x6c
RTM_SETDCB = 0x4f
RTM_SETLINK = 0x13
RTM_SETNEIGHTBL = 0x43
RTM_SETSTATS = 0x5f
RTNH_ALIGNTO = 0x4
RTNH_COMPARE_MASK = 0x59
RTNH_F_DEAD = 0x1
@ -2544,6 +2562,9 @@ const (
SOCK_RDM = 0x4
SOCK_SEQPACKET = 0x5
SOCK_SNDBUF_LOCK = 0x1
SOCK_TXREHASH_DEFAULT = 0xff
SOCK_TXREHASH_DISABLED = 0x0
SOCK_TXREHASH_ENABLED = 0x1
SOL_AAL = 0x109
SOL_ALG = 0x117
SOL_ATM = 0x108
@ -2559,6 +2580,8 @@ const (
SOL_IUCV = 0x115
SOL_KCM = 0x119
SOL_LLC = 0x10c
SOL_MCTP = 0x11d
SOL_MPTCP = 0x11c
SOL_NETBEUI = 0x10b
SOL_NETLINK = 0x10e
SOL_NFC = 0x118
@ -2674,7 +2697,7 @@ const (
TASKSTATS_GENL_NAME = "TASKSTATS"
TASKSTATS_GENL_VERSION = 0x1
TASKSTATS_TYPE_MAX = 0x6
TASKSTATS_VERSION = 0xb
TASKSTATS_VERSION = 0xd
TCIFLUSH = 0x0
TCIOFF = 0x2
TCIOFLUSH = 0x2

View file

@ -326,6 +326,7 @@ const (
SO_RCVBUF = 0x8
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x12
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x14
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x14
@ -350,6 +351,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x3
SO_WIFI_STATUS = 0x29

View file

@ -327,6 +327,7 @@ const (
SO_RCVBUF = 0x8
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x12
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x14
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x14
@ -351,6 +352,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x3
SO_WIFI_STATUS = 0x29

View file

@ -333,6 +333,7 @@ const (
SO_RCVBUF = 0x8
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x12
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x14
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x14
@ -357,6 +358,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x3
SO_WIFI_STATUS = 0x29

View file

@ -323,6 +323,7 @@ const (
SO_RCVBUF = 0x8
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x12
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x14
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x14
@ -347,6 +348,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x3
SO_WIFI_STATUS = 0x29
@ -511,6 +513,7 @@ const (
WORDSIZE = 0x40
XCASE = 0x4
XTABS = 0x1800
ZA_MAGIC = 0x54366345
_HIDIOCGRAWNAME = 0x80804804
_HIDIOCGRAWPHYS = 0x80404805
_HIDIOCGRAWUNIQ = 0x80404808

View file

@ -109,8 +109,6 @@ const (
IUCLC = 0x200
IXOFF = 0x1000
IXON = 0x400
LASX_CTX_MAGIC = 0x41535801
LSX_CTX_MAGIC = 0x53580001
MAP_ANON = 0x20
MAP_ANONYMOUS = 0x20
MAP_DENYWRITE = 0x800
@ -319,6 +317,7 @@ const (
SO_RCVBUF = 0x8
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x12
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x14
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x14
@ -343,6 +342,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x3
SO_WIFI_STATUS = 0x29

View file

@ -326,6 +326,7 @@ const (
SO_RCVBUF = 0x1002
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x1004
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x1006
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x1006
@ -351,6 +352,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x1008
SO_WIFI_STATUS = 0x29

View file

@ -326,6 +326,7 @@ const (
SO_RCVBUF = 0x1002
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x1004
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x1006
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x1006
@ -351,6 +352,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x1008
SO_WIFI_STATUS = 0x29

View file

@ -326,6 +326,7 @@ const (
SO_RCVBUF = 0x1002
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x1004
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x1006
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x1006
@ -351,6 +352,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x1008
SO_WIFI_STATUS = 0x29

View file

@ -326,6 +326,7 @@ const (
SO_RCVBUF = 0x1002
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x1004
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x1006
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x1006
@ -351,6 +352,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x1008
SO_WIFI_STATUS = 0x29

View file

@ -381,6 +381,7 @@ const (
SO_RCVBUF = 0x8
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x10
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x12
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x12
@ -405,6 +406,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x3
SO_WIFI_STATUS = 0x29

View file

@ -385,6 +385,7 @@ const (
SO_RCVBUF = 0x8
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x10
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x12
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x12
@ -409,6 +410,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x3
SO_WIFI_STATUS = 0x29

View file

@ -385,6 +385,7 @@ const (
SO_RCVBUF = 0x8
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x10
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x12
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x12
@ -409,6 +410,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x3
SO_WIFI_STATUS = 0x29

View file

@ -314,6 +314,7 @@ const (
SO_RCVBUF = 0x8
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x12
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x14
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x14
@ -338,6 +339,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x3
SO_WIFI_STATUS = 0x29

View file

@ -389,6 +389,7 @@ const (
SO_RCVBUF = 0x8
SO_RCVBUFFORCE = 0x21
SO_RCVLOWAT = 0x12
SO_RCVMARK = 0x4b
SO_RCVTIMEO = 0x14
SO_RCVTIMEO_NEW = 0x42
SO_RCVTIMEO_OLD = 0x14
@ -413,6 +414,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x40
SO_TIMESTAMPNS_OLD = 0x23
SO_TIMESTAMP_NEW = 0x3f
SO_TXREHASH = 0x4a
SO_TXTIME = 0x3d
SO_TYPE = 0x3
SO_WIFI_STATUS = 0x29

View file

@ -380,6 +380,7 @@ const (
SO_RCVBUF = 0x1002
SO_RCVBUFFORCE = 0x100b
SO_RCVLOWAT = 0x800
SO_RCVMARK = 0x54
SO_RCVTIMEO = 0x2000
SO_RCVTIMEO_NEW = 0x44
SO_RCVTIMEO_OLD = 0x2000
@ -404,6 +405,7 @@ const (
SO_TIMESTAMPNS_NEW = 0x42
SO_TIMESTAMPNS_OLD = 0x21
SO_TIMESTAMP_NEW = 0x46
SO_TXREHASH = 0x53
SO_TXTIME = 0x3f
SO_TYPE = 0x1008
SO_WIFI_STATUS = 0x25

View file

@ -83,31 +83,6 @@ func Fchown(fd int, uid int, gid int) (err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func Fstat(fd int, stat *Stat_t) (err error) {
_, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0)
if e1 != 0 {
err = errnoErr(e1)
}
return
}
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) {
var _p0 *byte
_p0, err = BytePtrFromString(path)
if err != nil {
return
}
_, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0)
if e1 != 0 {
err = errnoErr(e1)
}
return
}
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func Fstatfs(fd int, buf *Statfs_t) (err error) {
_, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(buf)), 0)
if e1 != 0 {

View file

@ -180,6 +180,17 @@ func Listen(s int, n int) (err error) {
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func MemfdSecret(flags int) (fd int, err error) {
r0, _, e1 := Syscall(SYS_MEMFD_SECRET, uintptr(flags), 0, 0)
fd = int(r0)
if e1 != 0 {
err = errnoErr(e1)
}
return
}
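
A hedged sketch of the newly wired syscall (requires a Linux kernel with secretmem support, 5.14+; the flags and buffer size are illustrative). MemfdSecret returns a file descriptor whose pages are removed from the kernel's direct map and can be mapped like any memfd:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	fd, err := unix.MemfdSecret(0) // 0 = no flags; availability depends on the kernel
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(fd)
	if err := unix.Ftruncate(fd, 4096); err != nil { // size is illustrative
		log.Fatal(err)
	}
	b, err := unix.Mmap(fd, 0, 4096, unix.PROT_READ|unix.PROT_WRITE, unix.MAP_SHARED)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Munmap(b)
	copy(b, "secret")
	fmt.Println("stored bytes in secret memory")
}
```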
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func pread(fd int, p []byte, offset int64) (n int, err error) {
var _p0 unsafe.Pointer
if len(p) > 0 {

View file

@ -85,8 +85,6 @@ const (
SYS_SPLICE = 76
SYS_TEE = 77
SYS_READLINKAT = 78
SYS_FSTATAT = 79
SYS_FSTAT = 80
SYS_SYNC = 81
SYS_FSYNC = 82
SYS_FDATASYNC = 83

View file

@ -309,6 +309,7 @@ const (
SYS_LANDLOCK_CREATE_RULESET = 444
SYS_LANDLOCK_ADD_RULE = 445
SYS_LANDLOCK_RESTRICT_SELF = 446
SYS_MEMFD_SECRET = 447
SYS_PROCESS_MRELEASE = 448
SYS_FUTEX_WAITV = 449
SYS_SET_MEMPOLICY_HOME_NODE = 450

View file

@ -366,30 +366,57 @@ type ICMPv6Filter struct {
Filt [8]uint32
}
type TCPConnectionInfo struct {
State uint8
Snd_wscale uint8
Rcv_wscale uint8
_ uint8
Options uint32
Flags uint32
Rto uint32
Maxseg uint32
Snd_ssthresh uint32
Snd_cwnd uint32
Snd_wnd uint32
Snd_sbbytes uint32
Rcv_wnd uint32
Rttcur uint32
Srtt uint32
Rttvar uint32
Txpackets uint64
Txbytes uint64
Txretransmitbytes uint64
Rxpackets uint64
Rxbytes uint64
Rxoutoforderbytes uint64
Txretransmitpackets uint64
}
const (
SizeofSockaddrInet4 = 0x10
SizeofSockaddrInet6 = 0x1c
SizeofSockaddrAny = 0x6c
SizeofSockaddrUnix = 0x6a
SizeofSockaddrDatalink = 0x14
SizeofSockaddrCtl = 0x20
SizeofSockaddrVM = 0xc
SizeofXvsockpcb = 0xa8
SizeofXSocket = 0x64
SizeofXSockbuf = 0x18
SizeofXVSockPgen = 0x20
SizeofXucred = 0x4c
SizeofLinger = 0x8
SizeofIovec = 0x10
SizeofIPMreq = 0x8
SizeofIPMreqn = 0xc
SizeofIPv6Mreq = 0x14
SizeofMsghdr = 0x30
SizeofCmsghdr = 0xc
SizeofInet4Pktinfo = 0xc
SizeofInet6Pktinfo = 0x14
SizeofIPv6MTUInfo = 0x20
SizeofICMPv6Filter = 0x20
SizeofSockaddrInet4 = 0x10
SizeofSockaddrInet6 = 0x1c
SizeofSockaddrAny = 0x6c
SizeofSockaddrUnix = 0x6a
SizeofSockaddrDatalink = 0x14
SizeofSockaddrCtl = 0x20
SizeofSockaddrVM = 0xc
SizeofXvsockpcb = 0xa8
SizeofXSocket = 0x64
SizeofXSockbuf = 0x18
SizeofXVSockPgen = 0x20
SizeofXucred = 0x4c
SizeofLinger = 0x8
SizeofIovec = 0x10
SizeofIPMreq = 0x8
SizeofIPMreqn = 0xc
SizeofIPv6Mreq = 0x14
SizeofMsghdr = 0x30
SizeofCmsghdr = 0xc
SizeofInet4Pktinfo = 0xc
SizeofInet6Pktinfo = 0x14
SizeofIPv6MTUInfo = 0x20
SizeofICMPv6Filter = 0x20
SizeofTCPConnectionInfo = 0x70
)
const (

View file

@ -366,30 +366,57 @@ type ICMPv6Filter struct {
Filt [8]uint32
}
type TCPConnectionInfo struct {
State uint8
Snd_wscale uint8
Rcv_wscale uint8
_ uint8
Options uint32
Flags uint32
Rto uint32
Maxseg uint32
Snd_ssthresh uint32
Snd_cwnd uint32
Snd_wnd uint32
Snd_sbbytes uint32
Rcv_wnd uint32
Rttcur uint32
Srtt uint32
Rttvar uint32
Txpackets uint64
Txbytes uint64
Txretransmitbytes uint64
Rxpackets uint64
Rxbytes uint64
Rxoutoforderbytes uint64
Txretransmitpackets uint64
}
const (
SizeofSockaddrInet4 = 0x10
SizeofSockaddrInet6 = 0x1c
SizeofSockaddrAny = 0x6c
SizeofSockaddrUnix = 0x6a
SizeofSockaddrDatalink = 0x14
SizeofSockaddrCtl = 0x20
SizeofSockaddrVM = 0xc
SizeofXvsockpcb = 0xa8
SizeofXSocket = 0x64
SizeofXSockbuf = 0x18
SizeofXVSockPgen = 0x20
SizeofXucred = 0x4c
SizeofLinger = 0x8
SizeofIovec = 0x10
SizeofIPMreq = 0x8
SizeofIPMreqn = 0xc
SizeofIPv6Mreq = 0x14
SizeofMsghdr = 0x30
SizeofCmsghdr = 0xc
SizeofInet4Pktinfo = 0xc
SizeofInet6Pktinfo = 0x14
SizeofIPv6MTUInfo = 0x20
SizeofICMPv6Filter = 0x20
SizeofSockaddrInet4 = 0x10
SizeofSockaddrInet6 = 0x1c
SizeofSockaddrAny = 0x6c
SizeofSockaddrUnix = 0x6a
SizeofSockaddrDatalink = 0x14
SizeofSockaddrCtl = 0x20
SizeofSockaddrVM = 0xc
SizeofXvsockpcb = 0xa8
SizeofXSocket = 0x64
SizeofXSockbuf = 0x18
SizeofXVSockPgen = 0x20
SizeofXucred = 0x4c
SizeofLinger = 0x8
SizeofIovec = 0x10
SizeofIPMreq = 0x8
SizeofIPMreqn = 0xc
SizeofIPv6Mreq = 0x14
SizeofMsghdr = 0x30
SizeofCmsghdr = 0xc
SizeofInet4Pktinfo = 0xc
SizeofInet6Pktinfo = 0x14
SizeofIPv6MTUInfo = 0x20
SizeofICMPv6Filter = 0x20
SizeofTCPConnectionInfo = 0x70
)
const (

View file

@ -1127,7 +1127,9 @@ const (
PERF_BR_SYSRET = 0x8
PERF_BR_COND_CALL = 0x9
PERF_BR_COND_RET = 0xa
PERF_BR_MAX = 0xb
PERF_BR_ERET = 0xb
PERF_BR_IRQ = 0xc
PERF_BR_MAX = 0xd
PERF_SAMPLE_REGS_ABI_NONE = 0x0
PERF_SAMPLE_REGS_ABI_32 = 0x1
PERF_SAMPLE_REGS_ABI_64 = 0x2
@ -2969,7 +2971,7 @@ const (
DEVLINK_CMD_TRAP_POLICER_NEW = 0x47
DEVLINK_CMD_TRAP_POLICER_DEL = 0x48
DEVLINK_CMD_HEALTH_REPORTER_TEST = 0x49
DEVLINK_CMD_MAX = 0x4d
DEVLINK_CMD_MAX = 0x51
DEVLINK_PORT_TYPE_NOTSET = 0x0
DEVLINK_PORT_TYPE_AUTO = 0x1
DEVLINK_PORT_TYPE_ETH = 0x2
@ -3198,7 +3200,7 @@ const (
DEVLINK_ATTR_RATE_NODE_NAME = 0xa8
DEVLINK_ATTR_RATE_PARENT_NODE_NAME = 0xa9
DEVLINK_ATTR_REGION_MAX_SNAPSHOTS = 0xaa
DEVLINK_ATTR_MAX = 0xaa
DEVLINK_ATTR_MAX = 0xae
DEVLINK_DPIPE_FIELD_MAPPING_TYPE_NONE = 0x0
DEVLINK_DPIPE_FIELD_MAPPING_TYPE_IFINDEX = 0x1
DEVLINK_DPIPE_MATCH_TYPE_FIELD_EXACT = 0x0
@ -3638,7 +3640,11 @@ const (
ETHTOOL_A_RINGS_RX_MINI = 0x7
ETHTOOL_A_RINGS_RX_JUMBO = 0x8
ETHTOOL_A_RINGS_TX = 0x9
ETHTOOL_A_RINGS_MAX = 0xa
ETHTOOL_A_RINGS_RX_BUF_LEN = 0xa
ETHTOOL_A_RINGS_TCP_DATA_SPLIT = 0xb
ETHTOOL_A_RINGS_CQE_SIZE = 0xc
ETHTOOL_A_RINGS_TX_PUSH = 0xd
ETHTOOL_A_RINGS_MAX = 0xd
ETHTOOL_A_CHANNELS_UNSPEC = 0x0
ETHTOOL_A_CHANNELS_HEADER = 0x1
ETHTOOL_A_CHANNELS_RX_MAX = 0x2
@ -4323,7 +4329,7 @@ const (
NL80211_ATTR_MAC_HINT = 0xc8
NL80211_ATTR_MAC_MASK = 0xd7
NL80211_ATTR_MAX_AP_ASSOC_STA = 0xca
NL80211_ATTR_MAX = 0x135
NL80211_ATTR_MAX = 0x137
NL80211_ATTR_MAX_CRIT_PROT_DURATION = 0xb4
NL80211_ATTR_MAX_CSA_COUNTERS = 0xce
NL80211_ATTR_MAX_MATCH_SETS = 0x85
@ -4549,7 +4555,7 @@ const (
NL80211_BAND_IFTYPE_ATTR_HE_CAP_PHY = 0x3
NL80211_BAND_IFTYPE_ATTR_HE_CAP_PPE = 0x5
NL80211_BAND_IFTYPE_ATTR_IFTYPES = 0x1
NL80211_BAND_IFTYPE_ATTR_MAX = 0x7
NL80211_BAND_IFTYPE_ATTR_MAX = 0xb
NL80211_BAND_S1GHZ = 0x4
NL80211_BITRATE_ATTR_2GHZ_SHORTPREAMBLE = 0x2
NL80211_BITRATE_ATTR_MAX = 0x2
@ -4887,7 +4893,7 @@ const (
NL80211_FREQUENCY_ATTR_GO_CONCURRENT = 0xf
NL80211_FREQUENCY_ATTR_INDOOR_ONLY = 0xe
NL80211_FREQUENCY_ATTR_IR_CONCURRENT = 0xf
NL80211_FREQUENCY_ATTR_MAX = 0x19
NL80211_FREQUENCY_ATTR_MAX = 0x1b
NL80211_FREQUENCY_ATTR_MAX_TX_POWER = 0x6
NL80211_FREQUENCY_ATTR_NO_10MHZ = 0x11
NL80211_FREQUENCY_ATTR_NO_160MHZ = 0xc
@ -5254,7 +5260,7 @@ const (
NL80211_RATE_INFO_HE_RU_ALLOC_52 = 0x1
NL80211_RATE_INFO_HE_RU_ALLOC_996 = 0x5
NL80211_RATE_INFO_HE_RU_ALLOC = 0x11
NL80211_RATE_INFO_MAX = 0x11
NL80211_RATE_INFO_MAX = 0x16
NL80211_RATE_INFO_MCS = 0x2
NL80211_RATE_INFO_SHORT_GI = 0x4
NL80211_RATE_INFO_VHT_MCS = 0x6

View file

@ -324,6 +324,13 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
_ [4]byte
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint32

View file

@ -338,6 +338,12 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint64

View file

@ -315,6 +315,13 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
_ [4]byte
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint32

View file

@ -317,6 +317,12 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint64

View file

@ -318,6 +318,12 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint64

View file

@ -320,6 +320,13 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
_ [4]byte
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint32

View file

@ -320,6 +320,12 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint64

View file

@ -320,6 +320,12 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint64

View file

@ -320,6 +320,13 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
_ [4]byte
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint32

View file

@ -327,6 +327,13 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
_ [4]byte
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint32

View file

@ -327,6 +327,12 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint64

View file

@ -327,6 +327,12 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint64

View file

@ -345,6 +345,12 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint64

View file

@ -340,6 +340,12 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint64

View file

@ -322,6 +322,12 @@ type Taskstats struct {
Ac_btime64 uint64
Compact_count uint64
Compact_delay_total uint64
Ac_tgid uint32
Ac_tgetime uint64
Ac_exe_dev uint64
Ac_exe_inode uint64
Wpcopy_count uint64
Wpcopy_delay_total uint64
}
type cpuMask uint64

View file

@ -5,4 +5,4 @@
package internal
// Version is the current tagged release of the library.
const Version = "0.83.0"
const Version = "0.84.0"

View file

@ -26,7 +26,7 @@
"description": "Stores and retrieves potentially large, immutable data objects.",
"discoveryVersion": "v1",
"documentationLink": "https://developers.google.com/storage/docs/json_api/",
"etag": "\"3135383434363131313530373135383336353335\"",
"etag": "\"3130353432333136333236323133333532323835\"",
"icons": {
"x16": "https://www.google.com/images/icons/product/cloud_storage-16.png",
"x32": "https://www.google.com/images/icons/product/cloud_storage-32.png"
@ -3005,7 +3005,7 @@
}
}
},
"revision": "20220509",
"revision": "20220608",
"rootUrl": "https://storage.googleapis.com/",
"schemas": {
"Bucket": {
@ -3084,6 +3084,19 @@
},
"type": "array"
},
"customPlacementConfig": {
"description": "The bucket's custom placement configuration for Custom Dual Regions.",
"properties": {
"dataLocations": {
"description": "The list of regional locations in which data is placed.",
"items": {
"type": "string"
},
"type": "array"
}
},
"type": "object"
},
"defaultEventBasedHold": {
"description": "The default value for event-based hold on newly created objects in this bucket. Event-based hold is a way to retain objects indefinitely until an event occurs, signified by the hold's release. After being released, such objects will be subject to bucket-level retention (if any). One sample use case of this flag is for banks to hold loan documents for at least 3 years after loan is paid in full. Here, bucket-level retention is 3 years and the event is loan being paid in full. In this example, these objects will be held intact for any number of years until the event has occurred (event-based hold on the object is released) and then 3 more years after that. That means retention duration of the objects begins from the moment event-based hold transitioned from true to false. Objects under event-based hold cannot be deleted, overwritten or archived until the hold is removed.",
"type": "boolean"
@ -3181,7 +3194,7 @@
"type": "string"
},
"type": {
"description": "Type of the action. Currently, only Delete and SetStorageClass are supported.",
"description": "Type of the action. Currently, only Delete, SetStorageClass, and AbortIncompleteMultipartUpload are supported.",
"type": "string"
}
},
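For illustration, a bucket resource using the newly documented lifecycle action would carry a rule like the one below (the 7-day age condition is a hypothetical choice):

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "AbortIncompleteMultipartUpload"},
        "condition": {"age": 7}
      }
    ]
  }
}
```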

View file

@ -291,6 +291,10 @@ type Bucket struct {
// configuration.
Cors []*BucketCors `json:"cors,omitempty"`
// CustomPlacementConfig: The bucket's custom placement configuration
// for Custom Dual Regions.
CustomPlacementConfig *BucketCustomPlacementConfig `json:"customPlacementConfig,omitempty"`
// DefaultEventBasedHold: The default value for event-based hold on
// newly created objects in this bucket. Event-based hold is a way to
// retain objects indefinitely until an event occurs, signified by the
@ -538,6 +542,36 @@ func (s *BucketCors) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
}
// BucketCustomPlacementConfig: The bucket's custom placement
// configuration for Custom Dual Regions.
type BucketCustomPlacementConfig struct {
// DataLocations: The list of regional locations in which data is
// placed.
DataLocations []string `json:"dataLocations,omitempty"`
// ForceSendFields is a list of field names (e.g. "DataLocations") to
// unconditionally include in API requests. By default, fields with
// empty or default values are omitted from API requests. However, any
// non-pointer, non-interface field appearing in ForceSendFields will be
// sent to the server regardless of whether the field is empty or not.
// This may be used to include empty fields in Patch requests.
ForceSendFields []string `json:"-"`
// NullFields is a list of field names (e.g. "DataLocations") to include
// in API requests with the JSON null value. By default, fields with
// empty values are omitted from API requests. However, any field with
// an empty value appearing in NullFields will be sent to the server as
// null. It is an error if a field in this list has a non-empty value.
// This may be used to include null fields in Patch requests.
NullFields []string `json:"-"`
}
func (s *BucketCustomPlacementConfig) MarshalJSON() ([]byte, error) {
type NoMethod BucketCustomPlacementConfig
raw := NoMethod(*s)
return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
}
// BucketEncryption: Encryption configuration for a bucket.
type BucketEncryption struct {
// DefaultKmsKeyName: A Cloud KMS key that will be used to encrypt
@ -756,8 +790,8 @@ type BucketLifecycleRuleAction struct {
// action is SetStorageClass.
StorageClass string `json:"storageClass,omitempty"`
// Type: Type of the action. Currently, only Delete and SetStorageClass
// are supported.
// Type: Type of the action. Currently, only Delete, SetStorageClass,
// and AbortIncompleteMultipartUpload are supported.
Type string `json:"type,omitempty"`
// ForceSendFields is a list of field names (e.g. "StorageClass") to
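A minimal sketch of the new CustomPlacementConfig field in use, creating a Custom Dual Region bucket through this generated client (project ID, bucket name, and region pair are placeholders; Application Default Credentials are assumed):

```go
package main

import (
	"context"
	"log"

	storage "google.golang.org/api/storage/v1"
)

func main() {
	ctx := context.Background()
	svc, err := storage.NewService(ctx) // assumes Application Default Credentials
	if err != nil {
		log.Fatal(err)
	}
	bucket := &storage.Bucket{
		Name:     "example-dual-region-bucket",
		Location: "US", // the multi-region that contains both data locations
		CustomPlacementConfig: &storage.BucketCustomPlacementConfig{
			DataLocations: []string{"US-EAST1", "US-WEST1"},
		},
	}
	if _, err := svc.Buckets.Insert("example-project", bucket).Do(); err != nil {
		log.Fatal(err)
	}
}
```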

View file

@ -14,32 +14,19 @@ package cert
import (
"crypto/tls"
"crypto/x509"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"os"
"os/exec"
"os/user"
"path/filepath"
"sync"
"time"
)
const (
metadataPath = ".secureConnect"
metadataFile = "context_aware_metadata.json"
)
// defaultCertData holds all the variables pertaining to
// the default certificate source created by DefaultSource.
//
// A singleton model is used to allow the source to be reused
// by the transport layer.
type defaultCertData struct {
once sync.Once
source Source
err error
cachedCertMutex sync.Mutex
cachedCert *tls.Certificate
once sync.Once
source Source
err error
}
var (
@ -49,93 +36,23 @@ var (
// Source is a function that can be passed into crypto/tls.Config.GetClientCertificate.
type Source func(*tls.CertificateRequestInfo) (*tls.Certificate, error)
// DefaultSource returns a certificate source that execs the command specified
// in the file at ~/.secureConnect/context_aware_metadata.json
// errSourceUnavailable is a sentinel error to indicate certificate source is unavailable.
var errSourceUnavailable = errors.New("certificate source is unavailable")
// DefaultSource returns a certificate source using the preferred EnterpriseCertificateProxySource.
// If EnterpriseCertificateProxySource is not available, fall back to the legacy SecureConnectSource.
//
// If that file does not exist, a nil source is returned.
// If neither source is available (due to missing configurations), a nil Source and a nil Error are
// returned to indicate that a default certificate source is unavailable.
func DefaultSource() (Source, error) {
defaultCert.once.Do(func() {
defaultCert.source, defaultCert.err = newSecureConnectSource()
defaultCert.source, defaultCert.err = NewEnterpriseCertificateProxySource("")
if errors.Is(defaultCert.err, errSourceUnavailable) {
defaultCert.source, defaultCert.err = NewSecureConnectSource("")
if errors.Is(defaultCert.err, errSourceUnavailable) {
defaultCert.source, defaultCert.err = nil, nil
}
}
})
return defaultCert.source, defaultCert.err
}
type secureConnectSource struct {
metadata secureConnectMetadata
}
type secureConnectMetadata struct {
Cmd []string `json:"cert_provider_command"`
}
// newSecureConnectSource creates a secureConnectSource by reading the well-known file.
func newSecureConnectSource() (Source, error) {
user, err := user.Current()
if err != nil {
// Ignore.
return nil, nil
}
filename := filepath.Join(user.HomeDir, metadataPath, metadataFile)
file, err := ioutil.ReadFile(filename)
if os.IsNotExist(err) {
// Ignore.
return nil, nil
}
if err != nil {
return nil, err
}
var metadata secureConnectMetadata
if err := json.Unmarshal(file, &metadata); err != nil {
return nil, fmt.Errorf("cert: could not parse JSON in %q: %v", filename, err)
}
if err := validateMetadata(metadata); err != nil {
return nil, fmt.Errorf("cert: invalid config in %q: %v", filename, err)
}
return (&secureConnectSource{
metadata: metadata,
}).getClientCertificate, nil
}
func validateMetadata(metadata secureConnectMetadata) error {
if len(metadata.Cmd) == 0 {
return errors.New("empty cert_provider_command")
}
return nil
}
func (s *secureConnectSource) getClientCertificate(info *tls.CertificateRequestInfo) (*tls.Certificate, error) {
defaultCert.cachedCertMutex.Lock()
defer defaultCert.cachedCertMutex.Unlock()
if defaultCert.cachedCert != nil && !isCertificateExpired(defaultCert.cachedCert) {
return defaultCert.cachedCert, nil
}
// Expand OS environment variables in the cert provider command such as "$HOME".
for i := 0; i < len(s.metadata.Cmd); i++ {
s.metadata.Cmd[i] = os.ExpandEnv(s.metadata.Cmd[i])
}
command := s.metadata.Cmd
data, err := exec.Command(command[0], command[1:]...).Output()
if err != nil {
// TODO(cbro): read stderr for error message? Might contain sensitive info.
return nil, err
}
cert, err := tls.X509KeyPair(data, data)
if err != nil {
return nil, err
}
defaultCert.cachedCert = &cert
return &cert, nil
}
// isCertificateExpired returns true if the given cert is expired or invalid.
func isCertificateExpired(cert *tls.Certificate) bool {
if len(cert.Certificate) == 0 {
return true
}
parsed, err := x509.ParseCertificate(cert.Certificate[0])
if err != nil {
return true
}
return time.Now().After(parsed.NotAfter)
}
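The refactored fallback chain is consumed by the library's transport layer. A sketch of that wiring (illustrative only: the package is internal to google.golang.org/api, so the import works only from inside that module; end users configure clients via google.golang.org/api/option):

```go
// Illustrative only: this package is internal to google.golang.org/api and
// cannot be imported from outside that module.
package transportdemo

import (
	"crypto/tls"
	"net/http"

	"google.golang.org/api/internal/cert"
)

// newHTTPClient is a hypothetical helper: a nil Source with a nil error just
// means no default client certificate is configured, which is not an error.
func newHTTPClient() (*http.Client, error) {
	src, err := cert.DefaultSource() // tries ECP first, then Secure Connect
	if err != nil {
		return nil, err
	}
	cfg := &tls.Config{}
	if src != nil {
		cfg.GetClientCertificate = src
	}
	return &http.Client{Transport: &http.Transport{TLSClientConfig: cfg}}, nil
}
```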

View file

@ -0,0 +1,56 @@
// Copyright 2022 Google LLC.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package cert contains certificate tools for Google API clients.
// This package is intended to be used with crypto/tls.Config.GetClientCertificate.
//
// The certificates can be used to satisfy Google's Endpoint Validation.
// See https://cloud.google.com/endpoint-verification/docs/overview
//
// This package is not intended for use by end developers. Use the
// google.golang.org/api/option package to configure API clients.
package cert
import (
"crypto/tls"
"errors"
"os"
"github.com/googleapis/enterprise-certificate-proxy/client"
)
type ecpSource struct {
key *client.Key
}
// NewEnterpriseCertificateProxySource creates a certificate source
// using the Enterprise Certificate Proxy client, which delegates
// certificate-related operations to an OS-specific "signer binary"
// that communicates with the native keystore (e.g. the keychain on macOS).
//
// The configFilePath points to a config file containing relevant parameters
// such as the certificate issuer and the location of the signer binary.
// If configFilePath is empty, the client will attempt to load the config from
// a well-known gcloud location.
func NewEnterpriseCertificateProxySource(configFilePath string) (Source, error) {
key, err := client.Cred(configFilePath)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
// Config file missing means Enterprise Certificate Proxy is not supported.
return nil, errSourceUnavailable
}
return nil, err
}
return (&ecpSource{
key: key,
}).getClientCertificate, nil
}
func (s *ecpSource) getClientCertificate(info *tls.CertificateRequestInfo) (*tls.Certificate, error) {
var cert tls.Certificate
cert.PrivateKey = s.key
cert.Certificate = s.key.CertificateChain()
return &cert, nil
}
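Note that `cert.PrivateKey` is the `client.Key` itself: `crypto/tls` only requires the private key to implement `crypto.Signer`, which the ECP client satisfies by forwarding `Sign` calls to the signer binary. A short sketch of the same wiring done by hand (the config path is hypothetical; an empty path falls back to the well-known gcloud location):

```go
package main

import (
	"crypto/tls"
	"log"

	"github.com/googleapis/enterprise-certificate-proxy/client"
)

func main() {
	// An empty path would fall back to the well-known gcloud location;
	// this explicit path is hypothetical.
	key, err := client.Cred("/etc/ecp/certificate_config.json")
	if err != nil {
		log.Fatal(err)
	}
	cert := tls.Certificate{
		PrivateKey:  key,                    // *client.Key implements crypto.Signer
		Certificate: key.CertificateChain(), // raw DER chain from the keystore
	}
	_ = cert // hand to tls.Config.GetClientCertificate, as the source above does
}
```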

View file

@ -0,0 +1,123 @@
// Copyright 2022 Google LLC.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package cert contains certificate tools for Google API clients.
// This package is intended to be used with crypto/tls.Config.GetClientCertificate.
//
// The certificates can be used to satisfy Google's Endpoint Validation.
// See https://cloud.google.com/endpoint-verification/docs/overview
//
// This package is not intended for use by end developers. Use the
// google.golang.org/api/option package to configure API clients.
package cert
import (
"crypto/tls"
"crypto/x509"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"os"
"os/exec"
"os/user"
"path/filepath"
"sync"
"time"
)
const (
metadataPath = ".secureConnect"
metadataFile = "context_aware_metadata.json"
)
type secureConnectSource struct {
metadata secureConnectMetadata
// Cache the cert to avoid executing helper command repeatedly.
cachedCertMutex sync.Mutex
cachedCert *tls.Certificate
}
type secureConnectMetadata struct {
Cmd []string `json:"cert_provider_command"`
}
// NewSecureConnectSource creates a certificate source using
// the Secure Connect Helper and its associated metadata file.
//
// The configFilePath points to the location of the context aware metadata file.
// If configFilePath is empty, use the default context aware metadata location.
func NewSecureConnectSource(configFilePath string) (Source, error) {
if configFilePath == "" {
user, err := user.Current()
if err != nil {
// Error locating the default config means Secure Connect is not supported.
return nil, errSourceUnavailable
}
configFilePath = filepath.Join(user.HomeDir, metadataPath, metadataFile)
}
file, err := ioutil.ReadFile(configFilePath)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
// Config file missing means Secure Connect is not supported.
return nil, errSourceUnavailable
}
return nil, err
}
var metadata secureConnectMetadata
if err := json.Unmarshal(file, &metadata); err != nil {
return nil, fmt.Errorf("cert: could not parse JSON in %q: %w", configFilePath, err)
}
if err := validateMetadata(metadata); err != nil {
return nil, fmt.Errorf("cert: invalid config in %q: %w", configFilePath, err)
}
return (&secureConnectSource{
metadata: metadata,
}).getClientCertificate, nil
}
func validateMetadata(metadata secureConnectMetadata) error {
if len(metadata.Cmd) == 0 {
return errors.New("empty cert_provider_command")
}
return nil
}
func (s *secureConnectSource) getClientCertificate(info *tls.CertificateRequestInfo) (*tls.Certificate, error) {
s.cachedCertMutex.Lock()
defer s.cachedCertMutex.Unlock()
if s.cachedCert != nil && !isCertificateExpired(s.cachedCert) {
return s.cachedCert, nil
}
// Expand OS environment variables in the cert provider command such as "$HOME".
for i := 0; i < len(s.metadata.Cmd); i++ {
s.metadata.Cmd[i] = os.ExpandEnv(s.metadata.Cmd[i])
}
command := s.metadata.Cmd
data, err := exec.Command(command[0], command[1:]...).Output()
if err != nil {
return nil, err
}
cert, err := tls.X509KeyPair(data, data)
if err != nil {
return nil, err
}
s.cachedCert = &cert
return &cert, nil
}
// isCertificateExpired returns true if the given cert is expired or invalid.
func isCertificateExpired(cert *tls.Certificate) bool {
if len(cert.Certificate) == 0 {
return true
}
parsed, err := x509.ParseCertificate(cert.Certificate[0])
if err != nil {
return true
}
return time.Now().After(parsed.NotAfter)
}
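For reference, the metadata file this source parses is a small JSON document whose only required key is `cert_provider_command` (the helper path and flag below are hypothetical):

```json
{
  "cert_provider_command": ["/opt/google/cert_helper", "--print_certificate"]
}
```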

File diff suppressed because it is too large

vendor/modules.txt
View file

@ -1,11 +1,11 @@
# cloud.google.com/go v0.102.0
# cloud.google.com/go v0.102.1
## explicit; go 1.15
cloud.google.com/go
cloud.google.com/go/internal
cloud.google.com/go/internal/optional
cloud.google.com/go/internal/trace
cloud.google.com/go/internal/version
# cloud.google.com/go/compute v1.6.1
# cloud.google.com/go/compute v1.7.0
## explicit; go 1.15
cloud.google.com/go/compute/metadata
# cloud.google.com/go/iam v0.3.0
@ -34,7 +34,7 @@ github.com/VictoriaMetrics/metricsql/binaryop
# github.com/VividCortex/ewma v1.2.0
## explicit; go 1.12
github.com/VividCortex/ewma
# github.com/aws/aws-sdk-go v1.44.32
# github.com/aws/aws-sdk-go v1.44.37
## explicit; go 1.11
github.com/aws/aws-sdk-go/aws
github.com/aws/aws-sdk-go/aws/arn
@ -138,6 +138,10 @@ github.com/google/go-cmp/cmp/internal/value
# github.com/google/uuid v1.3.0
## explicit
github.com/google/uuid
# github.com/googleapis/enterprise-certificate-proxy v0.1.0
## explicit; go 1.18
github.com/googleapis/enterprise-certificate-proxy/client
github.com/googleapis/enterprise-certificate-proxy/client/util
# github.com/googleapis/gax-go/v2 v2.4.0
## explicit; go 1.15
github.com/googleapis/gax-go/v2
@ -192,8 +196,8 @@ github.com/prometheus/client_golang/prometheus/internal
# github.com/prometheus/client_model v0.2.0
## explicit; go 1.9
github.com/prometheus/client_model/go
# github.com/prometheus/common v0.34.0
## explicit; go 1.15
# github.com/prometheus/common v0.35.0
## explicit; go 1.16
github.com/prometheus/common/expfmt
github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg
github.com/prometheus/common/model
@ -225,7 +229,7 @@ github.com/rivo/uniseg
# github.com/russross/blackfriday/v2 v2.1.0
## explicit
github.com/russross/blackfriday/v2
# github.com/urfave/cli/v2 v2.8.1
# github.com/urfave/cli/v2 v2.10.1
## explicit; go 1.18
github.com/urfave/cli/v2
# github.com/valyala/bytebufferpool v1.0.0
@ -277,7 +281,7 @@ go.opencensus.io/trace/tracestate
go.uber.org/atomic
# go.uber.org/goleak v1.1.11-0.20210813005559-691160354723
## explicit; go 1.13
# golang.org/x/net v0.0.0-20220607020251-c690dde0001d
# golang.org/x/net v0.0.0-20220617184016-355a448f1bc9
## explicit; go 1.17
golang.org/x/net/context
golang.org/x/net/context/ctxhttp
@ -302,7 +306,7 @@ golang.org/x/oauth2/jwt
# golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f
## explicit
golang.org/x/sync/errgroup
# golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d
# golang.org/x/sys v0.0.0-20220615213510-4f61da869c0c
## explicit; go 1.17
golang.org/x/sys/internal/unsafeheader
golang.org/x/sys/unix
@ -317,7 +321,7 @@ golang.org/x/text/unicode/norm
## explicit; go 1.17
golang.org/x/xerrors
golang.org/x/xerrors/internal
# google.golang.org/api v0.83.0
# google.golang.org/api v0.84.0
## explicit; go 1.15
google.golang.org/api/googleapi
google.golang.org/api/googleapi/transport
@ -350,7 +354,7 @@ google.golang.org/appengine/internal/socket
google.golang.org/appengine/internal/urlfetch
google.golang.org/appengine/socket
google.golang.org/appengine/urlfetch
# google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac
# google.golang.org/genproto v0.0.0-20220617124728-180714bec0ad
## explicit; go 1.15
google.golang.org/genproto/googleapis/api/annotations
google.golang.org/genproto/googleapis/iam/v1
@ -467,5 +471,3 @@ google.golang.org/protobuf/types/known/wrapperspb
# gopkg.in/yaml.v2 v2.4.0
## explicit; go 1.15
gopkg.in/yaml.v2
# gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b
## explicit
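These stanzas are generated rather than hand-edited: after the version bumps in go.mod, the vendor tree, including modules.txt and the removal of now-unused modules such as gopkg.in/yaml.v3 above, is refreshed in one step. A typical sketch of that workflow (module versions taken from this diff):

```console
go get google.golang.org/api@v0.84.0 github.com/urfave/cli/v2@v2.10.1
go mod tidy
go mod vendor   # regenerates vendor/ and vendor/modules.txt
```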