diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index 1d11ad1a4..4500c91c0 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -4,12 +4,12 @@ about: Create a report to help us improve title: '' labels: '' assignees: '' - --- **Describe the bug** A clear and concise description of what the bug is. -It would be a great [upgrading](https://docs.victoriametrics.com/#how-to-upgrade) to [the latest avaialble release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) +It would be great [upgrading](https://docs.victoriametrics.com/#how-to-upgrade) +to [the latest available release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) and verifying whether the bug is reproducible there. It is also recommended reading [troubleshooting docs](https://docs.victoriametrics.com/#troubleshooting). @@ -19,9 +19,22 @@ Steps to reproduce the behavior. **Expected behavior** A clear and concise description of what you expected to happen. +**Logs** +Check if any warnings or errors were logged by VictoriaMetrics components +or components in communication with VictoriaMetrics (e.g. Prometheus, Grafana). + **Screenshots** If applicable, add screenshots to help explain your problem. +For VictoriaMetrics health-state issues please provide full-length screenshots +of Grafana dashboards if possible: +* [Grafana dashboard for single-node VictoriaMetrics](https://grafana.com/dashboards/10229) +* [Grafana dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176) + +See how to set up monitoring here: +* [monitoring for single-node VictoriaMetrics](https://docs.victoriametrics.com/#monitoring) +* [monitoring for VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#monitoring) + **Version** The line returned when passing `--version` command line flag to binary. For example: ``` @@ -30,15 +43,5 @@ victoria-metrics-20190730-121249-heads-single-node-0-g671d9e55 ``` **Used command-line flags** -Command-line flags are listed as `flag{name="httpListenAddr", value=":443"} 1` lines at the `/metrics` page. -See the following docs for details: +Please provide the command-line flags used for running VictoriaMetrics and its components.
-* [monitoring for single-node VictoriaMetrics](https://docs.victoriametrics.com/#monitoring) -* [montioring for VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#monitoring) - -**Additional context** -Add any other context about the problem here such as error logs from VictoriaMetrics and Prometheus, -`/metrics` output, screenshots from the official Grafana dashboards for VictoriaMetrics: - -* [Grafana dashboard for single-node VictoriaMetrics](https://grafana.com/dashboards/10229) -* [Grafana dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176) diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml index ff8bf995a..69a8de9ee 100644 --- a/.github/workflows/main.yml +++ b/.github/workflows/main.yml @@ -60,7 +60,7 @@ jobs: GOOS=darwin go build -mod=vendor ./app/vmctl CGO_ENABLED=0 GOOS=windows go build -mod=vendor ./app/vmagent - name: Publish coverage - uses: codecov/codecov-action@v1.5.0 + uses: codecov/codecov-action@v1.5.2 with: file: ./coverage.txt diff --git a/Makefile b/Makefile index 7a6bcb6d0..2f94c9c1c 100644 --- a/Makefile +++ b/Makefile @@ -278,11 +278,11 @@ copy-docs: # For The rest of docs is ordered manually.t docs-sync: SRC=README.md DST=docs/Single-server-VictoriaMetrics.md ORDER=1 $(MAKE) copy-docs - SRC=app/vmagent/README.md DST=docs/vmagent.md ORDER=2 $(MAKE) copy-docs - SRC=app/vmalert/README.md DST=docs/vmalert.md ORDER=3 $(MAKE) copy-docs - SRC=app/vmauth/README.md DST=docs/vmauth.md ORDER=4 $(MAKE) copy-docs - SRC=app/vmbackup/README.md DST=docs/vmbackup.md ORDER=5 $(MAKE) copy-docs - SRC=app/vmrestore/README.md DST=docs/vmrestore.md ORDER=6 $(MAKE) copy-docs - SRC=app/vmctl/README.md DST=docs/vmctl.md ORDER=7 $(MAKE) copy-docs - SRC=app/vmgateway/README.md DST=docs/vmgateway.md ORDER=8 $(MAKE) copy-docs - SRC=app/vmbackupmanager/README.md DST=docs/vmbackupmanager.md ORDER=9 $(MAKE) copy-docs + SRC=app/vmagent/README.md DST=docs/vmagent.md ORDER=3 $(MAKE) copy-docs + SRC=app/vmalert/README.md DST=docs/vmalert.md ORDER=4 $(MAKE) copy-docs + SRC=app/vmauth/README.md DST=docs/vmauth.md ORDER=5 $(MAKE) copy-docs + SRC=app/vmbackup/README.md DST=docs/vmbackup.md ORDER=6 $(MAKE) copy-docs + SRC=app/vmrestore/README.md DST=docs/vmrestore.md ORDER=7 $(MAKE) copy-docs + SRC=app/vmctl/README.md DST=docs/vmctl.md ORDER=8 $(MAKE) copy-docs + SRC=app/vmgateway/README.md DST=docs/vmgateway.md ORDER=9 $(MAKE) copy-docs + SRC=app/vmbackupmanager/README.md DST=docs/vmbackupmanager.md ORDER=10 $(MAKE) copy-docs diff --git a/README.md b/README.md index 819e050d2..d628f0226 100644 --- a/README.md +++ b/README.md @@ -459,11 +459,7 @@ The `/api/v1/export` endpoint should return the following response: Data sent to VictoriaMetrics via `Graphite plaintext protocol` may be read via the following APIs: * [Graphite API](#graphite-api-usage) -* [Prometheus querying API](#prometheus-querying-api-usage). Graphite metric names may special chars such as `-`, which may clash - with [MetricsQL operations](https://docs.victoriametrics.com/MetricsQL.html). Such metrics can be queries via `{__name__="foo-bar.baz"}`. - VictoriaMetrics supports `__graphite__` pseudo-label for selecting time series with Graphite-compatible filters in [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html). - For example, `{__graphite__="foo.*.bar"}` is equivalent to `{__name__=~"foo[.][^.]*[.]bar"}`, but it works faster - and it is easier to use when migrating from Graphite to VictoriaMetrics. 
+* [Prometheus querying API](#prometheus-querying-api-usage). VictoriaMetrics supports `__graphite__` pseudo-label for selecting time series with Graphite-compatible filters in [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html). For example, `{__graphite__="foo.*.bar"}` is equivalent to `{__name__=~"foo[.][^.]*[.]bar"}`, but it works faster and it is easier to use when migrating from Graphite to VictoriaMetrics. * [go-graphite/carbonapi](https://github.com/go-graphite/carbonapi/blob/main/cmd/carbonapi/carbonapi.example.victoriametrics.yaml) ## How to send data from OpenTSDB-compatible agents @@ -1766,6 +1762,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li Whether to suppress scrape errors logging. The last error for each target is always available at '/targets' page even if scrape errors logging is suppressed -relabelConfig string Optional path to a file with relabeling rules, which are applied to all the ingested metrics. See https://docs.victoriametrics.com/#relabeling for details + -relabelDebug + Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, then the metrics aren't sent to storage. This is useful for debugging the relabeling configs -retentionPeriod value Data with timestamps outside the retentionPeriod is automatically deleted The following optional suffixes are supported: h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 1) diff --git a/app/vmagent/README.md b/app/vmagent/README.md index 6477bfb59..6bac2bf56 100644 --- a/app/vmagent/README.md +++ b/app/vmagent/README.md @@ -219,10 +219,10 @@ and also provides the following actions: The relabeling can be defined in the following places: -* At the `scrape_config -> relabel_configs` section in `-promscrape.config` file. This relabeling is applied to target labels. -* At the `scrape_config -> metric_relabel_configs` section in `-promscrape.config` file. This relabeling is applied to all the scraped metrics in the given `scrape_config`. -* At the `-remoteWrite.relabelConfig` file. This relabeling is aplied to all the collected metrics before sending them to remote storage. -* At the `-remoteWrite.urlRelabelConfig` files. This relabeling is applied to metrics before sending them to the corresponding `-remoteWrite.url`. +* At the `scrape_config -> relabel_configs` section in `-promscrape.config` file. This relabeling is applied to target labels. This relabeling can be debugged by passing `relabel_debug: true` option to the corresponding `scrape_config` section. In this case `vmagent` logs target labels before and after the relabeling and then drops the logged target. +* At the `scrape_config -> metric_relabel_configs` section in `-promscrape.config` file. This relabeling is applied to all the scraped metrics in the given `scrape_config`. This relabeling can be debugged by passing `metric_relabel_debug: true` option to the corresponding `scrape_config` section. In this case `vmagent` logs metrics before and after the relabeling and then drops the logged metrics. +* At the `-remoteWrite.relabelConfig` file. This relabeling is applied to all the collected metrics before sending them to remote storage. This relabeling can be debugged by passing `-remoteWrite.relabelDebug` command-line option to `vmagent`. In this case `vmagent` logs metrics before and after the relabeling and then drops all the logged metrics instead of sending them to remote storage.
+* At the `-remoteWrite.urlRelabelConfig` files. This relabeling is applied to metrics before sending them to the corresponding `-remoteWrite.url`. This relabeling can be debugged by passing `-remoteWrite.urlRelabelDebug` command-line options to `vmagent`. In this case `vmagent` logs metrics before and after the relabeling and then drops all the logged metrics instead of sending them to the corresponding `-remoteWrite.url`. You can read more about relabeling in the following articles: @@ -252,13 +252,13 @@ By default `vmagent` reads the full response from scrape target into memory, the 'match[]': ['{__name__!=""}'] ``` -Note that `sample_limit` option doesn't work if stream parsing is enabled because the parsed data is pushed to remote storage as soon as it is parsed. Therefore the `sample_limit` option doesn't make sense during stream parsing. +Note that the `sample_limit` option doesn't prevent data from being pushed to remote storage if stream parsing is enabled, because the parsed data is pushed to remote storage as soon as it is parsed. ## Scraping big number of targets A single `vmagent` instance can scrape tens of thousands of scrape targets. Sometimes this isn't enough due to limitations on CPU, network, RAM, etc. -In this case scrape targets can be split among multiple `vmagent` instances (aka `vmagent` horizontal scaling and clustering). +In this case scrape targets can be split among multiple `vmagent` instances (aka `vmagent` horizontal scaling, sharding and clustering). Each `vmagent` instance in the cluster must use identical `-promscrape.config` files with distinct `-promscrape.cluster.memberNum` values. The flag value must be in the range `0 ... N-1`, where `N` is the number of `vmagent` instances in the cluster. The number of `vmagent` instances in the cluster must be passed to `-promscrape.cluster.membersCount` command-line flag. For example, the following commands @@ -721,6 +721,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html . Supports array of values separated by comma or specified via multiple flags. -remoteWrite.relabelConfig string Optional path to file with relabel_config entries. These entries are applied to all the metrics before sending them to -remoteWrite.url. See https://docs.victoriametrics.com/vmagent.html#relabeling for details + -remoteWrite.relabelDebug + Whether to log metrics before and after relabeling with -remoteWrite.relabelConfig. If the -remoteWrite.relabelDebug is enabled, then the metrics aren't sent to remote storage. This is useful for debugging the relabeling configs -remoteWrite.roundDigits array Round metric values to this number of decimal digits after the point before writing them to remote storage. Examples: -remoteWrite.roundDigits=2 would round 1.236 to 1.24, while -remoteWrite.roundDigits=-1 would round 126.78 to 130. By default digits rounding is disabled. Set it to 100 for disabling it for a particular remote storage. This option may be used for improving data compression for the stored metrics Supports array of values separated by comma or specified via multiple flags. @@ -755,6 +757,9 @@ See the docs at https://docs.victoriametrics.com/vmagent.html . -remoteWrite.urlRelabelConfig array Optional path to relabel config for the corresponding -remoteWrite.url Supports an array of values separated by comma or specified via multiple flags. + -remoteWrite.urlRelabelDebug array + Whether to log metrics before and after relabeling with -remoteWrite.urlRelabelConfig.
If the -remoteWrite.urlRelabelDebug is enabled, then the metrics aren't sent to the corresponding -remoteWrite.url. This is useful for debugging the relabeling configs + Supports array of values separated by comma or specified via multiple flags. -sortLabels Whether to sort labels for incoming samples before writing them to all the configured remote storage systems. This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}Enabled sorting for labels can slow down ingestion performance a bit -tls diff --git a/app/vmagent/remotewrite/relabel.go b/app/vmagent/remotewrite/relabel.go index 048e4b484..f2e87847b 100644 --- a/app/vmagent/remotewrite/relabel.go +++ b/app/vmagent/remotewrite/relabel.go @@ -17,7 +17,12 @@ var ( "Pass multiple -remoteWrite.label flags in order to add multiple labels to metrics before sending them to remote storage") relabelConfigPathGlobal = flag.String("remoteWrite.relabelConfig", "", "Optional path to file with relabel_config entries. These entries are applied to all the metrics "+ "before sending them to -remoteWrite.url. See https://docs.victoriametrics.com/vmagent.html#relabeling for details") + relabelDebugGlobal = flag.Bool("remoteWrite.relabelDebug", false, "Whether to log metrics before and after relabeling with -remoteWrite.relabelConfig. "+ + "If the -remoteWrite.relabelDebug is enabled, then the metrics aren't sent to remote storage. This is useful for debugging the relabeling configs") relabelConfigPaths = flagutil.NewArray("remoteWrite.urlRelabelConfig", "Optional path to relabel config for the corresponding -remoteWrite.url") + relabelDebug = flagutil.NewArrayBool("remoteWrite.urlRelabelDebug", "Whether to log metrics before and after relabeling with -remoteWrite.urlRelabelConfig. "+ + "If the -remoteWrite.urlRelabelDebug is enabled, then the metrics aren't sent to the corresponding -remoteWrite.url. "+ + "This is useful for debugging the relabeling configs") ) var labelsGlobal []prompbmarshal.Label @@ -31,7 +36,7 @@ func CheckRelabelConfigs() error { func loadRelabelConfigs() (*relabelConfigs, error) { var rcs relabelConfigs if *relabelConfigPathGlobal != "" { - global, err := promrelabel.LoadRelabelConfigs(*relabelConfigPathGlobal) + global, err := promrelabel.LoadRelabelConfigs(*relabelConfigPathGlobal, *relabelDebugGlobal) if err != nil { return nil, fmt.Errorf("cannot load -remoteWrite.relabelConfig=%q: %w", *relabelConfigPathGlobal, err) } @@ -47,7 +52,7 @@ func loadRelabelConfigs() (*relabelConfigs, error) { // Skip empty relabel config. 
continue } - prc, err := promrelabel.LoadRelabelConfigs(path) + prc, err := promrelabel.LoadRelabelConfigs(path, relabelDebug.GetOptionalArg(i)) if err != nil { return nil, fmt.Errorf("cannot load relabel configs from -remoteWrite.urlRelabelConfig=%q: %w", path, err) } diff --git a/app/vmalert/Makefile b/app/vmalert/Makefile index 162429f98..8c6f8850a 100644 --- a/app/vmalert/Makefile +++ b/app/vmalert/Makefile @@ -66,7 +66,17 @@ run-vmalert: vmalert -remoteRead.url=http://localhost:8428 \ -external.label=cluster=east-1 \ -external.label=replica=a \ - -evaluationInterval=3s + -evaluationInterval=3s \ + -rule.configCheckInterval=10s + +replay-vmalert: vmalert + ./bin/vmalert -rule=app/vmalert/config/testdata/rules-replay-good.rules \ + -datasource.url=http://localhost:8428 \ + -remoteWrite.url=http://localhost:8428 \ + -external.label=cluster=east-1 \ + -external.label=replica=a \ + -replay.timeFrom=2021-05-11T07:21:43Z \ + -replay.timeTo=2021-05-29T18:40:43Z vmalert-amd64: CGO_ENABLED=1 GOARCH=amd64 $(MAKE) vmalert-local-with-goarch diff --git a/app/vmalert/README.md b/app/vmalert/README.md index b1b9bc9bc..f79b7d766 100644 --- a/app/vmalert/README.md +++ b/app/vmalert/README.md @@ -12,7 +12,8 @@ rules against configured address. support; * Integration with [Alertmanager](https://github.com/prometheus/alertmanager); * Keeps the alerts [state on restarts](#alerts-state-on-restarts); -* Graphite datasource can be used for alerting and recording rules. See [these docs](#graphite) for details. +* Graphite datasource can be used for alerting and recording rules. See [these docs](#graphite); +* Recording and alerting rules backfilling (aka `replay`). See [these docs](#rules-backfilling); * Lightweight without extra dependencies. ## Limitations @@ -227,194 +228,296 @@ implements [Graphite Render API](https://graphite.readthedocs.io/en/stable/rende When using vmalert with both `graphite` and `prometheus` rules configured against cluster version of VM do not forget to set `-datasource.appendTypePrefix` flag to `true`, so vmalert can adjust URL prefix automatically based on query type. +## Rules backfilling + +vmalert supports alerting and recording rules backfilling (aka `replay`). In replay mode vmalert +can read the same rules configuration as in normal mode, evaluate the rules on the given time range and backfill +results via remote write to the configured storage. vmalert supports any PromQL/MetricsQL compatible +data source for backfilling. + +### How it works + +In `replay` mode vmalert works as a CLI tool and exits immediately after work is done.
+To run vmalert in `replay` mode: +``` +./bin/vmalert -rule=path/to/your.rules \ # path to files with rules you usually use with vmalert + -datasource.url=http://localhost:8428 \ # PromQL/MetricsQL compatible datasource + -remoteWrite.url=http://localhost:8428 \ # remote write compatible storage to persist results + -replay.timeFrom=2021-05-11T07:21:43Z \ # time to begin the replay from + -replay.timeTo=2021-05-29T18:40:43Z # time to finish the replay +``` + +The output of the command will look like the following: +``` +Replay mode: +from: 2021-05-11 07:21:43 +0000 UTC # set by -replay.timeFrom +to: 2021-05-29 18:40:43 +0000 UTC # set by -replay.timeTo +max data points per request: 1000 # set by -replay.maxDatapointsPerQuery + +Group "ReplayGroup" +interval: 1m0s +requests to make: 27 +max range per request: 16h40m0s +> Rule "type:vm_cache_entries:rate5m" (ID: 1792509946081842725) +27 / 27 [----------------------------------------------------------------------------------------------------] 100.00% 78 p/s +> Rule "go_cgo_calls_count:rate5m" (ID: 17958425467471411582) +27 / 27 [-----------------------------------------------------------------------------------------------------] 100.00% ? p/s + +Group "vmsingleReplay" +interval: 30s +requests to make: 54 +max range per request: 8h20m0s +> Rule "RequestErrorsToAPI" (ID: 17645863024999990222) +54 / 54 [-----------------------------------------------------------------------------------------------------] 100.00% ? p/s +> Rule "TooManyLogs" (ID: 9042195394653477652) +54 / 54 [-----------------------------------------------------------------------------------------------------] 100.00% ? p/s +2021-06-07T09:59:12.098Z info app/vmalert/replay.go:68 replay finished! Imported 511734 samples +``` + +In `replay` mode all groups are executed sequentially, one by one. Rules within a group are +executed sequentially as well (the `concurrency` setting is ignored). vmalert sends the rule's expression +to the [/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries) endpoint +of the configured `-datasource.url`. Returned data is then processed according to the rule type and +backfilled to `-remoteWrite.url` via the [Remote Write protocol](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations). +vmalert respects the `evaluationInterval` value set by flag or per-group during the replay. + +#### Recording rules + +Results of recording rules `replay` should match the results of normal rules evaluation. + +#### Alerting rules + +The result of alerting rules `replay` is time series reflecting the [alert's state](#alerts-state-on-restarts). +To see whether a replayed alert has fired in the past, use the following PromQL/MetricsQL expression: +``` +ALERTS{alertname="your_alertname", alertstate="firing"} +``` +Execute the query against the storage which was used for `-remoteWrite.url` during the `replay`. + +### Additional configuration + +The following optional `replay` flags are supported: + +* `-replay.maxDatapointsPerQuery` - the max number of data points expected to be received in one request. +In other words, it defines the max time range covered by every `/query_range` request: the higher the value, +the fewer requests will be issued during `replay`. For example, with a group `interval` of `30s` and the default +limit of `1000` data points, every request covers up to `30s * 1000 = 8h20m`, which matches the `max range per request` +values in the example output above. +* `-replay.ruleRetryAttempts` - when the datasource fails to respond, vmalert will make this number of retries +per rule before giving up. +* `-replay.rulesDelay` - delay between sequential rules execution. Important when there are chained rules +(rules which depend on each other).
It is expected that the remote storage will be able to persist +previously accepted data during the delay, so the data is available for subsequent queries. +Keep it equal to or bigger than `-remoteWrite.flushInterval`. + +See the full description for these flags in `./vmalert --help`. + +### Limitations + +* Graphite engine isn't supported yet; +* `query` template function is disabled for performance reasons (might be changed in the future); + ## Configuration The shortlist of configuration flags is the following: ``` -datasource.appendTypePrefix - Whether to add type prefix to -datasource.url based on the query type. Set to true if sending different query types to the vmselect URL. + Whether to add type prefix to -datasource.url based on the query type. Set to true if sending different query types to the vmselect URL. -datasource.basicAuth.password string - Optional basic auth password for -datasource.url + Optional basic auth password for -datasource.url -datasource.basicAuth.username string - Optional basic auth username for -datasource.url + Optional basic auth username for -datasource.url -datasource.lookback duration - Lookback defines how far into the past to look when evaluating queries. For example, if the datasource.lookback=5m then param "time" with value now()-5m will be added to every query. + Lookback defines how far into the past to look when evaluating queries. For example, if the datasource.lookback=5m then param "time" with value now()-5m will be added to every query. -datasource.maxIdleConnections int - Defines the number of idle (keep-alive connections) to each configured datasource. Consider setting this value equal to the value: groups_total * group.concurrency. Too low a value may result in a high number of sockets in TIME_WAIT state. (default 100) + Defines the number of idle (keep-alive connections) to each configured datasource. Consider setting this value equal to the value: groups_total * group.concurrency. Too low a value may result in a high number of sockets in TIME_WAIT state. (default 100) -datasource.queryStep duration - queryStep defines how far a value can fallback to when evaluating queries. For example, if datasource.queryStep=15s then param "step" with value "15s" will be added to every query.If queryStep isn't specified, rule's evaluationInterval will be used instead. + queryStep defines how far a value can fallback to when evaluating queries. For example, if datasource.queryStep=15s then param "step" with value "15s" will be added to every query. If queryStep isn't specified, rule's evaluationInterval will be used instead. -datasource.roundDigits int - Adds "round_digits" GET param to datasource requests. In VM "round_digits" limits the number of digits after the decimal point in response values. + Adds "round_digits" GET param to datasource requests. In VM "round_digits" limits the number of digits after the decimal point in response values. -datasource.tlsCAFile string - Optional path to TLS CA file to use for verifying connections to -datasource.url.
By default, system CA is used + Optional path to TLS CA file to use for verifying connections to -datasource.url. By default, system CA is used -datasource.tlsCertFile string - Optional path to client-side TLS certificate file to use when connecting to -datasource.url + Optional path to client-side TLS certificate file to use when connecting to -datasource.url -datasource.tlsInsecureSkipVerify - Whether to skip tls verification when connecting to -datasource.url + Whether to skip tls verification when connecting to -datasource.url -datasource.tlsKeyFile string - Optional path to client-side TLS certificate key to use when connecting to -datasource.url + Optional path to client-side TLS certificate key to use when connecting to -datasource.url -datasource.tlsServerName string - Optional TLS server name to use for connections to -datasource.url. By default, the server name from -datasource.url is used + Optional TLS server name to use for connections to -datasource.url. By default, the server name from -datasource.url is used -datasource.url string - VictoriaMetrics or vmselect url. Required parameter. E.g. http://127.0.0.1:8428 + VictoriaMetrics or vmselect url. Required parameter. E.g. http://127.0.0.1:8428 -dryRun -rule - Whether to check only config files without running vmalert. The rules file are validated. The -rule flag must be specified. + Whether to check only config files without running vmalert. The rules files are validated. The -rule flag must be specified. -enableTCP6 - Whether to enable IPv6 for listening and dialing. By default only IPv4 TCP and UDP is used + Whether to enable IPv6 for listening and dialing. By default only IPv4 TCP and UDP is used -envflag.enable - Whether to enable reading flags from environment variables additionally to command line. Command line flag values have priority over values from environment vars. Flags are read only from command line if this flag isn't set + Whether to enable reading flags from environment variables additionally to command line. Command line flag values have priority over values from environment vars. Flags are read only from command line if this flag isn't set -envflag.prefix string - Prefix for environment variables if -envflag.enable is set + Prefix for environment variables if -envflag.enable is set -evaluationInterval duration - How often to evaluate the rules (default 1m0s) + How often to evaluate the rules (default 1m0s) -external.alert.source string - External Alert Source allows to override the Source link for alerts sent to AlertManager for cases where you want to build a custom link to Grafana, Prometheus or any other service. - eg. 'explore?orgId=1&left=[\"now-1h\",\"now\",\"VictoriaMetrics\",{\"expr\": \"{{$expr|quotesEscape|crlfEscape|queryEscape}}\"},{\"mode\":\"Metrics\"},{\"ui\":[true,true,true,\"none\"]}]'.If empty '/api/v1/:groupID/alertID/status' is used + External Alert Source allows to override the Source link for alerts sent to AlertManager for cases where you want to build a custom link to Grafana, Prometheus or any other service. + eg. 'explore?orgId=1&left=[\"now-1h\",\"now\",\"VictoriaMetrics\",{\"expr\": \"{{$expr|quotesEscape|crlfEscape|queryEscape}}\"},{\"mode\":\"Metrics\"},{\"ui\":[true,true,true,\"none\"]}]'. If empty '/api/v1/:groupID/alertID/status' is used -external.label array - Optional label in the form 'name=value' to add to all generated recording rules and alerts. Pass multiple -label flags in order to add multiple label sets. - Supports an array of values separated by comma or specified via multiple flags. + Optional label in the form 'name=value' to add to all generated recording rules and alerts.
Pass multiple -label flags in order to add multiple label sets. + Supports an array of values separated by comma or specified via multiple flags. -external.url string - External URL is used as alert's source for sent alerts to the notifier + External URL is used as alert's source for sent alerts to the notifier -fs.disableMmap - Whether to use pread() instead of mmap() for reading data files. By default mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread() + Whether to use pread() instead of mmap() for reading data files. By default mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread() -http.connTimeout duration - Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s) + Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s) -http.disableResponseCompression - Disable compression of HTTP responses to save CPU resources. By default compression is enabled to save network bandwidth + Disable compression of HTTP responses to save CPU resources. By default compression is enabled to save network bandwidth -http.idleConnTimeout duration - Timeout for incoming idle http connections (default 1m0s) + Timeout for incoming idle http connections (default 1m0s) -http.maxGracefulShutdownDuration duration - The maximum duration for a graceful shutdown of the HTTP server. A highly loaded server may require increased value for a graceful shutdown (default 7s) + The maximum duration for a graceful shutdown of the HTTP server. A highly loaded server may require increased value for a graceful shutdown (default 7s) -http.pathPrefix string - An optional prefix to add to all the paths handled by http server. For example, if '-http.pathPrefix=/foo/bar' is set, then all the http requests will be handled on '/foo/bar/*' paths. This may be useful for proxied requests. See https://www.robustperception.io/using-external-urls-and-proxies-with-prometheus + An optional prefix to add to all the paths handled by http server. For example, if '-http.pathPrefix=/foo/bar' is set, then all the http requests will be handled on '/foo/bar/*' paths. This may be useful for proxied requests. See https://www.robustperception.io/using-external-urls-and-proxies-with-prometheus -http.shutdownDelay duration - Optional delay before http server shutdown. During this delay, the server returns non-OK responses from /health page, so load balancers can route new requests to other servers + Optional delay before http server shutdown. During this delay, the server returns non-OK responses from /health page, so load balancers can route new requests to other servers -httpAuth.password string - Password for HTTP Basic Auth. The authentication is disabled if -httpAuth.username is empty + Password for HTTP Basic Auth. 
The authentication is disabled if -httpAuth.username is empty + Password for HTTP Basic Auth. The authentication is disabled if -httpAuth.username is empty -httpAuth.username string - Username for HTTP Basic Auth. The authentication is disabled if empty. See also -httpAuth.password + Username for HTTP Basic Auth. The authentication is disabled if empty. See also -httpAuth.password -httpListenAddr string - Address to listen for http connections (default ":8880") + Address to listen for http connections (default ":8880") -loggerDisableTimestamps - Whether to disable writing timestamps in logs + Whether to disable writing timestamps in logs -loggerErrorsPerSecondLimit int - Per-second limit on the number of ERROR messages. If more than the given number of errors are emitted per second, the remaining errors are suppressed. Zero values disable the rate limit + Per-second limit on the number of ERROR messages. If more than the given number of errors are emitted per second, the remaining errors are suppressed. Zero values disable the rate limit -loggerFormat string - Format for logs. Possible values: default, json (default "default") + Format for logs. Possible values: default, json (default "default") -loggerLevel string - Minimum level of errors to log. Possible values: INFO, WARN, ERROR, FATAL, PANIC (default "INFO") + Minimum level of errors to log. Possible values: INFO, WARN, ERROR, FATAL, PANIC (default "INFO") -loggerOutput string - Output for the logs. Supported values: stderr, stdout (default "stderr") + Output for the logs. Supported values: stderr, stdout (default "stderr") -loggerTimezone string - Timezone to use for timestamps in logs. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 or Local (default "UTC") + Timezone to use for timestamps in logs. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 or Local (default "UTC") -loggerWarnsPerSecondLimit int - Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit + Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit -memory.allowedBytes size - Allowed size of system memory VictoriaMetrics caches may occupy. This option overrides -memory.allowedPercent if set to a non-zero value. Too low a value may increase the cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache resulting in higher disk IO usage - Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0) + Allowed size of system memory VictoriaMetrics caches may occupy. This option overrides -memory.allowedPercent if set to a non-zero value. Too low a value may increase the cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache resulting in higher disk IO usage + Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0) -memory.allowedPercent float - Allowed percent of system memory VictoriaMetrics caches may occupy. See also -memory.allowedBytes. Too low a value may increase cache miss rate usually resulting in higher CPU and disk IO usage.
Too high a value may evict too much data from OS page cache which will result in higher disk IO usage (default 60) + Allowed percent of system memory VictoriaMetrics caches may occupy. See also -memory.allowedBytes. Too low a value may increase cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache which will result in higher disk IO usage (default 60) -metricsAuthKey string - Auth key for /metrics. It overrides httpAuth settings + Auth key for /metrics. It overrides httpAuth settings -notifier.basicAuth.password array - Optional basic auth password for -notifier.url - Supports an array of values separated by comma or specified via multiple flags. + Optional basic auth password for -notifier.url + Supports an array of values separated by comma or specified via multiple flags. -notifier.basicAuth.username array - Optional basic auth username for -notifier.url - Supports an array of values separated by comma or specified via multiple flags. + Optional basic auth username for -notifier.url + Supports an array of values separated by comma or specified via multiple flags. -notifier.tlsCAFile array - Optional path to TLS CA file to use for verifying connections to -notifier.url. By default system CA is used - Supports an array of values separated by comma or specified via multiple flags. + Optional path to TLS CA file to use for verifying connections to -notifier.url. By default system CA is used + Supports an array of values separated by comma or specified via multiple flags. -notifier.tlsCertFile array - Optional path to client-side TLS certificate file to use when connecting to -notifier.url - Supports an array of values separated by comma or specified via multiple flags. + Optional path to client-side TLS certificate file to use when connecting to -notifier.url + Supports an array of values separated by comma or specified via multiple flags. -notifier.tlsInsecureSkipVerify array - Whether to skip tls verification when connecting to -notifier.url - Supports array of values separated by comma or specified via multiple flags. + Whether to skip tls verification when connecting to -notifier.url + Supports array of values separated by comma or specified via multiple flags. -notifier.tlsKeyFile array - Optional path to client-side TLS certificate key to use when connecting to -notifier.url - Supports an array of values separated by comma or specified via multiple flags. + Optional path to client-side TLS certificate key to use when connecting to -notifier.url + Supports an array of values separated by comma or specified via multiple flags. -notifier.tlsServerName array - Optional TLS server name to use for connections to -notifier.url. By default the server name from -notifier.url is used - Supports an array of values separated by comma or specified via multiple flags. + Optional TLS server name to use for connections to -notifier.url. By default the server name from -notifier.url is used + Supports an array of values separated by comma or specified via multiple flags. -notifier.url array - Prometheus alertmanager URL. Required parameter. e.g. http://127.0.0.1:9093 - Supports an array of values separated by comma or specified via multiple flags. + Prometheus alertmanager URL. Required parameter. e.g. http://127.0.0.1:9093 + Supports an array of values separated by comma or specified via multiple flags. -pprofAuthKey string - Auth key for /debug/pprof. It overrides httpAuth settings + Auth key for /debug/pprof. 
It overrides httpAuth settings + Auth key for /debug/pprof. It overrides httpAuth settings -remoteRead.basicAuth.password string - Optional basic auth password for -remoteRead.url + Optional basic auth password for -remoteRead.url -remoteRead.basicAuth.username string - Optional basic auth username for -remoteRead.url + Optional basic auth username for -remoteRead.url -remoteRead.ignoreRestoreErrors - Whether to ignore errors from remote storage when restoring alerts state on startup. (default true) + Whether to ignore errors from remote storage when restoring alerts state on startup. (default true) -remoteRead.lookback duration - Lookback defines how far to look into past for alerts timeseries. For example, if lookback=1h then range from now() to now()-1h will be scanned. (default 1h0m0s) + Lookback defines how far to look into past for alerts timeseries. For example, if lookback=1h then range from now() to now()-1h will be scanned. (default 1h0m0s) -remoteRead.tlsCAFile string - Optional path to TLS CA file to use for verifying connections to -remoteRead.url. By default system CA is used + Optional path to TLS CA file to use for verifying connections to -remoteRead.url. By default system CA is used -remoteRead.tlsCertFile string - Optional path to client-side TLS certificate file to use when connecting to -remoteRead.url + Optional path to client-side TLS certificate file to use when connecting to -remoteRead.url -remoteRead.tlsInsecureSkipVerify - Whether to skip tls verification when connecting to -remoteRead.url + Whether to skip tls verification when connecting to -remoteRead.url -remoteRead.tlsKeyFile string - Optional path to client-side TLS certificate key to use when connecting to -remoteRead.url + Optional path to client-side TLS certificate key to use when connecting to -remoteRead.url -remoteRead.tlsServerName string - Optional TLS server name to use for connections to -remoteRead.url. By default the server name from -remoteRead.url is used + Optional TLS server name to use for connections to -remoteRead.url. By default the server name from -remoteRead.url is used -remoteRead.url vmalert - Optional URL to VictoriaMetrics or vmselect that will be used to restore alerts state. This configuration makes sense only if vmalert was configured with `remoteWrite.url` before and has been successfully persisted its state. E.g.
http://127.0.0.1:8428 + Optional URL to VictoriaMetrics or vmselect that will be used to restore alerts state. This configuration makes sense only if vmalert was configured with `remoteWrite.url` before and has successfully persisted its state. E.g. http://127.0.0.1:8428 -remoteWrite.basicAuth.password string - Optional basic auth password for -remoteWrite.url + Optional basic auth password for -remoteWrite.url -remoteWrite.basicAuth.username string - Optional basic auth username for -remoteWrite.url + Optional basic auth username for -remoteWrite.url -remoteWrite.concurrency int - Defines number of writers for concurrent writing into remote querier (default 1) + Defines number of writers for concurrent writing into remote querier (default 1) -remoteWrite.flushInterval duration - Defines interval of flushes to remote write endpoint (default 5s) + Defines interval of flushes to remote write endpoint (default 5s) -remoteWrite.maxBatchSize int - Defines defines max number of timeseries to be flushed at once (default 1000) + Defines max number of timeseries to be flushed at once (default 1000) -remoteWrite.maxQueueSize int - Defines the max number of pending datapoints to remote write endpoint (default 100000) + Defines the max number of pending datapoints to remote write endpoint (default 100000) -remoteWrite.tlsCAFile string - Optional path to TLS CA file to use for verifying connections to -remoteWrite.url. By default system CA is used + Optional path to TLS CA file to use for verifying connections to -remoteWrite.url. By default system CA is used -remoteWrite.tlsCertFile string - Optional path to client-side TLS certificate file to use when connecting to -remoteWrite.url + Optional path to client-side TLS certificate file to use when connecting to -remoteWrite.url -remoteWrite.tlsInsecureSkipVerify - Whether to skip tls verification when connecting to -remoteWrite.url + Whether to skip tls verification when connecting to -remoteWrite.url -remoteWrite.tlsKeyFile string - Optional path to client-side TLS certificate key to use when connecting to -remoteWrite.url + Optional path to client-side TLS certificate key to use when connecting to -remoteWrite.url -remoteWrite.tlsServerName string - Optional TLS server name to use for connections to -remoteWrite.url. By default the server name from -remoteWrite.url is used + Optional TLS server name to use for connections to -remoteWrite.url. By default the server name from -remoteWrite.url is used -remoteWrite.url string - Optional URL to VictoriaMetrics or vminsert where to persist alerts state and recording rules results in form of timeseries. E.g. http://127.0.0.1:8428 + Optional URL to VictoriaMetrics or vminsert where to persist alerts state and recording rules results in form of timeseries. E.g. http://127.0.0.1:8428 + -replay.maxDatapointsPerQuery int + Max number of data points expected in one request. The higher the value, the fewer requests will be made during replay. (default 1000) + -replay.ruleRetryAttempts int + Defines how many retries to make before giving up on a rule if a request for it returns an error. (default 5) + -replay.rulesDelay duration + Delay between rules evaluation within the group. Could be important if there are chained rules inside of the group and processing needs to wait for previous rule results to be persisted by remote storage before evaluating the next rule. Keep it equal to or bigger than -remoteWrite.flushInterval. (default 1s) + -replay.timeFrom string + The time filter in RFC3339 format to select time series with timestamp equal or higher than provided value. E.g. '2020-01-01T20:07:00Z' + -replay.timeTo string + The time filter in RFC3339 format to select timeseries with timestamp equal or lower than provided value. E.g.
'2020-01-01T20:07:00Z' -rule array - Path to the file with alert rules. - Supports patterns. Flag can be specified multiple times. - Examples: - -rule="/path/to/file". Path to a single file with alerting rules - -rule="dir/*.yaml" -rule="/*.yaml". Relative path to all .yaml files in "dir" folder, - absolute path to all .yaml files in root. - Rule files may contain %{ENV_VAR} placeholders, which are substituted by the corresponding env vars. - Supports an array of values separated by comma or specified via multiple flags. + Path to the file with alert rules. + Supports patterns. Flag can be specified multiple times. + Examples: + -rule="/path/to/file". Path to a single file with alerting rules + -rule="dir/*.yaml" -rule="/*.yaml". Relative path to all .yaml files in "dir" folder, + absolute path to all .yaml files in root. + Rule files may contain %{ENV_VAR} placeholders, which are substituted by the corresponding env vars. + Supports an array of values separated by comma or specified via multiple flags. + -rule.configCheckInterval duration + Interval for checking for changes in '-rule' files. By default the checking is disabled. Send SIGHUP signal in order to force config check for changes -rule.validateExpressions - Whether to validate rules expressions via MetricsQL engine (default true) + Whether to validate rules expressions via MetricsQL engine (default true) -rule.validateTemplates - Whether to validate annotation and label templates (default true) + Whether to validate annotation and label templates (default true) -tls - Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set + Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set -tlsCertFile string - Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower + Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower -tlsKeyFile string - Path to file with TLS key. Used only if -tls is set + Path to file with TLS key. Used only if -tls is set -version - Show VictoriaMetrics version + Show VictoriaMetrics version ``` Pass `-help` to `vmalert` in order to see the full list of supported command-line flags with their descriptions. -To reload configuration without `vmalert` restart send SIGHUP signal -or send GET request to `/-/reload` endpoint. +`vmalert` supports "hot" config reload via the following methods: +* send SIGHUP signal to `vmalert` process; +* send GET request to `/-/reload` endpoint; +* configure `-rule.configCheckInterval` flag for periodic reload +on config change. 
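As a quick illustration of the three reload methods described above, here is a minimal sketch (assuming a single local `vmalert` process listening on the default `-httpListenAddr` of `:8880`; the rules path is a placeholder):
```
# Method 1: send SIGHUP to the vmalert process
kill -HUP "$(pgrep vmalert)"

# Method 2: trigger a reload via the HTTP endpoint (GET request)
curl http://localhost:8880/-/reload

# Method 3: start vmalert with periodic config checks (e.g. every 10s),
# so changes in -rule files are picked up automatically
./bin/vmalert -rule=path/to/your.rules \
    -datasource.url=http://localhost:8428 \
    -rule.configCheckInterval=10s
```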
## Contributing diff --git a/app/vmalert/alerting.go b/app/vmalert/alerting.go index fa9537a56..51b6807cc 100644 --- a/app/vmalert/alerting.go +++ b/app/vmalert/alerting.go @@ -19,15 +19,16 @@ import ( // AlertingRule is basic alert entity type AlertingRule struct { - Type datasource.Type - RuleID uint64 - Name string - Expr string - For time.Duration - Labels map[string]string - Annotations map[string]string - GroupID uint64 - GroupName string + Type datasource.Type + RuleID uint64 + Name string + Expr string + For time.Duration + Labels map[string]string + Annotations map[string]string + GroupID uint64 + GroupName string + EvalInterval time.Duration q datasource.Querier @@ -53,15 +54,16 @@ type alertingRuleMetrics struct { func newAlertingRule(qb datasource.QuerierBuilder, group *Group, cfg config.Rule) *AlertingRule { ar := &AlertingRule{ - Type: cfg.Type, - RuleID: cfg.ID, - Name: cfg.Alert, - Expr: cfg.Expr, - For: cfg.For.Duration(), - Labels: cfg.Labels, - Annotations: cfg.Annotations, - GroupID: group.ID(), - GroupName: group.Name, + Type: cfg.Type, + RuleID: cfg.ID, + Name: cfg.Alert, + Expr: cfg.Expr, + For: cfg.For.Duration(), + Labels: cfg.Labels, + Annotations: cfg.Annotations, + GroupID: group.ID(), + GroupName: group.Name, + EvalInterval: group.Interval, q: qb.BuildWithParams(datasource.QuerierParams{ DataSourceType: &cfg.Type, EvaluationInterval: group.Interval, @@ -126,9 +128,66 @@ func (ar *AlertingRule) ID() uint64 { return ar.RuleID } +// ExecRange executes alerting rule on the given time range similarly to Exec. +// It doesn't update internal states of the Rule and meant to be used just +// to get time series for backfilling. +// It returns ALERT and ALERT_FOR_STATE time series as result. +func (ar *AlertingRule) ExecRange(ctx context.Context, start, end time.Time) ([]prompbmarshal.TimeSeries, error) { + series, err := ar.q.QueryRange(ctx, ar.Expr, start, end) + if err != nil { + return nil, err + } + var result []prompbmarshal.TimeSeries + qFn := func(query string) ([]datasource.Metric, error) { + return nil, fmt.Errorf("`query` template isn't supported in replay mode") + } + for _, s := range series { + // extra labels could contain templates, so we expand them first + labels, err := expandLabels(s, qFn, ar) + if err != nil { + return nil, fmt.Errorf("failed to expand labels: %s", err) + } + for k, v := range labels { + // apply extra labels to datasource + // so the hash key will be consistent on restore + s.SetLabel(k, v) + } + + a, err := ar.newAlert(s, time.Time{}, qFn) // initial alert + if err != nil { + return nil, fmt.Errorf("failed to create alert: %s", err) + } + if ar.For == 0 { // if alert is instant + a.State = notifier.StateFiring + for i := range s.Values { + result = append(result, ar.alertToTimeSeries(a, s.Timestamps[i])...) + } + continue + } + + // if alert with For > 0 + prevT := time.Time{} + //activeAt := time.Time{} + for i := range s.Values { + at := time.Unix(s.Timestamps[i], 0) + if at.Sub(prevT) > ar.EvalInterval { + // reset to Pending if there are gaps > EvalInterval between DPs + a.State = notifier.StatePending + //activeAt = at + a.Start = at + } else if at.Sub(a.Start) >= ar.For { + a.State = notifier.StateFiring + } + prevT = at + result = append(result, ar.alertToTimeSeries(a, s.Timestamps[i])...) + } + } + return result, nil +} + // Exec executes AlertingRule expression via the given Querier. 
// Based on the Querier results AlertingRule maintains notifier.Alerts -func (ar *AlertingRule) Exec(ctx context.Context, series bool) ([]prompbmarshal.TimeSeries, error) { +func (ar *AlertingRule) Exec(ctx context.Context) ([]prompbmarshal.TimeSeries, error) { qMetrics, err := ar.q.Query(ctx, ar.Expr) ar.mu.Lock() defer ar.mu.Unlock() @@ -168,9 +227,9 @@ func (ar *AlertingRule) Exec(ctx context.Context, series bool) ([]prompbmarshal. } updated[h] = struct{}{} if a, ok := ar.alerts[h]; ok { - if a.Value != m.Value { + if a.Value != m.Values[0] { // update Value field with latest value - a.Value = m.Value + a.Value = m.Values[0] // and re-exec template since Value can be used // in annotations a.Annotations, err = a.ExecTemplate(qFn, ar.Annotations) @@ -208,10 +267,7 @@ func (ar *AlertingRule) Exec(ctx context.Context, series bool) ([]prompbmarshal. alertsFired.Inc() } } - if series { - return ar.toTimeSeries(ar.lastExecTime), nil - } - return nil, nil + return ar.toTimeSeries(ar.lastExecTime.Unix()), nil } func expandLabels(m datasource.Metric, q notifier.QueryFn, ar *AlertingRule) (map[string]string, error) { @@ -221,13 +277,13 @@ func expandLabels(m datasource.Metric, q notifier.QueryFn, ar *AlertingRule) (ma } tpl := notifier.AlertTplData{ Labels: metricLabels, - Value: m.Value, + Value: m.Values[0], Expr: ar.Expr, } return notifier.ExecTemplate(q, ar.Labels, tpl) } -func (ar *AlertingRule) toTimeSeries(timestamp time.Time) []prompbmarshal.TimeSeries { +func (ar *AlertingRule) toTimeSeries(timestamp int64) []prompbmarshal.TimeSeries { var tss []prompbmarshal.TimeSeries for _, a := range ar.alerts { if a.State == notifier.StateInactive { @@ -251,6 +307,7 @@ func (ar *AlertingRule) UpdateWith(r Rule) error { ar.For = nr.For ar.Labels = nr.Labels ar.Annotations = nr.Annotations + ar.EvalInterval = nr.EvalInterval ar.q = nr.q return nil } @@ -279,13 +336,15 @@ func (ar *AlertingRule) newAlert(m datasource.Metric, start time.Time, qFn notif GroupID: ar.GroupID, Name: ar.Name, Labels: map[string]string{}, - Value: m.Value, + Value: m.Values[0], Start: start, Expr: ar.Expr, } // label defined here to make override possible by // time series labels. 
- a.Labels[alertGroupNameLabel] = ar.GroupName + if ar.GroupName != "" { + a.Labels[alertGroupNameLabel] = ar.GroupName + } for _, l := range m.Labels { // drop __name__ to be consistent with Prometheus alerting if l.Name == "__name__" { @@ -374,7 +433,7 @@ const ( ) // alertToTimeSeries converts the given alert with the given timestamp to timeseries -func (ar *AlertingRule) alertToTimeSeries(a *notifier.Alert, timestamp time.Time) []prompbmarshal.TimeSeries { +func (ar *AlertingRule) alertToTimeSeries(a *notifier.Alert, timestamp int64) []prompbmarshal.TimeSeries { var tss []prompbmarshal.TimeSeries tss = append(tss, alertToTimeSeries(ar.Name, a, timestamp)) if ar.For > 0 { @@ -383,7 +442,7 @@ func (ar *AlertingRule) alertToTimeSeries(a *notifier.Alert, timestamp time.Time return tss } -func alertToTimeSeries(name string, a *notifier.Alert, timestamp time.Time) prompbmarshal.TimeSeries { +func alertToTimeSeries(name string, a *notifier.Alert, timestamp int64) prompbmarshal.TimeSeries { labels := make(map[string]string) for k, v := range a.Labels { labels[k] = v @@ -391,19 +450,19 @@ func alertToTimeSeries(name string, a *notifier.Alert, timestamp time.Time) prom labels["__name__"] = alertMetricName labels[alertNameLabel] = name labels[alertStateLabel] = a.State.String() - return newTimeSeries(1, labels, timestamp) + return newTimeSeries([]float64{1}, []int64{timestamp}, labels) } // alertForToTimeSeries returns a timeseries that represents // state of active alerts, where value is time when alert become active -func alertForToTimeSeries(name string, a *notifier.Alert, timestamp time.Time) prompbmarshal.TimeSeries { +func alertForToTimeSeries(name string, a *notifier.Alert, timestamp int64) prompbmarshal.TimeSeries { labels := make(map[string]string) for k, v := range a.Labels { labels[k] = v } labels["__name__"] = alertForStateMetricName labels[alertNameLabel] = name - return newTimeSeries(float64(a.Start.Unix()), labels, timestamp) + return newTimeSeries([]float64{float64(a.Start.Unix())}, []int64{timestamp}, labels) } // Restore restores the state of active alerts basing on previously written timeseries. 
@@ -445,7 +504,7 @@ func (ar *AlertingRule) Restore(ctx context.Context, q datasource.Querier, lookb m.Labels = append(m.Labels, l) } - a, err := ar.newAlert(m, time.Unix(int64(m.Value), 0), qFn) + a, err := ar.newAlert(m, time.Unix(int64(m.Values[0]), 0), qFn) if err != nil { return fmt.Errorf("failed to create alert: %w", err) } diff --git a/app/vmalert/alerting_test.go b/app/vmalert/alerting_test.go index ea5bb51dc..0be0c055a 100644 --- a/app/vmalert/alerting_test.go +++ b/app/vmalert/alerting_test.go @@ -24,11 +24,11 @@ func TestAlertingRule_ToTimeSeries(t *testing.T) { newTestAlertingRule("instant", 0), ¬ifier.Alert{State: notifier.StateFiring}, []prompbmarshal.TimeSeries{ - newTimeSeries(1, map[string]string{ + newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{ "__name__": alertMetricName, alertStateLabel: notifier.StateFiring.String(), alertNameLabel: "instant", - }, timestamp), + }), }, }, { @@ -38,13 +38,13 @@ func TestAlertingRule_ToTimeSeries(t *testing.T) { "instance": "bar", }}, []prompbmarshal.TimeSeries{ - newTimeSeries(1, map[string]string{ + newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{ "__name__": alertMetricName, alertStateLabel: notifier.StateFiring.String(), alertNameLabel: "instant extra labels", "job": "foo", "instance": "bar", - }, timestamp), + }), }, }, { @@ -54,48 +54,52 @@ func TestAlertingRule_ToTimeSeries(t *testing.T) { "__name__": "bar", }}, []prompbmarshal.TimeSeries{ - newTimeSeries(1, map[string]string{ + newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{ "__name__": alertMetricName, alertStateLabel: notifier.StateFiring.String(), alertNameLabel: "instant labels override", - }, timestamp), + }), }, }, { newTestAlertingRule("for", time.Second), ¬ifier.Alert{State: notifier.StateFiring, Start: timestamp.Add(time.Second)}, []prompbmarshal.TimeSeries{ - newTimeSeries(1, map[string]string{ + newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{ "__name__": alertMetricName, alertStateLabel: notifier.StateFiring.String(), alertNameLabel: "for", - }, timestamp), - newTimeSeries(float64(timestamp.Add(time.Second).Unix()), map[string]string{ - "__name__": alertForStateMetricName, - alertNameLabel: "for", - }, timestamp), + }), + newTimeSeries([]float64{float64(timestamp.Add(time.Second).Unix())}, + []int64{timestamp.UnixNano()}, + map[string]string{ + "__name__": alertForStateMetricName, + alertNameLabel: "for", + }), }, }, { newTestAlertingRule("for pending", 10*time.Second), ¬ifier.Alert{State: notifier.StatePending, Start: timestamp.Add(time.Second)}, []prompbmarshal.TimeSeries{ - newTimeSeries(1, map[string]string{ + newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{ "__name__": alertMetricName, alertStateLabel: notifier.StatePending.String(), alertNameLabel: "for pending", - }, timestamp), - newTimeSeries(float64(timestamp.Add(time.Second).Unix()), map[string]string{ - "__name__": alertForStateMetricName, - alertNameLabel: "for pending", - }, timestamp), + }), + newTimeSeries([]float64{float64(timestamp.Add(time.Second).Unix())}, + []int64{timestamp.UnixNano()}, + map[string]string{ + "__name__": alertForStateMetricName, + alertNameLabel: "for pending", + }), }, }, } for _, tc := range testCases { t.Run(tc.rule.Name, func(t *testing.T) { tc.rule.alerts[tc.alert.ID] = tc.alert - tss := tc.rule.toTimeSeries(timestamp) + tss := tc.rule.toTimeSeries(timestamp.Unix()) if err := compareTimeSeries(t, tc.expTS, tss); err != nil { 
t.Fatalf("timeseries missmatch: %s", err) } @@ -118,7 +122,7 @@ func TestAlertingRule_Exec(t *testing.T) { { newTestAlertingRule("empty labels", 0), [][]datasource.Metric{ - {datasource.Metric{}}, + {datasource.Metric{Values: []float64{1}, Timestamps: []int64{1}}}, }, map[uint64]*notifier.Alert{ hash(datasource.Metric{}): {State: notifier.StateFiring}, @@ -299,7 +303,7 @@ func TestAlertingRule_Exec(t *testing.T) { for _, step := range tc.steps { fq.reset() fq.add(step...) - if _, err := tc.rule.Exec(context.TODO(), false); err != nil { + if _, err := tc.rule.Exec(context.TODO()); err != nil { t.Fatalf("unexpected err: %s", err) } // artificial delay between applying steps @@ -321,6 +325,166 @@ func TestAlertingRule_Exec(t *testing.T) { } } +func TestAlertingRule_ExecRange(t *testing.T) { + testCases := []struct { + rule *AlertingRule + data []datasource.Metric + expAlerts []*notifier.Alert + }{ + { + newTestAlertingRule("empty", 0), + []datasource.Metric{}, + nil, + }, + { + newTestAlertingRule("empty labels", 0), + []datasource.Metric{ + {Values: []float64{1}, Timestamps: []int64{1}}, + }, + []*notifier.Alert{ + {State: notifier.StateFiring}, + }, + }, + { + newTestAlertingRule("single-firing", 0), + []datasource.Metric{ + metricWithLabels(t, "name", "foo"), + }, + []*notifier.Alert{ + { + Labels: map[string]string{"name": "foo"}, + State: notifier.StateFiring, + }, + }, + }, + { + newTestAlertingRule("single-firing-on-range", 0), + []datasource.Metric{ + {Values: []float64{1, 1, 1}, Timestamps: []int64{1e3, 2e3, 3e3}}, + }, + []*notifier.Alert{ + {State: notifier.StateFiring}, + {State: notifier.StateFiring}, + {State: notifier.StateFiring}, + }, + }, + { + newTestAlertingRule("for-pending", time.Second), + []datasource.Metric{ + {Values: []float64{1, 1, 1}, Timestamps: []int64{1, 3, 5}}, + }, + []*notifier.Alert{ + {State: notifier.StatePending, Start: time.Unix(1, 0)}, + {State: notifier.StatePending, Start: time.Unix(3, 0)}, + {State: notifier.StatePending, Start: time.Unix(5, 0)}, + }, + }, + { + newTestAlertingRule("for-firing", 3*time.Second), + []datasource.Metric{ + {Values: []float64{1, 1, 1}, Timestamps: []int64{1, 3, 5}}, + }, + []*notifier.Alert{ + {State: notifier.StatePending, Start: time.Unix(1, 0)}, + {State: notifier.StatePending, Start: time.Unix(1, 0)}, + {State: notifier.StateFiring, Start: time.Unix(1, 0)}, + }, + }, + { + newTestAlertingRule("for=>pending=>firing=>pending=>firing=>pending", time.Second), + []datasource.Metric{ + {Values: []float64{1, 1, 1, 1, 1}, Timestamps: []int64{1, 2, 5, 6, 20}}, + }, + []*notifier.Alert{ + {State: notifier.StatePending, Start: time.Unix(1, 0)}, + {State: notifier.StateFiring, Start: time.Unix(1, 0)}, + {State: notifier.StatePending, Start: time.Unix(5, 0)}, + {State: notifier.StateFiring, Start: time.Unix(5, 0)}, + {State: notifier.StatePending, Start: time.Unix(20, 0)}, + }, + }, + { + newTestAlertingRule("multi-series-for=>pending=>pending=>firing", 3*time.Second), + []datasource.Metric{ + {Values: []float64{1, 1, 1}, Timestamps: []int64{1, 3, 5}}, + {Values: []float64{1, 1}, Timestamps: []int64{1, 5}, + Labels: []datasource.Label{{Name: "foo", Value: "bar"}}, + }, + }, + []*notifier.Alert{ + {State: notifier.StatePending, Start: time.Unix(1, 0)}, + {State: notifier.StatePending, Start: time.Unix(1, 0)}, + {State: notifier.StateFiring, Start: time.Unix(1, 0)}, + // + {State: notifier.StatePending, Start: time.Unix(1, 0), + Labels: map[string]string{ + "foo": "bar", + }}, + {State: notifier.StatePending, Start: time.Unix(5, 
0), + Labels: map[string]string{ + "foo": "bar", + }}, + }, + }, + { + newTestRuleWithLabels("multi-series-firing", "source", "vm"), + []datasource.Metric{ + {Values: []float64{1, 1}, Timestamps: []int64{1, 100}}, + {Values: []float64{1, 1}, Timestamps: []int64{1, 5}, + Labels: []datasource.Label{{Name: "foo", Value: "bar"}}, + }, + }, + []*notifier.Alert{ + {State: notifier.StateFiring, Labels: map[string]string{ + "source": "vm", + }}, + {State: notifier.StateFiring, Labels: map[string]string{ + "source": "vm", + }}, + // + {State: notifier.StateFiring, Labels: map[string]string{ + "foo": "bar", + "source": "vm", + }}, + {State: notifier.StateFiring, Labels: map[string]string{ + "foo": "bar", + "source": "vm", + }}, + }, + }, + } + fakeGroup := Group{Name: "TestRule_ExecRange"} + for _, tc := range testCases { + t.Run(tc.rule.Name, func(t *testing.T) { + fq := &fakeQuerier{} + tc.rule.q = fq + tc.rule.GroupID = fakeGroup.ID() + fq.add(tc.data...) + gotTS, err := tc.rule.ExecRange(context.TODO(), time.Now(), time.Now()) + if err != nil { + t.Fatalf("unexpected err: %s", err) + } + var expTS []prompbmarshal.TimeSeries + var j int + for _, series := range tc.data { + for _, timestamp := range series.Timestamps { + expTS = append(expTS, tc.rule.alertToTimeSeries(tc.expAlerts[j], timestamp)...) + j++ + } + } + if len(gotTS) != len(expTS) { + t.Fatalf("expected %d time series; got %d", len(expTS), len(gotTS)) + } + for i := range expTS { + got, exp := gotTS[i], expTS[i] + if !reflect.DeepEqual(got, exp) { + t.Fatalf("%d: expected \n%v but got \n%v", i, exp, got) + } + } + }) + } +} + func TestAlertingRule_Restore(t *testing.T) { testCases := []struct { rule *AlertingRule @@ -443,14 +607,14 @@ func TestAlertingRule_Exec_Negative(t *testing.T) { // successful attempt fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "bar")) - _, err := ar.Exec(context.TODO(), false) + _, err := ar.Exec(context.TODO()) if err != nil { t.Fatal(err) } // label `job` will collide with rule extra label and will make both time series equal fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "baz")) - _, err = ar.Exec(context.TODO(), false) + _, err = ar.Exec(context.TODO()) if !errors.Is(err, errDuplicate) { t.Fatalf("expected to have %s error; got %s", errDuplicate, err) } @@ -459,7 +623,7 @@ func TestAlertingRule_Exec_Negative(t *testing.T) { expErr := "connection reset by peer" fq.setErr(errors.New(expErr)) - _, err = ar.Exec(context.TODO(), false) + _, err = ar.Exec(context.TODO()) if err == nil { t.Fatalf("expected to get err; got nil") } @@ -484,17 +648,15 @@ func TestAlertingRule_Template(t *testing.T) { hash(metricWithLabels(t, "region", "east", "instance", "foo")): { Annotations: map[string]string{}, Labels: map[string]string{ - alertGroupNameLabel: "", - "region": "east", - "instance": "foo", + "region": "east", + "instance": "foo", }, }, hash(metricWithLabels(t, "region", "east", "instance", "bar")): { Annotations: map[string]string{}, Labels: map[string]string{ - alertGroupNameLabel: "", - "region": "east", - "instance": "bar", + "region": "east", + "instance": "bar", }, }, }, @@ -519,9 +681,8 @@ func TestAlertingRule_Template(t *testing.T) { map[uint64]*notifier.Alert{ hash(metricWithLabels(t, "region", "east", "instance", "foo")): { Labels: map[string]string{ - alertGroupNameLabel: "", - "instance": "foo", - "region": "east", + "instance": "foo", + "region": "east", }, Annotations: map[string]string{ "summary": `Too high connection number for "foo" for region east`, @@ -530,9 
+691,8 @@ func TestAlertingRule_Template(t *testing.T) { }, hash(metricWithLabels(t, "region", "east", "instance", "bar")): { Labels: map[string]string{ - alertGroupNameLabel: "", - "instance": "bar", - "region": "east", + "instance": "bar", + "region": "east", }, Annotations: map[string]string{ "summary": `Too high connection number for "bar" for region east`, @@ -549,7 +709,7 @@ func TestAlertingRule_Template(t *testing.T) { tc.rule.GroupID = fakeGroup.ID() tc.rule.q = fq fq.add(tc.metrics...) - if _, err := tc.rule.Exec(context.TODO(), false); err != nil { + if _, err := tc.rule.Exec(context.TODO()); err != nil { t.Fatalf("unexpected err: %s", err) } for hash, expAlert := range tc.expAlerts { @@ -579,5 +739,5 @@ func newTestRuleWithLabels(name string, labels ...string) *AlertingRule { } func newTestAlertingRule(name string, waitFor time.Duration) *AlertingRule { - return &AlertingRule{Name: name, alerts: make(map[uint64]*notifier.Alert), For: waitFor} + return &AlertingRule{Name: name, alerts: make(map[uint64]*notifier.Alert), For: waitFor, EvalInterval: waitFor} } diff --git a/app/vmalert/config/testdata/rules-replay-good.rules b/app/vmalert/config/testdata/rules-replay-good.rules new file mode 100644 index 000000000..134a238bc --- /dev/null +++ b/app/vmalert/config/testdata/rules-replay-good.rules @@ -0,0 +1,39 @@ +groups: + - name: ReplayGroup + interval: 1m + concurrency: 1 + rules: + - record: type:vm_cache_entries:rate5m + expr: sum(rate(vm_cache_entries[5m])) by (type) + labels: + recording: true + - record: go_cgo_calls_count:rate5m + expr: rate(go_cgo_calls_count{job="vmdb"}[5m]) + labels: + recording: true + + - name: vmsingleReplay + interval: 30s + concurrency: 2 + rules: + - alert: RequestErrorsToAPI + expr: increase(vm_http_request_errors_total[5m]) > 0 + for: 15m + labels: + severity: warning + annotations: + dashboard: "http://localhost:3000/d/wNf0q_kZk?viewPanel=35&var-instance={{ $labels.instance }}" + summary: "Too many errors served for path {{ $labels.path }} (instance {{ $labels.instance }})" + description: "Requests to path {{ $labels.path }} are receiving errors. + Please verify if clients are sending correct requests." + + - alert: TooManyLogs + expr: sum(increase(vm_log_messages_total{level!="info"}[5m])) by (job, instance) > 0 + for: 15m + labels: + severity: warning + annotations: + dashboard: "http://localhost:3000/d/wNf0q_kZk?viewPanel=67&var-instance={{ $labels.instance }}" + summary: "Too many logs printed for job \"{{ $labels.job }}\" ({{ $labels.instance }})" + description: "Logging rate for job \"{{ $labels.job }}\" ({{ $labels.instance }}) is {{ $value }} for the last 15m.\n + Worth checking logs for specific error messages." \ No newline at end of file diff --git a/app/vmalert/datasource/datasource.go b/app/vmalert/datasource/datasource.go index fbdd36abc..8dcff5e5d 100644 --- a/app/vmalert/datasource/datasource.go +++ b/app/vmalert/datasource/datasource.go @@ -2,26 +2,33 @@ package datasource import ( "context" + "time" ) +// Querier interface wraps Query and QueryRange methods +type Querier interface { + Query(ctx context.Context, query string) ([]Metric, error) + QueryRange(ctx context.Context, query string, from, to time.Time) ([]Metric, error) +} + // QuerierBuilder builds Querier with given params.
type QuerierBuilder interface { BuildWithParams(params QuerierParams) Querier } -// Querier interface wraps Query method which -// executes given query and returns list of Metrics -// as result -type Querier interface { - Query(ctx context.Context, query string) ([]Metric, error) +// QuerierParams params for Querier. +type QuerierParams struct { + DataSourceType *Type + EvaluationInterval time.Duration + // see https://docs.victoriametrics.com/#prometheus-querying-api-enhancements + ExtraLabels map[string]string } // Metric is the basic entity which should be returned by datasource -// It represents single data point with full list of labels type Metric struct { - Labels []Label - Timestamp int64 - Value float64 + Labels []Label + Timestamps []int64 + Values []float64 } // SetLabel adds or updates an existing label diff --git a/app/vmalert/datasource/datasource_test.go b/app/vmalert/datasource/datasource_test.go new file mode 100644 index 000000000..304b5ebf8 --- /dev/null +++ b/app/vmalert/datasource/datasource_test.go @@ -0,0 +1,18 @@ +package datasource + +import "testing" + +func TestMetric_Label(t *testing.T) { + m := &Metric{} + + m.AddLabel("foo", "bar") + checkEqualString(t, "bar", m.Label("foo")) + + m.SetLabel("foo", "baz") + checkEqualString(t, "baz", m.Label("foo")) + + m.SetLabel("qux", "quux") + checkEqualString(t, "quux", m.Label("qux")) + + checkEqualString(t, "", m.Label("non-existing")) +} diff --git a/app/vmalert/datasource/vm.go b/app/vmalert/datasource/vm.go index 7ffb020a6..cda418852 100644 --- a/app/vmalert/datasource/vm.go +++ b/app/vmalert/datasource/vm.go @@ -2,76 +2,13 @@ package datasource import ( "context" - "encoding/json" "fmt" "io/ioutil" "net/http" - "strconv" "strings" "time" ) -type response struct { - Status string `json:"status"` - Data struct { - ResultType string `json:"resultType"` - Result []struct { - Labels map[string]string `json:"metric"` - TV [2]interface{} `json:"value"` - } `json:"result"` - } `json:"data"` - ErrorType string `json:"errorType"` - Error string `json:"error"` -} - -func (r response) metrics() ([]Metric, error) { - var ms []Metric - var m Metric - var f float64 - var err error - for i, res := range r.Data.Result { - f, err = strconv.ParseFloat(res.TV[1].(string), 64) - if err != nil { - return nil, fmt.Errorf("metric %v, unable to parse float64 from %s: %w", res, res.TV[1], err) - } - m.Labels = nil - for k, v := range r.Data.Result[i].Labels { - m.AddLabel(k, v) - } - m.Timestamp = int64(res.TV[0].(float64)) - m.Value = f - ms = append(ms, m) - } - return ms, nil -} - -type graphiteResponse []graphiteResponseTarget - -type graphiteResponseTarget struct { - Target string `json:"target"` - Tags map[string]string `json:"tags"` - DataPoints [][2]float64 `json:"datapoints"` -} - -func (r graphiteResponse) metrics() []Metric { - var ms []Metric - for _, res := range r { - if len(res.DataPoints) < 1 { - continue - } - var m Metric - // add only last value to the result. - last := res.DataPoints[len(res.DataPoints)-1] - m.Value = last[0] - m.Timestamp = int64(last[1]) - for k, v := range res.Tags { - m.AddLabel(k, v) - } - ms = append(ms, m) - } - return ms -} - // VMStorage represents vmstorage entity with ability to read and write metrics type VMStorage struct { c *http.Client @@ -88,20 +25,6 @@ type VMStorage struct { extraLabels []string } -const queryPath = "/api/v1/query" -const graphitePath = "/render" - -const prometheusPrefix = "/prometheus" -const graphitePrefix = "/graphite" - -// QuerierParams params for Querier.
-type QuerierParams struct { - DataSourceType *Type - EvaluationInterval time.Duration - // see https://docs.victoriametrics.com/#prometheus-querying-api-enhancements - ExtraLabels map[string]string -} - // Clone makes clone of VMStorage, shares http client. func (s *VMStorage) Clone() *VMStorage { return &VMStorage{ @@ -149,11 +72,21 @@ func NewVMStorage(baseURL, basicAuthUser, basicAuthPass string, lookBack time.Du // Query executes the given query and returns parsed response func (s *VMStorage) Query(ctx context.Context, query string) ([]Metric, error) { - req, err := s.prepareReq(query, time.Now()) + req, err := s.newRequestPOST() if err != nil { return nil, err } + ts := time.Now() + switch s.dataSourceType.name { + case "", prometheusType: + s.setPrometheusInstantReqParams(req, query, ts) + case graphiteType: + s.setGraphiteReqParams(req, query, ts) + default: + return nil, fmt.Errorf("engine not found: %q", s.dataSourceType.name) + } + resp, err := s.do(ctx, req) if err != nil { return nil, err @@ -169,25 +102,32 @@ func (s *VMStorage) Query(ctx context.Context, query string) ([]Metric, error) { return parseFn(req, resp) } -func (s *VMStorage) prepareReq(query string, timestamp time.Time) (*http.Request, error) { - req, err := http.NewRequest("POST", s.datasourceURL, nil) +// QueryRange executes the given query on the given time range. +// For Prometheus type see https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries +// Graphite type isn't supported. +func (s *VMStorage) QueryRange(ctx context.Context, query string, start, end time.Time) ([]Metric, error) { + if s.dataSourceType.name != prometheusType { + return nil, fmt.Errorf("%q is not supported for QueryRange", s.dataSourceType.name) + } + req, err := s.newRequestPOST() if err != nil { return nil, err } - req.Header.Set("Content-Type", "application/json; charset=utf-8") - if s.basicAuthPass != "" { - req.SetBasicAuth(s.basicAuthUser, s.basicAuthPass) + if start.IsZero() { + return nil, fmt.Errorf("start param is missing") } - - switch s.dataSourceType.name { - case "", prometheusType: - s.setPrometheusReqParams(req, query, timestamp) - case graphiteType: - s.setGraphiteReqParams(req, query, timestamp) - default: - return nil, fmt.Errorf("engine not found: %q", s.dataSourceType.name) + if end.IsZero() { + return nil, fmt.Errorf("end param is missing") } - return req, nil + s.setPrometheusRangeReqParams(req, query, start, end) + resp, err := s.do(ctx, req) + if err != nil { + return nil, err + } + defer func() { + _ = resp.Body.Close() + }() + return parsePrometheusResponse(req, resp) } func (s *VMStorage) do(ctx context.Context, req *http.Request) (*http.Response, error) { @@ -203,80 +143,14 @@ func (s *VMStorage) do(ctx context.Context, req *http.Request) (*http.Response, return resp, nil } -func (s *VMStorage) setPrometheusReqParams(r *http.Request, query string, timestamp time.Time) { - if s.appendTypePrefix { - r.URL.Path += prometheusPrefix +func (s *VMStorage) newRequestPOST() (*http.Request, error) { + req, err := http.NewRequest("POST", s.datasourceURL, nil) + if err != nil { + return nil, err } - r.URL.Path += queryPath - q := r.URL.Query() - q.Set("query", query) - if s.lookBack > 0 { - timestamp = timestamp.Add(-s.lookBack) + req.Header.Set("Content-Type", "application/json; charset=utf-8") + if s.basicAuthPass != "" { + req.SetBasicAuth(s.basicAuthUser, s.basicAuthPass) } - if s.evaluationInterval > 0 { - // see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1232 - timestamp = 
timestamp.Truncate(s.evaluationInterval) - // set step as evaluationInterval by default - q.Set("step", s.evaluationInterval.String()) - } - q.Set("time", fmt.Sprintf("%d", timestamp.Unix())) - - if s.queryStep > 0 { - // override step with user-specified value - q.Set("step", s.queryStep.String()) - } - if s.roundDigits != "" { - q.Set("round_digits", s.roundDigits) - } - for _, l := range s.extraLabels { - q.Add("extra_label", l) - } - r.URL.RawQuery = q.Encode() -} - -func (s *VMStorage) setGraphiteReqParams(r *http.Request, query string, timestamp time.Time) { - if s.appendTypePrefix { - r.URL.Path += graphitePrefix - } - r.URL.Path += graphitePath - q := r.URL.Query() - q.Set("format", "json") - q.Set("target", query) - from := "-5min" - if s.lookBack > 0 { - lookBack := timestamp.Add(-s.lookBack) - from = strconv.FormatInt(lookBack.Unix(), 10) - } - q.Set("from", from) - q.Set("until", "now") - r.URL.RawQuery = q.Encode() -} - -const ( - statusSuccess, statusError, rtVector = "success", "error", "vector" -) - -func parsePrometheusResponse(req *http.Request, resp *http.Response) ([]Metric, error) { - r := &response{} - if err := json.NewDecoder(resp.Body).Decode(r); err != nil { - return nil, fmt.Errorf("error parsing prometheus metrics for %s: %w", req.URL, err) - } - if r.Status == statusError { - return nil, fmt.Errorf("response error, query: %s, errorType: %s, error: %s", req.URL, r.ErrorType, r.Error) - } - if r.Status != statusSuccess { - return nil, fmt.Errorf("unknown status: %s, Expected success or error ", r.Status) - } - if r.Data.ResultType != rtVector { - return nil, fmt.Errorf("unknown result type:%s. Expected vector", r.Data.ResultType) - } - return r.metrics() -} - -func parseGraphiteResponse(req *http.Request, resp *http.Response) ([]Metric, error) { - r := &graphiteResponse{} - if err := json.NewDecoder(resp.Body).Decode(r); err != nil { - return nil, fmt.Errorf("error parsing graphite metrics for %s: %w", req.URL, err) - } - return r.metrics(), nil + return req, nil } diff --git a/app/vmalert/datasource/vm_graphite_api.go b/app/vmalert/datasource/vm_graphite_api.go new file mode 100644 index 000000000..3c6a2ab34 --- /dev/null +++ b/app/vmalert/datasource/vm_graphite_api.go @@ -0,0 +1,67 @@ +package datasource + +import ( + "encoding/json" + "fmt" + "net/http" + "strconv" + "time" +) + +type graphiteResponse []graphiteResponseTarget + +type graphiteResponseTarget struct { + Target string `json:"target"` + Tags map[string]string `json:"tags"` + DataPoints [][2]float64 `json:"datapoints"` +} + +func (r graphiteResponse) metrics() []Metric { + var ms []Metric + for _, res := range r { + if len(res.DataPoints) < 1 { + continue + } + var m Metric + // add only last value to the result. 
+ last := res.DataPoints[len(res.DataPoints)-1] + m.Values = append(m.Values, last[0]) + m.Timestamps = append(m.Timestamps, int64(last[1])) + for k, v := range res.Tags { + m.AddLabel(k, v) + } + ms = append(ms, m) + } + return ms +} + +func parseGraphiteResponse(req *http.Request, resp *http.Response) ([]Metric, error) { + r := &graphiteResponse{} + if err := json.NewDecoder(resp.Body).Decode(r); err != nil { + return nil, fmt.Errorf("error parsing graphite metrics for %s: %w", req.URL, err) + } + return r.metrics(), nil +} + +const ( + graphitePath = "/render" + graphitePrefix = "/graphite" +) + +func (s *VMStorage) setGraphiteReqParams(r *http.Request, query string, timestamp time.Time) { + if s.appendTypePrefix { + r.URL.Path += graphitePrefix + } + r.URL.Path += graphitePath + q := r.URL.Query() + q.Set("format", "json") + q.Set("target", query) + from := "-5min" + if s.lookBack > 0 { + lookBack := timestamp.Add(-s.lookBack) + from = strconv.FormatInt(lookBack.Unix(), 10) + } + q.Set("from", from) + q.Set("until", "now") + r.URL.RawQuery = q.Encode() +} diff --git a/app/vmalert/datasource/vm_prom_api.go b/app/vmalert/datasource/vm_prom_api.go new file mode 100644 index 000000000..6373b4ce4 --- /dev/null +++ b/app/vmalert/datasource/vm_prom_api.go @@ -0,0 +1,170 @@ +package datasource + +import ( + "encoding/json" + "fmt" + "net/http" + "strconv" + "time" +) + +type promResponse struct { + Status string `json:"status"` + ErrorType string `json:"errorType"` + Error string `json:"error"` + Data struct { + ResultType string `json:"resultType"` + Result json.RawMessage `json:"result"` + } `json:"data"` +} + +type promInstant struct { + Result []struct { + Labels map[string]string `json:"metric"` + TV [2]interface{} `json:"value"` + } `json:"result"` +} + +type promRange struct { + Result []struct { + Labels map[string]string `json:"metric"` + TVs [][2]interface{} `json:"values"` + } `json:"result"` +} + +func (r promInstant) metrics() ([]Metric, error) { + var result []Metric + for i, res := range r.Result { + f, err := strconv.ParseFloat(res.TV[1].(string), 64) + if err != nil { + return nil, fmt.Errorf("metric %v, unable to parse float64 from %s: %w", res, res.TV[1], err) + } + // use a fresh Metric per result: reusing one value and truncating its + // slices would make all returned metrics share the same backing arrays + var m Metric + for k, v := range r.Result[i].Labels { + m.AddLabel(k, v) + } + m.Timestamps = append(m.Timestamps, int64(res.TV[0].(float64))) + m.Values = append(m.Values, f) + result = append(result, m) + } + return result, nil +} + +func (r promRange) metrics() ([]Metric, error) { + var result []Metric + for i, res := range r.Result { + var m Metric + for _, tv := range res.TVs { + f, err := strconv.ParseFloat(tv[1].(string), 64) + if err != nil { + return nil, fmt.Errorf("metric %v, unable to parse float64 from %s: %w", res, tv[1], err) + } + m.Values = append(m.Values, f) + m.Timestamps = append(m.Timestamps, int64(tv[0].(float64))) + } + if len(m.Values) < 1 || len(m.Timestamps) < 1 { + return nil, fmt.Errorf("metric %v contains no values", res) + } + m.Labels = nil + for k, v := range r.Result[i].Labels { + m.AddLabel(k, v) + } + result = append(result, m) + } + return result, nil +} + +const ( + statusSuccess, statusError = "success", "error" + rtVector, rtMatrix = "vector", "matrix" +) + +func parsePrometheusResponse(req *http.Request, resp *http.Response) ([]Metric, error) { + r := &promResponse{} + if err := json.NewDecoder(resp.Body).Decode(r); err != nil { + return nil, fmt.Errorf("error parsing
prometheus metrics for %s: %w", req.URL, err) + } + if r.Status == statusError { + return nil, fmt.Errorf("response error, query: %s, errorType: %s, error: %s", req.URL, r.ErrorType, r.Error) + } + if r.Status != statusSuccess { + return nil, fmt.Errorf("unknown status: %s, expected success or error", r.Status) + } + switch r.Data.ResultType { + case rtVector: + var pi promInstant + if err := json.Unmarshal(r.Data.Result, &pi.Result); err != nil { + return nil, fmt.Errorf("unmarshal err %s; \n %#v", err, string(r.Data.Result)) + } + return pi.metrics() + case rtMatrix: + var pr promRange + if err := json.Unmarshal(r.Data.Result, &pr.Result); err != nil { + return nil, err + } + return pr.metrics() + default: + return nil, fmt.Errorf("unknown result type %q", r.Data.ResultType) + } +} + +const ( + prometheusInstantPath = "/api/v1/query" + prometheusRangePath = "/api/v1/query_range" + prometheusPrefix = "/prometheus" +) + +func (s *VMStorage) setPrometheusInstantReqParams(r *http.Request, query string, timestamp time.Time) { + if s.appendTypePrefix { + r.URL.Path += prometheusPrefix + } + r.URL.Path += prometheusInstantPath + q := r.URL.Query() + if s.lookBack > 0 { + timestamp = timestamp.Add(-s.lookBack) + } + if s.evaluationInterval > 0 { + // see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1232 + timestamp = timestamp.Truncate(s.evaluationInterval) + } + q.Set("time", fmt.Sprintf("%d", timestamp.Unix())) + r.URL.RawQuery = q.Encode() + s.setPrometheusReqParams(r, query) +} + +func (s *VMStorage) setPrometheusRangeReqParams(r *http.Request, query string, start, end time.Time) { + if s.appendTypePrefix { + r.URL.Path += prometheusPrefix + } + r.URL.Path += prometheusRangePath + q := r.URL.Query() + q.Add("start", fmt.Sprintf("%d", start.Unix())) + q.Add("end", fmt.Sprintf("%d", end.Unix())) + r.URL.RawQuery = q.Encode() + s.setPrometheusReqParams(r, query) +} + +func (s *VMStorage) setPrometheusReqParams(r *http.Request, query string) { + q := r.URL.Query() + q.Set("query", query) + if s.evaluationInterval > 0 { + // set step as evaluationInterval by default + q.Set("step", s.evaluationInterval.String()) + } + if s.queryStep > 0 { + // override step with user-specified value + q.Set("step", s.queryStep.String()) + } + if s.roundDigits != "" { + q.Set("round_digits", s.roundDigits) + } + for _, l := range s.extraLabels { + q.Add("extra_label", l) + } + r.URL.RawQuery = q.Encode() +} diff --git a/app/vmalert/datasource/vm_test.go b/app/vmalert/datasource/vm_test.go index c7e684aea..622a1e34a 100644 --- a/app/vmalert/datasource/vm_test.go +++ b/app/vmalert/datasource/vm_test.go @@ -7,6 +7,7 @@ import ( "net/http/httptest" "reflect" "strconv" + "strings" "testing" "time" ) @@ -19,7 +20,7 @@ var ( queryRender = "constantLine(10)" ) -func TestVMSelectQuery(t *testing.T) { +func TestVMInstantQuery(t *testing.T) { mux := http.NewServeMux() mux.HandleFunc("/", func(_ http.ResponseWriter, _ *http.Request) { t.Errorf("should not be called") @@ -103,9 +104,9 @@ func TestVMSelectQuery(t *testing.T) { t.Fatalf("expected 1 metric got %d in %+v", len(m), m) } expected := Metric{ - Labels: []Label{{Value: "vm_rows", Name: "__name__"}}, - Timestamp: 1583786142, - Value: 13763, + Labels: []Label{{Value: "vm_rows", Name: "__name__"}}, + Timestamps: []int64{1583786142}, + Values: []float64{13763}, } if !reflect.DeepEqual(m[0], expected) { t.Fatalf("unexpected metric %+v want %+v", m[0], expected) @@ -122,44 +123,145 @@ func TestVMSelectQuery(t *testing.T) { t.Fatalf("expected 1 metric got %d in
%+v", len(m), m) } expected = Metric{ - Labels: []Label{{Value: "constantLine(10)", Name: "name"}}, - Timestamp: 1611758403, - Value: 10, + Labels: []Label{{Value: "constantLine(10)", Name: "name"}}, + Timestamps: []int64{1611758403}, + Values: []float64{10}, } if !reflect.DeepEqual(m[0], expected) { t.Fatalf("unexpected metric %+v want %+v", m[0], expected) } } -func TestPrepareReq(t *testing.T) { +func TestVMRangeQuery(t *testing.T) { + mux := http.NewServeMux() + mux.HandleFunc("/", func(_ http.ResponseWriter, _ *http.Request) { + t.Errorf("should not be called") + }) + c := -1 + mux.HandleFunc("/api/v1/query_range", func(w http.ResponseWriter, r *http.Request) { + c++ + if r.Method != http.MethodPost { + t.Errorf("expected POST method got %s", r.Method) + } + if name, pass, _ := r.BasicAuth(); name != basicAuthName || pass != basicAuthPass { + t.Errorf("expected %s:%s as basic auth got %s:%s", basicAuthName, basicAuthPass, name, pass) + } + if r.URL.Query().Get("query") != query { + t.Errorf("expected %s in query param, got %s", query, r.URL.Query().Get("query")) + } + startTS := r.URL.Query().Get("start") + if startTS == "" { + t.Errorf("expected 'start' in query param, got nil instead") + } + if _, err := strconv.ParseInt(startTS, 10, 64); err != nil { + t.Errorf("failed to parse 'start' query param: %s", err) + } + endTS := r.URL.Query().Get("end") + if endTS == "" { + t.Errorf("expected 'end' in query param, got nil instead") + } + if _, err := strconv.ParseInt(endTS, 10, 64); err != nil { + t.Errorf("failed to parse 'end' query param: %s", err) + } + switch c { + case 0: + w.Write([]byte(`{"status":"success","data":{"resultType":"matrix","result":[{"metric":{"__name__":"vm_rows"},"values":[[1583786142,"13763"]]}]}}`)) + } + }) + + srv := httptest.NewServer(mux) + defer srv.Close() + + s := NewVMStorage(srv.URL, basicAuthName, basicAuthPass, time.Minute, 0, false, srv.Client()) + + p := NewPrometheusType() + pq := s.BuildWithParams(QuerierParams{DataSourceType: &p, EvaluationInterval: 15 * time.Second}) + + _, err := pq.QueryRange(ctx, query, time.Now(), time.Time{}) + expectError(t, err, "is missing") + + _, err = pq.QueryRange(ctx, query, time.Time{}, time.Now()) + expectError(t, err, "is missing") + + start, end := time.Now().Add(-time.Minute), time.Now() + + m, err := pq.QueryRange(ctx, query, start, end) + if err != nil { + t.Fatalf("unexpected %s", err) + } + if len(m) != 1 { + t.Fatalf("expected 1 metric got %d in %+v", len(m), m) + } + expected := Metric{ + Labels: []Label{{Value: "vm_rows", Name: "__name__"}}, + Timestamps: []int64{1583786142}, + Values: []float64{13763}, + } + if !reflect.DeepEqual(m[0], expected) { + t.Fatalf("unexpected metric %+v want %+v", m[0], expected) + } + + g := NewGraphiteType() + gq := s.BuildWithParams(QuerierParams{DataSourceType: &g}) + + _, err = gq.QueryRange(ctx, queryRender, start, end) + expectError(t, err, "is not supported") +} + +func TestRequestParams(t *testing.T) { query := "up" timestamp := time.Date(2001, 2, 3, 4, 5, 6, 0, time.UTC) testCases := []struct { - name string - vm *VMStorage - checkFn func(t *testing.T, r *http.Request) + name string + queryRange bool + vm *VMStorage + checkFn func(t *testing.T, r *http.Request) }{ { "prometheus path", + false, &VMStorage{ dataSourceType: NewPrometheusType(), }, func(t *testing.T, r *http.Request) { - checkEqualString(t, queryPath, r.URL.Path) + checkEqualString(t, prometheusInstantPath, r.URL.Path) }, }, { "prometheus prefix", + false, &VMStorage{ dataSourceType: 
NewPrometheusType(), appendTypePrefix: true, }, func(t *testing.T, r *http.Request) { - checkEqualString(t, prometheusPrefix+queryPath, r.URL.Path) + checkEqualString(t, prometheusPrefix+prometheusInstantPath, r.URL.Path) + }, + }, + { + "prometheus range path", + true, + &VMStorage{ + dataSourceType: NewPrometheusType(), + }, + func(t *testing.T, r *http.Request) { + checkEqualString(t, prometheusRangePath, r.URL.Path) + }, + }, + { + "prometheus range prefix", + true, + &VMStorage{ + dataSourceType: NewPrometheusType(), + appendTypePrefix: true, + }, + func(t *testing.T, r *http.Request) { + checkEqualString(t, prometheusPrefix+prometheusRangePath, r.URL.Path) }, }, { "graphite path", + false, &VMStorage{ dataSourceType: NewGraphiteType(), }, @@ -169,6 +271,7 @@ func TestPrepareReq(t *testing.T) { }, { "graphite prefix", + false, &VMStorage{ dataSourceType: NewGraphiteType(), appendTypePrefix: true, @@ -179,14 +282,38 @@ func TestPrepareReq(t *testing.T) { }, { "default params", + false, &VMStorage{}, func(t *testing.T, r *http.Request) { exp := fmt.Sprintf("query=%s&time=%d", query, timestamp.Unix()) checkEqualString(t, exp, r.URL.RawQuery) }, }, + { + "default range params", + true, + &VMStorage{}, + func(t *testing.T, r *http.Request) { + exp := fmt.Sprintf("end=%d&query=%s&start=%d", timestamp.Unix(), query, timestamp.Unix()) + checkEqualString(t, exp, r.URL.RawQuery) + }, + }, { "basic auth", + false, + &VMStorage{ + basicAuthUser: "foo", + basicAuthPass: "bar", + }, + func(t *testing.T, r *http.Request) { + u, p, _ := r.BasicAuth() + checkEqualString(t, "foo", u) + checkEqualString(t, "bar", p) + }, + }, + { + "basic auth range", + true, &VMStorage{ basicAuthUser: "foo", basicAuthPass: "bar", @@ -199,6 +326,7 @@ func TestPrepareReq(t *testing.T) { }, { "lookback", + false, &VMStorage{ lookBack: time.Minute, }, @@ -209,6 +337,7 @@ func TestPrepareReq(t *testing.T) { }, { "evaluation interval", + false, &VMStorage{ evaluationInterval: 15 * time.Second, }, @@ -221,6 +350,7 @@ func TestPrepareReq(t *testing.T) { }, { "lookback + evaluation interval", + false, &VMStorage{ lookBack: time.Minute, evaluationInterval: 15 * time.Second, @@ -235,6 +365,7 @@ func TestPrepareReq(t *testing.T) { }, { "step override", + false, &VMStorage{ queryStep: time.Minute, }, @@ -245,6 +376,7 @@ func TestPrepareReq(t *testing.T) { }, { "round digits", + false, &VMStorage{ roundDigits: "10", }, @@ -255,6 +387,7 @@ func TestPrepareReq(t *testing.T) { }, { "extra labels", + false, &VMStorage{ extraLabels: []string{ "env=prod", @@ -266,14 +399,39 @@ func TestPrepareReq(t *testing.T) { checkEqualString(t, exp, r.URL.RawQuery) }, }, + { + "extra labels range", + true, + &VMStorage{ + extraLabels: []string{ + "env=prod", + "query=es=cape", + }, + }, + func(t *testing.T, r *http.Request) { + exp := fmt.Sprintf("end=%d&extra_label=env%%3Dprod&extra_label=query%%3Des%%3Dcape&query=%s&start=%d", + timestamp.Unix(), query, timestamp.Unix()) + checkEqualString(t, exp, r.URL.RawQuery) + }, + }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { - req, err := tc.vm.prepareReq(query, timestamp) + req, err := tc.vm.newRequestPOST() if err != nil { t.Fatalf("unexpected error: %s", err) } + switch tc.vm.dataSourceType.name { + case "", prometheusType: + if tc.queryRange { + tc.vm.setPrometheusRangeReqParams(req, query, timestamp, timestamp) + } else { + tc.vm.setPrometheusInstantReqParams(req, query, timestamp) + } + case graphiteType: + tc.vm.setGraphiteReqParams(req, query, timestamp) + } tc.checkFn(t, 
req) }) } @@ -285,3 +443,13 @@ func checkEqualString(t *testing.T, exp, got string) { t.Errorf("expected to get %q; got %q", exp, got) } } + +func expectError(t *testing.T, err error, exp string) { + t.Helper() + if err == nil { + t.Errorf("expected non-nil error") + } + if !strings.Contains(err.Error(), exp) { + t.Errorf("expected error %q to contain %q", err, exp) + } +} diff --git a/app/vmalert/group.go b/app/vmalert/group.go index efe07cdb4..b7355d38f 100644 --- a/app/vmalert/group.go +++ b/app/vmalert/group.go @@ -269,15 +269,10 @@ type executor struct { func (e *executor) execConcurrently(ctx context.Context, rules []Rule, concurrency int, interval time.Duration) chan error { res := make(chan error, len(rules)) - var returnSeries bool - if e.rw != nil { - returnSeries = true - } - if concurrency == 1 { // fast path for _, rule := range rules { - res <- e.exec(ctx, rule, returnSeries, interval) + res <- e.exec(ctx, rule, interval) } close(res) return res @@ -290,7 +285,7 @@ func (e *executor) execConcurrently(ctx context.Context, rules []Rule, concurren sem <- struct{}{} wg.Add(1) go func(r Rule) { - res <- e.exec(ctx, r, returnSeries, interval) + res <- e.exec(ctx, r, interval) <-sem wg.Done() }(rule) @@ -309,14 +304,14 @@ var ( remoteWriteErrors = metrics.NewCounter(`vmalert_remotewrite_errors_total`) ) -func (e *executor) exec(ctx context.Context, rule Rule, returnSeries bool, interval time.Duration) error { +func (e *executor) exec(ctx context.Context, rule Rule, interval time.Duration) error { execTotal.Inc() execStart := time.Now() defer func() { execDuration.UpdateDuration(execStart) }() - tss, err := rule.Exec(ctx, returnSeries) + tss, err := rule.Exec(ctx) if err != nil { execErrors.Inc() return fmt.Errorf("rule %q: failed to execute: %w", rule, err) diff --git a/app/vmalert/helpers_test.go b/app/vmalert/helpers_test.go index 9c31222df..4d5433a65 100644 --- a/app/vmalert/helpers_test.go +++ b/app/vmalert/helpers_test.go @@ -7,6 +7,7 @@ import ( "sort" "sync" "testing" + "time" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier" @@ -42,6 +43,10 @@ func (fq *fakeQuerier) BuildWithParams(_ datasource.QuerierParams) datasource.Qu return fq } +func (fq *fakeQuerier) QueryRange(ctx context.Context, q string, _, _ time.Time) ([]datasource.Metric, error) { + return fq.Query(ctx, q) +} + func (fq *fakeQuerier) Query(_ context.Context, _ string) ([]datasource.Metric, error) { fq.Lock() defer fq.Unlock() @@ -72,9 +77,16 @@ func (fn *fakeNotifier) getAlerts() []notifier.Alert { } func metricWithValueAndLabels(t *testing.T, value float64, labels ...string) datasource.Metric { + return metricWithValuesAndLabels(t, []float64{value}, labels...) +} + +func metricWithValuesAndLabels(t *testing.T, values []float64, labels ...string) datasource.Metric { t.Helper() m := metricWithLabels(t, labels...) 
- m.Value = value + m.Values = values + for i := range values { + m.Timestamps = append(m.Timestamps, int64(i)) + } return m } @@ -83,7 +95,7 @@ func metricWithLabels(t *testing.T, labels ...string) datasource.Metric { if len(labels) == 0 || len(labels)%2 != 0 { t.Fatalf("expected to get even number of labels") } - m := datasource.Metric{} + m := datasource.Metric{Values: []float64{1}, Timestamps: []int64{1}} for i := 0; i < len(labels); i += 2 { m.Labels = append(m.Labels, datasource.Label{ Name: labels[i], diff --git a/app/vmalert/main.go b/app/vmalert/main.go index d11d72682..faea8350d 100644 --- a/app/vmalert/main.go +++ b/app/vmalert/main.go @@ -34,6 +34,9 @@ Examples: absolute path to all .yaml files in root. Rule files may contain %{ENV_VAR} placeholders, which are substituted by the corresponding env vars.`) + rulesCheckInterval = flag.Duration("rule.configCheckInterval", 0, "Interval for checking for changes in '-rule' files. "+ + "By default the checking is disabled. Send a SIGHUP signal to force a config check.") + httpListenAddr = flag.String("httpListenAddr", ":8880", "Address to listen for http connections") evaluationInterval = flag.Duration("evaluationInterval", time.Minute, "How often to evaluate the rules") @@ -65,47 +68,54 @@ func main() { notifier.InitTemplateFunc(u) groups, err := config.Parse(*rulePath, true, true) if err != nil { - logger.Fatalf(err.Error()) + logger.Fatalf("failed to parse %q: %s", *rulePath, err) } if len(groups) == 0 { logger.Fatalf("No rules for validation. Please specify path to file(s) with alerting and/or recording rules using `-rule` flag") } return } + if *replayFrom != "" || *replayTo != "" { + rw, err := remotewrite.Init(context.Background()) + if err != nil { + logger.Fatalf("failed to init remoteWrite: %s", err) + } + eu, err := getExternalURL(*externalURL, *httpListenAddr, httpserver.IsTLS()) + if err != nil { + logger.Fatalf("failed to init `external.url`: %s", err) + } + notifier.InitTemplateFunc(eu) + groupsCfg, err := config.Parse(*rulePath, *validateTemplates, *validateExpressions) + if err != nil { + logger.Fatalf("cannot parse configuration file: %s", err) + } + q, err := datasource.Init() + if err != nil { + logger.Fatalf("failed to init datasource: %s", err) + } + if err := replay(groupsCfg, q, rw); err != nil { + logger.Fatalf("replay failed: %s", err) + } + return + } + ctx, cancel := context.WithCancel(context.Background()) manager, err := newManager(ctx) if err != nil { logger.Fatalf("failed to init: %s", err) } - // Register SIGHUP handler for config re-read just before manager.start call. - // This guarantees that the config will be re-read if the signal arrives during manager.start call. - // See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1240 - sighupCh := procutil.NewSighupChan() + logger.Infof("reading rules configuration file from %q", strings.Join(*rulePath, ";")) + groupsCfg, err := config.Parse(*rulePath, *validateTemplates, *validateExpressions) + if err != nil { + logger.Fatalf("cannot parse configuration file: %s", err) + } - if err := manager.start(ctx, *rulePath, *validateTemplates, *validateExpressions); err != nil { + if err := manager.start(ctx, groupsCfg); err != nil { logger.Fatalf("failed to start: %s", err) } - go func() { - // init reload metrics with positive values to improve alerting conditions - configSuccess.Set(1) - configTimestamp.Set(fasttime.UnixTimestamp()) - for { - <-sighupCh - configReloads.Inc() - logger.Infof("SIGHUP received.
Going to reload rules %q ...", *rulePath) - if err := manager.update(ctx, *rulePath, *validateTemplates, *validateExpressions, false); err != nil { - configReloadErrors.Inc() - configSuccess.Set(0) - logger.Errorf("error while reloading rules: %s", err) - continue - } - configSuccess.Set(1) - configTimestamp.Set(fasttime.UnixTimestamp()) - logger.Infof("Rules reloaded successfully from %q", *rulePath) - } - }() + go configReload(ctx, manager, groupsCfg) rh := &requestHandler{m: manager} go httpserver.Serve(*httpListenAddr, rh.handler) @@ -228,3 +238,62 @@ See the docs at https://docs.victoriametrics.com/vmalert.html . ` flagutil.Usage(s) } + +func configReload(ctx context.Context, m *manager, groupsCfg []config.Group) { + // Register SIGHUP handler for config re-read just before manager.start call. + // This guarantees that the config will be re-read if the signal arrives during manager.start call. + // See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1240 + sighupCh := procutil.NewSighupChan() + + var configCheckCh <-chan time.Time + if *rulesCheckInterval > 0 { + ticker := time.NewTicker(*rulesCheckInterval) + configCheckCh = ticker.C + defer ticker.Stop() + } + + // init reload metrics with positive values to improve alerting conditions + configSuccess.Set(1) + configTimestamp.Set(fasttime.UnixTimestamp()) + for { + select { + case <-ctx.Done(): + return + case <-sighupCh: + logger.Infof("SIGHUP received. Going to reload rules %q ...", *rulePath) + configReloads.Inc() + case <-configCheckCh: + } + newGroupsCfg, err := config.Parse(*rulePath, *validateTemplates, *validateExpressions) + if err != nil { + logger.Errorf("cannot parse configuration file: %s", err) + continue + } + if configsEqual(newGroupsCfg, groupsCfg) { + // config didn't change - skip it + continue + } + groupsCfg = newGroupsCfg + if err := m.update(ctx, groupsCfg, false); err != nil { + configReloadErrors.Inc() + configSuccess.Set(0) + logger.Errorf("error while reloading rules: %s", err) + continue + } + configSuccess.Set(1) + configTimestamp.Set(fasttime.UnixTimestamp()) + logger.Infof("Rules reloaded successfully from %q", *rulePath) + } +} + +func configsEqual(a, b []config.Group) bool { + if len(a) != len(b) { + return false + } + for i := range a { + if a[i].Checksum != b[i].Checksum { + return false + } + } + return true +} diff --git a/app/vmalert/main_test.go b/app/vmalert/main_test.go index 3dc45c0c7..a3d454957 100644 --- a/app/vmalert/main_test.go +++ b/app/vmalert/main_test.go @@ -1,12 +1,16 @@ package main import ( + "context" "fmt" + "io/ioutil" "net/url" "os" "testing" + "time" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier" + "github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil" ) func TestGetExternalURL(t *testing.T) { @@ -51,3 +55,95 @@ func TestGetAlertURLGenerator(t *testing.T) { t.Errorf("unexpected url want %s, got %s", exp, fn(testAlert)) } } + +func TestConfigReload(t *testing.T) { + originalRulePath := *rulePath + defer func() { + *rulePath = originalRulePath + }() + + const ( + rules1 = ` +groups: + - name: group-1 + rules: + - alert: ExampleAlertAlwaysFiring + expr: sum by(job) (up == 1) + - record: handler:requests:rate5m + expr: sum(rate(prometheus_http_requests_total[5m])) by (handler) +` + rules2 = ` +groups: + - name: group-1 + rules: + - alert: ExampleAlertAlwaysFiring + expr: sum by(job) (up == 1) + - name: group-2 + rules: + - record: handler:requests:rate5m + expr: sum(rate(prometheus_http_requests_total[5m])) by (handler) +` + ) + + f, err := 
ioutil.TempFile("", "") + if err != nil { + t.Fatal(err) + } + writeToFile(t, f.Name(), rules1) + + *rulesCheckInterval = 200 * time.Millisecond + *rulePath = []string{f.Name()} + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + + m := &manager{ + querierBuilder: &fakeQuerier{}, + groups: make(map[uint64]*Group), + labels: map[string]string{}, + } + go configReload(ctx, m, nil) + + lenLocked := func(m *manager) int { + m.groupsMu.RLock() + defer m.groupsMu.RUnlock() + return len(m.groups) + } + + time.Sleep(*rulesCheckInterval * 2) + groupsLen := lenLocked(m) + if groupsLen != 1 { + t.Fatalf("expected to have exactly 1 group loaded; got %d", groupsLen) + } + + writeToFile(t, f.Name(), rules2) + time.Sleep(*rulesCheckInterval * 2) + groupsLen = lenLocked(m) + if groupsLen != 2 { + fmt.Println(m.groups) + t.Fatalf("expected to have exactly 2 groups loaded; got %d", groupsLen) + } + + writeToFile(t, f.Name(), rules1) + procutil.SelfSIGHUP() + time.Sleep(*rulesCheckInterval / 2) + groupsLen = lenLocked(m) + if groupsLen != 1 { + t.Fatalf("expected to have exactly 1 group loaded; got %d", groupsLen) + } + + writeToFile(t, f.Name(), `corrupted`) + procutil.SelfSIGHUP() + time.Sleep(*rulesCheckInterval / 2) + groupsLen = lenLocked(m) + if groupsLen != 1 { // should remain unchanged + t.Fatalf("expected to have exactly 1 group loaded; got %d", groupsLen) + } +} + +func writeToFile(t *testing.T, file, b string) { + t.Helper() + err := ioutil.WriteFile(file, []byte(b), 0644) + if err != nil { + t.Fatal(err) + } +} diff --git a/app/vmalert/manager.go b/app/vmalert/manager.go index 99a00afe5..e73d399fb 100644 --- a/app/vmalert/manager.go +++ b/app/vmalert/manager.go @@ -3,7 +3,6 @@ package main import ( "context" "fmt" - "strings" "sync" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config" @@ -50,8 +49,8 @@ func (m *manager) AlertAPI(gID, aID uint64) (*APIAlert, error) { return nil, fmt.Errorf("can't find alert with id %q in group %q", aID, g.Name) } -func (m *manager) start(ctx context.Context, path []string, validateTpl, validateExpr bool) error { - return m.update(ctx, path, validateTpl, validateExpr, true) +func (m *manager) start(ctx context.Context, groupsCfg []config.Group) error { + return m.update(ctx, groupsCfg, true) } func (m *manager) close() { @@ -85,13 +84,7 @@ func (m *manager) startGroup(ctx context.Context, group *Group, restore bool) er return nil } -func (m *manager) update(ctx context.Context, path []string, validateTpl, validateExpr, restore bool) error { - logger.Infof("reading rules configuration file from %q", strings.Join(path, ";")) - groupsCfg, err := config.Parse(path, validateTpl, validateExpr) - if err != nil { - return fmt.Errorf("cannot parse configuration file: %w", err) - } - +func (m *manager) update(ctx context.Context, groupsCfg []config.Group, restore bool) error { groupsRegistry := make(map[uint64]*Group) for _, cfg := range groupsCfg { ng := newGroup(cfg, m.querierBuilder, *evaluationInterval, m.labels) diff --git a/app/vmalert/manager_test.go b/app/vmalert/manager_test.go index 984505a3a..bd5859fab 100644 --- a/app/vmalert/manager_test.go +++ b/app/vmalert/manager_test.go @@ -9,8 +9,8 @@ import ( "testing" "time" + "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource" - "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier" ) @@ -25,9 +25,8 @@ func TestMain(m *testing.M) { // starting with empty rules folder func 
TestManagerEmptyRulesDir(t *testing.T) { m := &manager{groups: make(map[uint64]*Group)} - path := []string{"foo/bar"} - err := m.update(context.Background(), path, true, true, false) - if err != nil { + cfg := loadCfg(t, []string{"foo/bar"}, true, true) + if err := m.update(context.Background(), cfg, false); err != nil { t.Fatalf("expected to load successfully with empty rules dir; got err instead: %v", err) } } @@ -50,8 +49,11 @@ func TestManagerUpdateConcurrent(t *testing.T) { "config/testdata/rules1-good.rules", "config/testdata/rules2-good.rules", } + evalInterval := *evaluationInterval + defer func() { *evaluationInterval = evalInterval }() *evaluationInterval = time.Millisecond - if err := m.start(context.Background(), []string{paths[0]}, true, true); err != nil { + cfg := loadCfg(t, []string{paths[0]}, true, true) + if err := m.start(context.Background(), cfg); err != nil { t.Fatalf("failed to start: %s", err) } @@ -64,8 +66,11 @@ func TestManagerUpdateConcurrent(t *testing.T) { defer wg.Done() for i := 0; i < iterations; i++ { rnd := rand.Intn(len(paths)) - path := []string{paths[rnd]} - _ = m.update(context.Background(), path, true, true, false) + cfg, err := config.Parse([]string{paths[rnd]}, true, true) + if err != nil { // update can fail and this is expected + continue + } + _ = m.update(context.Background(), cfg, false) } }() } @@ -243,13 +248,16 @@ func TestManagerUpdate(t *testing.T) { t.Run(tc.name, func(t *testing.T) { ctx, cancel := context.WithCancel(context.TODO()) m := &manager{groups: make(map[uint64]*Group), querierBuilder: &fakeQuerier{}} - path := []string{tc.initPath} - if err := m.update(ctx, path, true, true, false); err != nil { + + cfgInit := loadCfg(t, []string{tc.initPath}, true, true) + if err := m.update(ctx, cfgInit, false); err != nil { t.Fatalf("failed to complete initial rules update: %s", err) } - path = []string{tc.updatePath} - _ = m.update(ctx, path, true, true, false) + cfgUpdate, err := config.Parse([]string{tc.updatePath}, true, true) + if err == nil { // update can fail and that's expected + _ = m.update(ctx, cfgUpdate, false) + } if len(tc.want) != len(m.groups) { t.Fatalf("\nwant number of groups: %d;\ngot: %d ", len(tc.want), len(m.groups)) } @@ -267,3 +275,12 @@ func TestManagerUpdate(t *testing.T) { }) } } + +func loadCfg(t *testing.T, path []string, validateAnnotations, validateExpressions bool) []config.Group { + t.Helper() + cfg, err := config.Parse(path, validateAnnotations, validateExpressions) + if err != nil { + t.Fatal(err) + } + return cfg +} diff --git a/app/vmalert/notifier/alert_test.go b/app/vmalert/notifier/alert_test.go index bc4e7d2c3..769116979 100644 --- a/app/vmalert/notifier/alert_test.go +++ b/app/vmalert/notifier/alert_test.go @@ -83,14 +83,16 @@ func TestAlert_ExecTemplate(t *testing.T) { {Name: "foo", Value: "bar"}, {Name: "baz", Value: "qux"}, }, - Value: 1, + Values: []float64{1}, + Timestamps: []int64{1}, }, { Labels: []datasource.Label{ {Name: "foo", Value: "garply"}, {Name: "baz", Value: "fred"}, }, - Value: 2, + Values: []float64{2}, + Timestamps: []int64{1}, }, }, nil } diff --git a/app/vmalert/notifier/template_func.go b/app/vmalert/notifier/template_func.go index 3bcc967ba..9704f17c2 100644 --- a/app/vmalert/notifier/template_func.go +++ b/app/vmalert/notifier/template_func.go @@ -47,8 +47,8 @@ func datasourceMetricsToTemplateMetrics(ms []datasource.Metric) []metric { } mss = append(mss, metric{ Labels: labelsMap, - Timestamp: m.Timestamp, - Value: m.Value}) + Timestamp: m.Timestamps[0], + Value:
m.Values[0]}) } return mss } diff --git a/app/vmalert/recording.go b/app/vmalert/recording.go index b8112357f..c5c70db50 100644 --- a/app/vmalert/recording.go +++ b/app/vmalert/recording.go @@ -88,12 +88,30 @@ func (rr *RecordingRule) Close() { metrics.UnregisterMetric(rr.metrics.errors.name) } -// Exec executes RecordingRule expression via the given Querier. -func (rr *RecordingRule) Exec(ctx context.Context, series bool) ([]prompbmarshal.TimeSeries, error) { - if !series { - return nil, nil +// ExecRange executes recording rule on the given time range similarly to Exec. +// It doesn't update internal states of the Rule and is meant to be used only +// to get time series for backfilling. +func (rr *RecordingRule) ExecRange(ctx context.Context, start, end time.Time) ([]prompbmarshal.TimeSeries, error) { + series, err := rr.q.QueryRange(ctx, rr.Expr, start, end) + if err != nil { + return nil, err } + duplicates := make(map[string]struct{}, len(series)) + var tss []prompbmarshal.TimeSeries + for _, s := range series { + ts := rr.toTimeSeries(s) + key := stringifyLabels(ts) + if _, ok := duplicates[key]; ok { + return nil, fmt.Errorf("original metric %v; resulting labels %q: %w", s.Labels, key, errDuplicate) + } + duplicates[key] = struct{}{} + tss = append(tss, ts) + } + return tss, nil +} +// Exec executes RecordingRule expression via the given Querier. +func (rr *RecordingRule) Exec(ctx context.Context) ([]prompbmarshal.TimeSeries, error) { qMetrics, err := rr.q.Query(ctx, rr.Expr) rr.mu.Lock() defer rr.mu.Unlock() @@ -107,7 +125,7 @@ duplicates := make(map[string]struct{}, len(qMetrics)) var tss []prompbmarshal.TimeSeries for _, r := range qMetrics { - ts := rr.toTimeSeries(r, time.Unix(r.Timestamp, 0)) + ts := rr.toTimeSeries(r) key := stringifyLabels(ts) if _, ok := duplicates[key]; ok { rr.lastExecError = errDuplicate @@ -138,7 +156,7 @@ func stringifyLabels(ts prompbmarshal.TimeSeries) string { return b.String() } -func (rr *RecordingRule) toTimeSeries(m datasource.Metric, timestamp time.Time) prompbmarshal.TimeSeries { +func (rr *RecordingRule) toTimeSeries(m datasource.Metric) prompbmarshal.TimeSeries { labels := make(map[string]string) for _, l := range m.Labels { labels[l.Name] = l.Value @@ -148,7 +166,7 @@ func (rr *RecordingRule) toTimeSeries(m datasource.Metric, timestamp time.Time) for k, v := range rr.Labels { labels[k] = v } - return newTimeSeries(m.Value, labels, timestamp) + return newTimeSeries(m.Values, m.Timestamps, labels) } // UpdateWith copies all significant fields.
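For illustration, the new `ExecRange` is the primitive that replay/backfilling builds on: it queries a whole time range at once and converts every returned series without mutating the rule's state. A hypothetical in-package usage sketch (the rule name, expression, querier `q` and one-hour window are assumptions for the example, not part of this change):

```go
// Hypothetical sketch: backfill one recording rule over the last hour.
rr := &RecordingRule{
	Name: "job:up:avg",       // assumed rule name
	Expr: `avg(up) by (job)`, // assumed expression
	q:    q,                  // any datasource.Querier with a working QueryRange
}
end := time.Now()
start := end.Add(-time.Hour)
tss, err := rr.ExecRange(context.Background(), start, end)
if err != nil {
	logger.Fatalf("backfill failed: %s", err)
}
for _, ts := range tss {
	// each TimeSeries carries all samples of the range; push it to remote
	// write storage, e.g. via remotewrite.Client.Push(ts)
	_ = ts
}
```

This is essentially the loop that `replayRule` in the new replay.go (later in this diff) wraps with retries and progress reporting.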
diff --git a/app/vmalert/recording_test.go b/app/vmalert/recording_test.go index 80877b6ff..ab563a291 100644 --- a/app/vmalert/recording_test.go +++ b/app/vmalert/recording_test.go @@ -11,7 +11,7 @@ import ( "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal" ) -func TestRecoridngRule_ToTimeSeries(t *testing.T) { +func TestRecordingRule_Exec(t *testing.T) { timestamp := time.Now() testCases := []struct { rule *RecordingRule @@ -24,9 +24,9 @@ func TestRecoridngRule_ToTimeSeries(t *testing.T) { "__name__", "bar", )}, []prompbmarshal.TimeSeries{ - newTimeSeries(10, map[string]string{ + newTimeSeries([]float64{10}, []int64{timestamp.UnixNano()}, map[string]string{ "__name__": "foo", - }, timestamp), + }), }, }, { @@ -37,18 +37,18 @@ func TestRecoridngRule_ToTimeSeries(t *testing.T) { metricWithValueAndLabels(t, 3, "__name__", "baz", "job", "baz"), }, []prompbmarshal.TimeSeries{ - newTimeSeries(1, map[string]string{ + newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{ "__name__": "foobarbaz", "job": "foo", - }, timestamp), - newTimeSeries(2, map[string]string{ + }), + newTimeSeries([]float64{2}, []int64{timestamp.UnixNano()}, map[string]string{ "__name__": "foobarbaz", "job": "bar", - }, timestamp), - newTimeSeries(3, map[string]string{ + }), + newTimeSeries([]float64{3}, []int64{timestamp.UnixNano()}, map[string]string{ "__name__": "foobarbaz", "job": "baz", - }, timestamp), + }), }, }, { @@ -59,16 +59,16 @@ func TestRecoridngRule_ToTimeSeries(t *testing.T) { metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "foo"), metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar")}, []prompbmarshal.TimeSeries{ - newTimeSeries(2, map[string]string{ + newTimeSeries([]float64{2}, []int64{timestamp.UnixNano()}, map[string]string{ "__name__": "job:foo", "job": "foo", "source": "test", - }, timestamp), - newTimeSeries(1, map[string]string{ + }), + newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{ "__name__": "job:foo", "job": "bar", "source": "test", - }, timestamp), + }), }, }, } @@ -77,7 +77,7 @@ func TestRecoridngRule_ToTimeSeries(t *testing.T) { fq := &fakeQuerier{} fq.add(tc.metrics...)
tc.rule.q = fq - tss, err := tc.rule.Exec(context.TODO(), true) + tss, err := tc.rule.Exec(context.TODO()) if err != nil { t.Fatalf("unexpected Exec err: %s", err) } @@ -88,7 +88,88 @@ func TestRecoridngRule_ToTimeSeries(t *testing.T) { } } -func TestRecoridngRule_ToTimeSeriesNegative(t *testing.T) { +func TestRecordingRule_ExecRange(t *testing.T) { + timestamp := time.Now() + testCases := []struct { + rule *RecordingRule + metrics []datasource.Metric + expTS []prompbmarshal.TimeSeries + }{ + { + &RecordingRule{Name: "foo"}, + []datasource.Metric{metricWithValuesAndLabels(t, []float64{10, 20, 30}, + "__name__", "bar", + )}, + []prompbmarshal.TimeSeries{ + newTimeSeries([]float64{10, 20, 30}, + []int64{timestamp.UnixNano(), timestamp.UnixNano(), timestamp.UnixNano()}, + map[string]string{ + "__name__": "foo", + }), + }, + }, + { + &RecordingRule{Name: "foobarbaz"}, + []datasource.Metric{ + metricWithValuesAndLabels(t, []float64{1}, "__name__", "foo", "job", "foo"), + metricWithValuesAndLabels(t, []float64{2, 3}, "__name__", "bar", "job", "bar"), + metricWithValuesAndLabels(t, []float64{4, 5, 6}, "__name__", "baz", "job", "baz"), + }, + []prompbmarshal.TimeSeries{ + newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{ + "__name__": "foobarbaz", + "job": "foo", + }), + newTimeSeries([]float64{2, 3}, []int64{timestamp.UnixNano(), timestamp.UnixNano()}, map[string]string{ + "__name__": "foobarbaz", + "job": "bar", + }), + newTimeSeries([]float64{4, 5, 6}, + []int64{timestamp.UnixNano(), timestamp.UnixNano(), timestamp.UnixNano()}, + map[string]string{ + "__name__": "foobarbaz", + "job": "baz", + }), + }, + }, + { + &RecordingRule{Name: "job:foo", Labels: map[string]string{ + "source": "test", + }}, + []datasource.Metric{ + metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "foo"), + metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar")}, + []prompbmarshal.TimeSeries{ + newTimeSeries([]float64{2}, []int64{timestamp.UnixNano()}, map[string]string{ + "__name__": "job:foo", + "job": "foo", + "source": "test", + }), + newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{ + "__name__": "job:foo", + "job": "bar", + "source": "test", + }), + }, + }, + } + for _, tc := range testCases { + t.Run(tc.rule.Name, func(t *testing.T) { + fq := &fakeQuerier{} + fq.add(tc.metrics...)
+ tc.rule.q = fq + tss, err := tc.rule.ExecRange(context.TODO(), time.Now(), time.Now()) + if err != nil { + t.Fatalf("unexpected ExecRange err: %s", err) + } + if err := compareTimeSeries(t, tc.expTS, tss); err != nil { + t.Fatalf("timeseries mismatch: %s", err) + } + }) + } +} + +func TestRecoridngRule_ExecNegative(t *testing.T) { rr := &RecordingRule{Name: "job:foo", Labels: map[string]string{ "job": "test", }} @@ -97,7 +178,7 @@ func TestRecoridngRule_ToTimeSeriesNegative(t *testing.T) { expErr := "connection reset by peer" fq.setErr(errors.New(expErr)) rr.q = fq - _, err := rr.Exec(context.TODO(), true) + _, err := rr.Exec(context.TODO()) if err == nil { t.Fatalf("expected to get err; got nil") } @@ -112,7 +193,7 @@ func TestRecoridngRule_ToTimeSeriesNegative(t *testing.T) { fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "foo")) fq.add(metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "bar")) - _, err = rr.Exec(context.TODO(), true) + _, err = rr.Exec(context.TODO()) if err == nil { t.Fatalf("expected to get err; got nil") } diff --git a/app/vmalert/replay.go b/app/vmalert/replay.go new file mode 100644 index 000000000..f702f11d3 --- /dev/null +++ b/app/vmalert/replay.go @@ -0,0 +1,160 @@ +package main + +import ( + "context" + "flag" + "fmt" + "strings" + "time" + + "github.com/cheggaaa/pb/v3" + + "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config" + "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource" + "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/remotewrite" + "github.com/VictoriaMetrics/VictoriaMetrics/lib/logger" + "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal" +) + +var ( + replayFrom = flag.String("replay.timeFrom", "", + "The time filter in RFC3339 format to select time series with timestamp equal to or higher than the provided value. E.g. '2020-01-01T20:07:00Z'") + replayTo = flag.String("replay.timeTo", "", + "The time filter in RFC3339 format to select time series with timestamp equal to or lower than the provided value. E.g. '2020-01-01T20:07:00Z'") + replayRulesDelay = flag.Duration("replay.rulesDelay", time.Second, + "Delay between rules evaluation within the group. Could be important if there are chained rules inside of the group "+ + "and processing needs to wait for previous rule results to be persisted by remote storage before evaluating the next rule. "+ + "Keep it equal to or bigger than -remoteWrite.flushInterval.") + replayMaxDatapoints = flag.Int("replay.maxDatapointsPerQuery", 1e3, + "Max number of data points expected in one request.
The higher the value, the fewer requests will be made during replay.") + replayRuleRetryAttempts = flag.Int("replay.ruleRetryAttempts", 5, + "Defines how many retries to make before giving up on a rule if the request for it returns an error.") +) + +func replay(groupsCfg []config.Group, qb datasource.QuerierBuilder, rw *remotewrite.Client) error { + if *replayMaxDatapoints < 1 { + return fmt.Errorf("replay.maxDatapointsPerQuery can't be lower than 1") + } + tFrom, err := time.Parse(time.RFC3339, *replayFrom) + if err != nil { + return fmt.Errorf("failed to parse %q: %s", *replayFrom, err) + } + tTo, err := time.Parse(time.RFC3339, *replayTo) + if err != nil { + return fmt.Errorf("failed to parse %q: %s", *replayTo, err) + } + if !tTo.After(tFrom) { + return fmt.Errorf("replay.timeTo must be bigger than replay.timeFrom") + } + labels := make(map[string]string) + for _, s := range *externalLabels { + if len(s) == 0 { + continue + } + n := strings.IndexByte(s, '=') + if n < 0 { + return fmt.Errorf("missing '=' in `-label`. It must contain label in the form `name=value`; got %q", s) + } + labels[s[:n]] = s[n+1:] + } + + fmt.Printf("Replay mode:"+ "\nfrom: \t%v "+ "\nto: \t%v "+ "\nmax data points per request: %d\n", tFrom, tTo, *replayMaxDatapoints) + + var total int + for _, cfg := range groupsCfg { + ng := newGroup(cfg, qb, *evaluationInterval, labels) + total += ng.replay(tFrom, tTo, rw) + } + logger.Infof("replay finished! Imported %d samples", total) + if rw != nil { + return rw.Close() + } + return nil +} + +func (g *Group) replay(start, end time.Time, rw *remotewrite.Client) int { + var total int + step := g.Interval * time.Duration(*replayMaxDatapoints) + ri := rangeIterator{start: start, end: end, step: step} + iterations := int(end.Sub(start)/step) + 1 + fmt.Printf("\nGroup %q"+ "\ninterval: \t%v"+ "\nrequests to make: \t%d"+ "\nmax range per request: \t%v\n", + g.Name, g.Interval, iterations, step) + for _, rule := range g.Rules { + fmt.Printf("> Rule %q (ID: %d)\n", rule, rule.ID()) + bar := pb.StartNew(iterations) + ri.reset() + for ri.next() { + n, err := replayRule(rule, ri.s, ri.e, rw) + if err != nil { + logger.Fatalf("rule %q: %s", rule, err) + } + total += n + bar.Increment() + } + bar.Finish() + // sleep to let remote storage flush data on disk, + // so chained rules can be calculated correctly + time.Sleep(*replayRulesDelay) + } + return total +} + +func replayRule(rule Rule, start, end time.Time, rw *remotewrite.Client) (int, error) { + var err error + var tss []prompbmarshal.TimeSeries + for i := 0; i < *replayRuleRetryAttempts; i++ { + tss, err = rule.ExecRange(context.Background(), start, end) + if err == nil { + break + } + logger.Errorf("attempt %d to execute rule %q failed: %s", i+1, rule, err) + time.Sleep(time.Second) + } + if err != nil { // means all attempts failed + return 0, err + } + if len(tss) < 1 { + return 0, nil + } + var n int + for _, ts := range tss { + if err := rw.Push(ts); err != nil { + return n, fmt.Errorf("remote write failure: %s", err) + } + n += len(ts.Samples) + } + return n, nil +} + +type rangeIterator struct { + step time.Duration + start, end time.Time + + iter int + s, e time.Time +} + +func (ri *rangeIterator) reset() { + ri.iter = 0 + ri.s, ri.e = time.Time{}, time.Time{} +} + +func (ri *rangeIterator) next() bool { + ri.s = ri.start.Add(ri.step * time.Duration(ri.iter)) + if !ri.end.After(ri.s) { + return false + } + ri.e = ri.s.Add(ri.step) + if ri.e.After(ri.end) { + ri.e = ri.end + } + ri.iter++ + return true +}
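The `rangeIterator` above is what keeps each `QueryRange` request at or below `-replay.maxDatapointsPerQuery` points: the group replays `[start, end)` in consecutive sub-ranges of `Interval * maxDatapoints`, clamping the last one to `end`. A minimal standalone sketch of the same splitting logic (a hypothetical helper for illustration, not part of this change):

```go
package main

import (
	"fmt"
	"time"
)

// splitRange mimics rangeIterator: it cuts [start, end) into consecutive
// sub-ranges of at most step, clamping the final sub-range to end.
func splitRange(start, end time.Time, step time.Duration) [][2]time.Time {
	var out [][2]time.Time
	for s := start; end.After(s); s = s.Add(step) {
		e := s.Add(step)
		if e.After(end) {
			e = end
		}
		out = append(out, [2]time.Time{s, e})
	}
	return out
}

func main() {
	start := time.Date(2021, 1, 1, 12, 0, 0, 0, time.UTC)
	// A group with interval=1m and -replay.maxDatapointsPerQuery=10 gets
	// step=10m, so a 25m range is replayed in three requests: 10m+10m+5m.
	for _, r := range splitRange(start, start.Add(25*time.Minute), 10*time.Minute) {
		fmt.Printf("%s .. %s\n", r[0].Format("15:04"), r[1].Format("15:04"))
	}
}
```

diff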
--git a/app/vmalert/replay_test.go b/app/vmalert/replay_test.go new file mode 100644 index 000000000..2eac8afc3 --- /dev/null +++ b/app/vmalert/replay_test.go @@ -0,0 +1,249 @@ +package main + +import ( + "context" + "fmt" + "testing" + "time" + + "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config" + "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource" +) + +type fakeReplayQuerier struct { + fakeQuerier + registry map[string]map[string]struct{} +} + +func (fr *fakeReplayQuerier) BuildWithParams(_ datasource.QuerierParams) datasource.Querier { + return fr +} + +func (fr *fakeReplayQuerier) QueryRange(_ context.Context, q string, from, to time.Time) ([]datasource.Metric, error) { + key := fmt.Sprintf("%s+%s", from.Format("15:04:05"), to.Format("15:04:05")) + dps, ok := fr.registry[q] + if !ok { + return nil, fmt.Errorf("unexpected query received: %q", q) + } + _, ok = dps[key] + if !ok { + return nil, fmt.Errorf("unexpected time range received: %q", key) + } + delete(dps, key) + if len(fr.registry[q]) < 1 { + delete(fr.registry, q) + } + return nil, nil +} + +func TestReplay(t *testing.T) { + testCases := []struct { + name string + from, to string + maxDP int + cfg []config.Group + qb *fakeReplayQuerier + }{ + { + name: "one rule + one response", + from: "2021-01-01T12:00:00.000Z", + to: "2021-01-01T12:02:00.000Z", + maxDP: 10, + cfg: []config.Group{ + {Rules: []config.Rule{{Record: "foo", Expr: "sum(up)"}}}, + }, + qb: &fakeReplayQuerier{ + registry: map[string]map[string]struct{}{ + "sum(up)": {"12:00:00+12:02:00": {}}, + }, + }, + }, + { + name: "one rule + multiple responses", + from: "2021-01-01T12:00:00.000Z", + to: "2021-01-01T12:02:30.000Z", + maxDP: 1, + cfg: []config.Group{ + {Rules: []config.Rule{{Record: "foo", Expr: "sum(up)"}}}, + }, + qb: &fakeReplayQuerier{ + registry: map[string]map[string]struct{}{ + "sum(up)": { + "12:00:00+12:01:00": {}, + "12:01:00+12:02:00": {}, + "12:02:00+12:02:30": {}, + }, + }, + }, + }, + { + name: "datapoints per step", + from: "2021-01-01T12:00:00.000Z", + to: "2021-01-01T15:02:30.000Z", + maxDP: 60, + cfg: []config.Group{ + {Interval: time.Minute, Rules: []config.Rule{{Record: "foo", Expr: "sum(up)"}}}, + }, + qb: &fakeReplayQuerier{ + registry: map[string]map[string]struct{}{ + "sum(up)": { + "12:00:00+13:00:00": {}, + "13:00:00+14:00:00": {}, + "14:00:00+15:00:00": {}, + "15:00:00+15:02:30": {}, + }, + }, + }, + }, + { + name: "multiple recording rules + multiple responses", + from: "2021-01-01T12:00:00.000Z", + to: "2021-01-01T12:02:30.000Z", + maxDP: 1, + cfg: []config.Group{ + {Rules: []config.Rule{{Record: "foo", Expr: "sum(up)"}}}, + {Rules: []config.Rule{{Record: "bar", Expr: "max(up)"}}}, + }, + qb: &fakeReplayQuerier{ + registry: map[string]map[string]struct{}{ + "sum(up)": { + "12:00:00+12:01:00": {}, + "12:01:00+12:02:00": {}, + "12:02:00+12:02:30": {}, + }, + "max(up)": { + "12:00:00+12:01:00": {}, + "12:01:00+12:02:00": {}, + "12:02:00+12:02:30": {}, + }, + }, + }, + }, + { + name: "multiple alerting rules + multiple responses", + from: "2021-01-01T12:00:00.000Z", + to: "2021-01-01T12:02:30.000Z", + maxDP: 1, + cfg: []config.Group{ + {Rules: []config.Rule{{Alert: "foo", Expr: "sum(up) > 1"}}}, + {Rules: []config.Rule{{Alert: "bar", Expr: "max(up) < 1"}}}, + }, + qb: &fakeReplayQuerier{ + registry: map[string]map[string]struct{}{ + "sum(up) > 1": { + "12:00:00+12:01:00": {}, + "12:01:00+12:02:00": {}, + "12:02:00+12:02:30": {}, + }, + "max(up) < 1": { + "12:00:00+12:01:00": {}, + "12:01:00+12:02:00": {}, + 
"12:02:00+12:02:30": {}, + }, + }, + }, + }, + } + + from, to, maxDP := *replayFrom, *replayTo, *replayMaxDatapoints + retries, delay := *replayRuleRetryAttempts, *replayRulesDelay + defer func() { + *replayFrom, *replayTo = from, to + *replayMaxDatapoints, *replayRuleRetryAttempts = maxDP, retries + *replayRulesDelay = delay + }() + + *replayRuleRetryAttempts = 1 + *replayRulesDelay = time.Millisecond + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + *replayFrom = tc.from + *replayTo = tc.to + *replayMaxDatapoints = tc.maxDP + if err := replay(tc.cfg, tc.qb, nil); err != nil { + t.Fatalf("replay failed: %s", err) + } + if len(tc.qb.registry) > 0 { + t.Fatalf("not all requests were sent: %#v", tc.qb.registry) + } + }) + } +} + +func TestRangeIterator(t *testing.T) { + testCases := []struct { + ri rangeIterator + result [][2]time.Time + }{ + { + ri: rangeIterator{ + start: parseTime(t, "2021-01-01T12:00:00.000Z"), + end: parseTime(t, "2021-01-01T12:30:00.000Z"), + step: 5 * time.Minute, + }, + result: [][2]time.Time{ + {parseTime(t, "2021-01-01T12:00:00.000Z"), parseTime(t, "2021-01-01T12:05:00.000Z")}, + {parseTime(t, "2021-01-01T12:05:00.000Z"), parseTime(t, "2021-01-01T12:10:00.000Z")}, + {parseTime(t, "2021-01-01T12:10:00.000Z"), parseTime(t, "2021-01-01T12:15:00.000Z")}, + {parseTime(t, "2021-01-01T12:15:00.000Z"), parseTime(t, "2021-01-01T12:20:00.000Z")}, + {parseTime(t, "2021-01-01T12:20:00.000Z"), parseTime(t, "2021-01-01T12:25:00.000Z")}, + {parseTime(t, "2021-01-01T12:25:00.000Z"), parseTime(t, "2021-01-01T12:30:00.000Z")}, + }, + }, + { + ri: rangeIterator{ + start: parseTime(t, "2021-01-01T12:00:00.000Z"), + end: parseTime(t, "2021-01-01T12:30:00.000Z"), + step: 45 * time.Minute, + }, + result: [][2]time.Time{ + {parseTime(t, "2021-01-01T12:00:00.000Z"), parseTime(t, "2021-01-01T12:30:00.000Z")}, + {parseTime(t, "2021-01-01T12:30:00.000Z"), parseTime(t, "2021-01-01T12:30:00.000Z")}, + }, + }, + { + ri: rangeIterator{ + start: parseTime(t, "2021-01-01T12:00:12.000Z"), + end: parseTime(t, "2021-01-01T12:00:17.000Z"), + step: time.Second, + }, + result: [][2]time.Time{ + {parseTime(t, "2021-01-01T12:00:12.000Z"), parseTime(t, "2021-01-01T12:00:13.000Z")}, + {parseTime(t, "2021-01-01T12:00:13.000Z"), parseTime(t, "2021-01-01T12:00:14.000Z")}, + {parseTime(t, "2021-01-01T12:00:14.000Z"), parseTime(t, "2021-01-01T12:00:15.000Z")}, + {parseTime(t, "2021-01-01T12:00:15.000Z"), parseTime(t, "2021-01-01T12:00:16.000Z")}, + {parseTime(t, "2021-01-01T12:00:16.000Z"), parseTime(t, "2021-01-01T12:00:17.000Z")}, + }, + }, + } + + for i, tc := range testCases { + t.Run(fmt.Sprintf("case %d", i), func(t *testing.T) { + var j int + for tc.ri.next() { + if len(tc.result) < j+1 { + t.Fatalf("unexpected result for iterator on step %d: %v - %v", + j, tc.ri.s, tc.ri.e) + } + s, e := tc.ri.s, tc.ri.e + expS, expE := tc.result[j][0], tc.result[j][1] + if s != expS { + t.Fatalf("expected to get start=%v; got %v", expS, s) + } + if e != expE { + t.Fatalf("expected to get end=%v; got %v", expE, e) + } + j++ + } + }) + } +} + +func parseTime(t *testing.T, s string) time.Time { + t.Helper() + tt, err := time.Parse("2006-01-02T15:04:05.000Z", s) + if err != nil { + t.Fatal(err) + } + return tt +} diff --git a/app/vmalert/rule.go b/app/vmalert/rule.go index 7aa3ddff0..6281bc09a 100644 --- a/app/vmalert/rule.go +++ b/app/vmalert/rule.go @@ -3,21 +3,21 @@ package main import ( "context" "errors" - "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal" + "time" ) // Rule represents 
alerting or recording rule // that has unique ID, can be Executed and // updated with other Rule. type Rule interface { - // Returns unique ID that may be used for + // ID returns unique ID that may be used for // identifying this Rule among others. ID() uint64 // Exec executes the rule with given context - // and Querier. If returnSeries is true, Exec - // may return TimeSeries as result of execution - Exec(ctx context.Context, returnSeries bool) ([]prompbmarshal.TimeSeries, error) + Exec(ctx context.Context) ([]prompbmarshal.TimeSeries, error) + // ExecRange executes the rule on the given time range + ExecRange(ctx context.Context, start, end time.Time) ([]prompbmarshal.TimeSeries, error) // UpdateWith performs modification of current Rule // with fields of the given Rule. UpdateWith(Rule) error diff --git a/app/vmalert/utils.go b/app/vmalert/utils.go index 7a824a096..11d153fc9 100644 --- a/app/vmalert/utils.go +++ b/app/vmalert/utils.go @@ -7,17 +7,21 @@ import ( "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal" ) -func newTimeSeries(value float64, labels map[string]string, timestamp time.Time) prompbmarshal.TimeSeries { - ts := prompbmarshal.TimeSeries{} - ts.Samples = append(ts.Samples, prompbmarshal.Sample{ - Value: value, - Timestamp: timestamp.UnixNano() / 1e6, - }) +func newTimeSeries(values []float64, timestamps []int64, labels map[string]string) prompbmarshal.TimeSeries { + ts := prompbmarshal.TimeSeries{ + Samples: make([]prompbmarshal.Sample, len(values)), + } + for i := range values { + ts.Samples[i] = prompbmarshal.Sample{ + Value: values[i], + Timestamp: time.Unix(timestamps[i], 0).UnixNano() / 1e6, + } + } keys := make([]string, 0, len(labels)) for k := range labels { keys = append(keys, k) } - sort.Strings(keys) + sort.Strings(keys) // make order deterministic for _, key := range keys { ts.Labels = append(ts.Labels, prompbmarshal.Label{ Name: key, diff --git a/app/vmauth/README.md b/app/vmauth/README.md index 335c2804e..34333e2f2 100644 --- a/app/vmauth/README.md +++ b/app/vmauth/README.md @@ -1,8 +1,8 @@ # vmauth -`vmauth` is a simple auth proxy and router for [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics). -It reads username and password from [Basic Auth headers](https://en.wikipedia.org/wiki/Basic_access_authentication), -matches them against configs pointed by `-auth.config` command-line flag and proxies incoming HTTP requests to the configured per-user `url_prefix` on successful match. +`vmauth` is a simple auth proxy, router and [load balancer](#load-balancing) for [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics). +It reads auth credentials from the `Authorization` http header ([Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication) and `Bearer token` are supported), +matches them against the configs pointed to by the [-auth.config](#auth-config) command-line flag and proxies incoming HTTP requests to the configured per-user `url_prefix` on successful match. ## Quick start @@ -27,9 +27,14 @@ Feel free [contacting us](mailto:info@victoriametrics.com) if you need customize accounting and rate limiting such as [vmgateway](https://docs.victoriametrics.com/vmgateway.html). +## Load balancing + +Each `url_prefix` in the [-auth.config](#auth-config) may contain either a single url or a list of urls. In the latter case `vmauth` balances load among the configured urls in a round-robin manner.
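The selection logic is tiny: an atomic counter indexes into the list of parsed urls, so concurrent requests alternate between backends. A self-contained sketch of the approach (illustrative only; the real implementation is `URLPrefix.getNextURL` in the `app/vmauth/auth_config.go` changes below):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// roundRobin picks successive entries from a fixed backend list.
// The atomic counter keeps selection safe under concurrent requests.
type roundRobin struct {
	n    uint32
	urls []string
}

func (rr *roundRobin) next() string {
	n := atomic.AddUint32(&rr.n, 1)
	return rr.urls[n%uint32(len(rr.urls))]
}

func main() {
	rr := &roundRobin{urls: []string{
		"http://vmselect1:8481/select/123/prometheus",
		"http://vmselect2:8481/select/123/prometheus",
	}}
	for i := 0; i < 4; i++ {
		fmt.Println(rr.next()) // alternates between the two vmselect urls
	}
}
```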
This feature is useful for balancing the load among multiple `vmselect` and/or `vminsert` nodes in [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html). + + ## Auth config -Auth config is represented in the following simple `yml` format: +`-auth.config` is represented in the following simple `yml` format: ```yml @@ -61,31 +66,47 @@ users: # The user for querying account 123 in VictoriaMetrics cluster # See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format # All the requests to http://vmauth:8427 with the given Basic Auth (username:password) - # will be proxied to http://vmselect:8481/select/123/prometheus . - # For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect:8481/select/123/prometheus/api/v1/select + # will be load-balanced among http://vmselect1:8481/select/123/prometheus and http://vmselect2:8481/select/123/prometheus + # For example, http://vmauth:8427/api/v1/query is proxied to the following urls in a round-robin manner: + # - http://vmselect1:8481/select/123/prometheus/api/v1/query + # - http://vmselect2:8481/select/123/prometheus/api/v1/query - username: "cluster-select-account-123" password: "***" - url_prefix: "http://vmselect:8481/select/123/prometheus" + url_prefix: + - "http://vmselect1:8481/select/123/prometheus" + - "http://vmselect2:8481/select/123/prometheus" # The user for inserting Prometheus data into VictoriaMetrics cluster under account 42 # See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format # All the requests to http://vmauth:8427 with the given Basic Auth (username:password) - # will be proxied to http://vminsert:8480/insert/42/prometheus . - # For example, http://vmauth:8427/api/v1/write is proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write + # will be load-balanced between http://vminsert1:8480/insert/42/prometheus and http://vminsert2:8480/insert/42/prometheus + # For example, http://vmauth:8427/api/v1/write is proxied to the following urls in a round-robin manner: + # - http://vminsert1:8480/insert/42/prometheus/api/v1/write + # - http://vminsert2:8480/insert/42/prometheus/api/v1/write - username: "cluster-insert-account-42" password: "***" - url_prefix: "http://vminsert:8480/insert/42/prometheus" + url_prefix: + - "http://vminsert1:8480/insert/42/prometheus" + - "http://vminsert2:8480/insert/42/prometheus" # A single user for querying and inserting data: # - Requests to http://vmauth:8427/api/v1/query, http://vmauth:8427/api/v1/query_range - # and http://vmauth:8427/api/v1/label//values are proxied to http://vmselect:8481/select/42/prometheus. - # For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect:8480/select/42/prometheus/api/v1/query + # and http://vmauth:8427/api/v1/label//values are proxied to the following urls in a round-robin manner: + # - http://vmselect1:8481/select/42/prometheus + # - http://vmselect2:8481/select/42/prometheus + # For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect1:8481/select/42/prometheus/api/v1/query + # or to http://vmselect2:8481/select/42/prometheus/api/v1/query .
# - Requests to http://vmauth:8427/api/v1/write are proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write - username: "foobar" url_map: - - src_paths: ["/api/v1/query", "/api/v1/query_range", "/api/v1/label/[^/]+/values"] - url_prefix: "http://vmselect:8481/select/42/prometheus" + - src_paths: + - "/api/v1/query" + - "/api/v1/query_range" + - "/api/v1/label/[^/]+/values" + url_prefix: + - "http://vmselect1:8481/select/42/prometheus" + - "http://vmselect2:8481/select/42/prometheus" - src_paths: ["/api/v1/write"] url_prefix: "http://vminsert:8480/insert/42/prometheus" ``` diff --git a/app/vmauth/auth_config.go b/app/vmauth/auth_config.go index a5ac3186d..d368bb766 100644 --- a/app/vmauth/auth_config.go +++ b/app/vmauth/auth_config.go @@ -8,6 +8,7 @@ import ( "net/url" "os" "regexp" + "strconv" "strings" "sync" "sync/atomic" @@ -31,11 +32,11 @@ type AuthConfig struct { // UserInfo is user information read from authConfigPath type UserInfo struct { - BearerToken string `yaml:"bearer_token"` - Username string `yaml:"username"` - Password string `yaml:"password"` - URLPrefix *yamlURL `yaml:"url_prefix"` - URLMap []URLMap `yaml:"url_map"` + BearerToken string `yaml:"bearer_token"` + Username string `yaml:"username"` + Password string `yaml:"password"` + URLPrefix *URLPrefix `yaml:"url_prefix"` + URLMap []URLMap `yaml:"url_map"` requests *metrics.Counter } @@ -43,7 +44,7 @@ type UserInfo struct { // URLMap is a mapping from source paths to target urls. type URLMap struct { SrcPaths []*SrcPath `yaml:"src_paths"` - URLPrefix *yamlURL `yaml:"url_prefix"` + URLPrefix *URLPrefix `yaml:"url_prefix"` } // SrcPath represents an src path @@ -52,25 +53,74 @@ type SrcPath struct { re *regexp.Regexp } -type yamlURL struct { - u *url.URL +// URLPrefix represents parsed `url_prefix` +type URLPrefix struct { + n uint32 + urls []*url.URL } -func (yu *yamlURL) UnmarshalYAML(f func(interface{}) error) error { - var s string - if err := f(&s); err != nil { +func (up *URLPrefix) getNextURL() *url.URL { + n := atomic.AddUint32(&up.n, 1) + idx := n % uint32(len(up.urls)) + return up.urls[idx] +} + +// UnmarshalYAML unmarshals up from yaml. +func (up *URLPrefix) UnmarshalYAML(f func(interface{}) error) error { + var v interface{} + if err := f(&v); err != nil { return err } - u, err := url.Parse(s) - if err != nil { - return fmt.Errorf("cannot unmarshal %q into url: %w", s, err) + var urls []string + switch x := v.(type) { + case string: + urls = []string{x} + case []interface{}: + if len(x) == 0 { + return fmt.Errorf("`url_prefix` must contain at least a single url") + } + us := make([]string, len(x)) + for i, xx := range x { + s, ok := xx.(string) + if !ok { + return fmt.Errorf("`url_prefix` must contain an array of strings; got %T", xx) + } + us[i] = s + } + urls = us + default: + return fmt.Errorf("unexpected type for `url_prefix`: %T; want string or []string", v) } - yu.u = u + pus := make([]*url.URL, len(urls)) + for i, u := range urls { + pu, err := url.Parse(u) + if err != nil { + return fmt.Errorf("cannot unmarshal %q into url: %w", u, err) + } + pus[i] = pu + } + up.urls = pus return nil } -func (yu *yamlURL) MarshalYAML() (interface{}, error) { - return yu.u.String(), nil +// MarshalYAML marshals up to yaml.
+func (up *URLPrefix) MarshalYAML() (interface{}, error) { + var b []byte + if len(up.urls) == 1 { + u := up.urls[0].String() + b = strconv.AppendQuote(b, u) + return string(b), nil + } + b = append(b, '[') + for i, pu := range up.urls { + u := pu.String() + b = strconv.AppendQuote(b, u) + if i+1 < len(up.urls) { + b = append(b, ',') + } + } + b = append(b, ']') + return string(b), nil } func (sp *SrcPath) match(s string) bool { @@ -201,11 +251,9 @@ func parseAuthConfig(data []byte) (map[string]*UserInfo, error) { return nil, fmt.Errorf("duplicate auth token found for bearer_token=%q, username=%q: %q", authToken, ui.BearerToken, ui.Username) } if ui.URLPrefix != nil { - urlPrefix, err := sanitizeURLPrefix(ui.URLPrefix.u) - if err != nil { + if err := ui.URLPrefix.sanitize(); err != nil { return nil, err } - ui.URLPrefix.u = urlPrefix } for _, e := range ui.URLMap { if len(e.SrcPaths) == 0 { @@ -214,11 +262,9 @@ func parseAuthConfig(data []byte) (map[string]*UserInfo, error) { if e.URLPrefix == nil { return nil, fmt.Errorf("missing `url_prefix` in `url_map`") } - urlPrefix, err := sanitizeURLPrefix(e.URLPrefix.u) - if err != nil { + if err := e.URLPrefix.sanitize(); err != nil { return nil, err } - e.URLPrefix.u = urlPrefix } if len(ui.URLMap) == 0 && ui.URLPrefix == nil { return nil, fmt.Errorf("missing `url_prefix`") @@ -248,6 +294,17 @@ func getAuthToken(bearerToken, username, password string) string { return "Basic " + token64 } +func (up *URLPrefix) sanitize() error { + for i, pu := range up.urls { + puNew, err := sanitizeURLPrefix(pu) + if err != nil { + return err + } + up.urls[i] = puNew + } + return nil +} + func sanitizeURLPrefix(urlPrefix *url.URL) (*url.URL, error) { // Remove trailing '/' from urlPrefix for strings.HasSuffix(urlPrefix.Path, "/") { diff --git a/app/vmauth/auth_config_test.go b/app/vmauth/auth_config_test.go index e192189c5..1b6ad3d4a 100644 --- a/app/vmauth/auth_config_test.go +++ b/app/vmauth/auth_config_test.go @@ -59,7 +59,21 @@ users: f(` users: - username: foo - url_prefix: [bar] + url_prefix: + bar: baz +`) + f(` +users: +- username: foo + url_prefix: + - [foo] +`) + + // empty url_prefix + f(` +users: +- username: foo + url_prefix: [] `) // Username and bearer_token in a single config @@ -117,6 +131,15 @@ users: url_prefix: foo.bar `) + // empty url_prefix in url_map + f(` +users: +- username: a + url_map: + - src_paths: ['/foo/bar'] + url_prefix: [] +`) + // Missing src_paths in url_map f(` users: @@ -162,6 +185,25 @@ users: }, }) + // Multiple url_prefix entries + f(` +users: +- username: foo + password: bar + url_prefix: + - http://node1:343/bbb + - http://node2:343/bbb +`, map[string]*UserInfo{ + getAuthToken("", "foo", "bar"): { + Username: "foo", + Password: "bar", + URLPrefix: mustParseURLs([]string{ + "http://node1:343/bbb", + "http://node2:343/bbb", + }), + }, + }) + // Multiple users f(` users: @@ -188,7 +230,7 @@ users: - src_paths: ["/api/v1/query","/api/v1/query_range","/api/v1/label/[^./]+/.+"] url_prefix: http://vmselect/select/0/prometheus - src_paths: ["/api/v1/write"] - url_prefix: http://vminsert/insert/0/prometheus + url_prefix: ["http://vminsert1/insert/0/prometheus","http://vminsert2/insert/0/prometheus"] `, map[string]*UserInfo{ getAuthToken("foo", "", ""): { BearerToken: "foo", @@ -198,8 +240,11 @@ users: URLPrefix: mustParseURL("http://vmselect/select/0/prometheus"), }, { - SrcPaths: getSrcPaths([]string{"/api/v1/write"}), - URLPrefix: mustParseURL("http://vminsert/insert/0/prometheus"), + SrcPaths: 
getSrcPaths([]string{"/api/v1/write"}), + URLPrefix: mustParseURLs([]string{ + "http://vminsert1/insert/0/prometheus", + "http://vminsert2/insert/0/prometheus", + }), }, }, }, @@ -238,12 +283,20 @@ func areEqualConfigs(a, b map[string]*UserInfo) error { return nil } -func mustParseURL(u string) *yamlURL { - pu, err := url.Parse(u) - if err != nil { - panic(fmt.Errorf("BUG: cannot parse %q: %w", u, err)) +func mustParseURL(u string) *URLPrefix { + return mustParseURLs([]string{u}) +} + +func mustParseURLs(us []string) *URLPrefix { + pus := make([]*url.URL, len(us)) + for i, u := range us { + pu, err := url.Parse(u) + if err != nil { + panic(fmt.Errorf("BUG: cannot parse %q: %w", u, err)) + } + pus[i] = pu } - return &yamlURL{ - u: pu, + return &URLPrefix{ + urls: pus, } } diff --git a/app/vmauth/example_config.yml b/app/vmauth/example_config.yml index 31877984e..01e497dfc 100644 --- a/app/vmauth/example_config.yml +++ b/app/vmauth/example_config.yml @@ -26,30 +26,46 @@ users: # The user for querying account 123 in VictoriaMetrics cluster # See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format # All the requests to http://vmauth:8427 with the given Basic Auth (username:password) - # will be proxied to http://vmselect:8481/select/123/prometheus . - # For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect:8481/select/123/prometheus/api/v1/select + # will be load-balanced among http://vmselect1:8481/select/123/prometheus and http://vmselect2:8481/select/123/prometheus + # For example, http://vmauth:8427/api/v1/query is proxied to the following urls in a round-robin manner: + # - http://vmselect1:8481/select/123/prometheus/api/v1/query + # - http://vmselect2:8481/select/123/prometheus/api/v1/query - username: "cluster-select-account-123" password: "***" - url_prefix: "http://vmselect:8481/select/123/prometheus" + url_prefix: + - "http://vmselect1:8481/select/123/prometheus" + - "http://vmselect2:8481/select/123/prometheus" # The user for inserting Prometheus data into VictoriaMetrics cluster under account 42 # See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format # All the requests to http://vmauth:8427 with the given Basic Auth (username:password) - # will be proxied to http://vminsert:8480/insert/42/prometheus . - # For example, http://vmauth:8427/api/v1/write is proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write + # will be load-balanced between http://vminsert1:8480/insert/42/prometheus and http://vminsert2:8480/insert/42/prometheus + # For example, http://vmauth:8427/api/v1/write is proxied to the following urls in a round-robin manner: + # - http://vminsert1:8480/insert/42/prometheus/api/v1/write + # - http://vminsert2:8480/insert/42/prometheus/api/v1/write - username: "cluster-insert-account-42" password: "***" - url_prefix: "http://vminsert:8480/insert/42/prometheus" + url_prefix: + - "http://vminsert1:8480/insert/42/prometheus" + - "http://vminsert2:8480/insert/42/prometheus" # A single user for querying and inserting data: # - Requests to http://vmauth:8427/api/v1/query, http://vmauth:8427/api/v1/query_range - # and http://vmauth:8427/api/v1/label//values are proxied to http://vmselect:8481/select/42/prometheus.
- # For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect:8480/select/42/prometheus/api/v1/query + # and http://vmauth:8427/api/v1/label//values are proxied to the following urls in a round-robin manner: + # - http://vmselect1:8481/select/42/prometheus + # - http://vmselect2:8481/select/42/prometheus + # For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect1:8481/select/42/prometheus/api/v1/query + # or to http://vmselect2:8481/select/42/prometheus/api/v1/query . # - Requests to http://vmauth:8427/api/v1/write are proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write - username: "foobar" url_map: - - src_paths: ["/api/v1/query", "/api/v1/query_range", "/api/v1/label/[^/]+/values"] - url_prefix: "http://vmselect:8481/select/42/prometheus" + - src_paths: + - "/api/v1/query" + - "/api/v1/query_range" + - "/api/v1/label/[^/]+/values" + url_prefix: + - "http://vmselect1:8481/select/42/prometheus" + - "http://vmselect2:8481/select/42/prometheus" - src_paths: ["/api/v1/write"] url_prefix: "http://vminsert:8480/insert/42/prometheus" diff --git a/app/vmauth/target_url.go b/app/vmauth/target_url.go index a361b561f..5e5d81f56 100644 --- a/app/vmauth/target_url.go +++ b/app/vmauth/target_url.go @@ -7,6 +7,11 @@ import ( "strings" ) +func (up *URLPrefix) mergeURLs(requestURI *url.URL) *url.URL { + pu := up.getNextURL() + return mergeURLs(pu, requestURI) +} + func mergeURLs(uiURL, requestURI *url.URL) *url.URL { targetURL := *uiURL targetURL.Path += requestURI.Path @@ -40,12 +45,12 @@ func createTargetURL(ui *UserInfo, uOrig *url.URL) (*url.URL, error) { for _, e := range ui.URLMap { for _, sp := range e.SrcPaths { if sp.match(u.Path) { - return mergeURLs(e.URLPrefix.u, &u), nil + return e.URLPrefix.mergeURLs(&u), nil } } } if ui.URLPrefix != nil { - return mergeURLs(ui.URLPrefix.u, &u), nil + return ui.URLPrefix.mergeURLs(&u), nil } return nil, fmt.Errorf("missing route for %q", u.String()) } diff --git a/app/vminsert/relabel/relabel.go b/app/vminsert/relabel/relabel.go index 60e2e7f3b..064ba8333 100644 --- a/app/vminsert/relabel/relabel.go +++ b/app/vminsert/relabel/relabel.go @@ -14,8 +14,12 @@ import ( "github.com/VictoriaMetrics/metrics" ) -var relabelConfig = flag.String("relabelConfig", "", "Optional path to a file with relabeling rules, which are applied to all the ingested metrics. "+ - "See https://docs.victoriametrics.com/#relabeling for details") +var ( + relabelConfig = flag.String("relabelConfig", "", "Optional path to a file with relabeling rules, which are applied to all the ingested metrics. "+ + "See https://docs.victoriametrics.com/#relabeling for details") + relabelDebug = flag.Bool("relabelDebug", false, "Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, "+ + "then the metrics aren't sent to storage. This is useful for debugging the relabeling configs") +) // Init must be called after flag.Parse and before using the relabel package.
func Init() { @@ -52,7 +56,7 @@ func loadRelabelConfig() (*promrelabel.ParsedConfigs, error) { if len(*relabelConfig) == 0 { return nil, nil } - pcs, err := promrelabel.LoadRelabelConfigs(*relabelConfig) + pcs, err := promrelabel.LoadRelabelConfigs(*relabelConfig, *relabelDebug) if err != nil { return nil, fmt.Errorf("error when reading -relabelConfig=%q: %w", *relabelConfig, err) } diff --git a/app/vmselect/promql/rollup.go b/app/vmselect/promql/rollup.go index 474e74163..4bdb73fca 100644 --- a/app/vmselect/promql/rollup.go +++ b/app/vmselect/promql/rollup.go @@ -517,7 +517,7 @@ func (rc *rollupConfig) doInternal(dstValues []float64, tsm *timeseriesMap, valu if window <= 0 { window = rc.Step if rc.CanDropLastSample && rc.LookbackDelta > 0 && window > rc.LookbackDelta { - // Implicitly window exceeds -search.maxStalenessInterval, so limit it to -search.maxStalenessInterval + // Implicit window exceeds -search.maxStalenessInterval, so limit it to -search.maxStalenessInterval // according to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/784 window = rc.LookbackDelta } diff --git a/deployment/docker/Makefile b/deployment/docker/Makefile index 7cbab57c4..dda95182d 100644 --- a/deployment/docker/Makefile +++ b/deployment/docker/Makefile @@ -4,7 +4,7 @@ DOCKER_NAMESPACE := victoriametrics ROOT_IMAGE ?= alpine:3.13.5 CERTS_IMAGE := alpine:3.13.5 -GO_BUILDER_IMAGE := golang:1.16.4 +GO_BUILDER_IMAGE := golang:1.16.5 BUILDER_IMAGE := local/builder:2.0.0-$(shell echo $(GO_BUILDER_IMAGE) | tr : _) BASE_IMAGE := local/base:1.1.3-$(shell echo $(ROOT_IMAGE) | tr : _)-$(shell echo $(CERTS_IMAGE) | tr : _) diff --git a/deployment/docker/alerts.yml b/deployment/docker/alerts.yml index 88111e365..0c900ef27 100644 --- a/deployment/docker/alerts.yml +++ b/deployment/docker/alerts.yml @@ -2,7 +2,7 @@ # The alerts below are just recommendations and may require some updates # and threshold calibration according to every specific setup. groups: - - name: serviceHealth + - name: vm-health # note the `job` filter and update accordingly to your setup rules: # note the `job` filter and update accordingly to your setup @@ -177,6 +177,18 @@ groups: description: "Exhausting OS file descriptors limit can cause severe degradation of the process. Consider to increase the limit as fast as possible." + - alert: LabelsLimitExceededOnIngestion + expr: sum(increase(vm_metrics_with_dropped_labels_total[5m])) by (instance) > 0 + for: 15m + labels: + severity: warning + annotations: + dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=74&var-instance={{ $labels.instance }}" + summary: "Metrics ingested in ({{ $labels.instance }}) are exceeding the labels limit" + description: "VictoriaMetrics limits the number of labels per metric with the `-maxLabelsPerTimeseries` command-line flag.\n This prevents ingestion of metrics with too many labels. Please verify that `-maxLabelsPerTimeseries` is configured correctly or that clients which send these metrics aren't misbehaving." + # Alerts group for vmagent assumes that Grafana dashboard # https://grafana.com/grafana/dashboards/12683 is installed. # Pls update the `dashboard` annotation according to your setup.
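The `-relabelDebug` flag added in `app/vminsert/relabel/relabel.go` above turns relabeling into a log-and-drop operation: metrics are logged before and after the rules are applied and are then discarded instead of being written to storage. A rough standalone sketch of that behavior (hypothetical types for illustration, not the actual `promrelabel` API):

```go
package main

import (
	"fmt"
	"log"
)

// label is a minimal stand-in for a metric label pair.
type label struct{ Name, Value string }

// applyWithDebug applies an arbitrary relabeling function. In debug mode it
// logs the labels before and after relabeling and returns nil, so the caller
// drops the metric instead of sending it to storage.
func applyWithDebug(relabel func([]label) []label, labels []label, debug bool) []label {
	result := relabel(labels)
	if debug {
		log.Printf("relabeling debug: in=%v out=%v", labels, result)
		return nil
	}
	return result
}

func main() {
	dropJob := func(ls []label) []label { // toy rule: drop the "job" label
		var out []label
		for _, l := range ls {
			if l.Name != "job" {
				out = append(out, l)
			}
		}
		return out
	}
	in := []label{{"__name__", "up"}, {"job", "node"}}
	fmt.Println(applyWithDebug(dropJob, in, true)) // logs the labels, prints []
}
```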
diff --git a/deployment/docker/docker-compose.yml b/deployment/docker/docker-compose.yml index d20f1e08a..7c217e905 100644 --- a/deployment/docker/docker-compose.yml +++ b/deployment/docker/docker-compose.yml @@ -39,7 +39,7 @@ services: restart: always grafana: container_name: grafana - image: grafana/grafana:7.5.2 + image: grafana/grafana:8.0.0 depends_on: - "victoriametrics" ports: diff --git a/docs/Articles.md b/docs/Articles.md index d649a0bc5..7856059b2 100644 --- a/docs/Articles.md +++ b/docs/Articles.md @@ -11,6 +11,7 @@ sort: 16 * [Observations on Better Resource Usage with Percona Monitoring and Management v2.12.0](https://www.percona.com/blog/2020/12/23/observations-on-better-resource-usage-with-percona-monitoring-and-management-v2-12-0/) * [Better Prometheus rate() function with VictoriaMetrics](https://www.percona.com/blog/2020/02/28/better-prometheus-rate-function-with-victoriametrics/) * [Percona monitoring and management migration from Prometheus to VictoriaMetrics FAQ](https://www.percona.com/blog/2020/12/16/percona-monitoring-and-management-migration-from-prometheus-to-victoriametrics-faq/) +* [Compiling a Percona Monitoring and Management v2 Client in ARM: Raspberry Pi 3 Reprise](https://www.percona.com/blog/2021/05/26/compiling-a-percona-monitoring-and-management-v2-client-in-arm-raspberry-pi-3/) * [Making peace with Prometheus rate()](https://blog.doit-intl.com/making-peace-with-prometheus-rate-43a3ea75c4cf) * [Infrastructure monitoring with Prometheus at Zerodha](https://zerodha.tech/blog/infra-monitoring-at-zerodha/) * [Sismology: Iguana Solutions’ Monitoring System](https://medium.com/@IG1.com/sismology-iguana-solutions-monitoring-system-f46e4170447f) @@ -32,7 +33,7 @@ sort: 16 * [Observability, Availability & DORA’s Research Program](https://medium.com/alteos-tech-blog/observability-availability-and-doras-research-program-85deb6680e78) * [Tame Kubernetes Costs with Percona Monitoring and Management and Prometheus Operator](https://www.percona.com/blog/2021/02/12/tame-kubernetes-costs-with-percona-monitoring-and-management-and-prometheus-operator/) * [Prometheus VictoriaMetrics On AWS ECS](https://dalefro.medium.com/prometheus-victoria-metrics-on-aws-ecs-62448e266090) -* [Monitoring with Prometheus, Grafana, AlertManager and VictoriaMetrics](https://www.sensedia.com/post/monitoring-with-prometheus-alertmanager) +* [API Monitoring With Prometheus, Grafana, AlertManager and VictoriaMetrics](https://nordicapis.com/api-monitoring-with-prometheus-grafana-alertmanager-and-victoriametrics/) * [Solving Metrics at scale with VictoriaMetrics](https://www.youtube.com/watch?v=QgLMztnj7-8) * [Monitoring Kubernetes clusters with VictoriaMetrics and Grafana](https://blog.cybozu.io/entry/2021/03/18/115743) * [Multi-tenancy monitoring system for Kubernetes cluster using VictoriaMetrics and operators](https://blog.kintone.io/entry/2021/03/31/175256) diff --git a/docs/CHANGELOG.md b/docs/CHANGELOG.md index f40cc14d7..3ff024bcf 100644 --- a/docs/CHANGELOG.md +++ b/docs/CHANGELOG.md @@ -7,6 +7,23 @@ sort: 15 ## tip +## [v1.61.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.61.0) + +* FEATURE: vmalert: add support for backfilling (aka replay) of recording and alerting rules. See [these docs](https://docs.victoriametrics.com/vmalert.html#rules-backfilling) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/836). 
+* FEATURE: vmalert: add a command-line flag `-rule.configCheckInterval` for automatic re-reading of `-rule` files without the need to send SIGHUP signal. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/512). +* FEATURE: vmagent: respect the `sample_limit` and `-promscrape.maxScrapeSize` values when scraping targets in [stream parsing mode](https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode). See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/1331). +* FEATURE: vmauth: add ability to specify multiple `url_prefix` entries for balancing the load among multiple `vmselect` and/or `vminsert` nodes in a cluster. See [these docs](https://docs.victoriametrics.com/vmauth.html#load-balancing). +* FEATURE: vminsert: add `-disableRerouting` command-line flag for forcibly disabling the rerouting. This should help resolve [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/791) and [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1054) issues. +* FEATURE: vminsert: reduce the probability of global re-routing storm if all the vmstorage nodes cannot keep up with the given ingestion rate for some time. This should improve cluster stability in such cases. See [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/791) and [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1054) issues. +* FEATURE: allow building VictoriaMetrics components for Solaris / SmartOS. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1322). +* FEATURE: vmagent: add ability to debug relabeling rules. See [these docs](https://docs.victoriametrics.com/vmagent.html#relabeling) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1343). + +* BUGFIX: reduce CPU usage by up to 2x when querying a database with a big number of active daily time series. The issue has been introduced in `v1.59.0`. +* BUGFIX: vmagent: properly apply auth and tls configs in `eureka_sd_configs`. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/1350). +* BUGFIX: vmauth: do not panic on aborted http requests. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1353). +* BUGFIX: properly generate `target` property for `*Series(foo.*.bar)` responses returned from [Graphite Render API](https://docs.victoriametrics.com/#graphite-render-api-usage). Previously the `target` contained the expanded list of series for `foo.*.bar`, e.g. `sumSeries(foo.a.bar,foo.b.bar,...foo.z.bar)`. Now VictoriaMetrics returns `sumSeries(foo.*.bar)` as a target in the same way as Graphite does. + + ## [v1.60.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.60.0) * FEATURE: add ability to limit the number of unique time series, which can be added to storage per hour and per day. This can help dealing with high cardinality and high churn rate issues. See [these docs](https://docs.victoriametrics.com/#cardinality-limiter). diff --git a/docs/Cluster-VictoriaMetrics.md b/docs/Cluster-VictoriaMetrics.md index 7ea98b4b0..30728c421 100644 --- a/docs/Cluster-VictoriaMetrics.md +++ b/docs/Cluster-VictoriaMetrics.md @@ -1,5 +1,5 @@ --- -sort: 10 +sort: 2 --- # Cluster version @@ -138,7 +138,7 @@ A minimal cluster must contain the following nodes: It is recommended to run at least two nodes for each service for high availability purposes.
-An http load balancer such as `nginx` must be put in front of `vminsert` and `vmselect` nodes: +An http load balancer such as [vmauth](https://docs.victoriametrics.com/vmauth.html) or `nginx` must be put in front of `vminsert` and `vmselect` nodes: - requests starting with `/insert` must be routed to port `8480` on `vminsert` nodes. - requests starting with `/select` must be routed to port `8481` on `vmselect` nodes. diff --git a/docs/FAQ.md b/docs/FAQ.md index ee9e690db..29aa0390e 100644 --- a/docs/FAQ.md +++ b/docs/FAQ.md @@ -90,6 +90,17 @@ and [Remote Write Storage Wars](https://promcon.io/2019-munich/talks/remote-writ VictoriaMetrics also [uses less RAM than Thanos components](https://github.com/thanos-io/thanos/issues/448). + +### What is the difference between VictoriaMetrics and [QuestDB](https://questdb.io/)? + +- QuestDB needs more than 20x the storage space of VictoriaMetrics. This translates to higher storage costs and slower queries over historical data, which must be read from the disk. +- QuestDB is much harder to set up and operate than VictoriaMetrics. Compare [setup instructions for QuestDB](https://questdb.io/docs/get-started/binaries) to [setup instructions for VictoriaMetrics](https://docs.victoriametrics.com/#how-to-start-victoriametrics). +- VictoriaMetrics provides the [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html) query language, which is better suited for typical queries over time series data than the SQL-like query language provided by QuestDB. See [this article](https://valyala.medium.com/promql-tutorial-for-beginners-9ab455142085) for details. +- Thanks to PromQL support, VictoriaMetrics [can be used as a drop-in replacement for Prometheus in Grafana](https://docs.victoriametrics.com/#grafana-setup), while QuestDB needs a full rewrite of existing dashboards in Grafana. +- Thanks to Prometheus remote_write API support, VictoriaMetrics can be used as a long-term storage for Prometheus or for [vmagent](https://docs.victoriametrics.com/vmagent.html), while QuestDB has no integration with Prometheus. +- QuestDB [supports a smaller range of popular data ingestion protocols](https://questdb.io/docs/develop/insert-data) compared to VictoriaMetrics (compare to [the list of supported data ingestion protocols for VictoriaMetrics](https://docs.victoriametrics.com/#how-to-import-time-series-data)). +- [VictoriaMetrics supports backfilling (e.g. storing historical data) out of the box](https://docs.victoriametrics.com/#backfilling), while QuestDB provides [very limited support for backfilling](https://questdb.io/blog/2021/05/10/questdb-release-6-0-tsbs-benchmark#the-problem-with-out-of-order-data). + + ### What is the difference between VictoriaMetrics and [Cortex](https://github.com/cortexproject/cortex)? VictoriaMetrics is similar to Cortex in the following aspects: @@ -142,7 +153,8 @@ The main differences between Cortex and VictoriaMetrics: ### How does VictoriaMetrics compare to [InfluxDB](https://www.influxdata.com/time-series-platform/influxdb/)? - VictoriaMetrics requires [10x less RAM](https://medium.com/@valyala/insert-benchmarks-with-inch-influxdb-vs-victoriametrics-e31a41ae2893) and it [works faster](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae). -- VictoriaMetrics provides [better query language](https://medium.com/@valyala/promql-tutorial-for-beginners-9ab455142085) than InfluxQL or Flux. +- VictoriaMetrics needs less storage space than InfluxDB on production data.
+- VictoriaMetrics provides a better query language - [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html) - than InfluxQL or Flux. See [this tutorial](https://medium.com/@valyala/promql-tutorial-for-beginners-9ab455142085) for details. - VictoriaMetrics accepts data in multiple popular data ingestion protocols additionally to InfluxDB - Prometheus remote_write, OpenTSDB, Graphite, CSV, JSON, native binary. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-import-time-series-data) for details. @@ -151,6 +163,7 @@ The main differences between Cortex and VictoriaMetrics: - TimescaleDB insists on using SQL as a query language. While SQL is more powerful than PromQL, this power is rarely required during typical TSDB usage. Real-world queries usually [look clearer and simpler when written in PromQL than in SQL](https://medium.com/@valyala/promql-tutorial-for-beginners-9ab455142085). - VictoriaMetrics requires [up to 70x less storage space comparing to TimescaleDB](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4) for storing the same amount of time series data. The gap in storage space usage can be lowered from 70x to 3x if [compression in TimescaleDB is properly configured](https://docs.timescale.com/latest/using-timescaledb/compression) (it isn't an easy task in general case :)). +- TimescaleDB is [harder to set up, configure and operate](https://docs.timescale.com/timescaledb/latest/how-to-guides/install-timescaledb/self-hosted/ubuntu/installation-apt-ubuntu/) than VictoriaMetrics (see [how to run VictoriaMetrics](https://docs.victoriametrics.com/#how-to-start-victoriametrics)). - VictoriaMetrics accepts data in multiple popular data ingestion protocols - InfluxDB, OpenTSDB, Graphite, CSV, while TimescaleDB supports only SQL inserts. diff --git a/docs/Home.md b/docs/Home.md deleted file mode 100644 index 66adcb803..000000000 --- a/docs/Home.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -sort: 21 ---- - -# Docs - -* [Quick start](Quick-Start) -* [`WITH` templates playground](https://play.victoriametrics.com/promql/expand-with-exprs) -* [Grafana playground](http://play-grafana.victoriametrics.com:3000/d/4ome8yJmz/node-exporter-on-victoriametrics-demo) -* [MetricsQL](MetricsQL) -* [Single-node version](Single-server-VictoriaMetrics) -* [FAQ](FAQ) -* [Cluster version](Cluster-VictoriaMetrics) -* [Articles](Articles) -* [Case Studies](CaseStudies) -* [vmbackup](vmbackup) -* [vmrestore](vmrestore) -* [vmagent](vmagent) diff --git a/docs/MetricsQL.md b/docs/MetricsQL.md index 200df426c..f8c9ef39c 100644 --- a/docs/MetricsQL.md +++ b/docs/MetricsQL.md @@ -13,6 +13,7 @@ If you are unfamiliar with PromQL, then it is suggested reading [this tutorial f The following functionality is implemented differently in MetricsQL comparing to PromQL in order to improve user experience: * MetricsQL takes into account the previous point before the window in square brackets for range functions such as `rate` and `increase`. It also doesn't extrapolate range function results. This addresses [this issue from Prometheus](https://github.com/prometheus/prometheus/issues/3746). + See technical details about VictoriaMetrics and Prometheus calculations for `rate()` and `increase()` [here](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1215#issuecomment-850305711). * MetricsQL returns the expected non-empty responses for requests with `step` values smaller than scrape interval.
This addresses [this issue from Grafana](https://github.com/grafana/grafana/issues/11451). * MetricsQL treats `scalar` type the same as `instant vector` without labels, since subtle difference between these types usually confuses users. See [the corresponding Prometheus docs](https://prometheus.io/docs/prometheus/latest/querying/basics/#expression-language-data-types) for details. @@ -67,7 +68,7 @@ This functionality can be tried at [an editable Grafana dashboard](http://play-g - `label_del(q, label1, ... labelN)` for deleting the given labels from `q`. For example, `label_del(foo, "bar")` would delete `bar` label from all the `foo` series. - `label_keep(q, label1, ... labelN)` for deleting all the labels except the given labels from `q`. For example, `label_keep(foo, "bar")` would delete all the labels except `bar` from `foo` series. - `label_copy(q, src_label1, dst_label1, ... src_labelN, dst_labelN)` for copying label values from `src_*` to `dst_*`. If `src_label` is empty, then `dst_label` is left untouched. For example, `label_copy(foo, "bar", baz")` would transform `foo{bar="x"}` to `foo{bar="x",baz="x"}`. - - `label_move(q, src_label1, dst_label1, ... src_labelN, dst_labelN)` for moving label values from `src_*` to `dst_*`. If `src_label` is empty, then `dst_label` is left untouched. For example, `label_move(foo, 'bar", "baz")` would transform `foo{bar="x"}` to `foo{baz="x"}`. + - `label_move(q, src_label1, dst_label1, ... src_labelN, dst_labelN)` for moving label values from `src_*` to `dst_*`. If `src_label` is empty, then `dst_label` is left untouched. For example, `label_move(foo, "bar", "baz")` would transform `foo{bar="x"}` to `foo{baz="x"}`. - `label_transform(q, label, regexp, replacement)` for replacing all the `regexp` occurences with `replacement` in the `label` values from `q`. For example, `label_transform(foo, "bar", "-", "_")` would transform `foo{bar="a-b-c"}` to `foo{bar="a_b_c"}`. - `label_value(q, label)` - returns numeric values for the given `label` from `q`. For example, if `label_value(foo, "bar")` is applied to `foo{bar="1.234"}`, then it will return a time series `foo{bar="1.234"}` with `1.234` value. - `label_match(q, label, regexp)` and `label_mismatch(q, label, regexp)` for filtering time series with labels matching (or not matching) the given regexps. diff --git a/docs/Single-server-VictoriaMetrics.md b/docs/Single-server-VictoriaMetrics.md index 9cd0c8bfb..65999e0d3 100644 --- a/docs/Single-server-VictoriaMetrics.md +++ b/docs/Single-server-VictoriaMetrics.md @@ -463,11 +463,7 @@ The `/api/v1/export` endpoint should return the following response: Data sent to VictoriaMetrics via `Graphite plaintext protocol` may be read via the following APIs: * [Graphite API](#graphite-api-usage) -* [Prometheus querying API](#prometheus-querying-api-usage). Graphite metric names may special chars such as `-`, which may clash - with [MetricsQL operations](https://docs.victoriametrics.com/MetricsQL.html). Such metrics can be queries via `{__name__="foo-bar.baz"}`. - VictoriaMetrics supports `__graphite__` pseudo-label for selecting time series with Graphite-compatible filters in [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html). - For example, `{__graphite__="foo.*.bar"}` is equivalent to `{__name__=~"foo[.][^.]*[.]bar"}`, but it works faster - and it is easier to use when migrating from Graphite to VictoriaMetrics. +* [Prometheus querying API](#prometheus-querying-api-usage). 
VictoriaMetrics supports `__graphite__` pseudo-label for selecting time series with Graphite-compatible filters in [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html). For example, `{__graphite__="foo.*.bar"}` is equivalent to `{__name__=~"foo[.][^.]*[.]bar"}`, but it works faster and it is easier to use when migrating from Graphite to VictoriaMetrics. * [go-graphite/carbonapi](https://github.com/go-graphite/carbonapi/blob/main/cmd/carbonapi/carbonapi.example.victoriametrics.yaml) ## How to send data from OpenTSDB-compatible agents @@ -1770,6 +1766,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li Whether to suppress scrape errors logging. The last error for each target is always available at '/targets' page even if scrape errors logging is suppressed -relabelConfig string Optional path to a file with relabeling rules, which are applied to all the ingested metrics. See https://docs.victoriametrics.com/#relabeling for details + -relabelDebug + Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, then the metrics aren't sent to storage. This is useful for debugging the relabeling configs -retentionPeriod value Data with timestamps outside the retentionPeriod is automatically deleted The following optional suffixes are supported: h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 1) diff --git a/docs/vmagent.md b/docs/vmagent.md index 224172e2c..165793b76 100644 --- a/docs/vmagent.md +++ b/docs/vmagent.md @@ -1,5 +1,5 @@ --- -sort: 2 +sort: 3 --- # vmagent @@ -223,10 +223,10 @@ and also provides the following actions: The relabeling can be defined in the following places: -* At the `scrape_config -> relabel_configs` section in `-promscrape.config` file. This relabeling is applied to target labels. -* At the `scrape_config -> metric_relabel_configs` section in `-promscrape.config` file. This relabeling is applied to all the scraped metrics in the given `scrape_config`. -* At the `-remoteWrite.relabelConfig` file. This relabeling is aplied to all the collected metrics before sending them to remote storage. -* At the `-remoteWrite.urlRelabelConfig` files. This relabeling is applied to metrics before sending them to the corresponding `-remoteWrite.url`. +* At the `scrape_config -> relabel_configs` section in `-promscrape.config` file. This relabeling is applied to target labels. This relabeling can be debugged by passing `relabel_debug: true` option to the corresponding `scrape_config` section. In this case `vmagent` logs target labels before and after the relabeling and then drops the logged target. +* At the `scrape_config -> metric_relabel_configs` section in `-promscrape.config` file. This relabeling is applied to all the scraped metrics in the given `scrape_config`. This relabeling can be debugged by passing `metric_relabel_debug: true` option to the corresponding `scrape_config` section. In this case `vmagent` logs metrics before and after the relabeling and then drops the logged metrics. +* At the `-remoteWrite.relabelConfig` file. This relabeling is applied to all the collected metrics before sending them to remote storage. This relabeling can be debugged by passing `-remoteWrite.relabelDebug` command-line option to `vmagent`. In this case `vmagent` logs metrics before and after the relabeling and then drops all the logged metrics instead of sending them to remote storage. +* At the `-remoteWrite.urlRelabelConfig` files.
 
 You can read more about relabeling in the following articles:
 
@@ -256,13 +256,13 @@ By default `vmagent` reads the full response from scrape target into memory, the
     'match[]': ['{__name__!=""}']
 ```
 
-Note that `sample_limit` option doesn't work if stream parsing is enabled because the parsed data is pushed to remote storage as soon as it is parsed. Therefore the `sample_limit` option doesn't make sense during stream parsing.
+Note that the `sample_limit` option doesn't prevent data from being pushed to remote storage when stream parsing is enabled, because the parsed data is pushed to remote storage as soon as it is parsed.
 
 ## Scraping big number of targets
 
 A single `vmagent` instance can scrape tens of thousands of scrape targets. Sometimes this isn't enough due to limitations on CPU, network, RAM, etc.
-In this case scrape targets can be split among multiple `vmagent` instances (aka `vmagent` horizontal scaling and clustering).
+In this case scrape targets can be split among multiple `vmagent` instances (aka `vmagent` horizontal scaling, sharding and clustering).
 Each `vmagent` instance in the cluster must use identical `-promscrape.config` files with distinct `-promscrape.cluster.memberNum` values. The flag value must be in the range `0 ... N-1`, where `N` is the number of `vmagent` instances in the cluster. The number of `vmagent` instances in the cluster must be passed to `-promscrape.cluster.membersCount` command-line flag. For example, the following commands
@@ -725,6 +725,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
     Supports array of values separated by comma or specified via multiple flags.
 -remoteWrite.relabelConfig string
     Optional path to file with relabel_config entries. These entries are applied to all the metrics before sending them to -remoteWrite.url. See https://docs.victoriametrics.com/vmagent.html#relabeling for details
+ -remoteWrite.relabelDebug
+    Whether to log metrics before and after relabeling with -remoteWrite.relabelConfig. If the -remoteWrite.relabelDebug is enabled, then the metrics aren't sent to remote storage. This is useful for debugging the relabeling configs
 -remoteWrite.roundDigits array
     Round metric values to this number of decimal digits after the point before writing them to remote storage. Examples: -remoteWrite.roundDigits=2 would round 1.236 to 1.24, while -remoteWrite.roundDigits=-1 would round 126.78 to 130. By default digits rounding is disabled. Set it to 100 for disabling it for a particular remote storage. This option may be used for improving data compression for the stored metrics
     Supports array of values separated by comma or specified via multiple flags.
@@ -759,6 +761,9 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
 -remoteWrite.urlRelabelConfig array
     Optional path to relabel config for the corresponding -remoteWrite.url
     Supports an array of values separated by comma or specified via multiple flags.
+ -remoteWrite.urlRelabelDebug array
+    Whether to log metrics before and after relabeling with -remoteWrite.urlRelabelConfig. If the -remoteWrite.urlRelabelDebug is enabled, then the metrics aren't sent to the corresponding -remoteWrite.url. This is useful for debugging the relabeling configs
+    Supports array of values separated by comma or specified via multiple flags.
 -sortLabels
     Whether to sort labels for incoming samples before writing them to all the configured remote storage systems. This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. For example, m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}. Enabled sorting for labels can slow down ingestion performance a bit
 -tls
diff --git a/docs/vmalert.md b/docs/vmalert.md
index 29c01cd71..f9b0b7e86 100644
--- a/docs/vmalert.md
+++ b/docs/vmalert.md
@@ -1,5 +1,5 @@
 ---
-sort: 3
+sort: 4
 ---
 
 # vmalert
@@ -16,7 +16,8 @@ rules against configured address.
   support;
 * Integration with [Alertmanager](https://github.com/prometheus/alertmanager);
 * Keeps the alerts [state on restarts](#alerts-state-on-restarts);
-* Graphite datasource can be used for alerting and recording rules. See [these docs](#graphite) for details.
+* Graphite datasource can be used for alerting and recording rules. See [these docs](#graphite);
+* Recording and alerting rules backfilling (aka `replay`). See [these docs](#rules-backfilling);
 * Lightweight without extra dependencies.
 
 ## Limitations
@@ -231,194 +232,296 @@ implements [Graphite Render API](https://graphite.readthedocs.io/en/stable/rende
 When using vmalert with both `graphite` and `prometheus` rules configured against cluster version of VM do not forget
 to set `-datasource.appendTypePrefix` flag to `true`, so vmalert can adjust URL prefix automatically based on query type.
 
+## Rules backfilling
+
+vmalert supports alerting and recording rules backfilling (aka `replay`). In replay mode vmalert
+can read the same rules configuration as usual, evaluate the rules on the given time range and backfill
+the results via remote write to the configured storage. vmalert supports any PromQL/MetricsQL compatible
+data source for backfilling.
+
+### How it works
+
+In `replay` mode vmalert works as a CLI tool and exits immediately after the work is done.
+To run vmalert in `replay` mode:
+```
+./bin/vmalert -rule=path/to/your.rules \        # path to files with rules you usually use with vmalert
+    -datasource.url=http://localhost:8428 \     # PromQL/MetricsQL compatible datasource
+    -remoteWrite.url=http://localhost:8428 \    # remote write compatible storage to persist results
+    -replay.timeFrom=2021-05-11T07:21:43Z \     # time to begin replay from
+    -replay.timeTo=2021-05-29T18:40:43Z         # time to finish replay at
+```
+
+The output of the command will look like the following:
+```
+Replay mode:
+from:   2021-05-11 07:21:43 +0000 UTC   # set by -replay.timeFrom
+to:     2021-05-29 18:40:43 +0000 UTC   # set by -replay.timeTo
+max data points per request: 1000       # set by -replay.maxDatapointsPerQuery
+
+Group "ReplayGroup"
+interval: 1m0s
+requests to make: 27
+max range per request: 16h40m0s
+> Rule "type:vm_cache_entries:rate5m" (ID: 1792509946081842725)
+27 / 27 [----------------------------------------------------------------------------------------------------] 100.00% 78 p/s
+> Rule "go_cgo_calls_count:rate5m" (ID: 17958425467471411582)
+27 / 27 [-----------------------------------------------------------------------------------------------------] 100.00% ? p/s
+
+Group "vmsingleReplay"
+interval: 30s
+requests to make: 54
+max range per request: 8h20m0s
+> Rule "RequestErrorsToAPI" (ID: 17645863024999990222)
+54 / 54 [-----------------------------------------------------------------------------------------------------] 100.00% ? p/s
+> Rule "TooManyLogs" (ID: 9042195394653477652)
+54 / 54 [-----------------------------------------------------------------------------------------------------] 100.00% ? p/s
+2021-06-07T09:59:12.098Z info app/vmalert/replay.go:68 replay finished! Imported 511734 samples
+```
+
+In `replay` mode all groups are executed sequentially one-by-one. Rules within a group are
+executed sequentially as well (the `concurrency` setting is ignored). vmalert sends the rule's expression
+to the [/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries) endpoint
+of the configured `-datasource.url`. The returned data is then processed according to the rule type and
+backfilled to `-remoteWrite.url` via the [Remote Write protocol](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations).
+vmalert respects the `evaluationInterval` value set via flag or per-group during the replay.
+
+#### Recording rules
+
+The result of recording rules `replay` should match the results of normal rules evaluation.
+
+#### Alerting rules
+
+The result of alerting rules `replay` is time series reflecting the [alert's state](#alerts-state-on-restarts).
+To see whether a `replayed` alert has fired in the past, use the following PromQL/MetricsQL expression:
+```
+ALERTS{alertname="your_alertname", alertstate="firing"}
+```
+Execute the query against the storage which was used for `-remoteWrite.url` during the `replay`.
+
+### Additional configuration
+
+The following optional `replay` flags are supported:
+
+* `-replay.maxDatapointsPerQuery` - the max number of data points expected to be received in one request.
+In other words, it limits the max time range for every `/query_range` request. The higher the value,
+the fewer requests will be issued during `replay`. For example, with a group `interval` of `1m` and the
+default value of `1000`, the max range per request is `16h40m` (1000 data points one minute apart), as
+shown in the example output above.
+* `-replay.ruleRetryAttempts` - when the datasource fails to respond, vmalert will make this number of retries
+per rule before giving up.
+* `-replay.rulesDelay` - delay between sequential rules execution. Important when there are chained rules
+(rules which depend on each other). It is expected that the remote storage will be able to persist
+previously accepted data during the delay, so the data will be available for the subsequent queries.
+Keep it equal to or bigger than `-remoteWrite.flushInterval`.
+
+See the full description of these flags in `./vmalert --help`.
+
+### Limitations
+
+* Graphite engine isn't supported yet;
+* `query` template function is disabled for performance reasons (might be changed in the future).
+
 ## Configuration
 
 The shortlist of configuration flags is the following:
 
 ```
 -datasource.appendTypePrefix
-    Whether to add type prefix to -datasource.url based on the query type. Set to true if sending different query types to the vmselect URL.
+    Whether to add type prefix to -datasource.url based on the query type. Set to true if sending different query types to the vmselect URL.
 -datasource.basicAuth.password string
-    Optional basic auth password for -datasource.url
+    Optional basic auth password for -datasource.url
 -datasource.basicAuth.username string
-    Optional basic auth username for -datasource.url
+    Optional basic auth username for -datasource.url
 -datasource.lookback duration
-    Lookback defines how far into the past to look when evaluating queries. For example, if the datasource.lookback=5m then param "time" with value now()-5m will be added to every query.
+    Lookback defines how far into the past to look when evaluating queries. For example, if the datasource.lookback=5m then param "time" with value now()-5m will be added to every query.
 -datasource.maxIdleConnections int
-    Defines the number of idle (keep-alive connections) to each configured datasource. Consider setting this value equal to the value: groups_total * group.concurrency. Too low a value may result in a high number of sockets in TIME_WAIT state. (default 100)
+    Defines the number of idle (keep-alive connections) to each configured datasource. Consider setting this value equal to the value: groups_total * group.concurrency. Too low a value may result in a high number of sockets in TIME_WAIT state. (default 100)
 -datasource.queryStep duration
-    queryStep defines how far a value can fallback to when evaluating queries. For example, if datasource.queryStep=15s then param "step" with value "15s" will be added to every query.If queryStep isn't specified, rule's evaluationInterval will be used instead.
+    queryStep defines how far a value can fall back to when evaluating queries. For example, if datasource.queryStep=15s then param "step" with value "15s" will be added to every query. If queryStep isn't specified, rule's evaluationInterval will be used instead.
 -datasource.roundDigits int
-    Adds "round_digits" GET param to datasource requests. In VM "round_digits" limits the number of digits after the decimal point in response values.
+    Adds "round_digits" GET param to datasource requests. In VM "round_digits" limits the number of digits after the decimal point in response values.
 -datasource.tlsCAFile string
-    Optional path to TLS CA file to use for verifying connections to -datasource.url. By default, system CA is used
+    Optional path to TLS CA file to use for verifying connections to -datasource.url. By default, system CA is used
 -datasource.tlsCertFile string
-    Optional path to client-side TLS certificate file to use when connecting to -datasource.url
+    Optional path to client-side TLS certificate file to use when connecting to -datasource.url
 -datasource.tlsInsecureSkipVerify
-    Whether to skip tls verification when connecting to -datasource.url
+    Whether to skip tls verification when connecting to -datasource.url
 -datasource.tlsKeyFile string
-    Optional path to client-side TLS certificate key to use when connecting to -datasource.url
+    Optional path to client-side TLS certificate key to use when connecting to -datasource.url
 -datasource.tlsServerName string
-    Optional TLS server name to use for connections to -datasource.url. By default, the server name from -datasource.url is used
+    Optional TLS server name to use for connections to -datasource.url. By default, the server name from -datasource.url is used
 -datasource.url string
-    VictoriaMetrics or vmselect url. Required parameter. E.g. http://127.0.0.1:8428
+    VictoriaMetrics or vmselect url. Required parameter. E.g. http://127.0.0.1:8428
 -dryRun -rule
-    Whether to check only config files without running vmalert. The rules file are validated. The -rule flag must be specified.
+    Whether to check only config files without running vmalert. The rule files are validated. The -rule flag must be specified.
 -enableTCP6
-    Whether to enable IPv6 for listening and dialing. By default only IPv4 TCP and UDP is used
+    Whether to enable IPv6 for listening and dialing. By default only IPv4 TCP and UDP is used
 -envflag.enable
-    Whether to enable reading flags from environment variables additionally to command line. Command line flag values have priority over values from environment vars. Flags are read only from command line if this flag isn't set
+    Whether to enable reading flags from environment variables additionally to command line. Command line flag values have priority over values from environment vars. Flags are read only from command line if this flag isn't set
 -envflag.prefix string
-    Prefix for environment variables if -envflag.enable is set
+    Prefix for environment variables if -envflag.enable is set
 -evaluationInterval duration
-    How often to evaluate the rules (default 1m0s)
+    How often to evaluate the rules (default 1m0s)
 -external.alert.source string
-    External Alert Source allows to override the Source link for alerts sent to AlertManager for cases where you want to build a custom link to Grafana, Prometheus or any other service.
-    eg. 'explore?orgId=1&left=[\"now-1h\",\"now\",\"VictoriaMetrics\",{\"expr\": \"{{$expr|quotesEscape|crlfEscape|queryEscape}}\"},{\"mode\":\"Metrics\"},{\"ui\":[true,true,true,\"none\"]}]'.If empty '/api/v1/:groupID/alertID/status' is used
+    External Alert Source allows overriding the Source link for alerts sent to AlertManager for cases where you want to build a custom link to Grafana, Prometheus or any other service.
+    e.g. 'explore?orgId=1&left=[\"now-1h\",\"now\",\"VictoriaMetrics\",{\"expr\": \"{{$expr|quotesEscape|crlfEscape|queryEscape}}\"},{\"mode\":\"Metrics\"},{\"ui\":[true,true,true,\"none\"]}]'. If empty '/api/v1/:groupID/alertID/status' is used
 -external.label array
-    Optional label in the form 'name=value' to add to all generated recording rules and alerts. Pass multiple -label flags in order to add multiple label sets.
-    Supports an array of values separated by comma or specified via multiple flags.
+    Optional label in the form 'name=value' to add to all generated recording rules and alerts. Pass multiple -label flags in order to add multiple label sets.
+    Supports an array of values separated by comma or specified via multiple flags.
 -external.url string
-    External URL is used as alert's source for sent alerts to the notifier
+    External URL is used as the alert's source for alerts sent to the notifier
 -fs.disableMmap
-    Whether to use pread() instead of mmap() for reading data files. By default mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread()
+    Whether to use pread() instead of mmap() for reading data files. By default mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread()
 -http.connTimeout duration
-    Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s)
+    Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s)
 -http.disableResponseCompression
-    Disable compression of HTTP responses to save CPU resources. By default compression is enabled to save network bandwidth
+    Disable compression of HTTP responses to save CPU resources.
By default compression is enabled to save network bandwidth -http.idleConnTimeout duration - Timeout for incoming idle http connections (default 1m0s) + Timeout for incoming idle http connections (default 1m0s) -http.maxGracefulShutdownDuration duration - The maximum duration for a graceful shutdown of the HTTP server. A highly loaded server may require increased value for a graceful shutdown (default 7s) + The maximum duration for a graceful shutdown of the HTTP server. A highly loaded server may require increased value for a graceful shutdown (default 7s) -http.pathPrefix string - An optional prefix to add to all the paths handled by http server. For example, if '-http.pathPrefix=/foo/bar' is set, then all the http requests will be handled on '/foo/bar/*' paths. This may be useful for proxied requests. See https://www.robustperception.io/using-external-urls-and-proxies-with-prometheus + An optional prefix to add to all the paths handled by http server. For example, if '-http.pathPrefix=/foo/bar' is set, then all the http requests will be handled on '/foo/bar/*' paths. This may be useful for proxied requests. See https://www.robustperception.io/using-external-urls-and-proxies-with-prometheus -http.shutdownDelay duration - Optional delay before http server shutdown. During this delay, the server returns non-OK responses from /health page, so load balancers can route new requests to other servers + Optional delay before http server shutdown. During this delay, the server returns non-OK responses from /health page, so load balancers can route new requests to other servers -httpAuth.password string - Password for HTTP Basic Auth. The authentication is disabled if -httpAuth.username is empty + Password for HTTP Basic Auth. The authentication is disabled if -httpAuth.username is empty -httpAuth.username string - Username for HTTP Basic Auth. The authentication is disabled if empty. See also -httpAuth.password + Username for HTTP Basic Auth. The authentication is disabled if empty. See also -httpAuth.password -httpListenAddr string - Address to listen for http connections (default ":8880") + Address to listen for http connections (default ":8880") -loggerDisableTimestamps - Whether to disable writing timestamps in logs + Whether to disable writing timestamps in logs -loggerErrorsPerSecondLimit int - Per-second limit on the number of ERROR messages. If more than the given number of errors are emitted per second, the remaining errors are suppressed. Zero values disable the rate limit + Per-second limit on the number of ERROR messages. If more than the given number of errors are emitted per second, the remaining errors are suppressed. Zero values disable the rate limit -loggerFormat string - Format for logs. Possible values: default, json (default "default") + Format for logs. Possible values: default, json (default "default") -loggerLevel string - Minimum level of errors to log. Possible values: INFO, WARN, ERROR, FATAL, PANIC (default "INFO") + Minimum level of errors to log. Possible values: INFO, WARN, ERROR, FATAL, PANIC (default "INFO") -loggerOutput string - Output for the logs. Supported values: stderr, stdout (default "stderr") + Output for the logs. Supported values: stderr, stdout (default "stderr") -loggerTimezone string - Timezone to use for timestamps in logs. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 or Local (default "UTC") + Timezone to use for timestamps in logs. Timezone must be a valid IANA Time Zone. 
For example: America/New_York, Europe/Berlin, Etc/GMT+3 or Local (default "UTC") -loggerWarnsPerSecondLimit int - Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit + Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit -memory.allowedBytes size - Allowed size of system memory VictoriaMetrics caches may occupy. This option overrides -memory.allowedPercent if set to a non-zero value. Too low a value may increase the cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache resulting in higher disk IO usage - Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0) + Allowed size of system memory VictoriaMetrics caches may occupy. This option overrides -memory.allowedPercent if set to a non-zero value. Too low a value may increase the cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache resulting in higher disk IO usage + Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0) -memory.allowedPercent float - Allowed percent of system memory VictoriaMetrics caches may occupy. See also -memory.allowedBytes. Too low a value may increase cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache which will result in higher disk IO usage (default 60) + Allowed percent of system memory VictoriaMetrics caches may occupy. See also -memory.allowedBytes. Too low a value may increase cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache which will result in higher disk IO usage (default 60) -metricsAuthKey string - Auth key for /metrics. It overrides httpAuth settings + Auth key for /metrics. It overrides httpAuth settings -notifier.basicAuth.password array - Optional basic auth password for -notifier.url - Supports an array of values separated by comma or specified via multiple flags. + Optional basic auth password for -notifier.url + Supports an array of values separated by comma or specified via multiple flags. -notifier.basicAuth.username array - Optional basic auth username for -notifier.url - Supports an array of values separated by comma or specified via multiple flags. + Optional basic auth username for -notifier.url + Supports an array of values separated by comma or specified via multiple flags. -notifier.tlsCAFile array - Optional path to TLS CA file to use for verifying connections to -notifier.url. By default system CA is used - Supports an array of values separated by comma or specified via multiple flags. + Optional path to TLS CA file to use for verifying connections to -notifier.url. By default system CA is used + Supports an array of values separated by comma or specified via multiple flags. -notifier.tlsCertFile array - Optional path to client-side TLS certificate file to use when connecting to -notifier.url - Supports an array of values separated by comma or specified via multiple flags. + Optional path to client-side TLS certificate file to use when connecting to -notifier.url + Supports an array of values separated by comma or specified via multiple flags. 
-notifier.tlsInsecureSkipVerify array - Whether to skip tls verification when connecting to -notifier.url - Supports array of values separated by comma or specified via multiple flags. + Whether to skip tls verification when connecting to -notifier.url + Supports array of values separated by comma or specified via multiple flags. -notifier.tlsKeyFile array - Optional path to client-side TLS certificate key to use when connecting to -notifier.url - Supports an array of values separated by comma or specified via multiple flags. + Optional path to client-side TLS certificate key to use when connecting to -notifier.url + Supports an array of values separated by comma or specified via multiple flags. -notifier.tlsServerName array - Optional TLS server name to use for connections to -notifier.url. By default the server name from -notifier.url is used - Supports an array of values separated by comma or specified via multiple flags. + Optional TLS server name to use for connections to -notifier.url. By default the server name from -notifier.url is used + Supports an array of values separated by comma or specified via multiple flags. -notifier.url array - Prometheus alertmanager URL. Required parameter. e.g. http://127.0.0.1:9093 - Supports an array of values separated by comma or specified via multiple flags. + Prometheus alertmanager URL. Required parameter. e.g. http://127.0.0.1:9093 + Supports an array of values separated by comma or specified via multiple flags. -pprofAuthKey string - Auth key for /debug/pprof. It overrides httpAuth settings + Auth key for /debug/pprof. It overrides httpAuth settings -remoteRead.basicAuth.password string - Optional basic auth password for -remoteRead.url + Optional basic auth password for -remoteRead.url -remoteRead.basicAuth.username string - Optional basic auth username for -remoteRead.url + Optional basic auth username for -remoteRead.url -remoteRead.ignoreRestoreErrors - Whether to ignore errors from remote storage when restoring alerts state on startup. (default true) + Whether to ignore errors from remote storage when restoring alerts state on startup. (default true) -remoteRead.lookback duration - Lookback defines how far to look into past for alerts timeseries. For example, if lookback=1h then range from now() to now()-1h will be scanned. (default 1h0m0s) + Lookback defines how far to look into past for alerts timeseries. For example, if lookback=1h then range from now() to now()-1h will be scanned. (default 1h0m0s) -remoteRead.tlsCAFile string - Optional path to TLS CA file to use for verifying connections to -remoteRead.url. By default system CA is used + Optional path to TLS CA file to use for verifying connections to -remoteRead.url. By default system CA is used -remoteRead.tlsCertFile string - Optional path to client-side TLS certificate file to use when connecting to -remoteRead.url + Optional path to client-side TLS certificate file to use when connecting to -remoteRead.url -remoteRead.tlsInsecureSkipVerify - Whether to skip tls verification when connecting to -remoteRead.url + Whether to skip tls verification when connecting to -remoteRead.url -remoteRead.tlsKeyFile string - Optional path to client-side TLS certificate key to use when connecting to -remoteRead.url + Optional path to client-side TLS certificate key to use when connecting to -remoteRead.url -remoteRead.tlsServerName string - Optional TLS server name to use for connections to -remoteRead.url. 
By default the server name from -remoteRead.url is used
+    Optional TLS server name to use for connections to -remoteRead.url. By default the server name from -remoteRead.url is used
 -remoteRead.url vmalert
-    Optional URL to VictoriaMetrics or vmselect that will be used to restore alerts state. This configuration makes sense only if vmalert was configured with `remoteWrite.url` before and has been successfully persisted its state. E.g. http://127.0.0.1:8428
+    Optional URL to VictoriaMetrics or vmselect that will be used to restore alerts state. This configuration makes sense only if vmalert was configured with `remoteWrite.url` before and has successfully persisted its state. E.g. http://127.0.0.1:8428
 -remoteWrite.basicAuth.password string
-    Optional basic auth password for -remoteWrite.url
+    Optional basic auth password for -remoteWrite.url
 -remoteWrite.basicAuth.username string
-    Optional basic auth username for -remoteWrite.url
+    Optional basic auth username for -remoteWrite.url
 -remoteWrite.concurrency int
-    Defines number of writers for concurrent writing into remote querier (default 1)
+    Defines number of writers for concurrent writing into remote querier (default 1)
 -remoteWrite.flushInterval duration
-    Defines interval of flushes to remote write endpoint (default 5s)
+    Defines interval of flushes to remote write endpoint (default 5s)
 -remoteWrite.maxBatchSize int
-    Defines defines max number of timeseries to be flushed at once (default 1000)
+    Defines the max number of timeseries to be flushed at once (default 1000)
 -remoteWrite.maxQueueSize int
-    Defines the max number of pending datapoints to remote write endpoint (default 100000)
+    Defines the max number of pending datapoints to remote write endpoint (default 100000)
 -remoteWrite.tlsCAFile string
-    Optional path to TLS CA file to use for verifying connections to -remoteWrite.url. By default system CA is used
+    Optional path to TLS CA file to use for verifying connections to -remoteWrite.url. By default system CA is used
 -remoteWrite.tlsCertFile string
-    Optional path to client-side TLS certificate file to use when connecting to -remoteWrite.url
+    Optional path to client-side TLS certificate file to use when connecting to -remoteWrite.url
 -remoteWrite.tlsInsecureSkipVerify
-    Whether to skip tls verification when connecting to -remoteWrite.url
+    Whether to skip tls verification when connecting to -remoteWrite.url
 -remoteWrite.tlsKeyFile string
-    Optional path to client-side TLS certificate key to use when connecting to -remoteWrite.url
+    Optional path to client-side TLS certificate key to use when connecting to -remoteWrite.url
 -remoteWrite.tlsServerName string
-    Optional TLS server name to use for connections to -remoteWrite.url. By default the server name from -remoteWrite.url is used
+    Optional TLS server name to use for connections to -remoteWrite.url. By default the server name from -remoteWrite.url is used
 -remoteWrite.url string
-    Optional URL to VictoriaMetrics or vminsert where to persist alerts state and recording rules results in form of timeseries. E.g. http://127.0.0.1:8428
+    Optional URL to VictoriaMetrics or vminsert where to persist alerts state and recording rules results in form of timeseries. E.g. http://127.0.0.1:8428
+ -replay.maxDatapointsPerQuery int
+    Max number of data points expected in one request. The higher the value, the fewer requests will be made during replay. (default 1000)
+ -replay.ruleRetryAttempts int
+    Defines how many retries to make before giving up on a rule if the request for it returns an error. (default 5)
+ -replay.rulesDelay duration
+    Delay between rules evaluation within the group. Could be important if there are chained rules inside of the group and processing needs to wait for the previous rule results to be persisted by remote storage before evaluating the next rule. Keep it equal to or bigger than -remoteWrite.flushInterval. (default 1s)
+ -replay.timeFrom string
+    The time filter in RFC3339 format to select time series with timestamp equal to or higher than the provided value. E.g. '2020-01-01T20:07:00Z'
+ -replay.timeTo string
+    The time filter in RFC3339 format to select time series with timestamp equal to or lower than the provided value. E.g. '2020-01-01T20:07:00Z'
 -rule array
-    Path to the file with alert rules.
-    Supports patterns. Flag can be specified multiple times.
-    Examples:
-    -rule="/path/to/file". Path to a single file with alerting rules
-    -rule="dir/*.yaml" -rule="/*.yaml". Relative path to all .yaml files in "dir" folder,
-    absolute path to all .yaml files in root.
-    Rule files may contain %{ENV_VAR} placeholders, which are substituted by the corresponding env vars.
-    Supports an array of values separated by comma or specified via multiple flags.
+    Path to the file with alert rules.
+    Supports patterns. Flag can be specified multiple times.
+    Examples:
+    -rule="/path/to/file". Path to a single file with alerting rules
+    -rule="dir/*.yaml" -rule="/*.yaml". Relative path to all .yaml files in "dir" folder,
+    absolute path to all .yaml files in root.
+    Rule files may contain %{ENV_VAR} placeholders, which are substituted by the corresponding env vars.
+    Supports an array of values separated by comma or specified via multiple flags.
+ -rule.configCheckInterval duration
+    Interval for checking for changes in '-rule' files. By default the checking is disabled. Send SIGHUP signal in order to force config check for changes
 -rule.validateExpressions
-    Whether to validate rules expressions via MetricsQL engine (default true)
+    Whether to validate rules expressions via MetricsQL engine (default true)
 -rule.validateTemplates
-    Whether to validate annotation and label templates (default true)
+    Whether to validate annotation and label templates (default true)
 -tls
-    Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
+    Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
 -tlsCertFile string
-    Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
+    Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
 -tlsKeyFile string
-    Path to file with TLS key. Used only if -tls is set
+    Path to file with TLS key. Used only if -tls is set
 -version
-    Show VictoriaMetrics version
+    Show VictoriaMetrics version
 ```
 
 Pass `-help` to `vmalert` in order to see the full list of supported command-line flags with their descriptions.
 
-To reload configuration without `vmalert` restart send SIGHUP signal
-or send GET request to `/-/reload` endpoint.
+`vmalert` supports "hot" config reload via the following methods:
+* send SIGHUP signal to the `vmalert` process;
+* send a GET request to the `/-/reload` endpoint (see the example below);
+* configure the `-rule.configCheckInterval` flag for periodic reload
+on config change.
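+
+For example, assuming `vmalert` listens on the default `-httpListenAddr` (`:8880`, see the flag list above; the host below is hypothetical), a config reload can be triggered manually via:
+```
+curl http://localhost:8880/-/reload
+```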
 
 ## Contributing
diff --git a/docs/vmauth.md b/docs/vmauth.md
index 48c597c62..6253a1681 100644
--- a/docs/vmauth.md
+++ b/docs/vmauth.md
@@ -1,12 +1,12 @@
 ---
-sort: 4
+sort: 5
 ---
 
 # vmauth
 
-`vmauth` is a simple auth proxy and router for [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics).
-It reads username and password from [Basic Auth headers](https://en.wikipedia.org/wiki/Basic_access_authentication),
-matches them against configs pointed by `-auth.config` command-line flag and proxies incoming HTTP requests to the configured per-user `url_prefix` on successful match.
+`vmauth` is a simple auth proxy, router and [load balancer](#load-balancing) for [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics).
+It reads auth credentials from the `Authorization` http header ([Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication) and `Bearer token` are supported),
+matches them against configs pointed by the [-auth.config](#auth-config) command-line flag and proxies incoming HTTP requests to the configured per-user `url_prefix` on successful match.
 
 ## Quick start
 
@@ -31,9 +31,14 @@ Feel free [contacting us](mailto:info@victoriametrics.com) if you need customize
 accounting and rate limiting such as [vmgateway](https://docs.victoriametrics.com/vmgateway.html).
 
+## Load balancing
+
+Each `url_prefix` in the [-auth.config](#auth-config) may contain either a single url or a list of urls. In the latter case `vmauth` balances the load among the configured urls in a round-robin manner. This feature is useful for balancing the load among multiple `vmselect` and/or `vminsert` nodes in [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html).
+
+
 ## Auth config
 
-Auth config is represented in the following simple `yml` format:
+`-auth.config` is represented in the following simple `yml` format:
 
 ```yml
 
@@ -65,31 +70,47 @@ users:
   # The user for querying account 123 in VictoriaMetrics cluster
   # See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format
   # All the requests to http://vmauth:8427 with the given Basic Auth (username:password)
-  # will be proxied to http://vmselect:8481/select/123/prometheus .
-  # For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect:8481/select/123/prometheus/api/v1/select
+  # will be load-balanced among http://vmselect1:8481/select/123/prometheus and http://vmselect2:8481/select/123/prometheus
+  # For example, http://vmauth:8427/api/v1/query is proxied to the following urls in a round-robin manner:
+  # - http://vmselect1:8481/select/123/prometheus/api/v1/query
+  # - http://vmselect2:8481/select/123/prometheus/api/v1/query
   - username: "cluster-select-account-123"
     password: "***"
-    url_prefix: "http://vmselect:8481/select/123/prometheus"
+    url_prefix:
+    - "http://vmselect1:8481/select/123/prometheus"
+    - "http://vmselect2:8481/select/123/prometheus"
 
   # The user for inserting Prometheus data into VictoriaMetrics cluster under account 42
   # See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format
   # All the requests to http://vmauth:8427 with the given Basic Auth (username:password)
-  # will be proxied to http://vminsert:8480/insert/42/prometheus .
-  # For example, http://vmauth:8427/api/v1/write is proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write
+  # will be load-balanced between http://vminsert1:8480/insert/42/prometheus and http://vminsert2:8480/insert/42/prometheus
+  # For example, http://vmauth:8427/api/v1/write is proxied to the following urls in a round-robin manner:
+  # - http://vminsert1:8480/insert/42/prometheus/api/v1/write
+  # - http://vminsert2:8480/insert/42/prometheus/api/v1/write
  - username: "cluster-insert-account-42"
    password: "***"
-    url_prefix: "http://vminsert:8480/insert/42/prometheus"
+    url_prefix:
+    - "http://vminsert1:8480/insert/42/prometheus"
+    - "http://vminsert2:8480/insert/42/prometheus"
 
   # A single user for querying and inserting data:
   # - Requests to http://vmauth:8427/api/v1/query, http://vmauth:8427/api/v1/query_range
-  # and http://vmauth:8427/api/v1/label//values are proxied to http://vmselect:8481/select/42/prometheus.
-  # For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect:8480/select/42/prometheus/api/v1/query
+  # and http://vmauth:8427/api/v1/label//values are proxied to the following urls in a round-robin manner:
+  # - http://vmselect1:8481/select/42/prometheus
+  # - http://vmselect2:8481/select/42/prometheus
+  # For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect1:8481/select/42/prometheus/api/v1/query
+  # or to http://vmselect2:8481/select/42/prometheus/api/v1/query .
   # - Requests to http://vmauth:8427/api/v1/write are proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write
  - username: "foobar"
    url_map:
-    - src_paths: ["/api/v1/query", "/api/v1/query_range", "/api/v1/label/[^/]+/values"]
-      url_prefix: "http://vmselect:8481/select/42/prometheus"
+    - src_paths:
+      - "/api/v1/query"
+      - "/api/v1/query_range"
+      - "/api/v1/label/[^/]+/values"
+      url_prefix:
+      - "http://vmselect1:8481/select/42/prometheus"
+      - "http://vmselect2:8481/select/42/prometheus"
    - src_paths: ["/api/v1/write"]
      url_prefix: "http://vminsert:8480/insert/42/prometheus"
 ```
diff --git a/docs/vmbackup.md b/docs/vmbackup.md
index 106f88f35..930d735a6 100644
--- a/docs/vmbackup.md
+++ b/docs/vmbackup.md
@@ -1,5 +1,5 @@
 ---
-sort: 5
+sort: 6
 ---
 
 # vmbackup
diff --git a/docs/vmbackupmanager.md b/docs/vmbackupmanager.md
index 2e90985e9..7be66246b 100644
--- a/docs/vmbackupmanager.md
+++ b/docs/vmbackupmanager.md
@@ -1,5 +1,5 @@
 ---
-sort: 9
+sort: 10
 ---
 
 ## vmbackupmanager
diff --git a/docs/vmctl.md b/docs/vmctl.md
index 91810c2e2..42d491717 100644
--- a/docs/vmctl.md
+++ b/docs/vmctl.md
@@ -1,5 +1,5 @@
 ---
-sort: 7
+sort: 8
 ---
 
 # vmctl
diff --git a/docs/vmgateway.md b/docs/vmgateway.md
index 3e956d017..2aacb16d4 100644
--- a/docs/vmgateway.md
+++ b/docs/vmgateway.md
@@ -1,5 +1,5 @@
 ---
-sort: 8
+sort: 9
 ---
 
 # vmgateway
diff --git a/docs/vmrestore.md b/docs/vmrestore.md
index aae64a431..ea9423332 100644
--- a/docs/vmrestore.md
+++ b/docs/vmrestore.md
@@ -1,5 +1,5 @@
 ---
-sort: 6
+sort: 7
 ---
 
 # vmrestore
diff --git a/go.mod b/go.mod
index 5c595ddac..3b6047034 100644
--- a/go.mod
+++ b/go.mod
@@ -1,9 +1,8 @@
 module github.com/VictoriaMetrics/VictoriaMetrics
 
 require (
-	cloud.google.com/go v0.82.0 // indirect
 	cloud.google.com/go/storage v1.15.0
-	github.com/VictoriaMetrics/fastcache v1.5.8
+	github.com/VictoriaMetrics/fastcache v1.6.0
 
 	// Do not use the original github.com/valyala/fasthttp because of issues
 	// like https://github.com/valyala/fasthttp/commit/996610f021ff45fdc98c2ce7884d5fa4e7f9199b
@@ -11,18 +10,20 @@ require (
 	github.com/VictoriaMetrics/metrics v1.17.2
github.com/VictoriaMetrics/metricsql v0.15.0 github.com/VividCortex/ewma v1.2.0 // indirect - github.com/aws/aws-sdk-go v1.38.43 + github.com/aws/aws-sdk-go v1.38.56 github.com/cespare/xxhash/v2 v2.1.1 github.com/cheggaaa/pb/v3 v3.0.8 github.com/cpuguy83/go-md2man/v2 v2.0.0 // indirect + github.com/fatih/color v1.12.0 // indirect github.com/go-kit/kit v0.10.0 github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect github.com/golang/snappy v0.0.3 - github.com/influxdata/influxdb v1.9.0 - github.com/klauspost/compress v1.12.2 + github.com/influxdata/influxdb v1.9.1 + github.com/klauspost/compress v1.13.0 + github.com/mattn/go-isatty v0.0.13 // indirect + github.com/mattn/go-runewidth v0.0.13 // indirect github.com/oklog/ulid v1.3.1 - github.com/prometheus/client_golang v1.10.0 // indirect - github.com/prometheus/common v0.25.0 // indirect + github.com/prometheus/common v0.28.0 // indirect github.com/prometheus/prometheus v1.8.2-0.20201119142752-3ad25a6dc3d9 github.com/russross/blackfriday/v2 v2.1.0 // indirect github.com/urfave/cli/v2 v2.3.0 @@ -32,12 +33,11 @@ require ( github.com/valyala/gozstd v1.11.0 github.com/valyala/histogram v1.1.2 github.com/valyala/quicktemplate v1.6.3 - golang.org/x/net v0.0.0-20210520170846-37e1c6afe023 + golang.org/x/net v0.0.0-20210525063256-abc453219eb5 golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c - golang.org/x/sys v0.0.0-20210514084401-e8d321eab015 - google.golang.org/api v0.47.0 - google.golang.org/genproto v0.0.0-20210518161634-ec7691c0a37d // indirect - google.golang.org/grpc v1.38.0 // indirect + golang.org/x/sys v0.0.0-20210608053332-aa57babbf139 + google.golang.org/api v0.48.0 + google.golang.org/genproto v0.0.0-20210607140030-00d4fb20b1ae // indirect gopkg.in/yaml.v2 v2.4.0 ) diff --git a/go.sum b/go.sum index f4340dcbb..9f14e6d16 100644 --- a/go.sum +++ b/go.sum @@ -20,8 +20,8 @@ cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmW cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg= cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8= cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0= -cloud.google.com/go v0.82.0 h1:FZ4B2YAzCzkwzGEOp1dqG8sAa3zNIvro1fHRTrB81RU= -cloud.google.com/go v0.82.0/go.mod h1:vlKccHJGuFBFufnAnuB08dfEH9Y3H7dzDzRECFdC2TA= +cloud.google.com/go v0.83.0 h1:bAMqZidYkmIsUqe6PtkEPT7Q+vfizScn+jfNA6jwK9c= +cloud.google.com/go v0.83.0/go.mod h1:Z7MJUsANfY0pYPdw0lbnivPx4/vhy/e2FEkSkF7vAVY= cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o= cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE= cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc= @@ -96,8 +96,8 @@ github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdko github.com/SAP/go-hdb v0.14.1/go.mod h1:7fdQLVC2lER3urZLjZCm0AuMQfApof92n3aylBPEkMo= github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo= github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI= -github.com/VictoriaMetrics/fastcache v1.5.8 h1:XW+YVx9lEXITBVv35ugK9OyotdNJVcbza69o3jmqWuI= -github.com/VictoriaMetrics/fastcache v1.5.8/go.mod h1:SiMZNgwEPJ9qWLshu9tyuE6bKc9ZWYhcNV/L7jurprQ= +github.com/VictoriaMetrics/fastcache v1.6.0 h1:C/3Oi3EiBCqufydp1neRZkqcwmEiuRT9c3fqvvgKm5o= +github.com/VictoriaMetrics/fastcache v1.6.0/go.mod 
h1:0qHz5QP0GMX4pfmMA/zt5RgfNuXJrTP0zS7DqpHGGTw= github.com/VictoriaMetrics/fasthttp v1.0.15 h1:UaX6kOxcQRtwMWBCX5avt2d1IzHp8qK8OUpUswz5akQ= github.com/VictoriaMetrics/fasthttp v1.0.15/go.mod h1:s9o5H4T58Kt4CTrdyJp4RorBKCwY7gRVS3N2JAUJ9jw= github.com/VictoriaMetrics/metrics v1.12.2/go.mod h1:Z1tSfPfngDn12bTfZSCqArT3OPY3u88J12hSoOhuiRE= @@ -145,8 +145,8 @@ github.com/aws/aws-sdk-go v1.29.16/go.mod h1:1KvfttTE3SPKMpo8g2c6jL3ZKfXtFvKscTg github.com/aws/aws-sdk-go v1.30.12/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0= github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48= github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= -github.com/aws/aws-sdk-go v1.38.43 h1:OKe9+Cdmrkhe0KXgpKhrDqidPhXQ4bv1FzzKnrmTJ5g= -github.com/aws/aws-sdk-go v1.38.43/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= +github.com/aws/aws-sdk-go v1.38.56 h1:JI5bnuDfjVLgnBaDHeZO5btxGbYCQ5QA3P0maYtwPQw= +github.com/aws/aws-sdk-go v1.38.56/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g= github.com/benbjohnson/immutable v0.2.1/go.mod h1:uc6OHo6PN2++n98KHLxW8ef4W42ylHiQSENghE1ezxI= github.com/benbjohnson/tmpl v1.0.0/go.mod h1:igT620JFIi44B6awvU9IsDhR77IXWtFigTLil/RPdps= @@ -234,8 +234,9 @@ github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLi github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU= -github.com/fatih/color v1.10.0 h1:s36xzo75JdqLaaWoiEHk767eHiwo0598uUxyfiPkDsg= github.com/fatih/color v1.10.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM= +github.com/fatih/color v1.12.0 h1:mRhaKNwANqRgUBGKmnI5ZxEk7QXmjQeCcuYFMX2bfcc= +github.com/fatih/color v1.12.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM= github.com/fogleman/gg v1.2.1-0.20190220221249-0403632d5b90/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k= github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k= github.com/foxcpp/go-mockdns v0.0.0-20201212160233-ede2f9158d15/go.mod h1:tPg4cp4nseejPd+UKxtCVQ2hUxNTZ7qQZJa7CLriIeo= @@ -257,6 +258,7 @@ github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2 github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.10.0 h1:dXFJfIHVvUcpSgDOV+Ne6t7jXri8Tfv2uOLHUZ2XNuo= github.com/go-kit/kit v0.10.0/go.mod h1:xUsJbQ/Fp4kEt7AFgCuvyX4a71u8h9jB8tj/ORgOZ7o= +github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-logfmt/logfmt v0.5.0 h1:TrB8swr/68K7m9CcGut2g3UOihhbcbiMAYiuTXdEih4= @@ -430,16 +432,18 @@ github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/ github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU= github.com/google/go-cmp 
v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no= github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0= -github.com/google/martian/v3 v3.1.0 h1:wCKgOCHuUEVfsaQLpPSJb7VdYCdTVZQAuOdYm1yc/60= github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0= +github.com/google/martian/v3 v3.2.1 h1:d8MncMlErDFTwQGBK1xhv026j9kqhvw1Qv9IbWT1VLQ= +github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk= github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= @@ -453,7 +457,7 @@ github.com/google/pprof v0.0.0-20201117184057-ae444373da19/go.mod h1:kpwsk12EmLe github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= -github.com/google/pprof v0.0.0-20210506205249-923b5ab0fc1a/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= +github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= @@ -532,8 +536,8 @@ github.com/influxdata/flux v0.113.0/go.mod h1:3TJtvbm/Kwuo5/PEo5P6HUzwVg4bXWkb2w github.com/influxdata/httprouter v1.3.1-0.20191122104820-ee83e2772f69/go.mod h1:pwymjR6SrP3gD3pRj9RJwdl1j5s3doEEV8gS4X9qSzA= github.com/influxdata/influxdb v1.8.0/go.mod h1:SIzcnsjaHRFpmlxpJ4S3NT64qtEKYweNTUMb/vh0OMQ= github.com/influxdata/influxdb v1.8.3/go.mod h1:JugdFhsvvI8gadxOI6noqNeeBHvWNTbfYGtiAn+2jhI= -github.com/influxdata/influxdb v1.9.0 h1:9z/aRmTpWT1rIm4EN+qTJTZqgEdLGZ4xRMgvA276UEA= -github.com/influxdata/influxdb v1.9.0/go.mod h1:UEe3MeD9AaP5rlPIes102IhYua3FhIWZuOXNHxDjSrI= +github.com/influxdata/influxdb v1.9.1 h1:YdRsjmSF+RbxdSuTVC1GkVHYaLjW2y6ojUD5lZ0omDM= +github.com/influxdata/influxdb v1.9.1/go.mod h1:UEe3MeD9AaP5rlPIes102IhYua3FhIWZuOXNHxDjSrI= github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo= github.com/influxdata/influxql v1.1.0/go.mod h1:KpVI7okXjK6PRi3Z5B+mtKZli+R1DnZgb3N+tzevNgo= github.com/influxdata/influxql v1.1.1-0.20200828144457-65d3ef77d385/go.mod h1:gHp9y86a/pxhjJ+zMjNXiQAA197Xk9wLxaz+fGG+kWk= @@ -562,6 +566,7 @@ 
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/u github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= +github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU= github.com/jstemmer/go-junit-report v0.9.1 h1:6QPYqodiu3GuPL+7mfx+NwDdp2eTkp9IfEUpgAwUN0o= github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk= @@ -581,8 +586,9 @@ github.com/klauspost/compress v1.4.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0 github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= github.com/klauspost/compress v1.10.7/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs= github.com/klauspost/compress v1.11.0/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs= -github.com/klauspost/compress v1.12.2 h1:2KCfW3I9M7nSc5wOqXAlW2v2U6v+w6cbjvbfp+OykW8= github.com/klauspost/compress v1.12.2/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg= +github.com/klauspost/compress v1.13.0 h1:2T7tUoQrQT+fQWdaY5rjWztFGAFwbGD04iPJg90ZiOs= +github.com/klauspost/compress v1.13.0/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg= github.com/klauspost/cpuid v0.0.0-20170728055534-ae7887de9fa5/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek= github.com/klauspost/crc32 v0.0.0-20161016154125-cb6bfca970f6/go.mod h1:+ZoRqAPRLkC4NPOvfYeR5KNOrY6TD+/sAC3HXPZgDYg= github.com/klauspost/pgzip v1.0.2-0.20170402124221-0bf5dcad4ada/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs= @@ -624,12 +630,14 @@ github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNx github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84= github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE= -github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHXY= github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU= +github.com/mattn/go-isatty v0.0.13 h1:qdl+GuBjcsKKDco5BsxPJlId98mSWNKqYA+Co0SC1yA= +github.com/mattn/go-isatty v0.0.13/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU= github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU= github.com/mattn/go-runewidth v0.0.3/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU= -github.com/mattn/go-runewidth v0.0.12 h1:Y41i/hVW3Pgwr8gV+J23B9YEY0zxjptBuCWEaxmAOow= github.com/mattn/go-runewidth v0.0.12/go.mod h1:RAqKPSqVFrSLVXbA8x7dzmKdmGzieGRCM46jaSJTDAk= +github.com/mattn/go-runewidth v0.0.13 h1:lTGmDsbAYt5DmK6OnoV7EuIF1wEIFAcxld6ypU4OSgU= +github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= github.com/mattn/go-sqlite3 v1.11.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc= github.com/mattn/go-tty v0.0.0-20180907095812-13ff1204f104/go.mod h1:XPvLUNfbS4fJH25nqRHfWLMa1ONC8Amw+mIA639KxkE= github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU= @@ -738,8 +746,8 @@ github.com/prometheus/client_golang v1.5.1/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3O 
github.com/prometheus/client_golang v1.6.0/go.mod h1:ZLOG9ck3JLRdB5MgO8f+lLTe83AXG6ro35rLTxvnIl4= github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M= github.com/prometheus/client_golang v1.8.0/go.mod h1:O9VU6huf47PktckDQfMTX0Y8tY0/7TSWwj+ITvv0TnM= -github.com/prometheus/client_golang v1.10.0 h1:/o0BDeWzLWXNZ+4q5gXltUvaMpJqckTa+jTNoB+z4cg= -github.com/prometheus/client_golang v1.10.0/go.mod h1:WJM3cc3yu7XKBKa/I8WeZm+V3eltZnBwfENSU7mdogU= +github.com/prometheus/client_golang v1.11.0 h1:HNkLOAEQMIDv/K+04rukrLx6ch7msSRwf3/SASFAGtQ= +github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= @@ -755,9 +763,9 @@ github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8b github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo= github.com/prometheus/common v0.14.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s= github.com/prometheus/common v0.15.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s= -github.com/prometheus/common v0.18.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s= -github.com/prometheus/common v0.25.0 h1:IjJYZJCI8HZYtqA3xYwGyDzSCy1r4CA2GRh+4vdOmtE= -github.com/prometheus/common v0.25.0/go.mod h1:H6QK/N6XVT42whUeIdI3dp36w49c+/iMDk7UAI2qm7Q= +github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc= +github.com/prometheus/common v0.28.0 h1:vGVfV9KrDTvWt5boZO0I19g2E3CsWfpPPKZM9dt3mEw= +github.com/prometheus/common v0.28.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= @@ -1030,8 +1038,8 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc= golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20210520170846-37e1c6afe023 h1:ADo5wSpq2gqaCGQWzk7S5vd//0iyyLeAratkEoG5dLE= -golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.0.0-20210525063256-abc453219eb5 h1:wjuX4b5yYQnEQHzd+CBcrcC6OVR2J1CN6mUy0oSxIPo= +golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -1044,7 +1052,6 @@ golang.org/x/oauth2 
v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210413134643-5e61552d6c78/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210427180440-81ed05c6b58c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c h1:pkQiBZBvdos9qq4wBAHqlzuZHEXo07pqV06ef90u1WI= golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -1129,17 +1136,19 @@ golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210309074719-68d13333faf2/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210412220455-f1c623a9e750/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210503080704-8803ae5d1324/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210514084401-e8d321eab015 h1:hZR0X1kPW+nwyJ9xRxqZk1vx5RUObAPBdKVvXPDUH/E= golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210608053332-aa57babbf139 h1:C+AwYEtBp/VQwoLntUmQ/yx3MS9vmZaKNdw5eOpoQe8= +golang.org/x/sys v0.0.0-20210608053332-aa57babbf139/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -1231,8 +1240,9 @@ golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4f golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod 
h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0= -golang.org/x/tools v0.1.1 h1:wGiQel/hW0NnEkJUk8lbzkX2gFJU6PFxf1v5OlCfuOs= golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.2 h1:kRBLX7v7Af8W7Gdbbc908OJcdgtK8bOz9Uaj8/F1ACA= +golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= @@ -1267,9 +1277,9 @@ google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjR google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU= google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94= google.golang.org/api v0.45.0/go.mod h1:ISLIJCedJolbZvDfAk+Ctuq5hf+aJ33WgtUsfyFoLXA= -google.golang.org/api v0.46.0/go.mod h1:ceL4oozhkAiTID8XMmJBsIxID/9wMXJVVFXPg4ylg3I= -google.golang.org/api v0.47.0 h1:sQLWZQvP6jPGIP4JGPkJu4zHswrv81iobiyszr3b/0I= google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo= +google.golang.org/api v0.48.0 h1:RDAPWfNFY06dffEXfn7hZF5Fr1ZbnChzfQZAPyBd1+I= +google.golang.org/api v0.48.0/go.mod h1:71Pr1vy+TAZRPkPs/xlCf5SsU8WjuAWv1Pfjbtukyy4= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= @@ -1326,11 +1336,11 @@ google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6D google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A= google.golang.org/genproto v0.0.0-20210413151531-c14fb6ef47c3/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= google.golang.org/genproto v0.0.0-20210420162539-3c870d7478d2/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= -google.golang.org/genproto v0.0.0-20210429181445-86c259c2b4ab/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= -google.golang.org/genproto v0.0.0-20210517163617-5e0236093d7a/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= -google.golang.org/genproto v0.0.0-20210518161634-ec7691c0a37d h1:bRz6UmsZEz/CzoTjUDp4ZcdguhSWi6CyU299wMQBpZU= -google.golang.org/genproto v0.0.0-20210518161634-ec7691c0a37d/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= +google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0= +google.golang.org/genproto v0.0.0-20210604141403-392c879c8b08/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0= +google.golang.org/genproto v0.0.0-20210607140030-00d4fb20b1ae h1:2dB4bZ/B7RJdKuvHk3mKTzL2xwrikb+Y/QQy7WdyBPk= +google.golang.org/genproto v0.0.0-20210607140030-00d4fb20b1ae/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0= google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.20.0/go.mod 
h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM= @@ -1361,6 +1371,7 @@ google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQ google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= google.golang.org/grpc v1.38.0 h1:/9BgsAsa5nWe26HqOlvlgJnqBuktYOLCgjCPqsa56W0= google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= +google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= diff --git a/lib/filestream/filestream_solaris.go b/lib/filestream/filestream_solaris.go new file mode 100644 index 000000000..631a915d4 --- /dev/null +++ b/lib/filestream/filestream_solaris.go @@ -0,0 +1,30 @@ +package filestream + +import ( + "fmt" + + "golang.org/x/sys/unix" +) + +func (st *streamTracker) adviseDontNeed(n int, fdatasync bool) error { + st.length += uint64(n) + if st.fd == 0 { + return nil + } + if st.length < dontNeedBlockSize { + return nil + } + blockSize := st.length - (st.length % dontNeedBlockSize) + if fdatasync { + if err := unix.Fsync(int(st.fd)); err != nil { + return fmt.Errorf("unix.Fsync error: %w", err) + } + } + st.offset += blockSize + st.length -= blockSize + return nil +} + +func (st *streamTracker) close() error { + return nil +} diff --git a/lib/fs/fadvise_solaris.go b/lib/fs/fadvise_solaris.go new file mode 100644 index 000000000..2a158b771 --- /dev/null +++ b/lib/fs/fadvise_solaris.go @@ -0,0 +1,8 @@ +package fs + +import "os" + +func fadviseSequentialRead(f *os.File, prefetch bool) error { + // TODO: implement this properly + return nil +} diff --git a/lib/fs/fs_solaris.go b/lib/fs/fs_solaris.go new file mode 100644 index 000000000..8cddca829 --- /dev/null +++ b/lib/fs/fs_solaris.go @@ -0,0 +1,68 @@ +package fs + +import ( + "fmt" + "os" + + "github.com/VictoriaMetrics/VictoriaMetrics/lib/logger" + "golang.org/x/sys/unix" +) + +func mmap(fd int, length int) (data []byte, err error) { + return unix.Mmap(fd, 0, length, unix.PROT_READ, unix.MAP_SHARED) + +} +func mUnmap(data []byte) error { + return unix.Munmap(data) +} + +func mustSyncPath(path string) { + d, err := os.Open(path) + if err != nil { + logger.Panicf("FATAL: cannot open %q: %s", path, err) + } + if err := d.Sync(); err != nil { + _ = d.Close() + logger.Panicf("FATAL: cannot flush %q to storage: %s", path, err) + } + if err := d.Close(); err != nil { + logger.Panicf("FATAL: cannot close %q: %s", path, err) + } +} + +func createFlockFile(flockFile string) (*os.File, error) { + flockF, err := os.Create(flockFile) + if err != nil { + return nil, fmt.Errorf("cannot create lock file %q: %w", flockFile, err) + } + + flock := unix.Flock_t{ + Type: unix.F_WRLCK, + Start: 0, + Len: 0, + Whence: 0, + } + if err := unix.FcntlFlock(flockF.Fd(), unix.F_SETLK, &flock); err != nil { + return nil, fmt.Errorf("cannot acquire lock on file %q: %w", flockFile, err) + } + return flockF, nil +} + +func mustGetFreeSpace(path string) uint64 { + d, err := os.Open(path) + if err != nil { + logger.Panicf("FATAL: cannot determine free disk space on %q: %s", path, err) + } + defer MustClose(d) + + fd := d.Fd() + var stat unix.Statvfs_t + if err := unix.Fstatvfs(int(fd), 
&stat); err != nil { + logger.Panicf("FATAL: cannot determine free disk space on %q: %s", path, err) + } + return freeSpace(stat) +} + +func freeSpace(stat unix.Statvfs_t) uint64 { + return uint64(stat.Bavail) * uint64(stat.Bsize) +} diff --git a/lib/httpserver/httpserver.go b/lib/httpserver/httpserver.go index a5dff2e55..9c61bf767 100644 --- a/lib/httpserver/httpserver.go +++ b/lib/httpserver/httpserver.go @@ -216,7 +216,9 @@ func handlerWrapper(s *server, w http.ResponseWriter, r *http.Request, rh Reques // The following recover() code works around this by explicitly stopping the process after logging the panic. // See https://github.com/golang/go/issues/16542#issuecomment-246549902 for details. defer func() { - if err := recover(); err != nil { + // need to check for abortHandler + // https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1353 + if err := recover(); err != nil && err != http.ErrAbortHandler { buf := make([]byte, 1<<20) n := runtime.Stack(buf, false) fmt.Fprintf(os.Stderr, "panic: %v\n\n%s", err, buf[:n]) diff --git a/lib/memory/memory_solaris.go b/lib/memory/memory_solaris.go new file mode 100644 index 000000000..2e29d9ac6 --- /dev/null +++ b/lib/memory/memory_solaris.go @@ -0,0 +1,20 @@ +package memory + +import ( + "github.com/VictoriaMetrics/VictoriaMetrics/lib/logger" + "golang.org/x/sys/unix" +) + +const PHYS_PAGES = 0x1f4 + +func sysTotalMemory() int { + memPageSize := unix.Getpagesize() + // https://man7.org/linux/man-pages/man3/sysconf.3.html + // _SC_PHYS_PAGES + memPagesCnt, err := unix.Sysconf(PHYS_PAGES) + if err != nil { + logger.Panicf("FATAL: error in unix.Sysconf: %s", err) + } + + return memPageSize * int(memPagesCnt) +} diff --git a/lib/promrelabel/config.go b/lib/promrelabel/config.go index 29d624174..3c9e13660 100644 --- a/lib/promrelabel/config.go +++ b/lib/promrelabel/config.go @@ -25,7 +25,8 @@ type RelabelConfig struct { // ParsedConfigs represents parsed relabel configs. type ParsedConfigs struct { - prcs []*parsedRelabelConfig + prcs []*parsedRelabelConfig + relabelDebug bool } // Len returns the number of relabel configs in pcs. @@ -43,19 +44,20 @@ func (pcs *ParsedConfigs) String() string { } var sb strings.Builder for _, prc := range pcs.prcs { - fmt.Fprintf(&sb, "%s", prc.String()) + fmt.Fprintf(&sb, "%s,", prc.String()) } + fmt.Fprintf(&sb, "relabelDebug=%v", pcs.relabelDebug) return sb.String() } // LoadRelabelConfigs loads relabel configs from the given path. -func LoadRelabelConfigs(path string) (*ParsedConfigs, error) { +func LoadRelabelConfigs(path string, relabelDebug bool) (*ParsedConfigs, error) { data, err := ioutil.ReadFile(path) if err != nil { return nil, fmt.Errorf("cannot read `relabel_configs` from %q: %w", path, err) } data = envtemplate.Replace(data) - pcs, err := ParseRelabelConfigsData(data) + pcs, err := ParseRelabelConfigsData(data, relabelDebug) if err != nil { return nil, fmt.Errorf("cannot unmarshal `relabel_configs` from %q: %w", path, err) } @@ -63,16 +65,16 @@ func LoadRelabelConfigs(path string) (*ParsedConfigs, error) { } // ParseRelabelConfigsData parses relabel configs from the given data. -func ParseRelabelConfigsData(data []byte) (*ParsedConfigs, error) { +func ParseRelabelConfigsData(data []byte, relabelDebug bool) (*ParsedConfigs, error) { var rcs []RelabelConfig if err := yaml.UnmarshalStrict(data, &rcs); err != nil { return nil, err } - return ParseRelabelConfigs(rcs) + return ParseRelabelConfigs(rcs, relabelDebug) } // ParseRelabelConfigs parses rcs to dst. 
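The promrelabel hunks above thread a new `relabelDebug` flag through `LoadRelabelConfigs`, `ParseRelabelConfigsData` and `ParseRelabelConfigs`, storing it on `ParsedConfigs`; later in this patch it is surfaced to users as the `relabel_debug` and `metric_relabel_debug` scrape-config options. Below is a minimal sketch (not part of the patch) of how a caller might exercise the new parameter; the YAML rule and label values are hypothetical, while `ParseRelabelConfigsData` and `Apply` are the functions this patch changes.

```go
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
)

func main() {
	// A hypothetical relabeling rule that removes the `env` label.
	config := []byte(`
- action: labeldrop
  regex: env
`)
	// relabelDebug=true makes Apply log "Relabel In" / "Relabel Out" pairs.
	pcs, err := promrelabel.ParseRelabelConfigsData(config, true)
	if err != nil {
		panic(err)
	}
	labels := []prompbmarshal.Label{
		{Name: "__name__", Value: "http_requests_total"},
		{Name: "env", Value: "staging"},
	}
	result := pcs.Apply(labels, 0, true)
	// In debug mode Apply drops the resulting labels after logging them,
	// so debug runs only log and do not produce series.
	fmt.Println(len(result)) // prints 0
}
```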
-func ParseRelabelConfigs(rcs []RelabelConfig) (*ParsedConfigs, error) { +func ParseRelabelConfigs(rcs []RelabelConfig, relabelDebug bool) (*ParsedConfigs, error) { if len(rcs) == 0 { return nil, nil } @@ -85,7 +87,8 @@ func ParseRelabelConfigs(rcs []RelabelConfig) (*ParsedConfigs, error) { prcs[i] = prc } return &ParsedConfigs{ - prcs: prcs, + prcs: prcs, + relabelDebug: relabelDebug, }, nil } diff --git a/lib/promrelabel/config_test.go b/lib/promrelabel/config_test.go index f514d4d9a..a590c3f1e 100644 --- a/lib/promrelabel/config_test.go +++ b/lib/promrelabel/config_test.go @@ -7,7 +7,7 @@ import ( func TestLoadRelabelConfigsSuccess(t *testing.T) { path := "testdata/relabel_configs_valid.yml" - pcs, err := LoadRelabelConfigs(path) + pcs, err := LoadRelabelConfigs(path, false) if err != nil { t.Fatalf("cannot load relabel configs from %q: %s", path, err) } @@ -19,7 +19,7 @@ func TestLoadRelabelConfigsSuccess(t *testing.T) { func TestLoadRelabelConfigsFailure(t *testing.T) { f := func(path string) { t.Helper() - rcs, err := LoadRelabelConfigs(path) + rcs, err := LoadRelabelConfigs(path, false) if err == nil { t.Fatalf("expecting non-nil error") } @@ -38,7 +38,7 @@ func TestLoadRelabelConfigsFailure(t *testing.T) { func TestParseRelabelConfigsSuccess(t *testing.T) { f := func(rcs []RelabelConfig, pcsExpected *ParsedConfigs) { t.Helper() - pcs, err := ParseRelabelConfigs(rcs) + pcs, err := ParseRelabelConfigs(rcs, false) if err != nil { t.Fatalf("unexected error: %s", err) } @@ -72,7 +72,7 @@ func TestParseRelabelConfigsSuccess(t *testing.T) { func TestParseRelabelConfigsFailure(t *testing.T) { f := func(rcs []RelabelConfig) { t.Helper() - pcs, err := ParseRelabelConfigs(rcs) + pcs, err := ParseRelabelConfigs(rcs, false) if err == nil { t.Fatalf("expecting non-nil error") } diff --git a/lib/promrelabel/relabel.go b/lib/promrelabel/relabel.go index 1ee0e9868..fa8875986 100644 --- a/lib/promrelabel/relabel.go +++ b/lib/promrelabel/relabel.go @@ -41,11 +41,20 @@ func (prc *parsedRelabelConfig) String() string { // // The returned labels at labels[labelsOffset:] are sorted. func (pcs *ParsedConfigs) Apply(labels []prompbmarshal.Label, labelsOffset int, isFinalize bool) []prompbmarshal.Label { + var inStr string + relabelDebug := false if pcs != nil { + relabelDebug = pcs.relabelDebug + if relabelDebug { + inStr = labelsToString(labels[labelsOffset:]) + } for _, prc := range pcs.prcs { tmp := prc.apply(labels, labelsOffset) if len(tmp) == labelsOffset { // All the labels have been removed. + if pcs.relabelDebug { + logger.Infof("\nRelabel In: %s\nRelabel Out: DROPPED - all labels removed", inStr) + } return tmp } labels = tmp @@ -56,6 +65,20 @@ func (pcs *ParsedConfigs) Apply(labels []prompbmarshal.Label, labelsOffset int, labels = FinalizeLabels(labels[:labelsOffset], labels[labelsOffset:]) } SortLabels(labels[labelsOffset:]) + if relabelDebug { + if len(labels) == labelsOffset { + logger.Infof("\nRelabel In: %s\nRelabel Out: DROPPED - all labels removed", inStr) + return labels + } + outStr := labelsToString(labels[labelsOffset:]) + if inStr == outStr { + logger.Infof("\nRelabel In: %s\nRelabel Out: KEPT AS IS - no change", inStr) + } else { + logger.Infof("\nRelabel In: %s\nRelabel Out: %s", inStr, outStr) + } + // Drop labels + labels = labels[:labelsOffset] + } return labels } @@ -412,3 +435,33 @@ func CleanLabels(labels []prompbmarshal.Label) { label.Value = "" } } + +func labelsToString(labels []prompbmarshal.Label) string { + labelsCopy := append([]prompbmarshal.Label{}, labels...) 
+ SortLabels(labelsCopy) + mname := "" + for _, label := range labelsCopy { + if label.Name == "__name__" { + mname = label.Value + break + } + } + if mname != "" && len(labelsCopy) <= 1 { + return mname + } + b := []byte(mname) + b = append(b, '{') + for i, label := range labelsCopy { + if label.Name == "__name__" { + continue + } + b = append(b, label.Name...) + b = append(b, '=') + b = strconv.AppendQuote(b, label.Value) + if i+1 < len(labelsCopy) { + b = append(b, ',') + } + } + b = append(b, '}') + return string(b) +} diff --git a/lib/promrelabel/relabel_test.go b/lib/promrelabel/relabel_test.go index 742219921..a77e40abe 100644 --- a/lib/promrelabel/relabel_test.go +++ b/lib/promrelabel/relabel_test.go @@ -7,10 +7,57 @@ import ( "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal" ) +func TestLabelsToString(t *testing.T) { + f := func(labels []prompbmarshal.Label, sExpected string) { + t.Helper() + s := labelsToString(labels) + if s != sExpected { + t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", s, sExpected) + } + } + f(nil, "{}") + f([]prompbmarshal.Label{ + { + Name: "__name__", + Value: "foo", + }, + }, "foo") + f([]prompbmarshal.Label{ + { + Name: "foo", + Value: "bar", + }, + }, `{foo="bar"}`) + f([]prompbmarshal.Label{ + { + Name: "foo", + Value: "bar", + }, + { + Name: "a", + Value: "bc", + }, + }, `{a="bc",foo="bar"}`) + f([]prompbmarshal.Label{ + { + Name: "foo", + Value: "bar", + }, + { + Name: "__name__", + Value: "xxx", + }, + { + Name: "a", + Value: "bc", + }, + }, `xxx{a="bc",foo="bar"}`) +} + func TestApplyRelabelConfigs(t *testing.T) { f := func(config string, labels []prompbmarshal.Label, isFinalize bool, resultExpected []prompbmarshal.Label) { t.Helper() - pcs, err := ParseRelabelConfigsData([]byte(config)) + pcs, err := ParseRelabelConfigsData([]byte(config), false) if err != nil { t.Fatalf("cannot parse %q: %s", config, err) } diff --git a/lib/promrelabel/relabel_timing_test.go b/lib/promrelabel/relabel_timing_test.go index bdaf05986..cddb84e12 100644 --- a/lib/promrelabel/relabel_timing_test.go +++ b/lib/promrelabel/relabel_timing_test.go @@ -840,7 +840,7 @@ func BenchmarkApplyRelabelConfigs(b *testing.B) { } func mustParseRelabelConfigs(config string) *ParsedConfigs { - pcs, err := ParseRelabelConfigsData([]byte(config)) + pcs, err := ParseRelabelConfigsData([]byte(config), false) if err != nil { panic(fmt.Errorf("unexpected error: %w", err)) } diff --git a/lib/promscrape/client.go b/lib/promscrape/client.go index 5baa7cbab..55e856cea 100644 --- a/lib/promscrape/client.go +++ b/lib/promscrape/client.go @@ -192,8 +192,10 @@ func (c *client) GetStreamReader() (*streamReader, error) { } scrapesOK.Inc() return &streamReader{ - r: resp.Body, - cancel: cancel, + r: resp.Body, + cancel: cancel, + scrapeURL: c.scrapeURL, + maxBodySize: int64(c.hc.MaxResponseBodySize), }, nil } @@ -328,14 +330,20 @@ func doRequestWithPossibleRetry(hc *fasthttp.HostClient, req *fasthttp.Request, } type streamReader struct { - r io.ReadCloser - cancel context.CancelFunc - bytesRead int64 + r io.ReadCloser + cancel context.CancelFunc + bytesRead int64 + scrapeURL string + maxBodySize int64 } func (sr *streamReader) Read(p []byte) (int, error) { n, err := sr.r.Read(p) sr.bytesRead += int64(n) + if err == nil && sr.bytesRead > sr.maxBodySize { + err = fmt.Errorf("the response from %q exceeds -promscrape.maxScrapeSize=%d; "+ + "either reduce the response size for the target or increase -promscrape.maxScrapeSize", sr.scrapeURL, sr.maxBodySize) + } return n, err } diff --git 
a/lib/promscrape/config.go b/lib/promscrape/config.go index 7ad155915..e5a088ebf 100644 --- a/lib/promscrape/config.go +++ b/lib/promscrape/config.go @@ -118,6 +118,8 @@ type ScrapeConfig struct { GCESDConfigs []gce.SDConfig `yaml:"gce_sd_configs,omitempty"` // These options are supported only by lib/promscrape. + RelabelDebug bool `yaml:"relabel_debug,omitempty"` + MetricRelabelDebug bool `yaml:"metric_relabel_debug,omitempty"` DisableCompression bool `yaml:"disable_compression,omitempty"` DisableKeepAlive bool `yaml:"disable_keepalive,omitempty"` StreamParse bool `yaml:"stream_parse,omitempty"` @@ -573,11 +575,11 @@ func getScrapeWorkConfig(sc *ScrapeConfig, baseDir string, globalCfg *GlobalConf if err != nil { return nil, fmt.Errorf("cannot parse proxy auth config for `job_name` %q: %w", jobName, err) } - relabelConfigs, err := promrelabel.ParseRelabelConfigs(sc.RelabelConfigs) + relabelConfigs, err := promrelabel.ParseRelabelConfigs(sc.RelabelConfigs, sc.RelabelDebug) if err != nil { return nil, fmt.Errorf("cannot parse `relabel_configs` for `job_name` %q: %w", jobName, err) } - metricRelabelConfigs, err := promrelabel.ParseRelabelConfigs(sc.MetricRelabelConfigs) + metricRelabelConfigs, err := promrelabel.ParseRelabelConfigs(sc.MetricRelabelConfigs, sc.MetricRelabelDebug) if err != nil { return nil, fmt.Errorf("cannot parse `metric_relabel_configs` for `job_name` %q: %w", jobName, err) } diff --git a/lib/promscrape/discovery/eureka/eureka.go b/lib/promscrape/discovery/eureka/eureka.go index 54b7c03f2..1edb980cc 100644 --- a/lib/promscrape/discovery/eureka/eureka.go +++ b/lib/promscrape/discovery/eureka/eureka.go @@ -17,7 +17,7 @@ const appsAPIPath = "/apps" // See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka type SDConfig struct { Server string `yaml:"server,omitempty"` - HTTPClientConfig promauth.HTTPClientConfig `ymal:",inline"` + HTTPClientConfig promauth.HTTPClientConfig `yaml:",inline"` ProxyURL proxy.URL `yaml:"proxy_url,omitempty"` ProxyClientConfig promauth.ProxyClientConfig `yaml:",inline"` // RefreshInterval time.Duration `yaml:"refresh_interval"` diff --git a/lib/promscrape/scrapework.go b/lib/promscrape/scrapework.go index 3a0948298..792234429 100644 --- a/lib/promscrape/scrapework.go +++ b/lib/promscrape/scrapework.go @@ -305,6 +305,8 @@ func (sw *scrapeWork) scrapeInternal(scrapeTimestamp, realTimestamp int64) error wc.resetNoRows() up = 0 scrapesSkippedBySampleLimit.Inc() + err = fmt.Errorf("the response from %q exceeds sample_limit=%d; "+ + "either reduce the sample count for the target or increase sample_limit", sw.Config.ScrapeURL, sw.Config.SampleLimit) } sw.updateSeriesAdded(wc) seriesAdded := sw.finalizeSeriesAdded(samplesPostRelabeling) @@ -348,6 +350,12 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error { // after returning from the callback - this will result in data race. 
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825#issuecomment-723198247 samplesPostRelabeling += len(wc.writeRequest.Timeseries) + if sw.Config.SampleLimit > 0 && samplesPostRelabeling > sw.Config.SampleLimit { + wc.resetNoRows() + scrapesSkippedBySampleLimit.Inc() + return fmt.Errorf("the response from %q exceeds sample_limit=%d; "+ + "either reduce the sample count for the target or increase sample_limit", sw.Config.ScrapeURL, sw.Config.SampleLimit) + } sw.updateSeriesAdded(wc) startTime := time.Now() sw.PushData(&wc.writeRequest) diff --git a/lib/promscrape/scrapework_test.go b/lib/promscrape/scrapework_test.go index 79e6cb6ce..7d2eeced6 100644 --- a/lib/promscrape/scrapework_test.go +++ b/lib/promscrape/scrapework_test.go @@ -115,7 +115,9 @@ func TestScrapeWorkScrapeInternalSuccess(t *testing.T) { timestamp := int64(123000) if err := sw.scrapeInternal(timestamp, timestamp); err != nil { - t.Fatalf("unexpected error: %s", err) + if !strings.Contains(err.Error(), "sample_limit") { + t.Fatalf("unexpected error: %s", err) + } } if pushDataErr != nil { t.Fatalf("unexpected error: %s", pushDataErr) @@ -433,7 +435,7 @@ func timeseriesToString(ts *prompbmarshal.TimeSeries) string { } func mustParseRelabelConfigs(config string) *promrelabel.ParsedConfigs { - pcs, err := promrelabel.ParseRelabelConfigsData([]byte(config)) + pcs, err := promrelabel.ParseRelabelConfigsData([]byte(config), false) if err != nil { panic(fmt.Errorf("cannot parse %q: %w", config, err)) } diff --git a/lib/storage/index_db.go b/lib/storage/index_db.go index 5d3c75078..6f210ee16 100644 --- a/lib/storage/index_db.go +++ b/lib/storage/index_db.go @@ -2554,10 +2554,10 @@ func (is *indexSearch) updateMetricIDsForOrSuffixesNoFilter(tf *tagFilter, metri kb.B = append(kb.B, orSuffix...) kb.B = append(kb.B, tagSeparatorChar) lc, err := is.updateMetricIDsForOrSuffixNoFilter(kb.B, metricIDs, maxMetrics, maxLoopsCount-loopsCount) + loopsCount += lc if err != nil { return loopsCount, err } - loopsCount += lc if metricIDs.Len() >= maxMetrics { return loopsCount, nil } @@ -2575,10 +2575,10 @@ func (is *indexSearch) updateMetricIDsForOrSuffixesWithFilter(tf *tagFilter, met kb.B = append(kb.B, orSuffix...) 
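The two `lib/storage/index_db.go` hunks here both move `loopsCount += lc` ahead of the error check, so loops consumed by a failed `updateMetricIDsForOrSuffix*` call are still reported to the caller that budgets `maxLoopsCount`. A self-contained sketch of that pattern follows, with hypothetical names standing in for the index-search internals:

```go
package main

import "errors"

var errBudgetExceeded = errors.New("loops budget exceeded")

// step stands in for updateMetricIDsForOrSuffixNoFilter: it may perform part
// of its work before failing, and always reports the loops it consumed.
func step(budget int) (loops int, err error) {
	const cost = 10
	if budget < cost {
		return budget, errBudgetExceeded
	}
	return cost, nil
}

// search mirrors the fixed control flow: the callee's loops are added to the
// running total before the error is inspected, so no spent work is lost.
func search(maxLoopsCount int) (int, error) {
	loopsCount := 0
	for i := 0; i < 5; i++ {
		lc, err := step(maxLoopsCount - loopsCount)
		loopsCount += lc // count the work first, as in the patch
		if err != nil {
			return loopsCount, err
		}
	}
	return loopsCount, nil
}

func main() {
	n, err := search(25)
	println(n, err != nil) // 25 true: two full steps plus the partial one
}
```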
kb.B = append(kb.B, tagSeparatorChar) lc, err := is.updateMetricIDsForOrSuffixWithFilter(kb.B, metricIDs, sortedFilter, tf.isNegative, maxLoopsCount-loopsCount) + loopsCount += lc if err != nil { return loopsCount, err } - loopsCount += lc } return loopsCount, nil } diff --git a/lib/storage/storage.go b/lib/storage/storage.go index 31b250f8c..47d29dcbd 100644 --- a/lib/storage/storage.go +++ b/lib/storage/storage.go @@ -1898,24 +1898,28 @@ type dateMetricIDCache struct { byDate atomic.Value // Contains mutable map protected by mu - byDateMutable *byDateMetricIDMap - lastSyncTime uint64 - mu sync.Mutex + byDateMutable *byDateMetricIDMap + nextSyncDeadline uint64 + mu sync.Mutex } func newDateMetricIDCache() *dateMetricIDCache { var dmc dateMetricIDCache - dmc.Reset() + dmc.resetLocked() return &dmc } func (dmc *dateMetricIDCache) Reset() { dmc.mu.Lock() + dmc.resetLocked() + dmc.mu.Unlock() +} + +func (dmc *dateMetricIDCache) resetLocked() { // Do not reset syncsCount and resetsCount dmc.byDate.Store(newByDateMetricIDMap()) dmc.byDateMutable = newByDateMetricIDMap() - dmc.lastSyncTime = fasttime.UnixTimestamp() - dmc.mu.Unlock() + dmc.nextSyncDeadline = 10 + fasttime.UnixTimestamp() atomic.AddUint64(&dmc.resetsCount, 1) } @@ -1948,20 +1952,12 @@ func (dmc *dateMetricIDCache) Has(date, metricID uint64) bool { } // Slow path. Check mutable map. - currentTime := fasttime.UnixTimestamp() dmc.mu.Lock() v = dmc.byDateMutable.get(date) ok := v.Has(metricID) - mustSync := false - if currentTime-dmc.lastSyncTime > 10 { - mustSync = true - dmc.lastSyncTime = currentTime - } + dmc.syncLockedIfNeeded() dmc.mu.Unlock() - if mustSync { - dmc.sync() - } return ok } @@ -2000,21 +1996,47 @@ func (dmc *dateMetricIDCache) Set(date, metricID uint64) { dmc.mu.Unlock() } -func (dmc *dateMetricIDCache) sync() { - dmc.mu.Lock() +func (dmc *dateMetricIDCache) syncLockedIfNeeded() { + currentTime := fasttime.UnixTimestamp() + if currentTime >= dmc.nextSyncDeadline { + dmc.nextSyncDeadline = currentTime + 10 + dmc.syncLocked() + } +} + +func (dmc *dateMetricIDCache) syncLocked() { + if len(dmc.byDateMutable.m) == 0 { + // Nothing to sync. 
+ return + } byDate := dmc.byDate.Load().(*byDateMetricIDMap) - for date, e := range dmc.byDateMutable.m { + byDateMutable := dmc.byDateMutable + for date, e := range byDateMutable.m { v := byDate.get(date) - e.v.Union(v) + if v == nil { + continue + } + v = v.Clone() + v.Union(&e.v) + byDateMutable.m[date] = &byDateMetricIDEntry{ + date: date, + v: *v, + } + } + for date, e := range byDate.m { + v := byDateMutable.get(date) + if v != nil { + continue + } + byDateMutable.m[date] = e } dmc.byDate.Store(dmc.byDateMutable) dmc.byDateMutable = newByDateMetricIDMap() - dmc.mu.Unlock() atomic.AddUint64(&dmc.syncsCount, 1) if dmc.EntriesCount() > memory.Allowed()/128 { - dmc.Reset() + dmc.resetLocked() } } diff --git a/lib/storage/storage_test.go b/lib/storage/storage_test.go index b0e143b28..975ebe7bc 100644 --- a/lib/storage/storage_test.go +++ b/lib/storage/storage_test.go @@ -89,7 +89,9 @@ func testDateMetricIDCache(c *dateMetricIDCache, concurrent bool) error { return fmt.Errorf("c.Has(%d, %d) must return true, but returned false", date, metricID) } if i%11234 == 0 { - c.sync() + c.mu.Lock() + c.syncLocked() + c.mu.Unlock() } if i%34323 == 0 { c.Reset() @@ -103,7 +105,9 @@ func testDateMetricIDCache(c *dateMetricIDCache, concurrent bool) error { metricID := uint64(i) % 123 c.Set(date, metricID) } - c.sync() + c.mu.Lock() + c.syncLocked() + c.mu.Unlock() for i := 0; i < 1e5; i++ { date := uint64(i) % 3 metricID := uint64(i) % 123 diff --git a/lib/uint64set/uint64set.go b/lib/uint64set/uint64set.go index 4096c0aac..37c5417ac 100644 --- a/lib/uint64set/uint64set.go +++ b/lib/uint64set/uint64set.go @@ -79,9 +79,7 @@ func (s *Set) SizeBytes() uint64 { } n := uint64(unsafe.Sizeof(*s)) for i := range s.buckets { - b32 := &s.buckets[i] - n += uint64(unsafe.Sizeof(b32)) - n += b32.sizeBytes() + n += s.buckets[i].sizeBytes() } return n } @@ -411,7 +409,7 @@ type bucket32 struct { b16his []uint16 // buckets are sorted by b16his - buckets []bucket16 + buckets []*bucket16 } func (b *bucket32) getLen() int { @@ -434,7 +432,7 @@ func (b *bucket32) union(a *bucket32, mayOwn bool) { for j < len(a.b16his) { b16 := b.addBucket16(a.b16his[j]) if mayOwn { - *b16 = a.buckets[j] + *b16 = *a.buckets[j] } else { a.buckets[j].copyTo(b16) } @@ -445,7 +443,7 @@ func (b *bucket32) union(a *bucket32, mayOwn bool) { for j < len(a.b16his) && a.b16his[j] < b.b16his[i] { b16 := b.addBucket16(a.b16his[j]) if mayOwn { - *b16 = a.buckets[j] + *b16 = *a.buckets[j] } else { a.buckets[j].copyTo(b16) } @@ -455,7 +453,7 @@ func (b *bucket32) union(a *bucket32, mayOwn bool) { break } if b.b16his[i] == a.b16his[j] { - b.buckets[i].union(&a.buckets[j]) + b.buckets[i].union(a.buckets[j]) i++ j++ } @@ -481,7 +479,7 @@ func (b *bucket32) intersect(a *bucket32) { j := 0 for { for i < len(b.b16his) && j < len(a.b16his) && b.b16his[i] < a.b16his[j] { - b.buckets[i] = bucket16{} + *b.buckets[i] = bucket16{} i++ } if i >= len(b.b16his) { @@ -492,13 +490,13 @@ func (b *bucket32) intersect(a *bucket32) { } if j >= len(a.b16his) { for i < len(b.b16his) { - b.buckets[i] = bucket16{} + *b.buckets[i] = bucket16{} i++ } break } if b.b16his[i] == a.b16his[j] { - b.buckets[i].intersect(&a.buckets[j]) + b.buckets[i].intersect(a.buckets[j]) i++ j++ } @@ -506,16 +504,15 @@ func (b *bucket32) intersect(a *bucket32) { // Remove zero buckets b16his := b.b16his[:0] bs := b.buckets[:0] - for i := range b.buckets { - b32 := &b.buckets[i] - if b32.isZero() { + for i, b16 := range b.buckets { + if b16.isZero() { continue } b16his = append(b16his, b.b16his[i]) - 
bs = append(bs, *b32) + bs = append(bs, b16) } for i := len(bs); i < len(b.buckets); i++ { - b.buckets[i] = bucket16{} + b.buckets[i] = nil } b.hint = 0 b.b16his = b16his @@ -525,9 +522,9 @@ func (b *bucket32) intersect(a *bucket32) { func (b *bucket32) forEach(f func(part []uint64) bool) bool { xbuf := partBufPool.Get().(*[]uint64) buf := *xbuf - for i := range b.buckets { + for i, b16 := range b.buckets { hi16 := b.b16his[i] - buf = b.buckets[i].appendTo(buf[:0], b.hi, hi16) + buf = b16.appendTo(buf[:0], b.hi, hi16) if !f(buf) { return false } @@ -547,9 +544,7 @@ var partBufPool = &sync.Pool{ func (b *bucket32) sizeBytes() uint64 { n := uint64(unsafe.Sizeof(*b)) n += 2 * uint64(len(b.b16his)) - for i := range b.buckets { - b16 := &b.buckets[i] - n += uint64(unsafe.Sizeof(b16)) + for _, b16 := range b.buckets { n += b16.sizeBytes() } return n @@ -561,9 +556,11 @@ func (b *bucket32) copyTo(dst *bucket32) { // Do not reuse dst.buckets, since it may be used in other places. dst.buckets = nil if len(b.buckets) > 0 { - dst.buckets = make([]bucket16, len(b.buckets)) - for i := range b.buckets { - b.buckets[i].copyTo(&dst.buckets[i]) + dst.buckets = make([]*bucket16, len(b.buckets)) + for i, b16 := range b.buckets { + b16Dst := &bucket16{} + b16.copyTo(b16Dst) + dst.buckets[i] = b16Dst } } } @@ -617,7 +614,7 @@ func (b *bucket32) getOrCreateBucket16(hi uint16) *bucket16 { if n < 0 || n >= len(his) || his[n] != hi { return b.addBucketAtPos(hi, n) } - return &bs[n] + return bs[n] } func (b *bucket32) addSlow(hi, lo uint16) bool { @@ -635,8 +632,8 @@ func (b *bucket32) addSlow(hi, lo uint16) bool { func (b *bucket32) addBucket16(hi uint16) *bucket16 { b.b16his = append(b.b16his, hi) - b.buckets = append(b.buckets, bucket16{}) - return &b.buckets[len(b.buckets)-1] + b.buckets = append(b.buckets, &bucket16{}) + return b.buckets[len(b.buckets)-1] } func (b *bucket32) addBucketAtPos(hi uint16, pos int) *bucket16 { @@ -650,8 +647,8 @@ func (b *bucket32) addBucketAtPos(hi uint16, pos int) *bucket16 { b.b16his = append(b.b16his[:pos+1], b.b16his[pos:]...) b.b16his[pos] = hi b.buckets = append(b.buckets[:pos+1], b.buckets[pos:]...) - b16 := &b.buckets[pos] - *b16 = bucket16{} + b16 := &bucket16{} + b.buckets[pos] = b16 return b16 } diff --git a/snap/snapcraft.yaml b/snap/snapcraft.yaml index b68305af6..136859821 100644 --- a/snap/snapcraft.yaml +++ b/snap/snapcraft.yaml @@ -51,7 +51,7 @@ confinement: strict # use 'strict' once you have the right plugs and slots parts: build: plugin: go - go-channel: 1.15/stable + go-channel: 1.16/stable go-importpath: github.com/VictoriaMetrics/VictoriaMetrics source: . source-type: local @@ -80,4 +80,4 @@ layout: bind-file: $SNAP_DATA/etc/victoriametrics-scrape-config.yaml architectures: - build-on: ['arm64', 'amd64'] - run-on: ['arm64','amd64'] \ No newline at end of file + run-on: ['arm64','amd64'] diff --git a/vendor/cloud.google.com/go/CHANGES.md b/vendor/cloud.google.com/go/CHANGES.md index 66a6bdf2d..9931f7a8d 100644 --- a/vendor/cloud.google.com/go/CHANGES.md +++ b/vendor/cloud.google.com/go/CHANGES.md @@ -1,5 +1,20 @@ # Changes +## [0.83.0](https://www.github.com/googleapis/google-cloud-go/compare/v0.82.0...v0.83.0) (2021-06-02) + + +### Features + +* **dialogflow:** added a field in the query result to indicate whether slot filling is cancelled. 
([f9cda8f](https://www.github.com/googleapis/google-cloud-go/commit/f9cda8fb6c3d76a062affebe6649f0a43aeb96f3)) +* **essentialcontacts:** start generating apiv1 ([#4118](https://www.github.com/googleapis/google-cloud-go/issues/4118)) ([fe14afc](https://www.github.com/googleapis/google-cloud-go/commit/fe14afcf74e09089b22c4f5221cbe37046570fda)) +* **gsuiteaddons:** start generating apiv1 ([#4082](https://www.github.com/googleapis/google-cloud-go/issues/4082)) ([6de5c99](https://www.github.com/googleapis/google-cloud-go/commit/6de5c99173c4eeaf777af18c47522ca15637d232)) +* **osconfig:** OSConfig: add ExecResourceOutput and per step error message. ([f9cda8f](https://www.github.com/googleapis/google-cloud-go/commit/f9cda8fb6c3d76a062affebe6649f0a43aeb96f3)) +* **osconfig:** start generating apiv1alpha ([#4119](https://www.github.com/googleapis/google-cloud-go/issues/4119)) ([8ad471f](https://www.github.com/googleapis/google-cloud-go/commit/8ad471f26087ec076460df6dcf27769ffe1b8834)) +* **privatecatalog:** start generating apiv1beta1 ([500c1a6](https://www.github.com/googleapis/google-cloud-go/commit/500c1a6101f624cb6032f0ea16147645a02e7076)) +* **serviceusage:** start generating apiv1 ([#4120](https://www.github.com/googleapis/google-cloud-go/issues/4120)) ([e4531f9](https://www.github.com/googleapis/google-cloud-go/commit/e4531f93cfeb6388280bb253ef6eb231aba37098)) +* **shell:** start generating apiv1 ([500c1a6](https://www.github.com/googleapis/google-cloud-go/commit/500c1a6101f624cb6032f0ea16147645a02e7076)) +* **vpcaccess:** start generating apiv1 ([500c1a6](https://www.github.com/googleapis/google-cloud-go/commit/500c1a6101f624cb6032f0ea16147645a02e7076)) + ## [0.82.0](https://www.github.com/googleapis/google-cloud-go/compare/v0.81.0...v0.82.0) (2021-05-17) diff --git a/vendor/cloud.google.com/go/CONTRIBUTING.md b/vendor/cloud.google.com/go/CONTRIBUTING.md index 1d7870fa6..ee9846363 100644 --- a/vendor/cloud.google.com/go/CONTRIBUTING.md +++ b/vendor/cloud.google.com/go/CONTRIBUTING.md @@ -136,6 +136,9 @@ As part of the setup that follows, the following variables will be configured: - `GCLOUD_TESTS_GOLANG_KEYRING`: The full name of the keyring for the tests, in the form "projects/P/locations/L/keyRings/R". The creation of this is described below. +- `GCLOUD_TESTS_BIGTABLE_KEYRING`: The full name of the keyring for the bigtable tests, +in the form +"projects/P/locations/L/keyRings/R". The creation of this is described below. Expected to be single region. - `GCLOUD_TESTS_GOLANG_ZONE`: Compute Engine zone. Install the [gcloud command-line tool][gcloudcli] to your machine and use it to @@ -172,6 +175,7 @@ $ gcloud beta spanner instances create go-integration-test --config regional-us- $ export MY_KEYRING=some-keyring-name $ export MY_LOCATION=global +$ export MY_SINGLE_LOCATION=us-central1 # Creates a KMS keyring, in the same location as the default location for your # project's buckets. $ gcloud kms keyrings create $MY_KEYRING --location $MY_LOCATION @@ -182,10 +186,15 @@ $ gcloud kms keys create key2 --keyring $MY_KEYRING --location $MY_LOCATION --pu $ export GCLOUD_TESTS_GOLANG_KEYRING=projects/$GCLOUD_TESTS_GOLANG_PROJECT_ID/locations/$MY_LOCATION/keyRings/$MY_KEYRING # Authorizes Google Cloud Storage to encrypt and decrypt using key1. 
$ gsutil kms authorize -p $GCLOUD_TESTS_GOLANG_PROJECT_ID -k $GCLOUD_TESTS_GOLANG_KEYRING/cryptoKeys/key1 + +# Create KMS Key in one region for Bigtable +$ gcloud kms keys create key1 --keyring $MY_KEYRING --location $MY_SINGLE_LOCATION --purpose encryption +# Sets the GCLOUD_TESTS_BIGTABLE_KEYRING environment variable. +$ export GCLOUD_TESTS_BIGTABLE_KEYRING=projects/$GCLOUD_TESTS_GOLANG_PROJECT_ID/locations/$MY_SINGLE_LOCATION/keyRings/$MY_KEYRING # Authorizes Google Cloud Bigtable to encrypt and decrypt using key1 $ gcloud kms keys add-iam-policy-binding key1 \ --keyring $MY_KEYRING \ - --location $MY_LOCATION \ + --location $MY_SINGLE_LOCATION \ --role roles/cloudkms.cryptoKeyEncrypterDecrypter \ --member "${GCLOUD_TESTS_GOLANG_PROJECT_ID}@${GCLOUD_TESTS_GOLANG_PROJECT_ID}.iam.gserviceaccount.com" \ --project $GCLOUD_TESTS_GOLANG_PROJECT_ID diff --git a/vendor/cloud.google.com/go/go.mod b/vendor/cloud.google.com/go/go.mod index 1ff540182..04ee0cb44 100644 --- a/vendor/cloud.google.com/go/go.mod +++ b/vendor/cloud.google.com/go/go.mod @@ -6,18 +6,18 @@ require ( cloud.google.com/go/storage v1.10.0 github.com/golang/mock v1.5.0 github.com/golang/protobuf v1.5.2 - github.com/google/go-cmp v0.5.5 - github.com/google/martian/v3 v3.1.0 - github.com/google/pprof v0.0.0-20210506205249-923b5ab0fc1a + github.com/google/go-cmp v0.5.6 + github.com/google/martian/v3 v3.2.1 + github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22 github.com/googleapis/gax-go/v2 v2.0.5 github.com/jstemmer/go-junit-report v0.9.1 go.opencensus.io v0.23.0 golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 - golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420 golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c golang.org/x/text v0.3.6 - golang.org/x/tools v0.1.1 - google.golang.org/api v0.46.0 - google.golang.org/genproto v0.0.0-20210517163617-5e0236093d7a - google.golang.org/grpc v1.37.1 + golang.org/x/tools v0.1.2 + google.golang.org/api v0.47.0 + google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c + google.golang.org/grpc v1.38.0 + google.golang.org/protobuf v1.26.0 ) diff --git a/vendor/cloud.google.com/go/go.sum b/vendor/cloud.google.com/go/go.sum index b0aea2efc..6ed2bf87a 100644 --- a/vendor/cloud.google.com/go/go.sum +++ b/vendor/cloud.google.com/go/go.sum @@ -90,6 +90,8 @@ github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaS github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM= github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw= github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= +github.com/golang/snappy v0.0.3 h1:fHPg5GQYlCeLIPB9BZqMVR5nR9A+IM5zcgeTdjMYmLA= +github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= @@ -102,13 +104,15 @@ github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/ github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU= 
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no= github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0= -github.com/google/martian/v3 v3.1.0 h1:wCKgOCHuUEVfsaQLpPSJb7VdYCdTVZQAuOdYm1yc/60= github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0= +github.com/google/martian/v3 v3.2.1 h1:d8MncMlErDFTwQGBK1xhv026j9kqhvw1Qv9IbWT1VLQ= +github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk= github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= @@ -120,8 +124,8 @@ github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLe github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= -github.com/google/pprof v0.0.0-20210506205249-923b5ab0fc1a h1:jmAp/2PZAScNd62lTD3Mcb0Ey9FvIIJtLohPhtxZJ+Q= -github.com/google/pprof v0.0.0-20210506205249-923b5ab0fc1a/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= +github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22 h1:ub2sxhs2A0HRa2dWHavvmWxiVGXNfE9wI+gcTMwED8A= +github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= @@ -247,7 +251,6 @@ golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210427180440-81ed05c6b58c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c h1:pkQiBZBvdos9qq4wBAHqlzuZHEXo07pqV06ef90u1WI= golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -299,9 +302,9 @@ golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210503080704-8803ae5d1324/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210510120138-977fb7262007 h1:gG67DSER+11cZvqIMb8S8bt0vZtiN6xWYARwirrOSfE= golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210514084401-e8d321eab015 h1:hZR0X1kPW+nwyJ9xRxqZk1vx5RUObAPBdKVvXPDUH/E= +golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -361,8 +364,9 @@ golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4f golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0= -golang.org/x/tools v0.1.1 h1:wGiQel/hW0NnEkJUk8lbzkX2gFJU6PFxf1v5OlCfuOs= golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.2 h1:kRBLX7v7Af8W7Gdbbc908OJcdgtK8bOz9Uaj8/F1ACA= +golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= @@ -389,8 +393,8 @@ google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34q google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8= google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU= google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94= -google.golang.org/api v0.46.0 h1:jkDWHOBIoNSD0OQpq4rtBVu+Rh325MPjXG1rakAp8JU= -google.golang.org/api v0.46.0/go.mod h1:ceL4oozhkAiTID8XMmJBsIxID/9wMXJVVFXPg4ylg3I= +google.golang.org/api v0.47.0 h1:sQLWZQvP6jPGIP4JGPkJu4zHswrv81iobiyszr3b/0I= +google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= @@ -438,9 +442,9 @@ google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6D google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A= -google.golang.org/genproto v0.0.0-20210429181445-86c259c2b4ab/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= 
-google.golang.org/genproto v0.0.0-20210517163617-5e0236093d7a h1:VA0wtJaR+W1I11P2f535J7D/YxyvEFMTMvcmyeZ9FBE= -google.golang.org/genproto v0.0.0-20210517163617-5e0236093d7a/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= +google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= +google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c h1:wtujag7C+4D6KMoulW9YauvK2lgdvCMS260jsqqBXr0= +google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= @@ -460,8 +464,10 @@ google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAG google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU= google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU= google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/grpc v1.37.1 h1:ARnQJNWxGyYJpdf/JXscNlQr/uv607ZPU9Z7ogHi+iI= google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= +google.golang.org/grpc v1.38.0 h1:/9BgsAsa5nWe26HqOlvlgJnqBuktYOLCgjCPqsa56W0= +google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= +google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= diff --git a/vendor/cloud.google.com/go/internal/.repo-metadata-full.json b/vendor/cloud.google.com/go/internal/.repo-metadata-full.json index fcab2e74d..fa85f8c25 100644 --- a/vendor/cloud.google.com/go/internal/.repo-metadata-full.json +++ b/vendor/cloud.google.com/go/internal/.repo-metadata-full.json @@ -485,6 +485,15 @@ "release_level": "beta", "library_type": "" }, + "cloud.google.com/go/essentialcontacts/apiv1": { + "distribution_name": "cloud.google.com/go/essentialcontacts/apiv1", + "description": "Essential Contacts API", + "language": "Go", + "client_library_type": "generated", + "docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/essentialcontacts/apiv1", + "release_level": "beta", + "library_type": "" + }, "cloud.google.com/go/firestore": { "distribution_name": "cloud.google.com/go/firestore", "description": "Cloud Firestore API", @@ -557,6 +566,15 @@ "release_level": "beta", "library_type": "" }, + "cloud.google.com/go/gsuiteaddons/apiv1": { + "distribution_name": "cloud.google.com/go/gsuiteaddons/apiv1", + "description": "Google Workspace Add-ons API", + "language": "Go", + "client_library_type": "generated", + "docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/gsuiteaddons/apiv1", + "release_level": "beta", + "library_type": "" + }, "cloud.google.com/go/iam": { "distribution_name": "cloud.google.com/go/iam", "description": "Cloud IAM", @@ -604,7 +622,7 @@ }, "cloud.google.com/go/language/apiv1beta2": { "distribution_name": "cloud.google.com/go/language/apiv1beta2", 
- "description": "Cloud Natural Language API", + "description": "Google Cloud Natural Language API", "language": "Go", "client_library_type": "generated", "docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/language/apiv1beta2", @@ -773,6 +791,15 @@ "release_level": "ga", "library_type": "" }, + "cloud.google.com/go/osconfig/apiv1alpha": { + "distribution_name": "cloud.google.com/go/osconfig/apiv1alpha", + "description": "OS Config API", + "language": "Go", + "client_library_type": "generated", + "docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/osconfig/apiv1alpha", + "release_level": "alpha", + "library_type": "" + }, "cloud.google.com/go/osconfig/apiv1beta": { "distribution_name": "cloud.google.com/go/osconfig/apiv1beta", "description": "Cloud OS Config API", @@ -818,6 +845,15 @@ "release_level": "ga", "library_type": "" }, + "cloud.google.com/go/privatecatalog/apiv1beta1": { + "distribution_name": "cloud.google.com/go/privatecatalog/apiv1beta1", + "description": "Cloud Private Catalog API", + "language": "Go", + "client_library_type": "generated", + "docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/privatecatalog/apiv1beta1", + "release_level": "beta", + "library_type": "" + }, "cloud.google.com/go/profiler": { "distribution_name": "cloud.google.com/go/profiler", "description": "Cloud Profiler", @@ -1088,6 +1124,24 @@ "release_level": "ga", "library_type": "" }, + "cloud.google.com/go/serviceusage/apiv1": { + "distribution_name": "cloud.google.com/go/serviceusage/apiv1", + "description": "Service Usage API", + "language": "Go", + "client_library_type": "generated", + "docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/serviceusage/apiv1", + "release_level": "beta", + "library_type": "" + }, + "cloud.google.com/go/shell/apiv1": { + "distribution_name": "cloud.google.com/go/shell/apiv1", + "description": "Cloud Shell API", + "language": "Go", + "client_library_type": "generated", + "docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/shell/apiv1", + "release_level": "beta", + "library_type": "" + }, "cloud.google.com/go/spanner": { "distribution_name": "cloud.google.com/go/spanner", "description": "Cloud Spanner", @@ -1250,6 +1304,15 @@ "release_level": "beta", "library_type": "" }, + "cloud.google.com/go/vpcaccess/apiv1": { + "distribution_name": "cloud.google.com/go/vpcaccess/apiv1", + "description": "Serverless VPC Access API", + "language": "Go", + "client_library_type": "generated", + "docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/vpcaccess/apiv1", + "release_level": "beta", + "library_type": "" + }, "cloud.google.com/go/webrisk/apiv1": { "distribution_name": "cloud.google.com/go/webrisk/apiv1", "description": "Web Risk API", diff --git a/vendor/github.com/VictoriaMetrics/fastcache/go.mod b/vendor/github.com/VictoriaMetrics/fastcache/go.mod index f575823ae..783ebfd46 100644 --- a/vendor/github.com/VictoriaMetrics/fastcache/go.mod +++ b/vendor/github.com/VictoriaMetrics/fastcache/go.mod @@ -8,4 +8,5 @@ require ( github.com/davecgh/go-spew v1.1.1 // indirect github.com/golang/snappy v0.0.3 github.com/stretchr/testify v1.3.0 // indirect + golang.org/x/sys v0.0.0-20210324051608-47abb6519492 ) diff --git a/vendor/github.com/VictoriaMetrics/fastcache/go.sum b/vendor/github.com/VictoriaMetrics/fastcache/go.sum index 066369ee9..046d62083 100644 --- a/vendor/github.com/VictoriaMetrics/fastcache/go.sum 
+++ b/vendor/github.com/VictoriaMetrics/fastcache/go.sum @@ -12,3 +12,5 @@ github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZN github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +golang.org/x/sys v0.0.0-20210324051608-47abb6519492 h1:Paq34FxTluEPvVyayQqMPgHm+vTOrIifmcYxFBx9TLg= +golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= diff --git a/vendor/github.com/VictoriaMetrics/fastcache/malloc_mmap.go b/vendor/github.com/VictoriaMetrics/fastcache/malloc_mmap.go index 424b79b43..e0cd0e761 100644 --- a/vendor/github.com/VictoriaMetrics/fastcache/malloc_mmap.go +++ b/vendor/github.com/VictoriaMetrics/fastcache/malloc_mmap.go @@ -5,8 +5,9 @@ package fastcache import ( "fmt" "sync" - "syscall" "unsafe" + + "golang.org/x/sys/unix" ) const chunksPerAlloc = 1024 @@ -21,7 +22,7 @@ func getChunk() []byte { if len(freeChunks) == 0 { // Allocate offheap memory, so GOGC won't take into account cache size. // This should reduce free memory waste. - data, err := syscall.Mmap(-1, 0, chunkSize*chunksPerAlloc, syscall.PROT_READ|syscall.PROT_WRITE, syscall.MAP_ANON|syscall.MAP_PRIVATE) + data, err := unix.Mmap(-1, 0, chunkSize*chunksPerAlloc, unix.PROT_READ|unix.PROT_WRITE, unix.MAP_ANON|unix.MAP_PRIVATE) if err != nil { panic(fmt.Errorf("cannot allocate %d bytes via mmap: %s", chunkSize*chunksPerAlloc, err)) } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go index b927bd350..9d1266d5d 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go @@ -837,6 +837,16 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "apprunner": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "appstream2": service{ Defaults: endpoint{ Protocols: []string{"https"}, @@ -2857,6 +2867,7 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, @@ -3176,9 +3187,27 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "forecast-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "forecast-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "forecast-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, "forecastquery": service{ @@ -3191,9 +3220,27 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: 
"forecastquery-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "forecastquery-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "forecastquery-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, "fsx": service{ @@ -4084,6 +4131,7 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, @@ -5059,6 +5107,7 @@ var awsPartition = partition{ "ap-northeast-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-2": endpoint{}, "us-east-1": endpoint{}, @@ -5423,6 +5472,7 @@ var awsPartition = partition{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, + "ap-northeast-3": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, @@ -6124,6 +6174,61 @@ var awsPartition = partition{ }, }, }, + "servicecatalog-appregistry": service{ + + Endpoints: endpoints{ + "af-south-1": endpoint{}, + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-south-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-ca-central-1": endpoint{ + Hostname: "servicecatalog-appregistry-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "fips-us-east-1": endpoint{ + Hostname: "servicecatalog-appregistry-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "servicecatalog-appregistry-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "servicecatalog-appregistry-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "servicecatalog-appregistry-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "servicediscovery": service{ Endpoints: endpoints{ @@ -6192,9 +6297,27 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "session.qldb-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "session.qldb-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "session.qldb-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + 
Region: "us-west-2", + }, + }, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, "shield": service{ @@ -9831,6 +9954,25 @@ var awsusgovPartition = partition{ }, }, }, + "servicecatalog-appregistry": service{ + + Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "servicecatalog-appregistry.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "servicecatalog-appregistry.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + "us-gov-east-1": endpoint{}, + "us-gov-west-1": endpoint{}, + }, + }, "servicequotas": service{ Defaults: endpoint{ Protocols: []string{"https"}, @@ -10470,6 +10612,12 @@ var awsisoPartition = partition{ "us-iso-east-1": endpoint{}, }, }, + "ram": service{ + + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, "rds": service{ Endpoints: endpoints{ diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request.go index d597c6ead..fb0a68fce 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/request/request.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request.go @@ -129,12 +129,27 @@ func New(cfg aws.Config, clientInfo metadata.ClientInfo, handlers Handlers, httpReq, _ := http.NewRequest(method, "", nil) var err error - httpReq.URL, err = url.Parse(clientInfo.Endpoint + operation.HTTPPath) + httpReq.URL, err = url.Parse(clientInfo.Endpoint) if err != nil { httpReq.URL = &url.URL{} err = awserr.New("InvalidEndpointURL", "invalid endpoint uri", err) } + if len(operation.HTTPPath) != 0 { + opHTTPPath := operation.HTTPPath + var opQueryString string + if idx := strings.Index(opHTTPPath, "?"); idx >= 0 { + opQueryString = opHTTPPath[idx+1:] + opHTTPPath = opHTTPPath[:idx] + } + + if strings.HasSuffix(httpReq.URL.Path, "/") && strings.HasPrefix(opHTTPPath, "/") { + opHTTPPath = opHTTPPath[1:] + } + httpReq.URL.Path += opHTTPPath + httpReq.URL.RawQuery = opQueryString + } + r := &Request{ Config: cfg, ClientInfo: clientInfo, diff --git a/vendor/github.com/aws/aws-sdk-go/aws/version.go b/vendor/github.com/aws/aws-sdk-go/aws/version.go index a29d39b7c..e22e2afa6 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/version.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/version.go @@ -5,4 +5,4 @@ package aws const SDKName = "aws-sdk-go" // SDKVersion is the version of this SDK -const SDKVersion = "1.38.43" +const SDKVersion = "1.38.56" diff --git a/vendor/github.com/aws/aws-sdk-go/internal/s3shared/arn/arn.go b/vendor/github.com/aws/aws-sdk-go/internal/s3shared/arn/arn.go index 3079e4ab0..216c4baab 100644 --- a/vendor/github.com/aws/aws-sdk-go/internal/s3shared/arn/arn.go +++ b/vendor/github.com/aws/aws-sdk-go/internal/s3shared/arn/arn.go @@ -48,6 +48,10 @@ func ParseResource(s string, resParser ResourceParser) (resARN Resource, err err return nil, InvalidARNError{ARN: a, Reason: "service is not supported"} } + if strings.HasPrefix(a.Region, "fips-") || strings.HasSuffix(a.Region, "-fips") { + return nil, InvalidARNError{ARN: a, Reason: "FIPS region not allowed in ARN"} + } + if len(a.Resource) == 0 { return nil, InvalidARNError{ARN: a, Reason: "resource not set"} } diff --git a/vendor/github.com/aws/aws-sdk-go/internal/s3shared/endpoint_errors.go b/vendor/github.com/aws/aws-sdk-go/internal/s3shared/endpoint_errors.go index e756b2f87..4290ff676 100644 --- 
a/vendor/github.com/aws/aws-sdk-go/internal/s3shared/endpoint_errors.go +++ b/vendor/github.com/aws/aws-sdk-go/internal/s3shared/endpoint_errors.go @@ -71,6 +71,8 @@ func NewInvalidARNWithUnsupportedPartitionError(resource arn.Resource, err error } // NewInvalidARNWithFIPSError ARN not supported for FIPS region +// +// Deprecated: FIPS will not appear in the ARN region component. func NewInvalidARNWithFIPSError(resource arn.Resource, err error) InvalidARNError { return InvalidARNError{ message: "resource ARN not supported for FIPS region", @@ -155,6 +157,17 @@ func NewClientConfiguredForFIPSError(resource arn.Resource, clientPartitionID, c } } +// NewFIPSConfigurationError denotes a configuration error when a client or request is configured for FIPS +func NewFIPSConfigurationError(resource arn.Resource, clientPartitionID, clientRegion string, err error) ConfigurationError { + return ConfigurationError{ + message: "use of ARN is not supported when client or request is configured for FIPS", + origErr: err, + resource: resource, + clientPartitionID: clientPartitionID, + clientRegion: clientRegion, + } +} + // NewClientConfiguredForAccelerateError denotes client config error for unsupported S3 accelerate func NewClientConfiguredForAccelerateError(resource arn.Resource, clientPartitionID, clientRegion string, err error) ConfigurationError { return ConfigurationError{ diff --git a/vendor/github.com/aws/aws-sdk-go/internal/s3shared/resource_request.go b/vendor/github.com/aws/aws-sdk-go/internal/s3shared/resource_request.go index 9f70a64ec..2091ba6ba 100644 --- a/vendor/github.com/aws/aws-sdk-go/internal/s3shared/resource_request.go +++ b/vendor/github.com/aws/aws-sdk-go/internal/s3shared/resource_request.go @@ -31,6 +31,8 @@ func (r ResourceRequest) UseFIPS() bool { } // ResourceConfiguredForFIPS returns true if resource ARNs region is FIPS +// +// Deprecated: FIPS pseudo-regions will not be in the ARN func (r ResourceRequest) ResourceConfiguredForFIPS() bool { return IsFIPS(r.ARN().Region) } diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/api.go b/vendor/github.com/aws/aws-sdk-go/service/s3/api.go index 6d15bad28..e23d94b1a 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/api.go @@ -356,9 +356,8 @@ func (c *S3) CopyObjectRequest(input *CopyObjectInput) (req *request.Request, ou // use the s3:x-amz-metadata-directive condition key to enforce certain metadata // behavior when objects are uploaded. For more information, see Specifying // Conditions in a Policy (https://docs.aws.amazon.com/AmazonS3/latest/dev/amazon-s3-policy-keys.html) -// in the Amazon S3 Developer Guide. For a complete list of Amazon S3-specific -// condition keys, see Actions, Resources, and Condition Keys for Amazon S3 -// (https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html). +// in the Amazon S3 User Guide. For a complete list of Amazon S3-specific condition +// keys, see Actions, Resources, and Condition Keys for Amazon S3 (https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html). // // x-amz-copy-source-if Headers // @@ -422,7 +421,7 @@ func (c *S3) CopyObjectRequest(input *CopyObjectInput) (req *request.Request, ou // You can use the CopyObject action to change the storage class of an object // that is already stored in Amazon S3 using the StorageClass parameter. 
For // more information, see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) -// in the Amazon S3 Service Developer Guide. +// in the Amazon S3 User Guide. // // Versioning // @@ -535,7 +534,7 @@ func (c *S3) CreateBucketRequest(input *CreateBucketInput) (req *request.Request // become the bucket owner. // // Not every string is an acceptable bucket name. For information about bucket -// naming restrictions, see Working with Amazon S3 buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html). +// naming restrictions, see Bucket naming rules (https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html). // // If you want to create an Amazon S3 on Outposts bucket, see Create Bucket // (https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateBucket.html). @@ -723,10 +722,11 @@ func (c *S3) CreateMultipartUploadRequest(input *CreateMultipartUploadInput) (re // by using CreateMultipartUpload. // // To perform a multipart upload with encryption using an AWS KMS CMK, the requester -// must have permission to the kms:Encrypt, kms:Decrypt, kms:ReEncrypt*, kms:GenerateDataKey*, -// and kms:DescribeKey actions on the key. These permissions are required because -// Amazon S3 must decrypt and read data from the encrypted file parts before -// it completes the multipart upload. +// must have permission to the kms:Decrypt and kms:GenerateDataKey* actions +// on the key. These permissions are required because Amazon S3 must decrypt +// and read data from the encrypted file parts before it completes the multipart +// upload. For more information, see Multipart upload API and permissions (https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html#mpuAndPermissions) +// in the Amazon S3 User Guide. // // If your AWS Identity and Access Management (IAM) user or role is in the same // AWS account as the AWS KMS CMK, then you must have these permissions on the @@ -1835,7 +1835,7 @@ func (c *S3) DeleteBucketReplicationRequest(input *DeleteBucketReplicationInput) // propagate. // // For information about replication configuration, see Replication (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html) -// in the Amazon S3 Developer Guide. +// in the Amazon S3 User Guide. // // The following operations are related to DeleteBucketReplication: // @@ -6497,12 +6497,13 @@ func (c *S3) ListObjectsV2Request(input *ListObjectsV2Input) (req *request.Reque // ListObjectsV2 API operation for Amazon Simple Storage Service. // -// Returns some or all (up to 1,000) of the objects in a bucket. You can use -// the request parameters as selection criteria to return a subset of the objects -// in a bucket. A 200 OK response can contain valid or invalid XML. Make sure -// to design your application to parse the contents of the response and handle -// it appropriately. Objects are returned sorted in an ascending order of the -// respective key names in the list. +// Returns some or all (up to 1,000) of the objects in a bucket with each request. +// You can use the request parameters as selection criteria to return a subset +// of the objects in a bucket. A 200 OK response can contain valid or invalid +// XML. Make sure to design your application to parse the contents of the response +// and handle it appropriately. Objects are returned sorted in an ascending +// order of the respective key names in the list. 
For more information about +// listing objects, see Listing object keys programmatically (https://docs.aws.amazon.com/AmazonS3/latest/userguide/ListingKeysUsingAPIs.html) // // To use this operation, you must have READ access to the bucket. // @@ -7816,7 +7817,7 @@ func (c *S3) PutBucketLifecycleConfigurationRequest(input *PutBucketLifecycleCon // // Creates a new lifecycle configuration for the bucket or replaces an existing // lifecycle configuration. For information about lifecycle configuration, see -// Managing Access Permissions to Your Amazon S3 Resources (https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html). +// Managing your storage lifecycle (https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html). // // Bucket lifecycle configuration now supports specifying a lifecycle rule using // an object key name prefix, one or more object tags, or a combination of both. @@ -8587,7 +8588,7 @@ func (c *S3) PutBucketReplicationRequest(input *PutBucketReplicationInput) (req // // Creates a replication configuration or replaces an existing one. For more // information, see Replication (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html) -// in the Amazon S3 Developer Guide. +// in the Amazon S3 User Guide. // // To perform this operation, the user or role performing the action must have // the iam:PassRole (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) @@ -8814,11 +8815,12 @@ func (c *S3) PutBucketTaggingRequest(input *PutBucketTaggingInput) (req *request // according to resources with the same tag key values. For example, you can // tag several resources with a specific application name, and then organize // your billing information to see the total cost of that application across -// several services. For more information, see Cost Allocation and Tagging (https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html). +// several services. For more information, see Cost Allocation and Tagging (https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html) +// and Using Cost Allocation in Amazon S3 Bucket Tags (https://docs.aws.amazon.com/AmazonS3/latest/dev/CostAllocTagging.html). // -// Within a bucket, if you add a tag that has the same key as an existing tag, -// the new value overwrites the old value. For more information, see Using Cost -// Allocation in Amazon S3 Bucket Tags (https://docs.aws.amazon.com/AmazonS3/latest/dev/CostAllocTagging.html). +// When this operation sets the tags for a bucket, it will overwrite any current +// tags the bucket already has. You cannot use this operation to add tags to +// an existing list of tags. // // To use this operation, you must have permissions to perform the s3:PutBucketTagging // action. The bucket owner has this permission by default and can grant this @@ -9229,7 +9231,7 @@ func (c *S3) PutObjectRequest(input *PutObjectInput) (req *request.Request, outp // Depending on performance needs, you can specify a different Storage Class. // Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, // see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) -// in the Amazon S3 Service Developer Guide. +// in the Amazon S3 User Guide. 
// // Versioning // @@ -9339,7 +9341,7 @@ func (c *S3) PutObjectAclRequest(input *PutObjectAclInput) (req *request.Request // have an existing application that updates a bucket ACL using the request // body, you can continue to use that approach. For more information, see Access // Control List (ACL) Overview (https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html) -// in the Amazon S3 Developer Guide. +// in the Amazon S3 User Guide. // // Access Permissions // @@ -10997,7 +10999,7 @@ type AbortMultipartUploadInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -11025,7 +11027,7 @@ type AbortMultipartUploadInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Upload ID that identifies the multipart upload. @@ -11242,7 +11244,7 @@ type AccessControlTranslation struct { // Specifies the replica ownership. For default and valid values, see PUT bucket // replication (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) - // in the Amazon Simple Storage Service API Reference. + // in the Amazon S3 API Reference. // // Owner is a required field Owner *string `type:"string" required:"true" enum:"OwnerOverride"` @@ -11693,7 +11695,7 @@ type BucketLoggingStatus struct { // Describes where logs are stored and the prefix that Amazon S3 assigns to // all log object keys for a bucket. For more information, see PUT Bucket logging // (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html) - // in the Amazon Simple Storage Service API Reference. + // in the Amazon S3 API Reference. LoggingEnabled *LoggingEnabled `type:"structure"` } @@ -12168,7 +12170,7 @@ type CompleteMultipartUploadInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // ID for the initiated multipart upload. @@ -12291,7 +12293,7 @@ type CompleteMultipartUploadOutput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. 
// When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -12577,7 +12579,7 @@ type CopyObjectInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -12735,7 +12737,7 @@ type CopyObjectInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -12764,7 +12766,7 @@ type CopyObjectInput struct { // or using SigV4. For information about configuring using any of the officially // supported AWS SDKs and AWS CLI, see Specifying the Signature Version in Request // Authentication (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"` // The server-side encryption algorithm used when storing this object in Amazon @@ -12776,7 +12778,7 @@ type CopyObjectInput struct { // Depending on performance needs, you can specify a different Storage Class. // Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, // see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) - // in the Amazon S3 Service Developer Guide. + // in the Amazon S3 User Guide. StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` // The tag-set for the object destination object this value must be used in @@ -13358,7 +13360,10 @@ type CreateBucketInput struct { // Allows grantee to read the bucket ACL. GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` - // Allows grantee to create, overwrite, and delete any object in the bucket. + // Allows grantee to create new objects in the bucket. + // + // For the bucket and object owners of existing objects, also allows deletions + // and overwrites of those objects. 
GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"` // Allows grantee to write the ACL for the applicable bucket. @@ -13494,7 +13499,7 @@ type CreateMultipartUploadInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -13583,7 +13588,7 @@ type CreateMultipartUploadInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -13612,7 +13617,7 @@ type CreateMultipartUploadInput struct { // KMS will fail if not made via SSL or using SigV4. For information about configuring // using any of the officially supported AWS SDKs and AWS CLI, see Specifying // the Signature Version in Request Authentication (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"` // The server-side encryption algorithm used when storing this object in Amazon @@ -13624,7 +13629,7 @@ type CreateMultipartUploadInput struct { // Depending on performance needs, you can specify a different Storage Class. // Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, // see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) - // in the Amazon S3 Service Developer Guide. + // in the Amazon S3 User Guide. StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` // The tag-set for the object. The tag-set must be encoded as URL Query parameters. @@ -13908,7 +13913,7 @@ type CreateMultipartUploadOutput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. 
// // When using this action with Amazon S3 on Outposts, you must direct requests @@ -15613,7 +15618,7 @@ type DeleteObjectInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -15651,7 +15656,7 @@ type DeleteObjectInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // VersionId used to reference a specific version of the object. @@ -15819,7 +15824,7 @@ type DeleteObjectTaggingInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -15970,7 +15975,7 @@ type DeleteObjectsInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -16009,7 +16014,7 @@ type DeleteObjectsInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` } @@ -16333,7 +16338,7 @@ type Destination struct { // the destination bucket by specifying the AccessControlTranslation property, // this is the account ID of the destination bucket owner. 
For more information, // see Replication Additional Configuration: Changing the Replica Owner (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-change-owner.html) - // in the Amazon Simple Storage Service Developer Guide. + // in the Amazon S3 User Guide. Account *string `type:"string"` // The Amazon Resource Name (ARN) of the bucket where you want Amazon S3 to @@ -16361,7 +16366,7 @@ type Destination struct { // // For valid values, see the StorageClass element of the PUT Bucket replication // (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) - // action in the Amazon Simple Storage Service API Reference. + // action in the Amazon S3 API Reference. StorageClass *string `type:"string" enum:"StorageClass"` } @@ -16468,8 +16473,8 @@ type Encryption struct { // If the encryption type is aws:kms, this optional value specifies the ID of // the symmetric customer managed AWS KMS CMK to use for encryption of job results. - // Amazon S3 only supports symmetric CMKs. For more information, see Using Symmetric - // and Asymmetric Keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) + // Amazon S3 only supports symmetric CMKs. For more information, see Using symmetric + // and asymmetric keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) // in the AWS Key Management Service Developer Guide. KMSKeyId *string `type:"string" sensitive:"true"` } @@ -16520,11 +16525,11 @@ func (s *Encryption) SetKMSKeyId(v string) *Encryption { type EncryptionConfiguration struct { _ struct{} `type:"structure"` - // Specifies the ID (Key ARN or Alias ARN) of the customer managed customer - // master key (CMK) stored in AWS Key Management Service (KMS) for the destination - // bucket. Amazon S3 uses this key to encrypt replica objects. Amazon S3 only - // supports symmetric customer managed CMKs. For more information, see Using - // Symmetric and Asymmetric Keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) + // Specifies the ID (Key ARN or Alias ARN) of the customer managed AWS KMS key + // stored in AWS Key Management Service (KMS) for the destination bucket. Amazon + // S3 uses this key to encrypt replica objects. Amazon S3 only supports symmetric, + // customer managed KMS keys. For more information, see Using symmetric and + // asymmetric keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) // in the AWS Key Management Service Developer Guide. ReplicaKmsKeyID *string `type:"string"` } @@ -17035,7 +17040,7 @@ func (s *ErrorDocument) SetKey(v string) *ErrorDocument { // Optional configuration to replicate existing source bucket objects. For more // information, see Replicating Existing Objects (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-what-is-isnot-replicated.html#existing-object-replication) -// in the Amazon S3 Developer Guide. +// in the Amazon S3 User Guide. type ExistingObjectReplication struct { _ struct{} `type:"structure"` @@ -18337,7 +18342,7 @@ type GetBucketLoggingOutput struct { // Describes where logs are stored and the prefix that Amazon S3 assigns to // all log object keys for a bucket. For more information, see PUT Bucket logging // (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html) - // in the Amazon Simple Storage Service API Reference. + // in the Amazon S3 API Reference. 
LoggingEnabled *LoggingEnabled `type:"structure"` } @@ -19490,7 +19495,7 @@ type GetObjectAclInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -19510,7 +19515,7 @@ type GetObjectAclInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // VersionId used to reference a specific version of the object. @@ -19664,7 +19669,7 @@ type GetObjectInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -19720,7 +19725,7 @@ type GetObjectInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Sets the Cache-Control header of the response. @@ -19964,7 +19969,7 @@ type GetObjectLegalHoldInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -19984,7 +19989,7 @@ type GetObjectLegalHoldInput struct { // Bucket owners need not specify this parameter in their requests. 
For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // The version ID of the object whose Legal Hold status you want to retrieve. @@ -20119,7 +20124,7 @@ type GetObjectLockConfigurationInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -20567,7 +20572,7 @@ type GetObjectRetentionInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -20587,7 +20592,7 @@ type GetObjectRetentionInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // The version ID for the object whose retention settings you want to retrieve. @@ -20722,7 +20727,7 @@ type GetObjectTaggingInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -20750,7 +20755,7 @@ type GetObjectTaggingInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. 
+ // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // The versionId of the object for which to get the tagging information. @@ -20910,7 +20915,7 @@ type GetObjectTorrentInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` } @@ -21342,7 +21347,7 @@ type HeadBucketInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -21457,7 +21462,7 @@ type HeadObjectInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -21514,7 +21519,7 @@ type HeadObjectInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -22417,7 +22422,7 @@ func (s *IntelligentTieringFilter) SetTag(v *Tag) *IntelligentTieringFilter { // Specifies the inventory configuration for an Amazon S3 bucket. For more information, // see GET Bucket inventory (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETInventoryConfig.html) -// in the Amazon Simple Storage Service API Reference. +// in the Amazon S3 API Reference. type InventoryConfiguration struct { _ struct{} `type:"structure"` @@ -23987,7 +23992,7 @@ type ListMultipartUploadsInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. 
// When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -24627,7 +24632,7 @@ type ListObjectsInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -24921,7 +24926,7 @@ type ListObjectsV2Input struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -25157,7 +25162,7 @@ type ListObjectsV2Output struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -25273,7 +25278,7 @@ type ListPartsInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -25308,7 +25313,7 @@ type ListPartsInput struct { // Bucket owners need not specify this parameter in their requests. 
For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Upload ID identifying the multipart upload whose parts are being listed. @@ -25730,7 +25735,7 @@ func (s *Location) SetUserMetadata(v []*MetadataEntry) *Location { // Describes where logs are stored and the prefix that Amazon S3 assigns to // all log object keys for a bucket. For more information, see PUT Bucket logging // (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html) -// in the Amazon Simple Storage Service API Reference. +// in the Amazon S3 API Reference. type LoggingEnabled struct { _ struct{} `type:"structure"` @@ -25953,7 +25958,7 @@ func (s *MetricsAndOperator) SetTags(v []*Tag) *MetricsAndOperator { // the existing metrics configuration. If you don't include the elements you // want to keep, they are erased. For more information, see PUT Bucket metrics // (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTMetricConfiguration.html) -// in the Amazon Simple Storage Service API Reference. +// in the Amazon S3 API Reference. type MetricsConfiguration struct { _ struct{} `type:"structure"` @@ -26155,7 +26160,7 @@ type NoncurrentVersionExpiration struct { // perform the associated action. For information about the noncurrent days // calculations, see How Amazon S3 Calculates When an Object Became Noncurrent // (https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html#non-current-days-calculations) - // in the Amazon Simple Storage Service Developer Guide. + // in the Amazon S3 User Guide. NoncurrentDays *int64 `type:"integer"` } @@ -27336,7 +27341,10 @@ type PutBucketAclInput struct { // Allows grantee to read the bucket ACL. GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` - // Allows grantee to create, overwrite, and delete any object in the bucket. + // Allows grantee to create new objects in the bucket. + // + // For the bucket and object owners of existing objects, also allows deletions + // and overwrites of those objects. GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"` // Allows grantee to write the ACL for the applicable bucket. @@ -29693,7 +29701,7 @@ type PutObjectAclInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -29720,7 +29728,10 @@ type PutObjectAclInput struct { // This action is not supported by Amazon S3 on Outposts. GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` - // Allows grantee to create, overwrite, and delete any object in the bucket. + // Allows grantee to create new objects in the bucket. 
+ // + // For the bucket and object owners of existing objects, also allows deletions + // and overwrites of those objects. GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"` // Allows grantee to write the ACL for the applicable bucket. @@ -29734,7 +29745,7 @@ type PutObjectAclInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -29752,7 +29763,7 @@ type PutObjectAclInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // VersionId used to reference a specific version of the object. @@ -29944,7 +29955,7 @@ type PutObjectInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -30046,14 +30057,15 @@ type PutObjectInput struct { // The Object Lock mode that you want to apply to this object. ObjectLockMode *string `location:"header" locationName:"x-amz-object-lock-mode" type:"string" enum:"ObjectLockMode"` - // The date and time when you want this object's Object Lock to expire. + // The date and time when you want this object's Object Lock to expire. Must + // be formatted as a timestamp parameter. ObjectLockRetainUntilDate *time.Time `location:"header" locationName:"x-amz-object-lock-retain-until-date" type:"timestamp" timestampFormat:"iso8601"` // Confirms that the requester knows that they will be charged for the request. // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. 
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -30080,13 +30092,11 @@ type PutObjectInput struct { // If x-amz-server-side-encryption is present and has the value of aws:kms, // this header specifies the ID of the AWS Key Management Service (AWS KMS) // symmetrical customer managed customer master key (CMK) that was used for - // the object. - // - // If the value of x-amz-server-side-encryption is aws:kms, this header specifies - // the ID of the symmetric customer managed AWS KMS CMK that will be used for // the object. If you specify x-amz-server-side-encryption:aws:kms, but do not // providex-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the AWS - // managed CMK in AWS to protect the data. + // managed CMK in AWS to protect the data. If the KMS key does not exist in + // the same account issuing the command, you must use the full ARN and not just + // the ID. SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"` // The server-side encryption algorithm used when storing this object in Amazon @@ -30098,7 +30108,7 @@ type PutObjectInput struct { // Depending on performance needs, you can specify a different Storage Class. // Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, // see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) - // in the Amazon S3 Service Developer Guide. + // in the Amazon S3 User Guide. StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` // The tag-set for the object. The tag-set must be encoded as URL Query parameters. @@ -30401,7 +30411,7 @@ type PutObjectLegalHoldInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -30425,7 +30435,7 @@ type PutObjectLegalHoldInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // The version ID of the object that you want to place a Legal Hold on. @@ -30578,7 +30588,7 @@ type PutObjectLockConfigurationInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. 
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // A token to allow Object Lock to be enabled for an existing bucket. @@ -30831,7 +30841,7 @@ type PutObjectRetentionInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -30855,7 +30865,7 @@ type PutObjectRetentionInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // The container element for the Object Retention configuration. @@ -31007,7 +31017,7 @@ type PutObjectTaggingInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -31035,7 +31045,7 @@ type PutObjectTaggingInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Container for the TagSet and Tag elements @@ -31752,7 +31762,7 @@ type ReplicationRule struct { // Optional configuration to replicate existing source bucket objects. For more // information, see Replicating Existing Objects (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-what-is-isnot-replicated.html#existing-object-replication) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. ExistingObjectReplication *ExistingObjectReplication `type:"structure"` // A filter that identifies the subset of objects to which the replication rule @@ -32195,7 +32205,7 @@ type RestoreObjectInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. 
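// As the doc comments keep repeating, an access point ARN may stand in for the
// bucket name (the SDK resolves it to the access point hostname), and
// requester-pays buckets need the x-amz-request-payer header. Both in one
// hedged sketch; ARN and key are placeholders.
out, err := svc.GetObject(&s3.GetObjectInput{
	Bucket:       aws.String("arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point"),
	Key:          aws.String("example-key"),
	RequestPayer: aws.String(s3.RequestPayerRequester), // requester accepts the charges
})
if err == nil {
	defer out.Body.Close()
}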
// When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -32223,7 +32233,7 @@ type RestoreObjectInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Container for restore job parameters. @@ -32540,8 +32550,8 @@ func (s *RoutingRule) SetRedirect(v *Redirect) *RoutingRule { // Specifies lifecycle rules for an Amazon S3 bucket. For more information, // see Put Bucket Lifecycle Configuration (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlifecycle.html) -// in the Amazon Simple Storage Service API Reference. For examples, see Put -// Bucket Lifecycle Configuration Examples (https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html#API_PutBucketLifecycleConfiguration_Examples). +// in the Amazon S3 API Reference. For examples, see Put Bucket Lifecycle Configuration +// Examples (https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html#API_PutBucketLifecycleConfiguration_Examples). type Rule struct { _ struct{} `type:"structure"` @@ -33287,17 +33297,17 @@ func (s *SelectParameters) SetOutputSerialization(v *OutputSerialization) *Selec // bucket. If a PUT Object request doesn't specify any server-side encryption, // this default encryption will be applied. For more information, see PUT Bucket // encryption (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTencryption.html) -// in the Amazon Simple Storage Service API Reference. +// in the Amazon S3 API Reference. type ServerSideEncryptionByDefault struct { _ struct{} `type:"structure"` - // AWS Key Management Service (KMS) customer master key ID to use for the default + // AWS Key Management Service (KMS) customer AWS KMS key ID to use for the default // encryption. This parameter is allowed if and only if SSEAlgorithm is set // to aws:kms. // - // You can specify the key ID or the Amazon Resource Name (ARN) of the CMK. + // You can specify the key ID or the Amazon Resource Name (ARN) of the KMS key. // However, if you are using encryption with cross-account operations, you must - // use a fully qualified CMK ARN. For more information, see Using encryption + // use a fully qualified KMS key ARN. For more information, see Using encryption // for cross-account operations (https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html#bucket-encryption-update-bucket-policy). // // For example: @@ -33306,8 +33316,8 @@ type ServerSideEncryptionByDefault struct { // // * Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab // - // Amazon S3 only supports symmetric CMKs and not asymmetric CMKs. 
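// Default bucket encryption as documented above. KMSMasterKeyID accepts a key
// ID or an ARN, but per the cross-account note only a full key ARN works
// across accounts; this sketch reuses the example ARN from these docs.
_, err = svc.PutBucketEncryption(&s3.PutBucketEncryptionInput{
	Bucket: aws.String("example-bucket"),
	ServerSideEncryptionConfiguration: &s3.ServerSideEncryptionConfiguration{
		Rules: []*s3.ServerSideEncryptionRule{{
			ApplyServerSideEncryptionByDefault: &s3.ServerSideEncryptionByDefault{
				SSEAlgorithm:   aws.String(s3.ServerSideEncryptionAwsKms),
				KMSMasterKeyID: aws.String("arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"),
			},
		}},
	},
})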
For more - // information, see Using Symmetric and Asymmetric Keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) + // Amazon S3 only supports symmetric KMS keys and not asymmetric KMS keys. For + // more information, see Using symmetric and asymmetric keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) // in the AWS Key Management Service Developer Guide. KMSMasterKeyID *string `type:"string" sensitive:"true"` @@ -33531,7 +33541,7 @@ type SseKmsEncryptedObjects struct { _ struct{} `type:"structure"` // Specifies whether Amazon S3 replicates objects created with server-side encryption - // using a customer master key (CMK) stored in AWS Key Management Service. + // using an AWS KMS key stored in AWS Key Management Service. // // Status is a required field Status *string `type:"string" required:"true" enum:"SseKmsEncryptedObjectsStatus"` @@ -34170,7 +34180,7 @@ type UploadPartCopyInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -34275,7 +34285,7 @@ type UploadPartCopyInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -34612,7 +34622,7 @@ type UploadPartInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -34655,7 +34665,7 @@ type UploadPartInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. 
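// SseKmsEncryptedObjects opts KMS-encrypted objects into replication through a
// rule's SourceSelectionCriteria. A minimal rule sketch with placeholder ARNs;
// real configurations also set Destination.EncryptionConfiguration.ReplicaKmsKeyID
// and wrap the rule in a PutBucketReplication call.
rule := &s3.ReplicationRule{
	Status:   aws.String(s3.ReplicationRuleStatusEnabled),
	Priority: aws.Int64(1),
	Destination: &s3.Destination{
		Bucket: aws.String("arn:aws:s3:::example-destination-bucket"),
	},
	SourceSelectionCriteria: &s3.SourceSelectionCriteria{
		SseKmsEncryptedObjects: &s3.SseKmsEncryptedObjects{
			Status: aws.String(s3.SseKmsEncryptedObjectsStatusEnabled),
		},
	},
}
_ = rule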
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -34919,7 +34929,7 @@ func (s *UploadPartOutput) SetServerSideEncryption(v string) *UploadPartOutput { // Describes the versioning state of an Amazon S3 bucket. For more information, // see PUT Bucket versioning (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTVersioningStatus.html) -// in the Amazon Simple Storage Service API Reference. +// in the Amazon S3 API Reference. type VersioningConfiguration struct { _ struct{} `type:"structure"` @@ -36028,6 +36038,9 @@ const ( // InventoryOptionalFieldIntelligentTieringAccessTier is a InventoryOptionalField enum value InventoryOptionalFieldIntelligentTieringAccessTier = "IntelligentTieringAccessTier" + + // InventoryOptionalFieldBucketKeyStatus is a InventoryOptionalField enum value + InventoryOptionalFieldBucketKeyStatus = "BucketKeyStatus" ) // InventoryOptionalField_Values returns all elements of the InventoryOptionalField enum @@ -36044,6 +36057,7 @@ func InventoryOptionalField_Values() []string { InventoryOptionalFieldObjectLockMode, InventoryOptionalFieldObjectLockLegalHoldStatus, InventoryOptionalFieldIntelligentTieringAccessTier, + InventoryOptionalFieldBucketKeyStatus, } } @@ -36477,7 +36491,7 @@ func RequestCharged_Values() []string { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) -// in the Amazon S3 Developer Guide. +// in the Amazon S3 User Guide. const ( // RequestPayerRequester is a RequestPayer enum value RequestPayerRequester = "requester" diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/endpoint.go b/vendor/github.com/aws/aws-sdk-go/service/s3/endpoint.go index 9fc2105fd..ba1a84d09 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/endpoint.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/endpoint.go @@ -155,8 +155,9 @@ func endpointHandler(req *request.Request) { } case arn.OutpostAccessPointARN: // outposts does not support FIPS regions - if resReq.ResourceConfiguredForFIPS() { - req.Error = s3shared.NewInvalidARNWithFIPSError(resource, nil) + if resReq.UseFIPS() { + req.Error = s3shared.NewFIPSConfigurationError(resource, req.ClientInfo.PartitionID, + aws.StringValue(req.Config.Region), nil) return } diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/upload_input.go b/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/upload_input.go index fb8853710..d18025943 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/upload_input.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/upload_input.go @@ -29,7 +29,7 @@ type UploadInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. 
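// The new BucketKeyStatus value shows up in the _Values() helper and can be
// requested like any other optional inventory field. Sketch only; the other
// required InventoryConfiguration fields are omitted here.
for _, v := range s3.InventoryOptionalField_Values() {
	fmt.Println(v) // ...IntelligentTieringAccessTier, BucketKeyStatus
}
cfg := &s3.InventoryConfiguration{
	OptionalFields: []*string{aws.String(s3.InventoryOptionalFieldBucketKeyStatus)},
}
_ = cfg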
// // When using this action with Amazon S3 on Outposts, you must direct requests @@ -126,14 +126,15 @@ type UploadInput struct { // The Object Lock mode that you want to apply to this object. ObjectLockMode *string `location:"header" locationName:"x-amz-object-lock-mode" type:"string" enum:"ObjectLockMode"` - // The date and time when you want this object's Object Lock to expire. + // The date and time when you want this object's Object Lock to expire. Must + // be formatted as a timestamp parameter. ObjectLockRetainUntilDate *time.Time `location:"header" locationName:"x-amz-object-lock-retain-until-date" type:"timestamp" timestampFormat:"iso8601"` // Confirms that the requester knows that they will be charged for the request. // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -160,13 +161,11 @@ type UploadInput struct { // If x-amz-server-side-encryption is present and has the value of aws:kms, // this header specifies the ID of the AWS Key Management Service (AWS KMS) // symmetrical customer managed customer master key (CMK) that was used for - // the object. - // - // If the value of x-amz-server-side-encryption is aws:kms, this header specifies - // the ID of the symmetric customer managed AWS KMS CMK that will be used for // the object. If you specify x-amz-server-side-encryption:aws:kms, but do not // providex-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the AWS - // managed CMK in AWS to protect the data. + // managed CMK in AWS to protect the data. If the KMS key does not exist in + // the same account issuing the command, you must use the full ARN and not just + // the ID. SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"` // The server-side encryption algorithm used when storing this object in Amazon @@ -178,7 +177,7 @@ type UploadInput struct { // Depending on performance needs, you can specify a different Storage Class. // Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, // see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) - // in the Amazon S3 Service Developer Guide. + // in the Amazon S3 User Guide. StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` // The tag-set for the object. The tag-set must be encoded as URL Query parameters. diff --git a/vendor/github.com/fatih/color/README.md b/vendor/github.com/fatih/color/README.md index d62e4024a..5c751f215 100644 --- a/vendor/github.com/fatih/color/README.md +++ b/vendor/github.com/fatih/color/README.md @@ -127,14 +127,16 @@ fmt.Println("All text will now be bold magenta.") There might be a case where you want to explicitly disable/enable color output. the `go-isatty` package will automatically disable color output for non-tty output streams -(for example if the output were piped directly to `less`) +(for example if the output were piped directly to `less`). 
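// The README snippet above, assembled into a runnable sketch: a CLI that
// honors --no-color explicitly and NO_COLOR implicitly (via the package itself).
package main

import (
	"flag"

	"github.com/fatih/color"
)

func main() {
	flagNoColor := flag.Bool("no-color", false, "Disable color output")
	flag.Parse()
	if *flagNoColor {
		color.NoColor = true // disables colorized output globally
	}
	color.New(color.FgCyan).Println("cyan when enabled, plain otherwise")
}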
-`Color` has support to disable/enable colors both globally and for single color -definitions. For example suppose you have a CLI app and a `--no-color` bool flag. You -can easily disable the color output with: +The `color` package also disables color output if the [`NO_COLOR`](https://no-color.org) environment +variable is set (regardless of its value). + +`Color` has support to disable/enable colors programatically both globally and +for single color definitions. For example suppose you have a CLI app and a +`--no-color` bool flag. You can easily disable the color output with: ```go - var flagNoColor = flag.Bool("no-color", false, "Disable color output") if *flagNoColor { @@ -156,6 +158,10 @@ c.EnableColor() c.Println("This prints again cyan...") ``` +## GitHub Actions + +To output color in GitHub Actions (or other CI systems that support ANSI colors), make sure to set `color.NoColor = false` so that it bypasses the check for non-tty output streams. + ## Todo * Save/Return previous values @@ -170,4 +176,3 @@ c.Println("This prints again cyan...") ## License The MIT License (MIT) - see [`LICENSE.md`](https://github.com/fatih/color/blob/master/LICENSE.md) for more details - diff --git a/vendor/github.com/fatih/color/color.go b/vendor/github.com/fatih/color/color.go index 91c8e9f06..98a60f3c8 100644 --- a/vendor/github.com/fatih/color/color.go +++ b/vendor/github.com/fatih/color/color.go @@ -15,9 +15,11 @@ import ( var ( // NoColor defines if the output is colorized or not. It's dynamically set to // false or true based on the stdout's file descriptor referring to a terminal - // or not. This is a global option and affects all colors. For more control - // over each color block use the methods DisableColor() individually. - NoColor = os.Getenv("TERM") == "dumb" || + // or not. It's also set to true if the NO_COLOR environment variable is + // set (regardless of its value). This is a global option and affects all + // colors. For more control over each color block use the methods + // DisableColor() individually. + NoColor = noColorExists() || os.Getenv("TERM") == "dumb" || (!isatty.IsTerminal(os.Stdout.Fd()) && !isatty.IsCygwinTerminal(os.Stdout.Fd())) // Output defines the standard output of the print functions. By default @@ -33,6 +35,12 @@ var ( colorsCacheMu sync.Mutex // protects colorsCache ) +// noColorExists returns true if the environment variable NO_COLOR exists. +func noColorExists() bool { + _, exists := os.LookupEnv("NO_COLOR") + return exists +} + // Color defines a custom color object which is defined by SGR parameters. type Color struct { params []Attribute @@ -108,7 +116,14 @@ const ( // New returns a newly created color object. func New(value ...Attribute) *Color { - c := &Color{params: make([]Attribute, 0)} + c := &Color{ + params: make([]Attribute, 0), + } + + if noColorExists() { + c.noColor = boolPtr(true) + } + c.Add(value...) return c } @@ -387,7 +402,7 @@ func (c *Color) EnableColor() { } func (c *Color) isNoColorSet() bool { - // check first if we have user setted action + // check first if we have user set action if c.noColor != nil { return *c.noColor } diff --git a/vendor/github.com/fatih/color/doc.go b/vendor/github.com/fatih/color/doc.go index cf1e96500..04541de78 100644 --- a/vendor/github.com/fatih/color/doc.go +++ b/vendor/github.com/fatih/color/doc.go @@ -118,6 +118,8 @@ the color output with: color.NoColor = true // disables colorized output } +You can also disable the color by setting the NO_COLOR environment variable to any value. 
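// Note the LookupEnv-based check: NO_COLOR disables color when the variable is
// set at all, even to the empty string, which os.Getenv alone cannot detect.
os.Setenv("NO_COLOR", "") // set-but-empty still counts
if _, ok := os.LookupEnv("NO_COLOR"); ok {
	// colors are disabled here, although os.Getenv("NO_COLOR") == ""
}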
+ It also has support for single color definitions (local). You can disable/enable color output on the fly: diff --git a/vendor/github.com/klauspost/compress/flate/deflate.go b/vendor/github.com/klauspost/compress/flate/deflate.go index 40b5802de..5283ac5a5 100644 --- a/vendor/github.com/klauspost/compress/flate/deflate.go +++ b/vendor/github.com/klauspost/compress/flate/deflate.go @@ -644,7 +644,7 @@ func (d *compressor) init(w io.Writer, level int) (err error) { d.fill = (*compressor).fillBlock d.step = (*compressor).store case level == ConstantCompression: - d.w.logNewTablePenalty = 8 + d.w.logNewTablePenalty = 10 d.window = make([]byte, 32<<10) d.fill = (*compressor).fillBlock d.step = (*compressor).storeHuff diff --git a/vendor/github.com/klauspost/compress/flate/fast_encoder.go b/vendor/github.com/klauspost/compress/flate/fast_encoder.go index 678f08105..347ac2c90 100644 --- a/vendor/github.com/klauspost/compress/flate/fast_encoder.go +++ b/vendor/github.com/klauspost/compress/flate/fast_encoder.go @@ -45,7 +45,7 @@ const ( bTableBits = 17 // Bits used in the big tables bTableSize = 1 << bTableBits // Size of the table - allocHistory = maxStoreBlockSize * 10 // Size to preallocate for history. + allocHistory = maxStoreBlockSize * 5 // Size to preallocate for history. bufferReset = (1 << 31) - allocHistory - maxStoreBlockSize - 1 // Reset the buffer offset when reaching this. ) diff --git a/vendor/github.com/klauspost/compress/flate/huffman_bit_writer.go b/vendor/github.com/klauspost/compress/flate/huffman_bit_writer.go index db54be139..3ad5e9807 100644 --- a/vendor/github.com/klauspost/compress/flate/huffman_bit_writer.go +++ b/vendor/github.com/klauspost/compress/flate/huffman_bit_writer.go @@ -6,6 +6,7 @@ package flate import ( "encoding/binary" + "fmt" "io" ) @@ -27,7 +28,7 @@ const ( // after which bytes are flushed to the writer. // Should preferably be a multiple of 6, since // we accumulate 6 bytes between writes to the buffer. - bufferFlushSize = 240 + bufferFlushSize = 246 // bufferSize is the actual output byte buffer size. // It must have additional headroom for a flush @@ -59,19 +60,31 @@ var offsetExtraBits = [64]int8{ 14, 14, 15, 15, 16, 16, 17, 17, 18, 18, 19, 19, 20, 20, } -var offsetBase = [64]uint32{ - /* normal deflate */ - 0x000000, 0x000001, 0x000002, 0x000003, 0x000004, - 0x000006, 0x000008, 0x00000c, 0x000010, 0x000018, - 0x000020, 0x000030, 0x000040, 0x000060, 0x000080, - 0x0000c0, 0x000100, 0x000180, 0x000200, 0x000300, - 0x000400, 0x000600, 0x000800, 0x000c00, 0x001000, - 0x001800, 0x002000, 0x003000, 0x004000, 0x006000, +var offsetCombined = [32]uint32{} - /* extended window */ - 0x008000, 0x00c000, 0x010000, 0x018000, 0x020000, - 0x030000, 0x040000, 0x060000, 0x080000, 0x0c0000, - 0x100000, 0x180000, 0x200000, 0x300000, +func init() { + var offsetBase = [64]uint32{ + /* normal deflate */ + 0x000000, 0x000001, 0x000002, 0x000003, 0x000004, + 0x000006, 0x000008, 0x00000c, 0x000010, 0x000018, + 0x000020, 0x000030, 0x000040, 0x000060, 0x000080, + 0x0000c0, 0x000100, 0x000180, 0x000200, 0x000300, + 0x000400, 0x000600, 0x000800, 0x000c00, 0x001000, + 0x001800, 0x002000, 0x003000, 0x004000, 0x006000, + + /* extended window */ + 0x008000, 0x00c000, 0x010000, 0x018000, 0x020000, + 0x030000, 0x040000, 0x060000, 0x080000, 0x0c0000, + 0x100000, 0x180000, 0x200000, 0x300000, + } + + for i := range offsetCombined[:] { + // Don't use extended window values... 
+ if offsetBase[i] > 0x006000 { + continue + } + offsetCombined[i] = uint32(offsetExtraBits[i])<<16 | (offsetBase[i]) + } } // The odd order in which the codegen code sizes are written. @@ -88,15 +101,16 @@ type huffmanBitWriter struct { bits uint64 nbits uint16 nbytes uint8 + lastHuffMan bool literalEncoding *huffmanEncoder + tmpLitEncoding *huffmanEncoder offsetEncoding *huffmanEncoder codegenEncoding *huffmanEncoder err error lastHeader int // Set between 0 (reused block can be up to 2x the size) logNewTablePenalty uint - lastHuffMan bool - bytes [256]byte + bytes [256 + 8]byte literalFreq [lengthCodesStart + 32]uint16 offsetFreq [32]uint16 codegenFreq [codegenCodeCount]uint16 @@ -128,6 +142,7 @@ func newHuffmanBitWriter(w io.Writer) *huffmanBitWriter { return &huffmanBitWriter{ writer: w, literalEncoding: newHuffmanEncoder(literalCount), + tmpLitEncoding: newHuffmanEncoder(literalCount), codegenEncoding: newHuffmanEncoder(codegenCodeCount), offsetEncoding: newHuffmanEncoder(offsetCodeCount), } @@ -745,9 +760,31 @@ func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode) offs := oeCodes[:32] lengths := leCodes[lengthCodesStart:] lengths = lengths[:32] + + // Go 1.16 LOVES having these on stack. + bits, nbits, nbytes := w.bits, w.nbits, w.nbytes + for _, t := range tokens { if t < matchType { - w.writeCode(lits[t.literal()]) + //w.writeCode(lits[t.literal()]) + c := lits[t.literal()] + bits |= uint64(c.code) << nbits + nbits += c.len + if nbits >= 48 { + binary.LittleEndian.PutUint64(w.bytes[nbytes:], bits) + //*(*uint64)(unsafe.Pointer(&w.bytes[nbytes])) = bits + bits >>= 48 + nbits -= 48 + nbytes += 6 + if nbytes >= bufferFlushSize { + if w.err != nil { + nbytes = 0 + return + } + _, w.err = w.writer.Write(w.bytes[:nbytes]) + nbytes = 0 + } + } continue } @@ -759,38 +796,99 @@ func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode) } else { // inlined c := lengths[lengthCode&31] - w.bits |= uint64(c.code) << w.nbits - w.nbits += c.len - if w.nbits >= 48 { - w.writeOutBits() + bits |= uint64(c.code) << nbits + nbits += c.len + if nbits >= 48 { + binary.LittleEndian.PutUint64(w.bytes[nbytes:], bits) + //*(*uint64)(unsafe.Pointer(&w.bytes[nbytes])) = bits + bits >>= 48 + nbits -= 48 + nbytes += 6 + if nbytes >= bufferFlushSize { + if w.err != nil { + nbytes = 0 + return + } + _, w.err = w.writer.Write(w.bytes[:nbytes]) + nbytes = 0 + } } } extraLengthBits := uint16(lengthExtraBits[lengthCode&31]) if extraLengthBits > 0 { + //w.writeBits(extraLength, extraLengthBits) extraLength := int32(length - lengthBase[lengthCode&31]) - w.writeBits(extraLength, extraLengthBits) + bits |= uint64(extraLength) << nbits + nbits += extraLengthBits + if nbits >= 48 { + binary.LittleEndian.PutUint64(w.bytes[nbytes:], bits) + //*(*uint64)(unsafe.Pointer(&w.bytes[nbytes])) = bits + bits >>= 48 + nbits -= 48 + nbytes += 6 + if nbytes >= bufferFlushSize { + if w.err != nil { + nbytes = 0 + return + } + _, w.err = w.writer.Write(w.bytes[:nbytes]) + nbytes = 0 + } + } } // Write the offset offset := t.offset() - offsetCode := offsetCode(offset) + offsetCode := offset >> 16 + offset &= matchOffsetOnlyMask if false { w.writeCode(offs[offsetCode&31]) } else { // inlined - c := offs[offsetCode&31] - w.bits |= uint64(c.code) << w.nbits - w.nbits += c.len - if w.nbits >= 48 { - w.writeOutBits() + c := offs[offsetCode] + bits |= uint64(c.code) << nbits + nbits += c.len + if nbits >= 48 { + binary.LittleEndian.PutUint64(w.bytes[nbytes:], bits) + 
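// offsetCombined packs each offset code's extra-bit count (high 16 bits) and
// base offset (low 16 bits) into one uint32, so writeTokens performs a single
// table lookup per match instead of two. Unpacking, as a standalone sketch:
func splitOffsetCombined(packed uint32) (extraBits uint16, base uint32) {
	return uint16(packed >> 16), packed & 0xffff
}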
//*(*uint64)(unsafe.Pointer(&w.bytes[nbytes])) = bits + bits >>= 48 + nbits -= 48 + nbytes += 6 + if nbytes >= bufferFlushSize { + if w.err != nil { + nbytes = 0 + return + } + _, w.err = w.writer.Write(w.bytes[:nbytes]) + nbytes = 0 + } } } - extraOffsetBits := uint16(offsetExtraBits[offsetCode&63]) - if extraOffsetBits > 0 { - extraOffset := int32(offset - offsetBase[offsetCode&63]) - w.writeBits(extraOffset, extraOffsetBits) + offsetComb := offsetCombined[offsetCode] + if offsetComb > 1<<16 { + //w.writeBits(extraOffset, extraOffsetBits) + bits |= uint64(offset&matchOffsetOnlyMask-(offsetComb&0xffff)) << nbits + nbits += uint16(offsetComb >> 16) + if nbits >= 48 { + binary.LittleEndian.PutUint64(w.bytes[nbytes:], bits) + //*(*uint64)(unsafe.Pointer(&w.bytes[nbytes])) = bits + bits >>= 48 + nbits -= 48 + nbytes += 6 + if nbytes >= bufferFlushSize { + if w.err != nil { + nbytes = 0 + return + } + _, w.err = w.writer.Write(w.bytes[:nbytes]) + nbytes = 0 + } + } } } + // Restore... + w.bits, w.nbits, w.nbytes = bits, nbits, nbytes + if deferEOB { w.writeCode(leCodes[endBlockMarker]) } @@ -825,13 +923,28 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) { } } + // Fill is rarely better... + const fill = false + const numLiterals = endBlockMarker + 1 + const numOffsets = 1 + // Add everything as literals // We have to estimate the header size. // Assume header is around 70 bytes: // https://stackoverflow.com/a/25454430 const guessHeaderSizeBits = 70 * 8 - estBits := histogramSize(input, w.literalFreq[:], !eof && !sync) - estBits += w.lastHeader + len(input)/32 + histogram(input, w.literalFreq[:numLiterals], fill) + w.literalFreq[endBlockMarker] = 1 + w.tmpLitEncoding.generate(w.literalFreq[:numLiterals], 15) + if fill { + // Clear fill... + for i := range w.literalFreq[:numLiterals] { + w.literalFreq[i] = 0 + } + histogram(input, w.literalFreq[:numLiterals], false) + } + estBits := w.tmpLitEncoding.canReuseBits(w.literalFreq[:numLiterals]) + estBits += w.lastHeader if w.lastHeader == 0 { estBits += guessHeaderSizeBits } @@ -839,33 +952,31 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) { // Store bytes, if we don't get a reasonable improvement. ssize, storable := w.storedSize(input) - if storable && ssize < estBits { + if storable && ssize <= estBits { w.writeStoredHeader(len(input), eof) w.writeBytes(input) return } - reuseSize := 0 if w.lastHeader > 0 { - reuseSize = w.literalEncoding.bitLength(w.literalFreq[:256]) + reuseSize := w.literalEncoding.canReuseBits(w.literalFreq[:256]) if estBits < reuseSize { + if debugDeflate { + //fmt.Println("not reusing, reuse:", reuseSize/8, "> new:", estBits/8, "- header est:", w.lastHeader/8) + } // We owe an EOB w.writeCode(w.literalEncoding.codes[endBlockMarker]) w.lastHeader = 0 + } else if debugDeflate { + fmt.Println("reusing, reuse:", reuseSize/8, "> new:", estBits/8, "- header est:", w.lastHeader/8) } } - const numLiterals = endBlockMarker + 1 - const numOffsets = 1 + count := 0 if w.lastHeader == 0 { - if !eof && !sync { - // Generate a slightly suboptimal tree that can be used for all. - fillHist(w.literalFreq[:numLiterals]) - } - w.literalFreq[endBlockMarker] = 1 - w.literalEncoding.generate(w.literalFreq[:numLiterals], 15) - + // Use the temp encoding, so swap. + w.literalEncoding, w.tmpLitEncoding = w.tmpLitEncoding, w.literalEncoding // Generate codegen and codegenFrequencies, which indicates how to encode // the literalEncoding and the offsetEncoding. 
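// The rewritten hot loops flush 48 bits (6 whole bytes) at a time, which is
// also why bufferFlushSize stays a multiple of 6. The idiom reduced to a
// standalone sketch; widths are assumed <= 16 bits, as Huffman codes here are,
// so the 64-bit accumulator cannot overflow. (The real writer additionally
// copies these fields into stack locals for the duration of the loop.)
type bitWriterSketch struct {
	acc uint64
	n   uint16
	out []byte
}

func (w *bitWriterSketch) writeBits(code uint64, width uint16) {
	w.acc |= code << w.n
	w.n += width
	if w.n >= 48 {
		var tmp [8]byte
		binary.LittleEndian.PutUint64(tmp[:], w.acc)
		w.out = append(w.out, tmp[:6]...) // low 6 bytes are complete
		w.acc >>= 48
		w.n -= 48
	}
}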
w.generateCodegen(numLiterals, numOffsets, w.literalEncoding, huffOffset) @@ -876,34 +987,47 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) { w.writeDynamicHeader(numLiterals, numOffsets, numCodegens, eof) w.lastHuffMan = true w.lastHeader, _ = w.headerSize() + if debugDeflate { + count += w.lastHeader + fmt.Println("header:", count/8) + } } - encoding := w.literalEncoding.codes[:257] + encoding := w.literalEncoding.codes[:256] + // Go 1.16 LOVES having these on stack. At least 1.5x the speed. + bits, nbits, nbytes := w.bits, w.nbits, w.nbytes for _, t := range input { // Bitwriting inlined, ~30% speedup c := encoding[t] - w.bits |= uint64(c.code) << w.nbits - w.nbits += c.len - if w.nbits >= 48 { - bits := w.bits - w.bits >>= 48 - w.nbits -= 48 - n := w.nbytes - binary.LittleEndian.PutUint64(w.bytes[n:], bits) - n += 6 - if n >= bufferFlushSize { + bits |= uint64(c.code) << nbits + nbits += c.len + if debugDeflate { + count += int(c.len) + } + if nbits >= 48 { + binary.LittleEndian.PutUint64(w.bytes[nbytes:], bits) + //*(*uint64)(unsafe.Pointer(&w.bytes[nbytes])) = bits + bits >>= 48 + nbits -= 48 + nbytes += 6 + if nbytes >= bufferFlushSize { if w.err != nil { - n = 0 + nbytes = 0 return } - w.write(w.bytes[:n]) - n = 0 + _, w.err = w.writer.Write(w.bytes[:nbytes]) + nbytes = 0 } - w.nbytes = n } } + // Restore... + w.bits, w.nbits, w.nbytes = bits, nbits, nbytes + + if debugDeflate { + fmt.Println("wrote", count/8, "bytes") + } if eof || sync { - w.writeCode(encoding[endBlockMarker]) + w.writeCode(w.literalEncoding.codes[endBlockMarker]) w.lastHeader = 0 w.lastHuffMan = false } diff --git a/vendor/github.com/klauspost/compress/flate/huffman_code.go b/vendor/github.com/klauspost/compress/flate/huffman_code.go index 0d3445a1c..67b2b3872 100644 --- a/vendor/github.com/klauspost/compress/flate/huffman_code.go +++ b/vendor/github.com/klauspost/compress/flate/huffman_code.go @@ -21,9 +21,13 @@ type hcode struct { } type huffmanEncoder struct { - codes []hcode - freqcache []literalNode - bitCount [17]int32 + codes []hcode + bitCount [17]int32 + + // Allocate a reusable buffer with the longest possible frequency table. + // Possible lengths are codegenCodeCount, offsetCodeCount and literalCount. + // The largest of these is literalCount, so we allocate for that case. + freqcache [literalCount + 1]literalNode } type literalNode struct { @@ -132,6 +136,21 @@ func (h *huffmanEncoder) bitLengthRaw(b []byte) int { return total } +// canReuseBits returns the number of bits or math.MaxInt32 if the encoder cannot be reused. +func (h *huffmanEncoder) canReuseBits(freq []uint16) int { + var total int + for i, f := range freq { + if f != 0 { + code := h.codes[i] + if code.len == 0 { + return math.MaxInt32 + } + total += int(f) * int(code.len) + } + } + return total +} + // Return the number of literals assigned to each bit size in the Huffman encoding // // This method is only called when list.length >= 3 @@ -291,12 +310,6 @@ func (h *huffmanEncoder) assignEncodingAndSize(bitCount []int32, list []literalN // freq An array of frequencies, in which frequency[i] gives the frequency of literal i. // maxBits The maximum number of bits to use for any literal. func (h *huffmanEncoder) generate(freq []uint16, maxBits int32) { - if h.freqcache == nil { - // Allocate a reusable buffer with the longest possible frequency table. - // Possible lengths are codegenCodeCount, offsetCodeCount and literalCount. - // The largest of these is literalCount, so we allocate for that case. 
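// freqcache is now an array embedded in the struct rather than a slice made on
// first use: one allocation when the encoder is built, none per generate call.
// The pattern in isolation (type and size illustrative, not the real ones):
type cacheOwnerSketch struct {
	cache [300]nodeSketch // was: []nodeSketch, make()'d lazily
}

type nodeSketch struct{ literal, freq uint16 }

func (c *cacheOwnerSketch) slice(n int) []nodeSketch {
	return c.cache[:n] // reslicing an embedded array allocates nothing
}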
- h.freqcache = make([]literalNode, literalCount+1) - } list := h.freqcache[:len(freq)+1] // Number of non-zero literals count := 0 @@ -330,10 +343,14 @@ func (h *huffmanEncoder) generate(freq []uint16, maxBits int32) { h.assignEncodingAndSize(bitCount, list) } +// atLeastOne clamps the result between 1 and 15. func atLeastOne(v float32) float32 { if v < 1 { return 1 } + if v > 15 { + return 15 + } return v } @@ -346,31 +363,12 @@ func fillHist(b []uint16) { } } -// histogramSize accumulates a histogram of b in h. -// An estimated size in bits is returned. -// len(h) must be >= 256, and h's elements must be all zeroes. -func histogramSize(b []byte, h []uint16, fill bool) (bits int) { +func histogram(b []byte, h []uint16, fill bool) { h = h[:256] for _, t := range b { h[t]++ } - total := len(b) if fill { - for _, v := range h { - if v == 0 { - total++ - } - } + fillHist(h) } - - invTotal := 1.0 / float32(total) - shannon := float32(0.0) - for _, v := range h { - if v > 0 { - n := float32(v) - shannon += atLeastOne(-mFastLog2(n*invTotal)) * n - } - } - - return int(shannon + 0.99) } diff --git a/vendor/github.com/klauspost/compress/flate/token.go b/vendor/github.com/klauspost/compress/flate/token.go index f9abf606d..eb862d7a9 100644 --- a/vendor/github.com/klauspost/compress/flate/token.go +++ b/vendor/github.com/klauspost/compress/flate/token.go @@ -13,14 +13,17 @@ import ( ) const ( + // From top // 2 bits: type 0 = literal 1=EOF 2=Match 3=Unused // 8 bits: xlength = length - MIN_MATCH_LENGTH - // 22 bits xoffset = offset - MIN_OFFSET_SIZE, or literal - lengthShift = 22 - offsetMask = 1< 0 { n := float32(v) - shannon += -mFastLog2(n*invTotal) * n + shannon += atLeastOne(-mFastLog2(n*invTotal)) * n } } // Just add 15 for EOB @@ -240,7 +243,7 @@ func (t *tokens) EstimatedBits() int { for i, v := range t.extraHist[1 : literalCount-256] { if v > 0 { n := float32(v) - shannon += -mFastLog2(n*invTotal) * n + shannon += atLeastOne(-mFastLog2(n*invTotal)) * n bits += int(lengthExtraBits[i&31]) * int(v) nMatches += int(v) } @@ -251,7 +254,7 @@ func (t *tokens) EstimatedBits() int { for i, v := range t.offHist[:offsetCodeCount] { if v > 0 { n := float32(v) - shannon += -mFastLog2(n*invTotal) * n + shannon += atLeastOne(-mFastLog2(n*invTotal)) * n bits += int(offsetExtraBits[i&31]) * int(v) } } @@ -270,11 +273,13 @@ func (t *tokens) AddMatch(xlength uint32, xoffset uint32) { panic(fmt.Errorf("invalid offset: %v", xoffset)) } } + oCode := offsetCode(xoffset) + xoffset |= oCode << 16 t.nLits++ - lengthCode := lengthCodes1[uint8(xlength)] & 31 + + t.extraHist[lengthCodes1[uint8(xlength)]]++ + t.offHist[oCode]++ t.tokens[t.n] = token(matchType | xlength< 0 { xl := xlength if xl > 258 { @@ -294,12 +300,11 @@ func (t *tokens) AddMatchLong(xlength int32, xoffset uint32) { xl = 258 - baseMatchLength } xlength -= xl - xl -= 3 + xl -= baseMatchLength t.nLits++ - lengthCode := lengthCodes1[uint8(xl)] & 31 - t.tokens[t.n] = token(matchType | uint32(xl)< maxCompressedBlockSize || uint64(cSize) > b.WindowSize { - if debug { + if debugDecoder { printf("compressed block too big: csize:%d block: %+v\n", uint64(cSize), b) } return ErrCompressedSizeTooBig @@ -179,10 +177,9 @@ func (b *blockDec) reset(br byteBuffer, windowSize uint64) error { if cap(b.dst) <= maxSize { b.dst = make([]byte, 0, maxSize+1) } - var err error b.data, err = br.readBig(cSize, b.dataStorage) if err != nil { - if debug { + if debugDecoder { println("Reading block:", err, "(", cSize, ")", len(b.data)) printf("%T", br) } @@ -252,7 +249,7 @@ func 
(b *blockDec) startDecoder() { b: b.dst, err: err, } - if debug { + if debugDecoder { println("Decompressed to", len(b.dst), "bytes, error:", err) } b.result <- o @@ -267,7 +264,7 @@ func (b *blockDec) startDecoder() { default: panic("Invalid block type") } - if debug { + if debugDecoder { println("blockDec: Finished block") } } @@ -300,7 +297,7 @@ func (b *blockDec) decodeBuf(hist *history) error { b.dst = hist.b hist.b = nil err := b.decodeCompressed(hist) - if debug { + if debugDecoder { println("Decompressed to total", len(b.dst), "bytes, hash:", xxhash.Sum64(b.dst), "error:", err) } hist.b = b.dst @@ -393,7 +390,7 @@ func (b *blockDec) decodeCompressed(hist *history) error { in = in[5:] } } - if debug { + if debugDecoder { println("literals type:", litType, "litRegenSize:", litRegenSize, "litCompSize:", litCompSize, "sizeFormat:", sizeFormat, "4X:", fourStreams) } var literals []byte @@ -431,7 +428,7 @@ func (b *blockDec) decodeCompressed(hist *history) error { literals[i] = v } in = in[1:] - if debug { + if debugDecoder { printf("Found %d RLE compressed literals\n", litRegenSize) } case literalsBlockTreeless: @@ -442,7 +439,7 @@ func (b *blockDec) decodeCompressed(hist *history) error { // Store compressed literals, so we defer decoding until we get history. literals = in[:litCompSize] in = in[litCompSize:] - if debug { + if debugDecoder { printf("Found %d compressed literals\n", litCompSize) } case literalsBlockCompressed: @@ -484,7 +481,7 @@ func (b *blockDec) decodeCompressed(hist *history) error { if len(literals) != litRegenSize { return fmt.Errorf("literal output size mismatch want %d, got %d", litRegenSize, len(literals)) } - if debug { + if debugDecoder { printf("Decompressed %d literals into %d bytes\n", litCompSize, litRegenSize) } } @@ -535,12 +532,12 @@ func (b *blockDec) decodeCompressed(hist *history) error { br := byteReader{b: in, off: 0} compMode := br.Uint8() br.advance(1) - if debug { + if debugDecoder { printf("Compression modes: 0b%b", compMode) } for i := uint(0); i < 3; i++ { mode := seqCompMode((compMode >> (6 - i*2)) & 3) - if debug { + if debugDecoder { println("Table", tableIndex(i), "is", mode) } var seq *sequenceDec @@ -571,7 +568,7 @@ func (b *blockDec) decodeCompressed(hist *history) error { } dec.setRLE(symb) seq.fse = dec - if debug { + if debugDecoder { printf("RLE set to %+v, code: %v", symb, v) } case compModeFSE: @@ -587,7 +584,7 @@ func (b *blockDec) decodeCompressed(hist *history) error { println("Transform table error:", err) return err } - if debug { + if debugDecoder { println("Read table ok", "symbolLen:", dec.symbolLen) } seq.fse = dec @@ -655,7 +652,7 @@ func (b *blockDec) decodeCompressed(hist *history) error { if huff != nil { hist.huffTree = huff } - if debug { + if debugDecoder { println("Final literals:", len(literals), "hash:", xxhash.Sum64(literals), "and", nSeqs, "sequences.") } @@ -672,7 +669,7 @@ func (b *blockDec) decodeCompressed(hist *history) error { if err != nil { return err } - if debug { + if debugDecoder { println("History merged ok") } br := &bitReader{} @@ -731,7 +728,7 @@ func (b *blockDec) decodeCompressed(hist *history) error { } hist.append(b.dst) hist.recentOffsets = seqs.prevOffset - if debug { + if debugDecoder { println("Finished block with literals:", len(literals), "and", nSeqs, "sequences.") } diff --git a/vendor/github.com/klauspost/compress/zstd/blockenc.go b/vendor/github.com/klauspost/compress/zstd/blockenc.go index e1be092f3..3df185ee4 100644 --- a/vendor/github.com/klauspost/compress/zstd/blockenc.go 
+++ b/vendor/github.com/klauspost/compress/zstd/blockenc.go @@ -156,7 +156,7 @@ func (h *literalsHeader) setSize(regenLen int) { switch { case inBits < 5: lh |= (uint64(regenLen) << 3) | (1 << 60) - if debug { + if debugEncoder { got := int(lh>>3) & 0xff if got != regenLen { panic(fmt.Sprint("litRegenSize = ", regenLen, "(want) != ", got, "(got)")) @@ -184,7 +184,7 @@ func (h *literalsHeader) setSizes(compLen, inLen int, single bool) { lh |= 1 << 2 } lh |= (uint64(inLen) << 4) | (uint64(compLen) << (10 + 4)) | (3 << 60) - if debug { + if debugEncoder { const mmask = (1 << 24) - 1 n := (lh >> 4) & mmask if int(n&1023) != inLen { @@ -312,7 +312,7 @@ func (b *blockEnc) encodeRaw(a []byte) { bh.setType(blockTypeRaw) b.output = bh.appendTo(b.output[:0]) b.output = append(b.output, a...) - if debug { + if debugEncoder { println("Adding RAW block, length", len(a), "last:", b.last) } } @@ -325,7 +325,7 @@ func (b *blockEnc) encodeRawTo(dst, src []byte) []byte { bh.setType(blockTypeRaw) dst = bh.appendTo(dst) dst = append(dst, src...) - if debug { + if debugEncoder { println("Adding RAW block, length", len(src), "last:", b.last) } return dst @@ -339,7 +339,7 @@ func (b *blockEnc) encodeLits(lits []byte, raw bool) error { // Don't compress extremely small blocks if len(lits) < 8 || (len(lits) < 32 && b.dictLitEnc == nil) || raw { - if debug { + if debugEncoder { println("Adding RAW block, length", len(lits), "last:", b.last) } bh.setType(blockTypeRaw) @@ -371,7 +371,7 @@ func (b *blockEnc) encodeLits(lits []byte, raw bool) error { switch err { case huff0.ErrIncompressible: - if debug { + if debugEncoder { println("Adding RAW block, length", len(lits), "last:", b.last) } bh.setType(blockTypeRaw) @@ -379,7 +379,7 @@ func (b *blockEnc) encodeLits(lits []byte, raw bool) error { b.output = append(b.output, lits...) return nil case huff0.ErrUseRLE: - if debug { + if debugEncoder { println("Adding RLE block, length", len(lits)) } bh.setType(blockTypeRLE) @@ -396,12 +396,12 @@ func (b *blockEnc) encodeLits(lits []byte, raw bool) error { bh.setType(blockTypeCompressed) var lh literalsHeader if reUsed { - if debug { + if debugEncoder { println("Reused tree, compressed to", len(out)) } lh.setType(literalsBlockTreeless) } else { - if debug { + if debugEncoder { println("New tree, compressed to", len(out), "tree size:", len(b.litEnc.OutTable)) } lh.setType(literalsBlockCompressed) @@ -517,7 +517,7 @@ func (b *blockEnc) encode(org []byte, raw, rawAllLits bool) error { lh.setSize(len(b.literals)) b.output = lh.appendTo(b.output) b.output = append(b.output, b.literals...) - if debug { + if debugEncoder { println("Adding literals RAW, length", len(b.literals)) } case huff0.ErrUseRLE: @@ -525,22 +525,22 @@ func (b *blockEnc) encode(org []byte, raw, rawAllLits bool) error { lh.setSize(len(b.literals)) b.output = lh.appendTo(b.output) b.output = append(b.output, b.literals[0]) - if debug { + if debugEncoder { println("Adding literals RLE") } case nil: // Compressed litLen... 
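// Splitting debug into debugDecoder/debugEncoder keeps both as untyped
// constants, so in normal builds the compiler removes every guarded branch and
// its arguments. The pattern in isolation (names here are illustrative):
const debugEncoderSketch = false // flip while developing

func traceEnc(args ...interface{}) {
	if debugEncoderSketch { // constant-false: dead code, compiled away
		fmt.Println(args...)
	}
}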
if reUsed { - if debug { + if debugEncoder { println("reused tree") } lh.setType(literalsBlockTreeless) } else { - if debug { + if debugEncoder { println("new tree, size:", len(b.litEnc.OutTable)) } lh.setType(literalsBlockCompressed) - if debug { + if debugEncoder { _, _, err := huff0.ReadTable(out, nil) if err != nil { panic(err) @@ -548,18 +548,18 @@ func (b *blockEnc) encode(org []byte, raw, rawAllLits bool) error { } } lh.setSizes(len(out), len(b.literals), single) - if debug { + if debugEncoder { printf("Compressed %d literals to %d bytes", len(b.literals), len(out)) println("Adding literal header:", lh) } b.output = lh.appendTo(b.output) b.output = append(b.output, out...) b.litEnc.Reuse = huff0.ReusePolicyAllow - if debug { + if debugEncoder { println("Adding literals compressed") } default: - if debug { + if debugEncoder { println("Adding literals ERROR:", err) } return err @@ -577,7 +577,7 @@ func (b *blockEnc) encode(org []byte, raw, rawAllLits bool) error { n := len(b.sequences) - 0x7f00 b.output = append(b.output, 255, uint8(n), uint8(n>>8)) } - if debug { + if debugEncoder { println("Encoding", len(b.sequences), "sequences") } b.genCodes() @@ -611,17 +611,17 @@ func (b *blockEnc) encode(org []byte, raw, rawAllLits bool) error { nSize = nSize + (nSize+2*8*16)>>4 switch { case predefSize <= prevSize && predefSize <= nSize || forcePreDef: - if debug { + if debugEncoder { println("Using predefined", predefSize>>3, "<=", nSize>>3) } return preDef, compModePredefined case prevSize <= nSize: - if debug { + if debugEncoder { println("Using previous", prevSize>>3, "<=", nSize>>3) } return prev, compModeRepeat default: - if debug { + if debugEncoder { println("Using new, predef", predefSize>>3, ". previous:", prevSize>>3, ">", nSize>>3, "header max:", cur.maxHeaderSize()>>3, "bytes") println("tl:", cur.actualTableLog, "symbolLen:", cur.symbolLen, "norm:", cur.norm[:cur.symbolLen], "hist", cur.count[:cur.symbolLen]) } @@ -634,7 +634,7 @@ func (b *blockEnc) encode(org []byte, raw, rawAllLits bool) error { if llEnc.useRLE { mode |= uint8(compModeRLE) << 6 llEnc.setRLE(b.sequences[0].llCode) - if debug { + if debugEncoder { println("llEnc.useRLE") } } else { @@ -645,7 +645,7 @@ func (b *blockEnc) encode(org []byte, raw, rawAllLits bool) error { if ofEnc.useRLE { mode |= uint8(compModeRLE) << 4 ofEnc.setRLE(b.sequences[0].ofCode) - if debug { + if debugEncoder { println("ofEnc.useRLE") } } else { @@ -657,7 +657,7 @@ func (b *blockEnc) encode(org []byte, raw, rawAllLits bool) error { if mlEnc.useRLE { mode |= uint8(compModeRLE) << 2 mlEnc.setRLE(b.sequences[0].mlCode) - if debug { + if debugEncoder { println("mlEnc.useRLE, code: ", b.sequences[0].mlCode, "value", b.sequences[0].matchLen) } } else { @@ -666,7 +666,7 @@ func (b *blockEnc) encode(org []byte, raw, rawAllLits bool) error { mode |= uint8(m) << 2 } b.output = append(b.output, mode) - if debug { + if debugEncoder { printf("Compression modes: 0b%b", mode) } b.output, err = llEnc.writeCount(b.output) @@ -786,7 +786,7 @@ func (b *blockEnc) encode(org []byte, raw, rawAllLits bool) error { // Size is output minus block header. 
bh.setSize(uint32(len(b.output)-bhOffset) - 3) - if debug { + if debugEncoder { println("Rewriting block header", bh) } _ = bh.appendTo(b.output[bhOffset:bhOffset]) diff --git a/vendor/github.com/klauspost/compress/zstd/bytebuf.go b/vendor/github.com/klauspost/compress/zstd/bytebuf.go index 658ef7838..aab71c6cf 100644 --- a/vendor/github.com/klauspost/compress/zstd/bytebuf.go +++ b/vendor/github.com/klauspost/compress/zstd/bytebuf.go @@ -12,8 +12,8 @@ import ( type byteBuffer interface { // Read up to 8 bytes. - // Returns nil if no more input is available. - readSmall(n int) []byte + // Returns io.ErrUnexpectedEOF if this cannot be satisfied. + readSmall(n int) ([]byte, error) // Read >8 bytes. // MAY use the destination slice. @@ -29,17 +29,17 @@ type byteBuffer interface { // in-memory buffer type byteBuf []byte -func (b *byteBuf) readSmall(n int) []byte { +func (b *byteBuf) readSmall(n int) ([]byte, error) { if debugAsserts && n > 8 { panic(fmt.Errorf("small read > 8 (%d). use readBig", n)) } bb := *b if len(bb) < n { - return nil + return nil, io.ErrUnexpectedEOF } r := bb[:n] *b = bb[n:] - return r + return r, nil } func (b *byteBuf) readBig(n int, dst []byte) ([]byte, error) { @@ -81,19 +81,22 @@ type readerWrapper struct { tmp [8]byte } -func (r *readerWrapper) readSmall(n int) []byte { +func (r *readerWrapper) readSmall(n int) ([]byte, error) { if debugAsserts && n > 8 { panic(fmt.Errorf("small read > 8 (%d). use readBig", n)) } n2, err := io.ReadFull(r.r, r.tmp[:n]) // We only really care about the actual bytes read. - if n2 != n { - if debug { + if err != nil { + if err == io.EOF { + return nil, io.ErrUnexpectedEOF + } + if debugDecoder { println("readSmall: got", n2, "want", n, "err", err) } - return nil + return nil, err } - return r.tmp[:n] + return r.tmp[:n], nil } func (r *readerWrapper) readBig(n int, dst []byte) ([]byte, error) { diff --git a/vendor/github.com/klauspost/compress/zstd/decoder.go b/vendor/github.com/klauspost/compress/zstd/decoder.go index f593e464b..4d984c3b2 100644 --- a/vendor/github.com/klauspost/compress/zstd/decoder.go +++ b/vendor/github.com/klauspost/compress/zstd/decoder.go @@ -113,9 +113,6 @@ func NewReader(r io.Reader, opts ...DOption) (*Decoder, error) { // Returns the number of bytes written and any error that occurred. // When the stream is done, io.EOF will be returned. func (d *Decoder) Read(p []byte) (int, error) { - if d.stream == nil { - return 0, ErrDecoderNilInput - } var n int for { if len(d.current.b) > 0 { @@ -138,7 +135,7 @@ func (d *Decoder) Read(p []byte) (int, error) { } } if len(d.current.b) > 0 { - if debug { + if debugDecoder { println("returning", n, "still bytes left:", len(d.current.b)) } // Only return error at end of block @@ -147,7 +144,7 @@ func (d *Decoder) Read(p []byte) (int, error) { if d.current.err != nil { d.drainOutput() } - if debug { + if debugDecoder { println("returning", n, d.current.err, len(d.decoders)) } return n, d.current.err @@ -167,20 +164,17 @@ func (d *Decoder) Reset(r io.Reader) error { if r == nil { d.current.err = ErrDecoderNilInput + if len(d.current.b) > 0 { + d.current.b = d.current.b[:0] + } d.current.flushed = true return nil } - if d.stream == nil { - d.stream = make(chan decodeStream, 1) - d.streamWg.Add(1) - go d.startStreamDecoder(d.stream) - } - - // If bytes buffer and < 1MB, do sync decoding anyway. - if bb, ok := r.(byter); ok && bb.Len() < 1<<20 { + // If bytes buffer and < 5MB, do sync decoding anyway. 
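// readSmall now reports why input ended instead of returning nil: a clean
// io.EOF before any byte becomes io.ErrUnexpectedEOF, and partial reads keep
// the ErrUnexpectedEOF that io.ReadFull already produces. A roughly equivalent
// standalone helper:
func readSmallSketch(r io.Reader, n int) ([]byte, error) {
	buf := make([]byte, n)
	if _, err := io.ReadFull(r, buf); err != nil {
		if err == io.EOF { // zero bytes read mid-frame
			err = io.ErrUnexpectedEOF
		}
		return nil, err
	}
	return buf, nil
}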
+ if bb, ok := r.(byter); ok && bb.Len() < 5<<20 { bb2 := bb - if debug { + if debugDecoder { println("*bytes.Buffer detected, doing sync decode, len:", bb.Len()) } b := bb2.Bytes() @@ -196,12 +190,18 @@ func (d *Decoder) Reset(r io.Reader) error { d.current.b = dst d.current.err = err d.current.flushed = true - if debug { + if debugDecoder { println("sync decode to", len(dst), "bytes, err:", err) } return nil } + if d.stream == nil { + d.stream = make(chan decodeStream, 1) + d.streamWg.Add(1) + go d.startStreamDecoder(d.stream) + } + // Remove current block. d.current.decodeOutput = decodeOutput{} d.current.err = nil @@ -225,7 +225,7 @@ func (d *Decoder) drainOutput() { d.current.cancel = nil } if d.current.d != nil { - if debug { + if debugDecoder { printf("re-adding current decoder %p, decoders: %d", d.current.d, len(d.decoders)) } d.decoders <- d.current.d @@ -238,7 +238,7 @@ func (d *Decoder) drainOutput() { } for v := range d.current.output { if v.d != nil { - if debug { + if debugDecoder { printf("re-adding decoder %p", v.d) } d.decoders <- v.d @@ -255,9 +255,6 @@ func (d *Decoder) drainOutput() { // The return value n is the number of bytes written. // Any error encountered during the write is also returned. func (d *Decoder) WriteTo(w io.Writer) (int64, error) { - if d.stream == nil { - return 0, ErrDecoderNilInput - } var n int64 for { if len(d.current.b) > 0 { @@ -297,7 +294,7 @@ func (d *Decoder) DecodeAll(input, dst []byte) ([]byte, error) { block := <-d.decoders frame := block.localFrame defer func() { - if debug { + if debugDecoder { printf("re-adding decoder: %p", block) } frame.rawInput = nil @@ -310,7 +307,7 @@ func (d *Decoder) DecodeAll(input, dst []byte) ([]byte, error) { frame.history.reset() err := frame.reset(&frame.bBuf) if err == io.EOF { - if debug { + if debugDecoder { println("frame reset return EOF") } return dst, nil @@ -355,7 +352,7 @@ func (d *Decoder) DecodeAll(input, dst []byte) ([]byte, error) { return dst, err } if len(frame.bBuf) == 0 { - if debug { + if debugDecoder { println("frame dbuf empty") } break @@ -371,7 +368,7 @@ func (d *Decoder) DecodeAll(input, dst []byte) ([]byte, error) { // if no data was available without blocking. 
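// With the sync-decode threshold raised from 1MB to 5MB, small in-memory
// inputs avoid the streaming goroutines entirely. Callers that already hold a
// []byte can skip Reset altogether; sketch, where `compressed` is a
// placeholder input and error handling is abbreviated:
dec, err := zstd.NewReader(nil) // nil reader is fine when only DecodeAll is used
if err != nil {
	panic(err)
}
defer dec.Close()
plain, err := dec.DecodeAll(compressed, nil)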
func (d *Decoder) nextBlock(blocking bool) (ok bool) { if d.current.d != nil { - if debug { + if debugDecoder { printf("re-adding current decoder %p", d.current.d) } d.decoders <- d.current.d @@ -391,7 +388,7 @@ func (d *Decoder) nextBlock(blocking bool) (ok bool) { return false } } - if debug { + if debugDecoder { println("got", len(d.current.b), "bytes, error:", d.current.err) } return true @@ -485,7 +482,7 @@ func (d *Decoder) startStreamDecoder(inStream chan decodeStream) { defer d.streamWg.Done() frame := newFrameDec(d.o) for stream := range inStream { - if debug { + if debugDecoder { println("got new stream") } br := readerWrapper{r: stream.r} @@ -493,7 +490,7 @@ func (d *Decoder) startStreamDecoder(inStream chan decodeStream) { for { frame.history.reset() err := frame.reset(&br) - if debug && err != nil { + if debugDecoder && err != nil { println("Frame decoder returned", err) } if err == nil && frame.DictionaryID != nil { @@ -510,7 +507,7 @@ func (d *Decoder) startStreamDecoder(inStream chan decodeStream) { } break } - if debug { + if debugDecoder { println("starting frame decoder") } diff --git a/vendor/github.com/klauspost/compress/zstd/dict.go b/vendor/github.com/klauspost/compress/zstd/dict.go index fa25a18d8..a36ae83ef 100644 --- a/vendor/github.com/klauspost/compress/zstd/dict.go +++ b/vendor/github.com/klauspost/compress/zstd/dict.go @@ -82,7 +82,7 @@ func loadDict(b []byte) (*dict, error) { println("Transform table error:", err) return err } - if debug { + if debugDecoder || debugEncoder { println("Read table ok", "symbolLen:", dec.symbolLen) } // Set decoders as predefined so they aren't reused. diff --git a/vendor/github.com/klauspost/compress/zstd/enc_best.go b/vendor/github.com/klauspost/compress/zstd/enc_best.go index dc1eed5f0..b7d4b9004 100644 --- a/vendor/github.com/klauspost/compress/zstd/enc_best.go +++ b/vendor/github.com/klauspost/compress/zstd/enc_best.go @@ -132,7 +132,7 @@ func (e *bestFastEncoder) Encode(blk *blockEnc, src []byte) { } _ = addLiterals - if debug { + if debugEncoder { println("recent offsets:", blk.recentOffsets) } @@ -274,7 +274,7 @@ encodeLoop: nextEmit = s if s >= sLimit { - if debug { + if debugEncoder { println("repeat ended", s, best.length) } @@ -412,7 +412,7 @@ encodeLoop: blk.recentOffsets[0] = uint32(offset1) blk.recentOffsets[1] = uint32(offset2) blk.recentOffsets[2] = uint32(offset3) - if debug { + if debugEncoder { println("returning, recent offsets:", blk.recentOffsets, "extra literals:", blk.extraLits) } } diff --git a/vendor/github.com/klauspost/compress/zstd/enc_better.go b/vendor/github.com/klauspost/compress/zstd/enc_better.go index 604954290..eab7b5083 100644 --- a/vendor/github.com/klauspost/compress/zstd/enc_better.go +++ b/vendor/github.com/klauspost/compress/zstd/enc_better.go @@ -138,7 +138,7 @@ func (e *betterFastEncoder) Encode(blk *blockEnc, src []byte) { blk.literals = append(blk.literals, src[nextEmit:until]...) 
s.litLen = uint32(until - nextEmit) } - if debug { + if debugEncoder { println("recent offsets:", blk.recentOffsets) } @@ -204,7 +204,7 @@ encodeLoop: nextEmit = s if s >= sLimit { - if debug { + if debugEncoder { println("repeat ended", s, lenght) } @@ -264,7 +264,7 @@ encodeLoop: s += lenght + repOff2 nextEmit = s if s >= sLimit { - if debug { + if debugEncoder { println("repeat ended", s, lenght) } @@ -553,7 +553,7 @@ encodeLoop: } blk.recentOffsets[0] = uint32(offset1) blk.recentOffsets[1] = uint32(offset2) - if debug { + if debugEncoder { println("returning, recent offsets:", blk.recentOffsets, "extra literals:", blk.extraLits) } } @@ -656,7 +656,7 @@ func (e *betterFastEncoderDict) Encode(blk *blockEnc, src []byte) { blk.literals = append(blk.literals, src[nextEmit:until]...) s.litLen = uint32(until - nextEmit) } - if debug { + if debugEncoder { println("recent offsets:", blk.recentOffsets) } @@ -724,7 +724,7 @@ encodeLoop: nextEmit = s if s >= sLimit { - if debug { + if debugEncoder { println("repeat ended", s, lenght) } @@ -787,7 +787,7 @@ encodeLoop: s += lenght + repOff2 nextEmit = s if s >= sLimit { - if debug { + if debugEncoder { println("repeat ended", s, lenght) } @@ -1084,7 +1084,7 @@ encodeLoop: } blk.recentOffsets[0] = uint32(offset1) blk.recentOffsets[1] = uint32(offset2) - if debug { + if debugEncoder { println("returning, recent offsets:", blk.recentOffsets, "extra literals:", blk.extraLits) } } diff --git a/vendor/github.com/klauspost/compress/zstd/enc_dfast.go b/vendor/github.com/klauspost/compress/zstd/enc_dfast.go index 8629d43d8..96b21b90e 100644 --- a/vendor/github.com/klauspost/compress/zstd/enc_dfast.go +++ b/vendor/github.com/klauspost/compress/zstd/enc_dfast.go @@ -109,7 +109,7 @@ func (e *doubleFastEncoder) Encode(blk *blockEnc, src []byte) { blk.literals = append(blk.literals, src[nextEmit:until]...) s.litLen = uint32(until - nextEmit) } - if debug { + if debugEncoder { println("recent offsets:", blk.recentOffsets) } @@ -170,7 +170,7 @@ encodeLoop: s += lenght + repOff nextEmit = s if s >= sLimit { - if debug { + if debugEncoder { println("repeat ended", s, lenght) } @@ -368,7 +368,7 @@ encodeLoop: } blk.recentOffsets[0] = uint32(offset1) blk.recentOffsets[1] = uint32(offset2) - if debug { + if debugEncoder { println("returning, recent offsets:", blk.recentOffsets, "extra literals:", blk.extraLits) } } @@ -427,7 +427,7 @@ func (e *doubleFastEncoder) EncodeNoHist(blk *blockEnc, src []byte) { blk.literals = append(blk.literals, src[nextEmit:until]...) s.litLen = uint32(until - nextEmit) } - if debug { + if debugEncoder { println("recent offsets:", blk.recentOffsets) } @@ -483,7 +483,7 @@ encodeLoop: s += length + repOff nextEmit = s if s >= sLimit { - if debug { + if debugEncoder { println("repeat ended", s, length) } @@ -677,7 +677,7 @@ encodeLoop: blk.literals = append(blk.literals, src[nextEmit:]...) blk.extraLits = len(src) - int(nextEmit) } - if debug { + if debugEncoder { println("returning, recent offsets:", blk.recentOffsets, "extra literals:", blk.extraLits) } @@ -767,7 +767,7 @@ func (e *doubleFastEncoderDict) Encode(blk *blockEnc, src []byte) { blk.literals = append(blk.literals, src[nextEmit:until]...) 
s.litLen = uint32(until - nextEmit) } - if debug { + if debugEncoder { println("recent offsets:", blk.recentOffsets) } @@ -830,7 +830,7 @@ encodeLoop: s += lenght + repOff nextEmit = s if s >= sLimit { - if debug { + if debugEncoder { println("repeat ended", s, lenght) } @@ -1039,7 +1039,7 @@ encodeLoop: } blk.recentOffsets[0] = uint32(offset1) blk.recentOffsets[1] = uint32(offset2) - if debug { + if debugEncoder { println("returning, recent offsets:", blk.recentOffsets, "extra literals:", blk.extraLits) } // If we encoded more than 64K mark all dirty. diff --git a/vendor/github.com/klauspost/compress/zstd/enc_fast.go b/vendor/github.com/klauspost/compress/zstd/enc_fast.go index ba4a17e10..2246d286d 100644 --- a/vendor/github.com/klauspost/compress/zstd/enc_fast.go +++ b/vendor/github.com/klauspost/compress/zstd/enc_fast.go @@ -103,7 +103,7 @@ func (e *fastEncoder) Encode(blk *blockEnc, src []byte) { blk.literals = append(blk.literals, src[nextEmit:until]...) s.litLen = uint32(until - nextEmit) } - if debug { + if debugEncoder { println("recent offsets:", blk.recentOffsets) } @@ -178,7 +178,7 @@ encodeLoop: s += length + 2 nextEmit = s if s >= sLimit { - if debug { + if debugEncoder { println("repeat ended", s, length) } @@ -330,7 +330,7 @@ encodeLoop: } blk.recentOffsets[0] = uint32(offset1) blk.recentOffsets[1] = uint32(offset2) - if debug { + if debugEncoder { println("returning, recent offsets:", blk.recentOffsets, "extra literals:", blk.extraLits) } } @@ -343,7 +343,7 @@ func (e *fastEncoder) EncodeNoHist(blk *blockEnc, src []byte) { inputMargin = 8 minNonLiteralBlockSize = 1 + 1 + inputMargin ) - if debug { + if debugEncoder { if len(src) > maxBlockSize { panic("src too big") } @@ -391,7 +391,7 @@ func (e *fastEncoder) EncodeNoHist(blk *blockEnc, src []byte) { blk.literals = append(blk.literals, src[nextEmit:until]...) s.litLen = uint32(until - nextEmit) } - if debug { + if debugEncoder { println("recent offsets:", blk.recentOffsets) } @@ -462,7 +462,7 @@ encodeLoop: s += length + 2 nextEmit = s if s >= sLimit { - if debug { + if debugEncoder { println("repeat ended", s, length) } @@ -616,7 +616,7 @@ encodeLoop: blk.literals = append(blk.literals, src[nextEmit:]...) blk.extraLits = len(src) - int(nextEmit) } - if debug { + if debugEncoder { println("returning, recent offsets:", blk.recentOffsets, "extra literals:", blk.extraLits) } // We do not store history, so we must offset e.cur to avoid false matches for next user. @@ -696,7 +696,7 @@ func (e *fastEncoderDict) Encode(blk *blockEnc, src []byte) { blk.literals = append(blk.literals, src[nextEmit:until]...) 
s.litLen = uint32(until - nextEmit) } - if debug { + if debugEncoder { println("recent offsets:", blk.recentOffsets) } @@ -773,7 +773,7 @@ encodeLoop: s += length + 2 nextEmit = s if s >= sLimit { - if debug { + if debugEncoder { println("repeat ended", s, length) } @@ -926,7 +926,7 @@ encodeLoop: } blk.recentOffsets[0] = uint32(offset1) blk.recentOffsets[1] = uint32(offset2) - if debug { + if debugEncoder { println("returning, recent offsets:", blk.recentOffsets, "extra literals:", blk.extraLits) } } diff --git a/vendor/github.com/klauspost/compress/zstd/encoder.go b/vendor/github.com/klauspost/compress/zstd/encoder.go index 4871dd03a..ea85548fc 100644 --- a/vendor/github.com/klauspost/compress/zstd/encoder.go +++ b/vendor/github.com/klauspost/compress/zstd/encoder.go @@ -245,7 +245,7 @@ func (e *Encoder) nextBlock(final bool) error { s.filling, s.current, s.previous = s.previous[:0], s.filling, s.current s.wg.Add(1) go func(src []byte) { - if debug { + if debugEncoder { println("Adding block,", len(src), "bytes, final:", final) } defer func() { @@ -290,7 +290,7 @@ func (e *Encoder) nextBlock(final bool) error { } switch err { case errIncompressible: - if debug { + if debugEncoder { println("Storing incompressible block as raw") } blk.encodeRaw(src) @@ -313,7 +313,7 @@ func (e *Encoder) nextBlock(final bool) error { // // The Copy function uses ReaderFrom if available. func (e *Encoder) ReadFrom(r io.Reader) (n int64, err error) { - if debug { + if debugEncoder { println("Using ReadFrom") } @@ -336,20 +336,20 @@ func (e *Encoder) ReadFrom(r io.Reader) (n int64, err error) { switch err { case io.EOF: e.state.filling = e.state.filling[:len(e.state.filling)-len(src)] - if debug { + if debugEncoder { println("ReadFrom: got EOF final block:", len(e.state.filling)) } return n, nil case nil: default: - if debug { + if debugEncoder { println("ReadFrom: got error:", err) } e.state.err = err return n, err } if len(src) > 0 { - if debug { + if debugEncoder { println("ReadFrom: got space left in source:", len(src)) } continue @@ -512,7 +512,7 @@ func (e *Encoder) EncodeAll(src, dst []byte) []byte { switch err { case errIncompressible: - if debug { + if debugEncoder { println("Storing incompressible block as raw") } dst = blk.encodeRawTo(dst, src) @@ -548,7 +548,7 @@ func (e *Encoder) EncodeAll(src, dst []byte) []byte { switch err { case errIncompressible: - if debug { + if debugEncoder { println("Storing incompressible block as raw") } dst = blk.encodeRawTo(dst, todo) diff --git a/vendor/github.com/klauspost/compress/zstd/framedec.go b/vendor/github.com/klauspost/compress/zstd/framedec.go index 693c5f05d..e8cc9a2c2 100644 --- a/vendor/github.com/klauspost/compress/zstd/framedec.go +++ b/vendor/github.com/klauspost/compress/zstd/framedec.go @@ -78,44 +78,68 @@ func newFrameDec(o decoderOptions) *frameDec { func (d *frameDec) reset(br byteBuffer) error { d.HasCheckSum = false d.WindowSize = 0 - var b []byte + var signature [4]byte for { - b = br.readSmall(4) - if b == nil { + var err error + // Check if we can read more... 
+ b, err := br.readSmall(1) + switch err { + case io.EOF, io.ErrUnexpectedEOF: return io.EOF + default: + return err + case nil: + signature[0] = b[0] } - if !bytes.Equal(b[1:4], skippableFrameMagic) || b[0]&0xf0 != 0x50 { - if debug { - println("Not skippable", hex.EncodeToString(b), hex.EncodeToString(skippableFrameMagic)) + // Read the rest, don't allow io.ErrUnexpectedEOF + b, err = br.readSmall(3) + switch err { + case io.EOF: + return io.EOF + default: + return err + case nil: + copy(signature[1:], b) + } + + if !bytes.Equal(signature[1:4], skippableFrameMagic) || signature[0]&0xf0 != 0x50 { + if debugDecoder { + println("Not skippable", hex.EncodeToString(signature[:]), hex.EncodeToString(skippableFrameMagic)) } // Break if not skippable frame. break } // Read size to skip - b = br.readSmall(4) - if b == nil { - println("Reading Frame Size EOF") - return io.ErrUnexpectedEOF + b, err = br.readSmall(4) + if err != nil { + if debugDecoder { + println("Reading Frame Size", err) + } + return err } n := uint32(b[0]) | (uint32(b[1]) << 8) | (uint32(b[2]) << 16) | (uint32(b[3]) << 24) println("Skipping frame with", n, "bytes.") - err := br.skipN(int(n)) + err = br.skipN(int(n)) if err != nil { - if debug { + if debugDecoder { println("Reading discarded frame", err) } return err } } - if !bytes.Equal(b, frameMagic) { - println("Got magic numbers: ", b, "want:", frameMagic) + if !bytes.Equal(signature[:], frameMagic) { + if debugDecoder { + println("Got magic numbers: ", signature, "want:", frameMagic) + } return ErrMagicMismatch } // Read Frame_Header_Descriptor fhd, err := br.readByte() if err != nil { - println("Reading Frame_Header_Descriptor", err) + if debugDecoder { + println("Reading Frame_Header_Descriptor", err) + } return err } d.SingleSegment = fhd&(1<<5) != 0 @@ -130,7 +154,9 @@ func (d *frameDec) reset(br byteBuffer) error { if !d.SingleSegment { wd, err := br.readByte() if err != nil { - println("Reading Window_Descriptor", err) + if debugDecoder { + println("Reading Window_Descriptor", err) + } return err } printf("raw: %x, mantissa: %d, exponent: %d\n", wd, wd&7, wd>>3) @@ -147,12 +173,11 @@ func (d *frameDec) reset(br byteBuffer) error { if size == 3 { size = 4 } - b = br.readSmall(int(size)) - if b == nil { - if debug { - println("Reading Dictionary_ID", io.ErrUnexpectedEOF) - } - return io.ErrUnexpectedEOF + + b, err := br.readSmall(int(size)) + if err != nil { + println("Reading Dictionary_ID", err) + return err } var id uint32 switch size { @@ -163,7 +188,7 @@ func (d *frameDec) reset(br byteBuffer) error { case 4: id = uint32(b[0]) | (uint32(b[1]) << 8) | (uint32(b[2]) << 16) | (uint32(b[3]) << 24) } - if debug { + if debugDecoder { println("Dict size", size, "ID:", id) } if id > 0 { @@ -187,10 +212,10 @@ func (d *frameDec) reset(br byteBuffer) error { } d.FrameContentSize = 0 if fcsSize > 0 { - b := br.readSmall(fcsSize) - if b == nil { - println("Reading Frame content", io.ErrUnexpectedEOF) - return io.ErrUnexpectedEOF + b, err := br.readSmall(fcsSize) + if err != nil { + println("Reading Frame content", err) + return err } switch fcsSize { case 1: @@ -205,7 +230,7 @@ func (d *frameDec) reset(br byteBuffer) error { d2 := uint32(b[4]) | (uint32(b[5]) << 8) | (uint32(b[6]) << 16) | (uint32(b[7]) << 24) d.FrameContentSize = uint64(d1) | (uint64(d2) << 32) } - if debug { + if debugDecoder { println("field size bits:", v, "fcsSize:", fcsSize, "FrameContentSize:", d.FrameContentSize, hex.EncodeToString(b[:fcsSize]), "singleseg:", d.SingleSegment, "window:", d.WindowSize) } 
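Note on the header parsing above: the Frame_Content_Size field width comes from the top two bits of the Frame_Header_Descriptor, and the Single_Segment flag promotes the zero-width case to one byte. A standalone sketch of that decoding, assuming the descriptor byte and field bytes are already in hand (the helper name `frameContentSize` is ours, not the library's; the +256 offset for the 2-byte form follows the zstd frame format spec):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
)

// frameContentSize mirrors the fcsSize selection in frameDec.reset above.
func frameContentSize(fhd byte, singleSegment bool, b []byte) (uint64, error) {
	var fcsSize int
	switch fhd >> 6 { // Frame_Content_Size_flag
	case 0:
		if singleSegment {
			fcsSize = 1
		}
	case 1:
		fcsSize = 2
	case 2:
		fcsSize = 4
	case 3:
		fcsSize = 8
	}
	if len(b) < fcsSize {
		return 0, io.ErrUnexpectedEOF
	}
	switch fcsSize {
	case 0:
		return 0, nil // content size unknown
	case 1:
		return uint64(b[0]), nil
	case 2:
		// The 2-byte form is stored with a +256 offset.
		return uint64(binary.LittleEndian.Uint16(b)) + 256, nil
	case 4:
		return uint64(binary.LittleEndian.Uint32(b)), nil
	default:
		return binary.LittleEndian.Uint64(b), nil
	}
}

func main() {
	// Descriptor 0x40 selects the 2-byte field; 0x0004 + 256 = 260.
	size, err := frameContentSize(0x40, false, []byte{0x04, 0x00})
	fmt.Println(size, err) // 260 <nil>
}
```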
} @@ -248,7 +273,7 @@ func (d *frameDec) reset(br byteBuffer) error { // next will start decoding the next block from stream. func (d *frameDec) next(block *blockDec) error { - if debug { + if debugDecoder { printf("decoding new block %p:%p", block, block.data) } err := block.reset(d.rawInput, d.WindowSize) @@ -259,7 +284,7 @@ func (d *frameDec) next(block *blockDec) error { return err } block.input <- struct{}{} - if debug { + if debugDecoder { println("next block:", block) } d.asyncRunningMu.Lock() @@ -307,19 +332,19 @@ func (d *frameDec) checkCRC() error { tmp[3] = byte(got >> 24) // We can overwrite upper tmp now - want := d.rawInput.readSmall(4) - if want == nil { - println("CRC missing?") - return io.ErrUnexpectedEOF + want, err := d.rawInput.readSmall(4) + if err != nil { + println("CRC missing?", err) + return err } if !bytes.Equal(tmp[:], want) { - if debug { + if debugDecoder { println("CRC Check Failed:", tmp[:], "!=", want) } return ErrCRCMismatch } - if debug { + if debugDecoder { println("CRC ok", tmp[:]) } return nil @@ -340,7 +365,7 @@ func (d *frameDec) initAsync() { if cap(d.decoding) < d.o.concurrent { d.decoding = make(chan *blockDec, d.o.concurrent) } - if debug { + if debugDecoder { h := d.history printf("history init. len: %d, cap: %d", len(h.b), cap(h.b)) } @@ -388,7 +413,7 @@ func (d *frameDec) startDecoder(output chan decodeOutput) { output <- r return } - if debug { + if debugDecoder { println("got result, from ", d.offset, "to", d.offset+int64(len(r.b))) d.offset += int64(len(r.b)) } @@ -396,7 +421,7 @@ func (d *frameDec) startDecoder(output chan decodeOutput) { // Send history to next block select { case next = <-d.decoding: - if debug { + if debugDecoder { println("Sending ", len(d.history.b), "bytes as history") } next.history <- &d.history @@ -434,7 +459,7 @@ func (d *frameDec) startDecoder(output chan decodeOutput) { output <- r if next == nil { // There was no decoder available, we wait for one now that we have sent to the writer. - if debug { + if debugDecoder { println("Sending ", len(d.history.b), " bytes as history") } next = <-d.decoding @@ -458,7 +483,7 @@ func (d *frameDec) runDecoder(dst []byte, dec *blockDec) ([]byte, error) { if err != nil { break } - if debug { + if debugDecoder { println("next block:", dec) } err = dec.decodeBuf(&d.history) diff --git a/vendor/github.com/klauspost/compress/zstd/fse_encoder.go b/vendor/github.com/klauspost/compress/zstd/fse_encoder.go index c74681b99..b4757ee3f 100644 --- a/vendor/github.com/klauspost/compress/zstd/fse_encoder.go +++ b/vendor/github.com/klauspost/compress/zstd/fse_encoder.go @@ -229,7 +229,7 @@ func (s *fseEncoder) setRLE(val byte) { deltaFindState: 0, deltaNbBits: 0, } - if debug { + if debugEncoder { println("setRLE: val", val, "symbolTT", s.ct.symbolTT[val]) } s.rleVal = val diff --git a/vendor/github.com/klauspost/compress/zstd/snappy.go b/vendor/github.com/klauspost/compress/zstd/snappy.go index 9d9d1d567..0372b1714 100644 --- a/vendor/github.com/klauspost/compress/zstd/snappy.go +++ b/vendor/github.com/klauspost/compress/zstd/snappy.go @@ -203,7 +203,7 @@ func (r *SnappyConverter) Convert(in io.Reader, w io.Writer) (int64, error) { written += int64(n) continue case chunkTypeUncompressedData: - if debug { + if debugEncoder { println("Uncompressed, chunklen", chunkLen) } // Section 4.3. Uncompressed data (chunk type 0x01). 
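The `SnappyConverter` touched in snappy.go transcodes a snappy framed stream directly into zstd frames; its `Convert(in io.Reader, w io.Writer) (int64, error)` signature is visible in the hunks above. A usage sketch (the stdin/stdout plumbing is ours):

```go
package main

import (
	"io"
	"log"
	"os"

	"github.com/klauspost/compress/zstd"
)

func main() {
	// Transcode snappy framed input on stdin to zstd frames on stdout.
	var conv zstd.SnappyConverter
	n, err := conv.Convert(os.Stdin, os.Stdout)
	if err != nil && err != io.EOF { // io.EOF may simply mark end of input
		log.Fatalf("converted %d bytes, then failed: %v", n, err)
	}
}
```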
@@ -246,7 +246,7 @@ func (r *SnappyConverter) Convert(in io.Reader, w io.Writer) (int64, error) { continue case chunkTypeStreamIdentifier: - if debug { + if debugEncoder { println("stream id", chunkLen, len(snappyMagicBody)) } // Section 4.1. Stream identifier (chunk type 0xff). diff --git a/vendor/github.com/klauspost/compress/zstd/zip.go b/vendor/github.com/klauspost/compress/zstd/zip.go index e35a0a2f8..9325b928a 100644 --- a/vendor/github.com/klauspost/compress/zstd/zip.go +++ b/vendor/github.com/klauspost/compress/zstd/zip.go @@ -13,8 +13,9 @@ import ( // See https://www.winzip.com/win/en/comp_info.html const ZipMethodWinZip = 93 -// ZipMethodPKWare is the method number used by PKWARE to indicate Zstandard compression. -// See https://pkware.cachefly.net/webdocs/APPNOTE/APPNOTE-6.3.7.TXT +// ZipMethodPKWare is the original method number used by PKWARE to indicate Zstandard compression. +// Deprecated: This has been deprecated by PKWARE, use ZipMethodWinZip instead for compression. +// See https://pkware.cachefly.net/webdocs/APPNOTE/APPNOTE-6.3.9.TXT const ZipMethodPKWare = 20 var zipReaderPool sync.Pool diff --git a/vendor/github.com/klauspost/compress/zstd/zstd.go b/vendor/github.com/klauspost/compress/zstd/zstd.go index 1ba308c8b..ef1d49a00 100644 --- a/vendor/github.com/klauspost/compress/zstd/zstd.go +++ b/vendor/github.com/klauspost/compress/zstd/zstd.go @@ -15,6 +15,12 @@ import ( // enable debug printing const debug = false +// enable encoding debug printing +const debugEncoder = debug + +// enable decoding debug printing +const debugDecoder = debug + // Enable extra assertions. const debugAsserts = debug || false @@ -82,13 +88,13 @@ var ( ) func println(a ...interface{}) { - if debug { + if debug || debugDecoder || debugEncoder { log.Println(a...) } } func printf(format string, a ...interface{}) { - if debug { + if debug || debugDecoder || debugEncoder { log.Printf(format, a...) } } diff --git a/vendor/github.com/mattn/go-isatty/isatty_others.go b/vendor/github.com/mattn/go-isatty/isatty_others.go index ff714a376..3eba4cb34 100644 --- a/vendor/github.com/mattn/go-isatty/isatty_others.go +++ b/vendor/github.com/mattn/go-isatty/isatty_others.go @@ -1,4 +1,4 @@ -// +build appengine js nacl +// +build appengine js nacl wasm package isatty diff --git a/vendor/github.com/mattn/go-isatty/isatty_solaris.go b/vendor/github.com/mattn/go-isatty/isatty_solaris.go index bdd5c79a0..301067078 100644 --- a/vendor/github.com/mattn/go-isatty/isatty_solaris.go +++ b/vendor/github.com/mattn/go-isatty/isatty_solaris.go @@ -8,10 +8,9 @@ import ( ) // IsTerminal returns true if the given file descriptor is a terminal. 
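Following the `ZipMethodPKWare` deprecation in zip.go above, new archives should be written with the WinZip method. A sketch, assuming the `zstd.ZipCompressor` helper from the same zip.go file (not shown in this hunk):

```go
package main

import (
	"archive/zip"
	"log"
	"os"

	"github.com/klauspost/compress/zstd"
)

func main() {
	f, err := os.Create("out.zip")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	zw := zip.NewWriter(f)
	// Write with the WinZip method (93); keep ZipMethodPKWare (20) only
	// for reading archives produced before the deprecation.
	zw.RegisterCompressor(zstd.ZipMethodWinZip, zstd.ZipCompressor())
	w, err := zw.CreateHeader(&zip.FileHeader{
		Name:   "hello.txt",
		Method: zstd.ZipMethodWinZip,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := w.Write([]byte("hello, zstd in zip")); err != nil {
		log.Fatal(err)
	}
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}
}
```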
-// see: http://src.illumos.org/source/xref/illumos-gate/usr/src/lib/libbc/libc/gen/common/isatty.c +// see: https://src.illumos.org/source/xref/illumos-gate/usr/src/lib/libc/port/gen/isatty.c func IsTerminal(fd uintptr) bool { - var termio unix.Termio - err := unix.IoctlSetTermio(int(fd), unix.TCGETA, &termio) + _, err := unix.IoctlGetTermio(int(fd), unix.TCGETA) return err == nil } diff --git a/vendor/github.com/mattn/go-isatty/isatty_tcgets.go b/vendor/github.com/mattn/go-isatty/isatty_tcgets.go index 31a1ca973..4e7b850ec 100644 --- a/vendor/github.com/mattn/go-isatty/isatty_tcgets.go +++ b/vendor/github.com/mattn/go-isatty/isatty_tcgets.go @@ -1,4 +1,4 @@ -// +build linux aix +// +build linux aix zos // +build !appengine package isatty diff --git a/vendor/github.com/mattn/go-isatty/renovate.json b/vendor/github.com/mattn/go-isatty/renovate.json deleted file mode 100644 index 5ae9d96b7..000000000 --- a/vendor/github.com/mattn/go-isatty/renovate.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "extends": [ - "config:base" - ], - "postUpdateOptions": [ - "gomodTidy" - ] -} diff --git a/vendor/github.com/mattn/go-runewidth/go.mod b/vendor/github.com/mattn/go-runewidth/go.mod index 8a9d524ec..62dba1bfc 100644 --- a/vendor/github.com/mattn/go-runewidth/go.mod +++ b/vendor/github.com/mattn/go-runewidth/go.mod @@ -2,4 +2,4 @@ module github.com/mattn/go-runewidth go 1.9 -require github.com/rivo/uniseg v0.1.0 +require github.com/rivo/uniseg v0.2.0 diff --git a/vendor/github.com/mattn/go-runewidth/go.sum b/vendor/github.com/mattn/go-runewidth/go.sum index 02135660b..03f902d56 100644 --- a/vendor/github.com/mattn/go-runewidth/go.sum +++ b/vendor/github.com/mattn/go-runewidth/go.sum @@ -1,2 +1,2 @@ -github.com/rivo/uniseg v0.1.0 h1:+2KBaVoUmb9XzDsrx/Ct0W/EYOSFf/nWTauy++DprtY= -github.com/rivo/uniseg v0.1.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= +github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY= +github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= diff --git a/vendor/github.com/prometheus/client_golang/prometheus/expvar_collector.go b/vendor/github.com/prometheus/client_golang/prometheus/expvar_collector.go index 18a99d5fa..c41ab37f3 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/expvar_collector.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/expvar_collector.go @@ -22,43 +22,10 @@ type expvarCollector struct { exports map[string]*Desc } -// NewExpvarCollector returns a newly allocated expvar Collector that still has -// to be registered with a Prometheus registry. +// NewExpvarCollector is the obsolete version of collectors.NewExpvarCollector. +// See there for documentation. // -// An expvar Collector collects metrics from the expvar interface. It provides a -// quick way to expose numeric values that are already exported via expvar as -// Prometheus metrics. Note that the data models of expvar and Prometheus are -// fundamentally different, and that the expvar Collector is inherently slower -// than native Prometheus metrics. Thus, the expvar Collector is probably great -// for experiments and prototying, but you should seriously consider a more -// direct implementation of Prometheus metrics for monitoring production -// systems. -// -// The exports map has the following meaning: -// -// The keys in the map correspond to expvar keys, i.e. for every expvar key you -// want to export as Prometheus metric, you need an entry in the exports -// map. 
The descriptor mapped to each key describes how to export the expvar -// value. It defines the name and the help string of the Prometheus metric -// proxying the expvar value. The type will always be Untyped. -// -// For descriptors without variable labels, the expvar value must be a number or -// a bool. The number is then directly exported as the Prometheus sample -// value. (For a bool, 'false' translates to 0 and 'true' to 1). Expvar values -// that are not numbers or bools are silently ignored. -// -// If the descriptor has one variable label, the expvar value must be an expvar -// map. The keys in the expvar map become the various values of the one -// Prometheus label. The values in the expvar map must be numbers or bools again -// as above. -// -// For descriptors with more than one variable label, the expvar must be a -// nested expvar map, i.e. where the values of the topmost map are maps again -// etc. until a depth is reached that corresponds to the number of labels. The -// leaves of that structure must be numbers or bools as above to serve as the -// sample values. -// -// Anything that does not fit into the scheme above is silently ignored. +// Deprecated: Use collectors.NewExpvarCollector instead. func NewExpvarCollector(exports map[string]*Desc) Collector { return &expvarCollector{ exports: exports, diff --git a/vendor/github.com/prometheus/client_golang/prometheus/go_collector.go b/vendor/github.com/prometheus/client_golang/prometheus/go_collector.go index db43ca5ba..a96ed1cee 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/go_collector.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/go_collector.go @@ -36,32 +36,10 @@ type goCollector struct { msMaxAge time.Duration // Maximum allowed age of old memstats. } -// NewGoCollector returns a collector that exports metrics about the current Go -// process. This includes memory stats. To collect those, runtime.ReadMemStats -// is called. This requires to “stop the world”, which usually only happens for -// garbage collection (GC). Take the following implications into account when -// deciding whether to use the Go collector: +// NewGoCollector is the obsolete version of collectors.NewGoCollector. +// See there for documentation. // -// 1. The performance impact of stopping the world is the more relevant the more -// frequently metrics are collected. However, with Go1.9 or later the -// stop-the-world time per metrics collection is very short (~25µs) so that the -// performance impact will only matter in rare cases. However, with older Go -// versions, the stop-the-world duration depends on the heap size and can be -// quite significant (~1.7 ms/GiB as per -// https://go-review.googlesource.com/c/go/+/34937). -// -// 2. During an ongoing GC, nothing else can stop the world. Therefore, if the -// metrics collection happens to coincide with GC, it will only complete after -// GC has finished. Usually, GC is fast enough to not cause problems. However, -// with a very large heap, GC might take multiple seconds, which is enough to -// cause scrape timeouts in common setups. To avoid this problem, the Go -// collector will use the memstats from a previous collection if -// runtime.ReadMemStats takes more than 1s. However, if there are no previously -// collected memstats, or their collection is more than 5m ago, the collection -// will block until runtime.ReadMemStats succeeds. -// -// NOTE: The problem is solved in Go 1.15, see -// https://github.com/golang/go/issues/19812 for the related Go issue. 
+// Deprecated: Use collectors.NewGoCollector instead. func NewGoCollector() Collector { return &goCollector{ goroutinesDesc: NewDesc( @@ -366,23 +344,10 @@ type memStatsMetrics []struct { valType ValueType } -// NewBuildInfoCollector returns a collector collecting a single metric -// "go_build_info" with the constant value 1 and three labels "path", "version", -// and "checksum". Their label values contain the main module path, version, and -// checksum, respectively. The labels will only have meaningful values if the -// binary is built with Go module support and from source code retrieved from -// the source repository (rather than the local file system). This is usually -// accomplished by building from outside of GOPATH, specifying the full address -// of the main package, e.g. "GO111MODULE=on go run -// github.com/prometheus/client_golang/examples/random". If built without Go -// module support, all label values will be "unknown". If built with Go module -// support but using the source code from the local file system, the "path" will -// be set appropriately, but "checksum" will be empty and "version" will be -// "(devel)". +// NewBuildInfoCollector is the obsolete version of collectors.NewBuildInfoCollector. +// See there for documentation. // -// This collector uses only the build information for the main module. See -// https://github.com/povilasv/prommod for an example of a collector for the -// module dependencies. +// Deprecated: Use collectors.NewBuildInfoCollector instead. func NewBuildInfoCollector() Collector { path, version, sum := "unknown", "unknown", "unknown" if bi, ok := debug.ReadBuildInfo(); ok { diff --git a/vendor/github.com/prometheus/client_golang/prometheus/histogram.go b/vendor/github.com/prometheus/client_golang/prometheus/histogram.go index 3346fa1c5..8425640b3 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/histogram.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/histogram.go @@ -47,7 +47,12 @@ type Histogram interface { Metric Collector - // Observe adds a single observation to the histogram. + // Observe adds a single observation to the histogram. Observations are + // usually positive or zero. Negative observations are accepted but + // prevent current versions of Prometheus from properly detecting + // counter resets in the sum of observations. See + // https://prometheus.io/docs/practices/histograms/#count-and-sum-of-observations + // for details. Observe(float64) } diff --git a/vendor/github.com/prometheus/client_golang/prometheus/process_collector.go b/vendor/github.com/prometheus/client_golang/prometheus/process_collector.go index c46702d60..5bfe0ff5b 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/process_collector.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/process_collector.go @@ -54,16 +54,10 @@ type ProcessCollectorOpts struct { ReportErrors bool } -// NewProcessCollector returns a collector which exports the current state of -// process metrics including CPU, memory and file descriptor usage as well as -// the process start time. The detailed behavior is defined by the provided -// ProcessCollectorOpts. The zero value of ProcessCollectorOpts creates a -// collector for the current process with an empty namespace string and no error -// reporting. +// NewProcessCollector is the obsolete version of collectors.NewProcessCollector. +// See there for documentation. 
// -// The collector only works on operating systems with a Linux-style proc -// filesystem and on Microsoft Windows. On other operating systems, it will not -// collect any metrics. +// Deprecated: Use collectors.NewProcessCollector instead. func NewProcessCollector(opts ProcessCollectorOpts) Collector { ns := "" if len(opts.Namespace) > 0 { diff --git a/vendor/github.com/prometheus/client_golang/prometheus/summary.go b/vendor/github.com/prometheus/client_golang/prometheus/summary.go index fb5ce22bf..c5fa8ed7c 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/summary.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/summary.go @@ -55,7 +55,12 @@ type Summary interface { Metric Collector - // Observe adds a single observation to the summary. + // Observe adds a single observation to the summary. Observations are + // usually positive or zero. Negative observations are accepted but + // prevent current versions of Prometheus from properly detecting + // counter resets in the sum of observations. See + // https://prometheus.io/docs/practices/histograms/#count-and-sum-of-observations + // for details. Observe(float64) } @@ -121,7 +126,9 @@ type SummaryOpts struct { Objectives map[float64]float64 // MaxAge defines the duration for which an observation stays relevant - // for the summary. Must be positive. The default value is DefMaxAge. + // for the summary. Only applies to pre-calculated quantiles, does not + // apply to _sum and _count. Must be positive. The default value is + // DefMaxAge. MaxAge time.Duration // AgeBuckets is the number of buckets used to exclude observations that diff --git a/vendor/golang.org/x/sys/unix/README.md b/vendor/golang.org/x/sys/unix/README.md index 579d2d735..474efad0e 100644 --- a/vendor/golang.org/x/sys/unix/README.md +++ b/vendor/golang.org/x/sys/unix/README.md @@ -76,7 +76,7 @@ arguments can be passed to the kernel. The third is for low-level use by the ForkExec wrapper. Unlike the first two, it does not call into the scheduler to let it know that a system call is running. -When porting Go to an new architecture/OS, this file must be implemented for +When porting Go to a new architecture/OS, this file must be implemented for each GOOS/GOARCH pair. ### mksysnum @@ -107,7 +107,7 @@ prototype can be exported (capitalized) or not. Adding a new syscall often just requires adding a new `//sys` function prototype with the desired arguments and a capitalized name so it is exported. However, if you want the interface to the syscall to be different, often one will make an -unexported `//sys` prototype, an then write a custom wrapper in +unexported `//sys` prototype, and then write a custom wrapper in `syscall_${GOOS}.go`. ### types files @@ -137,7 +137,7 @@ some `#if/#elif` macros in your include statements. This script is used to generate the system's various constants. This doesn't just include the error numbers and error strings, but also the signal numbers -an a wide variety of miscellaneous constants. The constants come from the list +and a wide variety of miscellaneous constants. The constants come from the list of include files in the `includes_${uname}` variable. A regex then picks out the desired `#define` statements, and generates the corresponding Go constants. 
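The client_golang hunks above replace the long constructor docs with deprecation pointers to the `collectors` subpackage. A minimal migration sketch for the three constructors deprecated here (the registry and HTTP wiring is illustrative):

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/collectors"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Replacements for the deprecated prometheus.NewGoCollector,
	// prometheus.NewProcessCollector and prometheus.NewBuildInfoCollector.
	// (collectors.NewExpvarCollector exists too, but needs a Desc map.)
	reg := prometheus.NewRegistry()
	reg.MustRegister(
		collectors.NewGoCollector(),
		collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
		collectors.NewBuildInfoCollector(),
	)
	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```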
The error numbers and strings are generated from `#include `, and the diff --git a/vendor/golang.org/x/sys/unix/asm_bsd_386.s b/vendor/golang.org/x/sys/unix/asm_bsd_386.s index 7f29275fa..e0fcd9b3d 100644 --- a/vendor/golang.org/x/sys/unix/asm_bsd_386.s +++ b/vendor/golang.org/x/sys/unix/asm_bsd_386.s @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. -//go:build (darwin || freebsd || netbsd || openbsd) && gc -// +build darwin freebsd netbsd openbsd +//go:build (freebsd || netbsd || openbsd) && gc +// +build freebsd netbsd openbsd // +build gc #include "textflag.h" diff --git a/vendor/golang.org/x/sys/unix/asm_bsd_arm.s b/vendor/golang.org/x/sys/unix/asm_bsd_arm.s index 98ebfad9d..d702d4adc 100644 --- a/vendor/golang.org/x/sys/unix/asm_bsd_arm.s +++ b/vendor/golang.org/x/sys/unix/asm_bsd_arm.s @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. -//go:build (darwin || freebsd || netbsd || openbsd) && gc -// +build darwin freebsd netbsd openbsd +//go:build (freebsd || netbsd || openbsd) && gc +// +build freebsd netbsd openbsd // +build gc #include "textflag.h" diff --git a/vendor/golang.org/x/sys/unix/mkerrors.sh b/vendor/golang.org/x/sys/unix/mkerrors.sh index 007358af8..3f670faba 100644 --- a/vendor/golang.org/x/sys/unix/mkerrors.sh +++ b/vendor/golang.org/x/sys/unix/mkerrors.sh @@ -239,6 +239,7 @@ struct ltchars { #include #include #include +#include #include #include #include @@ -258,6 +259,7 @@ struct ltchars { #include #include +#include #include #if defined(__sparc__) @@ -501,6 +503,9 @@ ccflags="$@" $2 ~ /^LO_(KEY|NAME)_SIZE$/ || $2 ~ /^LOOP_(CLR|CTL|GET|SET)_/ || $2 ~ /^(AF|SOCK|SO|SOL|IPPROTO|IP|IPV6|TCP|MCAST|EVFILT|NOTE|SHUT|PROT|MAP|MFD|T?PACKET|MSG|SCM|MCL|DT|MADV|PR|LOCAL)_/ || + $2 ~ /^NFC_(GENL|PROTO|COMM|RF|SE|DIRECTION|LLCP|SOCKPROTO)_/ || + $2 ~ /^NFC_.*_(MAX)?SIZE$/ || + $2 ~ /^RAW_PAYLOAD_/ || $2 ~ /^TP_STATUS_/ || $2 ~ /^FALLOC_/ || $2 ~ /^ICMPV?6?_(FILTER|SEC)/ || @@ -593,6 +598,9 @@ ccflags="$@" $2 == "HID_MAX_DESCRIPTOR_SIZE" || $2 ~ /^_?HIDIOC/ || $2 ~ /^BUS_(USB|HIL|BLUETOOTH|VIRTUAL)$/ || + $2 ~ /^MTD/ || + $2 ~ /^OTP/ || + $2 ~ /^MEM/ || $2 ~ /^BLK[A-Z]*(GET$|SET$|BUF$|PART$|SIZE)/ {printf("\t%s = C.%s\n", $2, $2)} $2 ~ /^__WCOREFLAG$/ {next} $2 ~ /^__W[A-Z0-9]+$/ {printf("\t%s = C.%s\n", substr($2,3), $2)} diff --git a/vendor/golang.org/x/sys/unix/syscall_linux.go b/vendor/golang.org/x/sys/unix/syscall_linux.go index 2dd7c8e34..41b91fdfb 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux.go @@ -904,6 +904,46 @@ func (sa *SockaddrIUCV) sockaddr() (unsafe.Pointer, _Socklen, error) { return unsafe.Pointer(&sa.raw), SizeofSockaddrIUCV, nil } +type SockaddrNFC struct { + DeviceIdx uint32 + TargetIdx uint32 + NFCProtocol uint32 + raw RawSockaddrNFC +} + +func (sa *SockaddrNFC) sockaddr() (unsafe.Pointer, _Socklen, error) { + sa.raw.Sa_family = AF_NFC + sa.raw.Dev_idx = sa.DeviceIdx + sa.raw.Target_idx = sa.TargetIdx + sa.raw.Nfc_protocol = sa.NFCProtocol + return unsafe.Pointer(&sa.raw), SizeofSockaddrNFC, nil +} + +type SockaddrNFCLLCP struct { + DeviceIdx uint32 + TargetIdx uint32 + NFCProtocol uint32 + DestinationSAP uint8 + SourceSAP uint8 + ServiceName string + raw RawSockaddrNFCLLCP +} + +func (sa *SockaddrNFCLLCP) sockaddr() (unsafe.Pointer, _Socklen, error) { + sa.raw.Sa_family = AF_NFC + sa.raw.Dev_idx = sa.DeviceIdx + sa.raw.Target_idx = sa.TargetIdx + 
sa.raw.Nfc_protocol = sa.NFCProtocol + sa.raw.Dsap = sa.DestinationSAP + sa.raw.Ssap = sa.SourceSAP + if len(sa.ServiceName) > len(sa.raw.Service_name) { + return nil, 0, EINVAL + } + copy(sa.raw.Service_name[:], sa.ServiceName) + sa.raw.SetServiceNameLen(len(sa.ServiceName)) + return unsafe.Pointer(&sa.raw), SizeofSockaddrNFCLLCP, nil +} + var socketProtocol = func(fd int) (int, error) { return GetsockoptInt(fd, SOL_SOCKET, SO_PROTOCOL) } @@ -1144,6 +1184,37 @@ func anyToSockaddr(fd int, rsa *RawSockaddrAny) (Sockaddr, error) { } return sa, nil } + case AF_NFC: + proto, err := socketProtocol(fd) + if err != nil { + return nil, err + } + switch proto { + case NFC_SOCKPROTO_RAW: + pp := (*RawSockaddrNFC)(unsafe.Pointer(rsa)) + sa := &SockaddrNFC{ + DeviceIdx: pp.Dev_idx, + TargetIdx: pp.Target_idx, + NFCProtocol: pp.Nfc_protocol, + } + return sa, nil + case NFC_SOCKPROTO_LLCP: + pp := (*RawSockaddrNFCLLCP)(unsafe.Pointer(rsa)) + if uint64(pp.Service_name_len) > uint64(len(pp.Service_name)) { + return nil, EINVAL + } + sa := &SockaddrNFCLLCP{ + DeviceIdx: pp.Dev_idx, + TargetIdx: pp.Target_idx, + NFCProtocol: pp.Nfc_protocol, + DestinationSAP: pp.Dsap, + SourceSAP: pp.Ssap, + ServiceName: string(pp.Service_name[:pp.Service_name_len]), + } + return sa, nil + default: + return nil, EINVAL + } } return nil, EAFNOSUPPORT } diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_386.go b/vendor/golang.org/x/sys/unix/syscall_linux_386.go index 7b52e5d8a..b430536c8 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_386.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_386.go @@ -378,6 +378,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint32(length) } +func (rsa *RawSockaddrNFCLLCP) SetServiceNameLen(length int) { + rsa.Service_name_len = uint32(length) +} + //sys poll(fds *PollFd, nfds int, timeout int) (n int, err error) func Poll(fds []PollFd, timeout int) (n int, err error) { diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_amd64.go b/vendor/golang.org/x/sys/unix/syscall_linux_amd64.go index 28b764115..85cd97da0 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_amd64.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_amd64.go @@ -172,6 +172,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint64(length) } +func (rsa *RawSockaddrNFCLLCP) SetServiceNameLen(length int) { + rsa.Service_name_len = uint64(length) +} + //sys poll(fds *PollFd, nfds int, timeout int) (n int, err error) func Poll(fds []PollFd, timeout int) (n int, err error) { diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_arm.go b/vendor/golang.org/x/sys/unix/syscall_linux_arm.go index 68877728e..39a864d4e 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_arm.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_arm.go @@ -256,6 +256,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint32(length) } +func (rsa *RawSockaddrNFCLLCP) SetServiceNameLen(length int) { + rsa.Service_name_len = uint32(length) +} + //sys poll(fds *PollFd, nfds int, timeout int) (n int, err error) func Poll(fds []PollFd, timeout int) (n int, err error) { diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_arm64.go b/vendor/golang.org/x/sys/unix/syscall_linux_arm64.go index 7ed703476..7f27ebf2f 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_arm64.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_arm64.go @@ -207,6 +207,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint64(length) } +func (rsa *RawSockaddrNFCLLCP) SetServiceNameLen(length int) { + rsa.Service_name_len = 
uint64(length) +} + func InotifyInit() (fd int, err error) { return InotifyInit1(0) } diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go b/vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go index 06dec06fa..27aee81d9 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go @@ -217,6 +217,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint64(length) } +func (rsa *RawSockaddrNFCLLCP) SetServiceNameLen(length int) { + rsa.Service_name_len = uint64(length) +} + func InotifyInit() (fd int, err error) { return InotifyInit1(0) } diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go b/vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go index 8f0d0a5b5..3a5621e37 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go @@ -229,6 +229,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint32(length) } +func (rsa *RawSockaddrNFCLLCP) SetServiceNameLen(length int) { + rsa.Service_name_len = uint32(length) +} + //sys poll(fds *PollFd, nfds int, timeout int) (n int, err error) func Poll(fds []PollFd, timeout int) (n int, err error) { diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_ppc.go b/vendor/golang.org/x/sys/unix/syscall_linux_ppc.go index 7e65e088d..cf0d36f76 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_ppc.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_ppc.go @@ -215,6 +215,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint32(length) } +func (rsa *RawSockaddrNFCLLCP) SetServiceNameLen(length int) { + rsa.Service_name_len = uint32(length) +} + //sysnb pipe(p *[2]_C_int) (err error) func Pipe(p []int) (err error) { diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go b/vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go index 0b1f0d6da..5259a5fea 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go @@ -100,6 +100,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint64(length) } +func (rsa *RawSockaddrNFCLLCP) SetServiceNameLen(length int) { + rsa.Service_name_len = uint64(length) +} + //sysnb pipe(p *[2]_C_int) (err error) func Pipe(p []int) (err error) { diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_riscv64.go b/vendor/golang.org/x/sys/unix/syscall_linux_riscv64.go index ce9bcd317..8ef821e5d 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_riscv64.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_riscv64.go @@ -188,6 +188,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint64(length) } +func (rsa *RawSockaddrNFCLLCP) SetServiceNameLen(length int) { + rsa.Service_name_len = uint64(length) +} + func InotifyInit() (fd int, err error) { return InotifyInit1(0) } diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_s390x.go b/vendor/golang.org/x/sys/unix/syscall_linux_s390x.go index a1e45694b..a1c0574b5 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_s390x.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_s390x.go @@ -129,6 +129,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint64(length) } +func (rsa *RawSockaddrNFCLLCP) SetServiceNameLen(length int) { + rsa.Service_name_len = uint64(length) +} + // Linux on s390x uses the old mmap interface, which requires arguments to be passed in a struct. // mmap2 also requires arguments to be passed in a struct; it is currently not exposed in . 
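The new `SockaddrNFC`/`SockaddrNFCLLCP` types above plug into the package's generic `Sockaddr` plumbing, so a raw NFC socket can be bound like any other address family. A sketch under the assumption that a Linux NFC adapter is present (the device index and protocol are illustrative values):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Raw NFC socket, mirroring anyToSockaddr's NFC_SOCKPROTO_RAW branch
	// above. Linux-only; fails without an NFC device and privileges.
	fd, err := unix.Socket(unix.AF_NFC, unix.SOCK_RAW, unix.NFC_SOCKPROTO_RAW)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(fd)

	sa := &unix.SockaddrNFC{
		DeviceIdx:   0,                     // adapter index, illustrative
		NFCProtocol: unix.NFC_PROTO_MIFARE, // target protocol, illustrative
	}
	if err := unix.Bind(fd, sa); err != nil {
		log.Fatal(err)
	}
}
```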
func mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) { diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go b/vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go index 49055a3cf..de14b8898 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go @@ -116,6 +116,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint64(length) } +func (rsa *RawSockaddrNFCLLCP) SetServiceNameLen(length int) { + rsa.Service_name_len = uint64(length) +} + //sysnb pipe(p *[2]_C_int) (err error) func Pipe(p []int) (err error) { diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux.go b/vendor/golang.org/x/sys/unix/zerrors_linux.go index 47572aaa6..c3fa22486 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux.go @@ -1406,6 +1406,10 @@ const ( MCAST_LEAVE_SOURCE_GROUP = 0x2f MCAST_MSFILTER = 0x30 MCAST_UNBLOCK_SOURCE = 0x2c + MEMGETREGIONINFO = 0xc0104d08 + MEMREADOOB64 = 0xc0184d16 + MEMWRITE = 0xc0304d18 + MEMWRITEOOB64 = 0xc0184d15 MFD_ALLOW_SEALING = 0x2 MFD_CLOEXEC = 0x1 MFD_HUGETLB = 0x4 @@ -1494,7 +1498,35 @@ const ( MS_SYNCHRONOUS = 0x10 MS_UNBINDABLE = 0x20000 MS_VERBOSE = 0x8000 + MTD_ABSENT = 0x0 + MTD_BIT_WRITEABLE = 0x800 + MTD_CAP_NANDFLASH = 0x400 + MTD_CAP_NORFLASH = 0xc00 + MTD_CAP_NVRAM = 0x1c00 + MTD_CAP_RAM = 0x1c00 + MTD_CAP_ROM = 0x0 + MTD_DATAFLASH = 0x6 MTD_INODE_FS_MAGIC = 0x11307854 + MTD_MAX_ECCPOS_ENTRIES = 0x40 + MTD_MAX_OOBFREE_ENTRIES = 0x8 + MTD_MLCNANDFLASH = 0x8 + MTD_NANDECC_AUTOPLACE = 0x2 + MTD_NANDECC_AUTOPL_USR = 0x4 + MTD_NANDECC_OFF = 0x0 + MTD_NANDECC_PLACE = 0x1 + MTD_NANDECC_PLACEONLY = 0x3 + MTD_NANDFLASH = 0x4 + MTD_NORFLASH = 0x3 + MTD_NO_ERASE = 0x1000 + MTD_OTP_FACTORY = 0x1 + MTD_OTP_OFF = 0x0 + MTD_OTP_USER = 0x2 + MTD_POWERUP_LOCK = 0x2000 + MTD_RAM = 0x1 + MTD_ROM = 0x2 + MTD_SLC_ON_MLC_EMULATION = 0x4000 + MTD_UBIVOLUME = 0x7 + MTD_WRITEABLE = 0x400 NAME_MAX = 0xff NCP_SUPER_MAGIC = 0x564c NETLINK_ADD_MEMBERSHIP = 0x1 @@ -1534,6 +1566,59 @@ const ( NETLINK_XFRM = 0x6 NETNSA_MAX = 0x5 NETNSA_NSID_NOT_ASSIGNED = -0x1 + NFC_ATR_REQ_GB_MAXSIZE = 0x30 + NFC_ATR_REQ_MAXSIZE = 0x40 + NFC_ATR_RES_GB_MAXSIZE = 0x2f + NFC_ATR_RES_MAXSIZE = 0x40 + NFC_COMM_ACTIVE = 0x0 + NFC_COMM_PASSIVE = 0x1 + NFC_DEVICE_NAME_MAXSIZE = 0x8 + NFC_DIRECTION_RX = 0x0 + NFC_DIRECTION_TX = 0x1 + NFC_FIRMWARE_NAME_MAXSIZE = 0x20 + NFC_GB_MAXSIZE = 0x30 + NFC_GENL_MCAST_EVENT_NAME = "events" + NFC_GENL_NAME = "nfc" + NFC_GENL_VERSION = 0x1 + NFC_HEADER_SIZE = 0x1 + NFC_ISO15693_UID_MAXSIZE = 0x8 + NFC_LLCP_MAX_SERVICE_NAME = 0x3f + NFC_LLCP_MIUX = 0x1 + NFC_LLCP_REMOTE_LTO = 0x3 + NFC_LLCP_REMOTE_MIU = 0x2 + NFC_LLCP_REMOTE_RW = 0x4 + NFC_LLCP_RW = 0x0 + NFC_NFCID1_MAXSIZE = 0xa + NFC_NFCID2_MAXSIZE = 0x8 + NFC_NFCID3_MAXSIZE = 0xa + NFC_PROTO_FELICA = 0x3 + NFC_PROTO_FELICA_MASK = 0x8 + NFC_PROTO_ISO14443 = 0x4 + NFC_PROTO_ISO14443_B = 0x6 + NFC_PROTO_ISO14443_B_MASK = 0x40 + NFC_PROTO_ISO14443_MASK = 0x10 + NFC_PROTO_ISO15693 = 0x7 + NFC_PROTO_ISO15693_MASK = 0x80 + NFC_PROTO_JEWEL = 0x1 + NFC_PROTO_JEWEL_MASK = 0x2 + NFC_PROTO_MAX = 0x8 + NFC_PROTO_MIFARE = 0x2 + NFC_PROTO_MIFARE_MASK = 0x4 + NFC_PROTO_NFC_DEP = 0x5 + NFC_PROTO_NFC_DEP_MASK = 0x20 + NFC_RAW_HEADER_SIZE = 0x2 + NFC_RF_INITIATOR = 0x0 + NFC_RF_NONE = 0x2 + NFC_RF_TARGET = 0x1 + NFC_SENSB_RES_MAXSIZE = 0xc + NFC_SENSF_RES_MAXSIZE = 0x12 + NFC_SE_DISABLED = 0x0 + NFC_SE_EMBEDDED = 0x2 + NFC_SE_ENABLED = 0x1 + NFC_SE_UICC = 
0x1 + NFC_SOCKPROTO_LLCP = 0x1 + NFC_SOCKPROTO_MAX = 0x2 + NFC_SOCKPROTO_RAW = 0x0 NFNETLINK_V0 = 0x0 NFNLGRP_ACCT_QUOTA = 0x8 NFNLGRP_CONNTRACK_DESTROY = 0x3 @@ -1959,6 +2044,11 @@ const ( QNX4_SUPER_MAGIC = 0x2f QNX6_SUPER_MAGIC = 0x68191122 RAMFS_MAGIC = 0x858458f6 + RAW_PAYLOAD_DIGITAL = 0x3 + RAW_PAYLOAD_HCI = 0x2 + RAW_PAYLOAD_LLCP = 0x0 + RAW_PAYLOAD_NCI = 0x1 + RAW_PAYLOAD_PROPRIETARY = 0x4 RDTGROUP_SUPER_MAGIC = 0x7655821 REISERFS_SUPER_MAGIC = 0x52654973 RENAME_EXCHANGE = 0x2 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_386.go b/vendor/golang.org/x/sys/unix/zerrors_linux_386.go index e91a1a957..09fc559ed 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_386.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_386.go @@ -60,6 +60,8 @@ const ( CS8 = 0x30 CSIZE = 0x30 CSTOPB = 0x40 + ECCGETLAYOUT = 0x81484d11 + ECCGETSTATS = 0x80104d12 ECHOCTL = 0x200 ECHOE = 0x10 ECHOK = 0x20 @@ -123,6 +125,19 @@ const ( MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 MCL_ONFAULT = 0x4 + MEMERASE = 0x40084d02 + MEMERASE64 = 0x40104d14 + MEMGETBADBLOCK = 0x40084d0b + MEMGETINFO = 0x80204d01 + MEMGETOOBSEL = 0x80c84d0a + MEMGETREGIONCOUNT = 0x80044d07 + MEMISLOCKED = 0x80084d17 + MEMLOCK = 0x40084d05 + MEMREADOOB = 0xc00c4d04 + MEMSETBADBLOCK = 0x40084d0c + MEMUNLOCK = 0x40084d06 + MEMWRITEOOB = 0xc00c4d03 + MTDFILEMODE = 0x4d13 NFDBITS = 0x20 NLDLY = 0x100 NOFLSH = 0x80 @@ -132,6 +147,10 @@ const ( NS_GET_USERNS = 0xb701 OLCUC = 0x2 ONLCR = 0x4 + OTPGETREGIONCOUNT = 0x40044d0e + OTPGETREGIONINFO = 0x400c4d0f + OTPLOCK = 0x800c4d10 + OTPSELECT = 0x80044d0d O_APPEND = 0x400 O_ASYNC = 0x2000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go index a9cbac644..75730cc22 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go @@ -60,6 +60,8 @@ const ( CS8 = 0x30 CSIZE = 0x30 CSTOPB = 0x40 + ECCGETLAYOUT = 0x81484d11 + ECCGETSTATS = 0x80104d12 ECHOCTL = 0x200 ECHOE = 0x10 ECHOK = 0x20 @@ -123,6 +125,19 @@ const ( MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 MCL_ONFAULT = 0x4 + MEMERASE = 0x40084d02 + MEMERASE64 = 0x40104d14 + MEMGETBADBLOCK = 0x40084d0b + MEMGETINFO = 0x80204d01 + MEMGETOOBSEL = 0x80c84d0a + MEMGETREGIONCOUNT = 0x80044d07 + MEMISLOCKED = 0x80084d17 + MEMLOCK = 0x40084d05 + MEMREADOOB = 0xc0104d04 + MEMSETBADBLOCK = 0x40084d0c + MEMUNLOCK = 0x40084d06 + MEMWRITEOOB = 0xc0104d03 + MTDFILEMODE = 0x4d13 NFDBITS = 0x40 NLDLY = 0x100 NOFLSH = 0x80 @@ -132,6 +147,10 @@ const ( NS_GET_USERNS = 0xb701 OLCUC = 0x2 ONLCR = 0x4 + OTPGETREGIONCOUNT = 0x40044d0e + OTPGETREGIONINFO = 0x400c4d0f + OTPLOCK = 0x800c4d10 + OTPSELECT = 0x80044d0d O_APPEND = 0x400 O_ASYNC = 0x2000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go b/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go index d74f3c15a..127cf17ad 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go @@ -60,6 +60,8 @@ const ( CS8 = 0x30 CSIZE = 0x30 CSTOPB = 0x40 + ECCGETLAYOUT = 0x81484d11 + ECCGETSTATS = 0x80104d12 ECHOCTL = 0x200 ECHOE = 0x10 ECHOK = 0x20 @@ -121,6 +123,19 @@ const ( MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 MCL_ONFAULT = 0x4 + MEMERASE = 0x40084d02 + MEMERASE64 = 0x40104d14 + MEMGETBADBLOCK = 0x40084d0b + MEMGETINFO = 0x80204d01 + MEMGETOOBSEL = 0x80c84d0a + MEMGETREGIONCOUNT = 0x80044d07 + MEMISLOCKED = 0x80084d17 + MEMLOCK = 0x40084d05 + MEMREADOOB = 0xc00c4d04 + MEMSETBADBLOCK = 0x40084d0c + 
MEMUNLOCK = 0x40084d06 + MEMWRITEOOB = 0xc00c4d03 + MTDFILEMODE = 0x4d13 NFDBITS = 0x20 NLDLY = 0x100 NOFLSH = 0x80 @@ -130,6 +145,10 @@ const ( NS_GET_USERNS = 0xb701 OLCUC = 0x2 ONLCR = 0x4 + OTPGETREGIONCOUNT = 0x40044d0e + OTPGETREGIONINFO = 0x400c4d0f + OTPLOCK = 0x800c4d10 + OTPSELECT = 0x80044d0d O_APPEND = 0x400 O_ASYNC = 0x2000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go index e1538995b..957ca1ff1 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go @@ -60,6 +60,8 @@ const ( CS8 = 0x30 CSIZE = 0x30 CSTOPB = 0x40 + ECCGETLAYOUT = 0x81484d11 + ECCGETSTATS = 0x80104d12 ECHOCTL = 0x200 ECHOE = 0x10 ECHOK = 0x20 @@ -124,6 +126,19 @@ const ( MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 MCL_ONFAULT = 0x4 + MEMERASE = 0x40084d02 + MEMERASE64 = 0x40104d14 + MEMGETBADBLOCK = 0x40084d0b + MEMGETINFO = 0x80204d01 + MEMGETOOBSEL = 0x80c84d0a + MEMGETREGIONCOUNT = 0x80044d07 + MEMISLOCKED = 0x80084d17 + MEMLOCK = 0x40084d05 + MEMREADOOB = 0xc0104d04 + MEMSETBADBLOCK = 0x40084d0c + MEMUNLOCK = 0x40084d06 + MEMWRITEOOB = 0xc0104d03 + MTDFILEMODE = 0x4d13 NFDBITS = 0x40 NLDLY = 0x100 NOFLSH = 0x80 @@ -133,6 +148,10 @@ const ( NS_GET_USERNS = 0xb701 OLCUC = 0x2 ONLCR = 0x4 + OTPGETREGIONCOUNT = 0x40044d0e + OTPGETREGIONINFO = 0x400c4d0f + OTPLOCK = 0x800c4d10 + OTPSELECT = 0x80044d0d O_APPEND = 0x400 O_ASYNC = 0x2000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go b/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go index 5e8e71ff8..314a2054f 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go @@ -60,6 +60,8 @@ const ( CS8 = 0x30 CSIZE = 0x30 CSTOPB = 0x40 + ECCGETLAYOUT = 0x41484d11 + ECCGETSTATS = 0x40104d12 ECHOCTL = 0x200 ECHOE = 0x10 ECHOK = 0x20 @@ -121,6 +123,19 @@ const ( MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 MCL_ONFAULT = 0x4 + MEMERASE = 0x80084d02 + MEMERASE64 = 0x80104d14 + MEMGETBADBLOCK = 0x80084d0b + MEMGETINFO = 0x40204d01 + MEMGETOOBSEL = 0x40c84d0a + MEMGETREGIONCOUNT = 0x40044d07 + MEMISLOCKED = 0x40084d17 + MEMLOCK = 0x80084d05 + MEMREADOOB = 0xc00c4d04 + MEMSETBADBLOCK = 0x80084d0c + MEMUNLOCK = 0x80084d06 + MEMWRITEOOB = 0xc00c4d03 + MTDFILEMODE = 0x20004d13 NFDBITS = 0x20 NLDLY = 0x100 NOFLSH = 0x80 @@ -130,6 +145,10 @@ const ( NS_GET_USERNS = 0x2000b701 OLCUC = 0x2 ONLCR = 0x4 + OTPGETREGIONCOUNT = 0x80044d0e + OTPGETREGIONINFO = 0x800c4d0f + OTPLOCK = 0x400c4d10 + OTPSELECT = 0x40044d0d O_APPEND = 0x8 O_ASYNC = 0x1000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go index e670ee148..457e8de97 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go @@ -60,6 +60,8 @@ const ( CS8 = 0x30 CSIZE = 0x30 CSTOPB = 0x40 + ECCGETLAYOUT = 0x41484d11 + ECCGETSTATS = 0x40104d12 ECHOCTL = 0x200 ECHOE = 0x10 ECHOK = 0x20 @@ -121,6 +123,19 @@ const ( MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 MCL_ONFAULT = 0x4 + MEMERASE = 0x80084d02 + MEMERASE64 = 0x80104d14 + MEMGETBADBLOCK = 0x80084d0b + MEMGETINFO = 0x40204d01 + MEMGETOOBSEL = 0x40c84d0a + MEMGETREGIONCOUNT = 0x40044d07 + MEMISLOCKED = 0x40084d17 + MEMLOCK = 0x80084d05 + MEMREADOOB = 0xc0104d04 + MEMSETBADBLOCK = 0x80084d0c + MEMUNLOCK = 0x80084d06 + MEMWRITEOOB = 0xc0104d03 + MTDFILEMODE = 0x20004d13 NFDBITS = 0x40 NLDLY = 0x100 
NOFLSH = 0x80 @@ -130,6 +145,10 @@ const ( NS_GET_USERNS = 0x2000b701 OLCUC = 0x2 ONLCR = 0x4 + OTPGETREGIONCOUNT = 0x80044d0e + OTPGETREGIONINFO = 0x800c4d0f + OTPLOCK = 0x400c4d10 + OTPSELECT = 0x40044d0d O_APPEND = 0x8 O_ASYNC = 0x1000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go b/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go index dd11eacb8..33cd28f6b 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go @@ -60,6 +60,8 @@ const ( CS8 = 0x30 CSIZE = 0x30 CSTOPB = 0x40 + ECCGETLAYOUT = 0x41484d11 + ECCGETSTATS = 0x40104d12 ECHOCTL = 0x200 ECHOE = 0x10 ECHOK = 0x20 @@ -121,6 +123,19 @@ const ( MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 MCL_ONFAULT = 0x4 + MEMERASE = 0x80084d02 + MEMERASE64 = 0x80104d14 + MEMGETBADBLOCK = 0x80084d0b + MEMGETINFO = 0x40204d01 + MEMGETOOBSEL = 0x40c84d0a + MEMGETREGIONCOUNT = 0x40044d07 + MEMISLOCKED = 0x40084d17 + MEMLOCK = 0x80084d05 + MEMREADOOB = 0xc0104d04 + MEMSETBADBLOCK = 0x80084d0c + MEMUNLOCK = 0x80084d06 + MEMWRITEOOB = 0xc0104d03 + MTDFILEMODE = 0x20004d13 NFDBITS = 0x40 NLDLY = 0x100 NOFLSH = 0x80 @@ -130,6 +145,10 @@ const ( NS_GET_USERNS = 0x2000b701 OLCUC = 0x2 ONLCR = 0x4 + OTPGETREGIONCOUNT = 0x80044d0e + OTPGETREGIONINFO = 0x800c4d0f + OTPLOCK = 0x400c4d10 + OTPSELECT = 0x40044d0d O_APPEND = 0x8 O_ASYNC = 0x1000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go b/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go index a0a5b22ae..0e085ba14 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go @@ -60,6 +60,8 @@ const ( CS8 = 0x30 CSIZE = 0x30 CSTOPB = 0x40 + ECCGETLAYOUT = 0x41484d11 + ECCGETSTATS = 0x40104d12 ECHOCTL = 0x200 ECHOE = 0x10 ECHOK = 0x20 @@ -121,6 +123,19 @@ const ( MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 MCL_ONFAULT = 0x4 + MEMERASE = 0x80084d02 + MEMERASE64 = 0x80104d14 + MEMGETBADBLOCK = 0x80084d0b + MEMGETINFO = 0x40204d01 + MEMGETOOBSEL = 0x40c84d0a + MEMGETREGIONCOUNT = 0x40044d07 + MEMISLOCKED = 0x40084d17 + MEMLOCK = 0x80084d05 + MEMREADOOB = 0xc00c4d04 + MEMSETBADBLOCK = 0x80084d0c + MEMUNLOCK = 0x80084d06 + MEMWRITEOOB = 0xc00c4d03 + MTDFILEMODE = 0x20004d13 NFDBITS = 0x20 NLDLY = 0x100 NOFLSH = 0x80 @@ -130,6 +145,10 @@ const ( NS_GET_USERNS = 0x2000b701 OLCUC = 0x2 ONLCR = 0x4 + OTPGETREGIONCOUNT = 0x80044d0e + OTPGETREGIONINFO = 0x800c4d0f + OTPLOCK = 0x400c4d10 + OTPSELECT = 0x40044d0d O_APPEND = 0x8 O_ASYNC = 0x1000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go index d9530e5fb..1b5928cff 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go @@ -60,6 +60,8 @@ const ( CS8 = 0x300 CSIZE = 0x300 CSTOPB = 0x400 + ECCGETLAYOUT = 0x41484d11 + ECCGETSTATS = 0x40104d12 ECHOCTL = 0x40 ECHOE = 0x2 ECHOK = 0x4 @@ -121,6 +123,19 @@ const ( MCL_CURRENT = 0x2000 MCL_FUTURE = 0x4000 MCL_ONFAULT = 0x8000 + MEMERASE = 0x80084d02 + MEMERASE64 = 0x80104d14 + MEMGETBADBLOCK = 0x80084d0b + MEMGETINFO = 0x40204d01 + MEMGETOOBSEL = 0x40c84d0a + MEMGETREGIONCOUNT = 0x40044d07 + MEMISLOCKED = 0x40084d17 + MEMLOCK = 0x80084d05 + MEMREADOOB = 0xc00c4d04 + MEMSETBADBLOCK = 0x80084d0c + MEMUNLOCK = 0x80084d06 + MEMWRITEOOB = 0xc00c4d03 + MTDFILEMODE = 0x20004d13 NFDBITS = 0x20 NL2 = 0x200 NL3 = 0x300 @@ -132,6 +147,10 @@ const ( NS_GET_USERNS = 0x2000b701 OLCUC = 0x4 
ONLCR = 0x2 + OTPGETREGIONCOUNT = 0x80044d0e + OTPGETREGIONINFO = 0x800c4d0f + OTPLOCK = 0x400c4d10 + OTPSELECT = 0x40044d0d O_APPEND = 0x400 O_ASYNC = 0x2000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go index e60102f6a..f3a41d6ec 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go @@ -60,6 +60,8 @@ const ( CS8 = 0x300 CSIZE = 0x300 CSTOPB = 0x400 + ECCGETLAYOUT = 0x41484d11 + ECCGETSTATS = 0x40104d12 ECHOCTL = 0x40 ECHOE = 0x2 ECHOK = 0x4 @@ -121,6 +123,19 @@ const ( MCL_CURRENT = 0x2000 MCL_FUTURE = 0x4000 MCL_ONFAULT = 0x8000 + MEMERASE = 0x80084d02 + MEMERASE64 = 0x80104d14 + MEMGETBADBLOCK = 0x80084d0b + MEMGETINFO = 0x40204d01 + MEMGETOOBSEL = 0x40c84d0a + MEMGETREGIONCOUNT = 0x40044d07 + MEMISLOCKED = 0x40084d17 + MEMLOCK = 0x80084d05 + MEMREADOOB = 0xc0104d04 + MEMSETBADBLOCK = 0x80084d0c + MEMUNLOCK = 0x80084d06 + MEMWRITEOOB = 0xc0104d03 + MTDFILEMODE = 0x20004d13 NFDBITS = 0x40 NL2 = 0x200 NL3 = 0x300 @@ -132,6 +147,10 @@ const ( NS_GET_USERNS = 0x2000b701 OLCUC = 0x4 ONLCR = 0x2 + OTPGETREGIONCOUNT = 0x80044d0e + OTPGETREGIONINFO = 0x800c4d0f + OTPLOCK = 0x400c4d10 + OTPSELECT = 0x40044d0d O_APPEND = 0x400 O_ASYNC = 0x2000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go index 838ff4ea6..6a5a555d5 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go @@ -60,6 +60,8 @@ const ( CS8 = 0x300 CSIZE = 0x300 CSTOPB = 0x400 + ECCGETLAYOUT = 0x41484d11 + ECCGETSTATS = 0x40104d12 ECHOCTL = 0x40 ECHOE = 0x2 ECHOK = 0x4 @@ -121,6 +123,19 @@ const ( MCL_CURRENT = 0x2000 MCL_FUTURE = 0x4000 MCL_ONFAULT = 0x8000 + MEMERASE = 0x80084d02 + MEMERASE64 = 0x80104d14 + MEMGETBADBLOCK = 0x80084d0b + MEMGETINFO = 0x40204d01 + MEMGETOOBSEL = 0x40c84d0a + MEMGETREGIONCOUNT = 0x40044d07 + MEMISLOCKED = 0x40084d17 + MEMLOCK = 0x80084d05 + MEMREADOOB = 0xc0104d04 + MEMSETBADBLOCK = 0x80084d0c + MEMUNLOCK = 0x80084d06 + MEMWRITEOOB = 0xc0104d03 + MTDFILEMODE = 0x20004d13 NFDBITS = 0x40 NL2 = 0x200 NL3 = 0x300 @@ -132,6 +147,10 @@ const ( NS_GET_USERNS = 0x2000b701 OLCUC = 0x4 ONLCR = 0x2 + OTPGETREGIONCOUNT = 0x80044d0e + OTPGETREGIONINFO = 0x800c4d0f + OTPLOCK = 0x400c4d10 + OTPSELECT = 0x40044d0d O_APPEND = 0x400 O_ASYNC = 0x2000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go index 7cc98f09c..a4da67edb 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go @@ -60,6 +60,8 @@ const ( CS8 = 0x30 CSIZE = 0x30 CSTOPB = 0x40 + ECCGETLAYOUT = 0x81484d11 + ECCGETSTATS = 0x80104d12 ECHOCTL = 0x200 ECHOE = 0x10 ECHOK = 0x20 @@ -121,6 +123,19 @@ const ( MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 MCL_ONFAULT = 0x4 + MEMERASE = 0x40084d02 + MEMERASE64 = 0x40104d14 + MEMGETBADBLOCK = 0x40084d0b + MEMGETINFO = 0x80204d01 + MEMGETOOBSEL = 0x80c84d0a + MEMGETREGIONCOUNT = 0x80044d07 + MEMISLOCKED = 0x80084d17 + MEMLOCK = 0x40084d05 + MEMREADOOB = 0xc0104d04 + MEMSETBADBLOCK = 0x40084d0c + MEMUNLOCK = 0x40084d06 + MEMWRITEOOB = 0xc0104d03 + MTDFILEMODE = 0x4d13 NFDBITS = 0x40 NLDLY = 0x100 NOFLSH = 0x80 @@ -130,6 +145,10 @@ const ( NS_GET_USERNS = 0xb701 OLCUC = 0x2 ONLCR = 0x4 + OTPGETREGIONCOUNT = 0x40044d0e + OTPGETREGIONINFO = 
0x400c4d0f + OTPLOCK = 0x800c4d10 + OTPSELECT = 0x80044d0d O_APPEND = 0x400 O_ASYNC = 0x2000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go b/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go index 6d30e6fd8..a7028e0ef 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go @@ -60,6 +60,8 @@ const ( CS8 = 0x30 CSIZE = 0x30 CSTOPB = 0x40 + ECCGETLAYOUT = 0x81484d11 + ECCGETSTATS = 0x80104d12 ECHOCTL = 0x200 ECHOE = 0x10 ECHOK = 0x20 @@ -121,6 +123,19 @@ const ( MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 MCL_ONFAULT = 0x4 + MEMERASE = 0x40084d02 + MEMERASE64 = 0x40104d14 + MEMGETBADBLOCK = 0x40084d0b + MEMGETINFO = 0x80204d01 + MEMGETOOBSEL = 0x80c84d0a + MEMGETREGIONCOUNT = 0x80044d07 + MEMISLOCKED = 0x80084d17 + MEMLOCK = 0x40084d05 + MEMREADOOB = 0xc0104d04 + MEMSETBADBLOCK = 0x40084d0c + MEMUNLOCK = 0x40084d06 + MEMWRITEOOB = 0xc0104d03 + MTDFILEMODE = 0x4d13 NFDBITS = 0x40 NLDLY = 0x100 NOFLSH = 0x80 @@ -130,6 +145,10 @@ const ( NS_GET_USERNS = 0xb701 OLCUC = 0x2 ONLCR = 0x4 + OTPGETREGIONCOUNT = 0x40044d0e + OTPGETREGIONINFO = 0x400c4d0f + OTPLOCK = 0x800c4d10 + OTPSELECT = 0x80044d0d O_APPEND = 0x400 O_ASYNC = 0x2000 O_CLOEXEC = 0x80000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go index d5e2dc94f..ed3b3286c 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go @@ -63,6 +63,8 @@ const ( CS8 = 0x30 CSIZE = 0x30 CSTOPB = 0x40 + ECCGETLAYOUT = 0x41484d11 + ECCGETSTATS = 0x40104d12 ECHOCTL = 0x200 ECHOE = 0x10 ECHOK = 0x20 @@ -126,6 +128,19 @@ const ( MCL_CURRENT = 0x2000 MCL_FUTURE = 0x4000 MCL_ONFAULT = 0x8000 + MEMERASE = 0x80084d02 + MEMERASE64 = 0x80104d14 + MEMGETBADBLOCK = 0x80084d0b + MEMGETINFO = 0x40204d01 + MEMGETOOBSEL = 0x40c84d0a + MEMGETREGIONCOUNT = 0x40044d07 + MEMISLOCKED = 0x40084d17 + MEMLOCK = 0x80084d05 + MEMREADOOB = 0xc0104d04 + MEMSETBADBLOCK = 0x80084d0c + MEMUNLOCK = 0x80084d06 + MEMWRITEOOB = 0xc0104d03 + MTDFILEMODE = 0x20004d13 NFDBITS = 0x40 NLDLY = 0x100 NOFLSH = 0x80 @@ -135,6 +150,10 @@ const ( NS_GET_USERNS = 0x2000b701 OLCUC = 0x2 ONLCR = 0x4 + OTPGETREGIONCOUNT = 0x80044d0e + OTPGETREGIONINFO = 0x800c4d0f + OTPLOCK = 0x400c4d10 + OTPSELECT = 0x40044d0d O_APPEND = 0x8 O_ASYNC = 0x40 O_CLOEXEC = 0x400000 diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux.go b/vendor/golang.org/x/sys/unix/ztypes_linux.go index 087323591..72887abe5 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux.go @@ -351,6 +351,13 @@ type RawSockaddrIUCV struct { Name [8]int8 } +type RawSockaddrNFC struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 +} + type _Socklen uint32 type Linger struct { @@ -464,6 +471,7 @@ const ( SizeofSockaddrL2TPIP = 0x10 SizeofSockaddrL2TPIP6 = 0x20 SizeofSockaddrIUCV = 0x20 + SizeofSockaddrNFC = 0x10 SizeofLinger = 0x8 SizeofIPMreq = 0x8 SizeofIPMreqn = 0xc @@ -3742,3 +3750,158 @@ const ( NLMSGERR_ATTR_OFFS = 0x2 NLMSGERR_ATTR_COOKIE = 0x3 ) + +type ( + EraseInfo struct { + Start uint32 + Length uint32 + } + EraseInfo64 struct { + Start uint64 + Length uint64 + } + MtdOobBuf struct { + Start uint32 + Length uint32 + Ptr *uint8 + } + MtdOobBuf64 struct { + Start uint64 + Pad uint32 + Length uint32 + Ptr uint64 + } + MtdWriteReq struct { + Start uint64 + Len uint64 + Ooblen uint64 + Data uint64 + Oob uint64 + Mode uint8 
+ _ [7]uint8 + } + MtdInfo struct { + Type uint8 + Flags uint32 + Size uint32 + Erasesize uint32 + Writesize uint32 + Oobsize uint32 + _ uint64 + } + RegionInfo struct { + Offset uint32 + Erasesize uint32 + Numblocks uint32 + Regionindex uint32 + } + OtpInfo struct { + Start uint32 + Length uint32 + Locked uint32 + } + NandOobinfo struct { + Useecc uint32 + Eccbytes uint32 + Oobfree [8][2]uint32 + Eccpos [32]uint32 + } + NandOobfree struct { + Offset uint32 + Length uint32 + } + NandEcclayout struct { + Eccbytes uint32 + Eccpos [64]uint32 + Oobavail uint32 + Oobfree [8]NandOobfree + } + MtdEccStats struct { + Corrected uint32 + Failed uint32 + Badblocks uint32 + Bbtblocks uint32 + } +) + +const ( + MTD_OPS_PLACE_OOB = 0x0 + MTD_OPS_AUTO_OOB = 0x1 + MTD_OPS_RAW = 0x2 +) + +const ( + MTD_FILE_MODE_NORMAL = 0x0 + MTD_FILE_MODE_OTP_FACTORY = 0x1 + MTD_FILE_MODE_OTP_USER = 0x2 + MTD_FILE_MODE_RAW = 0x3 +) + +const ( + NFC_CMD_UNSPEC = 0x0 + NFC_CMD_GET_DEVICE = 0x1 + NFC_CMD_DEV_UP = 0x2 + NFC_CMD_DEV_DOWN = 0x3 + NFC_CMD_DEP_LINK_UP = 0x4 + NFC_CMD_DEP_LINK_DOWN = 0x5 + NFC_CMD_START_POLL = 0x6 + NFC_CMD_STOP_POLL = 0x7 + NFC_CMD_GET_TARGET = 0x8 + NFC_EVENT_TARGETS_FOUND = 0x9 + NFC_EVENT_DEVICE_ADDED = 0xa + NFC_EVENT_DEVICE_REMOVED = 0xb + NFC_EVENT_TARGET_LOST = 0xc + NFC_EVENT_TM_ACTIVATED = 0xd + NFC_EVENT_TM_DEACTIVATED = 0xe + NFC_CMD_LLC_GET_PARAMS = 0xf + NFC_CMD_LLC_SET_PARAMS = 0x10 + NFC_CMD_ENABLE_SE = 0x11 + NFC_CMD_DISABLE_SE = 0x12 + NFC_CMD_LLC_SDREQ = 0x13 + NFC_EVENT_LLC_SDRES = 0x14 + NFC_CMD_FW_DOWNLOAD = 0x15 + NFC_EVENT_SE_ADDED = 0x16 + NFC_EVENT_SE_REMOVED = 0x17 + NFC_EVENT_SE_CONNECTIVITY = 0x18 + NFC_EVENT_SE_TRANSACTION = 0x19 + NFC_CMD_GET_SE = 0x1a + NFC_CMD_SE_IO = 0x1b + NFC_CMD_ACTIVATE_TARGET = 0x1c + NFC_CMD_VENDOR = 0x1d + NFC_CMD_DEACTIVATE_TARGET = 0x1e + NFC_ATTR_UNSPEC = 0x0 + NFC_ATTR_DEVICE_INDEX = 0x1 + NFC_ATTR_DEVICE_NAME = 0x2 + NFC_ATTR_PROTOCOLS = 0x3 + NFC_ATTR_TARGET_INDEX = 0x4 + NFC_ATTR_TARGET_SENS_RES = 0x5 + NFC_ATTR_TARGET_SEL_RES = 0x6 + NFC_ATTR_TARGET_NFCID1 = 0x7 + NFC_ATTR_TARGET_SENSB_RES = 0x8 + NFC_ATTR_TARGET_SENSF_RES = 0x9 + NFC_ATTR_COMM_MODE = 0xa + NFC_ATTR_RF_MODE = 0xb + NFC_ATTR_DEVICE_POWERED = 0xc + NFC_ATTR_IM_PROTOCOLS = 0xd + NFC_ATTR_TM_PROTOCOLS = 0xe + NFC_ATTR_LLC_PARAM_LTO = 0xf + NFC_ATTR_LLC_PARAM_RW = 0x10 + NFC_ATTR_LLC_PARAM_MIUX = 0x11 + NFC_ATTR_SE = 0x12 + NFC_ATTR_LLC_SDP = 0x13 + NFC_ATTR_FIRMWARE_NAME = 0x14 + NFC_ATTR_SE_INDEX = 0x15 + NFC_ATTR_SE_TYPE = 0x16 + NFC_ATTR_SE_AID = 0x17 + NFC_ATTR_FIRMWARE_DOWNLOAD_STATUS = 0x18 + NFC_ATTR_SE_APDU = 0x19 + NFC_ATTR_TARGET_ISO15693_DSFID = 0x1a + NFC_ATTR_TARGET_ISO15693_UID = 0x1b + NFC_ATTR_SE_PARAMS = 0x1c + NFC_ATTR_VENDOR_ID = 0x1d + NFC_ATTR_VENDOR_SUBCMD = 0x1e + NFC_ATTR_VENDOR_DATA = 0x1f + NFC_SDP_ATTR_UNSPEC = 0x0 + NFC_SDP_ATTR_URI = 0x1 + NFC_SDP_ATTR_SAP = 0x2 +) diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_386.go b/vendor/golang.org/x/sys/unix/ztypes_linux_386.go index 4d4d283de..235c62e46 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_386.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_386.go @@ -128,6 +128,17 @@ const ( FADV_NOREUSE = 0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint32 +} + type RawSockaddr struct { Family uint16 Data [14]int8 @@ -160,9 +171,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x8 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc + 
SizeofSockaddrNFCLLCP = 0x58 + SizeofIovec = 0x8 + SizeofMsghdr = 0x1c + SizeofCmsghdr = 0xc ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go b/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go index 8a2eed5ec..99b1e5b6a 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go @@ -130,6 +130,17 @@ const ( FADV_NOREUSE = 0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint64 +} + type RawSockaddr struct { Family uint16 Data [14]int8 @@ -163,9 +174,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x10 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 + SizeofSockaddrNFCLLCP = 0x60 + SizeofIovec = 0x10 + SizeofMsghdr = 0x38 + SizeofCmsghdr = 0x10 ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go b/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go index 94b34add6..cc8bba791 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go @@ -134,6 +134,17 @@ const ( FADV_NOREUSE = 0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint32 +} + type RawSockaddr struct { Family uint16 Data [14]uint8 @@ -166,9 +177,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x8 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc + SizeofSockaddrNFCLLCP = 0x58 + SizeofIovec = 0x8 + SizeofMsghdr = 0x1c + SizeofCmsghdr = 0xc ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go b/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go index 2143de4d5..fa8fe3a75 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go @@ -131,6 +131,17 @@ const ( FADV_NOREUSE = 0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint64 +} + type RawSockaddr struct { Family uint16 Data [14]int8 @@ -164,9 +175,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x10 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 + SizeofSockaddrNFCLLCP = 0x60 + SizeofIovec = 0x10 + SizeofMsghdr = 0x38 + SizeofCmsghdr = 0x10 ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go b/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go index a40216eee..e7fb8d9b7 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go @@ -133,6 +133,17 @@ const ( FADV_NOREUSE = 0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint32 +} + type RawSockaddr struct { Family uint16 Data [14]int8 @@ -165,9 +176,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x8 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc + SizeofSockaddrNFCLLCP = 0x58 + SizeofIovec = 0x8 + SizeofMsghdr = 0x1c + SizeofCmsghdr = 0xc ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go b/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go index e834b069f..2fa61d593 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go @@ -131,6 +131,17 @@ const ( FADV_NOREUSE = 
0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint64 +} + type RawSockaddr struct { Family uint16 Data [14]int8 @@ -164,9 +175,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x10 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 + SizeofSockaddrNFCLLCP = 0x60 + SizeofIovec = 0x10 + SizeofMsghdr = 0x38 + SizeofCmsghdr = 0x10 ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go b/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go index e31083b04..7f3639933 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go @@ -131,6 +131,17 @@ const ( FADV_NOREUSE = 0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint64 +} + type RawSockaddr struct { Family uint16 Data [14]int8 @@ -164,9 +175,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x10 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 + SizeofSockaddrNFCLLCP = 0x60 + SizeofIovec = 0x10 + SizeofMsghdr = 0x38 + SizeofCmsghdr = 0x10 ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go b/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go index 42811f7fb..f3c20cb86 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go @@ -133,6 +133,17 @@ const ( FADV_NOREUSE = 0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint32 +} + type RawSockaddr struct { Family uint16 Data [14]int8 @@ -165,9 +176,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x8 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc + SizeofSockaddrNFCLLCP = 0x58 + SizeofIovec = 0x8 + SizeofMsghdr = 0x1c + SizeofCmsghdr = 0xc ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_ppc.go b/vendor/golang.org/x/sys/unix/ztypes_linux_ppc.go index af7a72017..885d27950 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_ppc.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_ppc.go @@ -134,6 +134,17 @@ const ( FADV_NOREUSE = 0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint32 +} + type RawSockaddr struct { Family uint16 Data [14]uint8 @@ -166,9 +177,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x8 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc + SizeofSockaddrNFCLLCP = 0x58 + SizeofIovec = 0x8 + SizeofMsghdr = 0x1c + SizeofCmsghdr = 0xc ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go b/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go index 2a3afbaef..a94eb8e18 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go @@ -132,6 +132,17 @@ const ( FADV_NOREUSE = 0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint64 +} + type RawSockaddr struct { Family uint16 Data [14]uint8 @@ -165,9 +176,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x10 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 + SizeofSockaddrNFCLLCP 
= 0x60 + SizeofIovec = 0x10 + SizeofMsghdr = 0x38 + SizeofCmsghdr = 0x10 ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go b/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go index c0de30a65..659e32ebd 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go @@ -132,6 +132,17 @@ const ( FADV_NOREUSE = 0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint64 +} + type RawSockaddr struct { Family uint16 Data [14]uint8 @@ -165,9 +176,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x10 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 + SizeofSockaddrNFCLLCP = 0x60 + SizeofIovec = 0x10 + SizeofMsghdr = 0x38 + SizeofCmsghdr = 0x10 ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_riscv64.go b/vendor/golang.org/x/sys/unix/ztypes_linux_riscv64.go index 74faf2e91..ab8ec604f 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_riscv64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_riscv64.go @@ -131,6 +131,17 @@ const ( FADV_NOREUSE = 0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint64 +} + type RawSockaddr struct { Family uint16 Data [14]uint8 @@ -164,9 +175,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x10 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 + SizeofSockaddrNFCLLCP = 0x60 + SizeofIovec = 0x10 + SizeofMsghdr = 0x38 + SizeofCmsghdr = 0x10 ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go b/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go index 9a8f0c2c6..3ec08237f 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go @@ -130,6 +130,17 @@ const ( FADV_NOREUSE = 0x7 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint64 +} + type RawSockaddr struct { Family uint16 Data [14]int8 @@ -163,9 +174,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x10 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 + SizeofSockaddrNFCLLCP = 0x60 + SizeofIovec = 0x10 + SizeofMsghdr = 0x38 + SizeofCmsghdr = 0x10 ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go b/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go index 72cdda75b..23d474470 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go @@ -134,6 +134,17 @@ const ( FADV_NOREUSE = 0x5 ) +type RawSockaddrNFCLLCP struct { + Sa_family uint16 + Dev_idx uint32 + Target_idx uint32 + Nfc_protocol uint32 + Dsap uint8 + Ssap uint8 + Service_name [63]uint8 + Service_name_len uint64 +} + type RawSockaddr struct { Family uint16 Data [14]int8 @@ -167,9 +178,10 @@ type Cmsghdr struct { } const ( - SizeofIovec = 0x10 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 + SizeofSockaddrNFCLLCP = 0x60 + SizeofIovec = 0x10 + SizeofMsghdr = 0x38 + SizeofCmsghdr = 0x10 ) const ( diff --git a/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go b/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go index 1fdb0e5fa..2a8b1e6f7 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go +++ b/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go @@ -438,8 +438,10 @@ 
type Winsize struct { const ( AT_FDCWD = -0x64 - AT_SYMLINK_FOLLOW = 0x4 + AT_EACCESS = 0x1 AT_SYMLINK_NOFOLLOW = 0x2 + AT_SYMLINK_FOLLOW = 0x4 + AT_REMOVEDIR = 0x8 ) type PollFd struct { diff --git a/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go b/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go index e2fc93c7c..b1759cf70 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go @@ -438,8 +438,10 @@ type Winsize struct { const ( AT_FDCWD = -0x64 - AT_SYMLINK_FOLLOW = 0x4 + AT_EACCESS = 0x1 AT_SYMLINK_NOFOLLOW = 0x2 + AT_SYMLINK_FOLLOW = 0x4 + AT_REMOVEDIR = 0x8 ) type PollFd struct { diff --git a/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go b/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go index 8d34b5a2f..e807de206 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go +++ b/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go @@ -439,8 +439,10 @@ type Winsize struct { const ( AT_FDCWD = -0x64 - AT_SYMLINK_FOLLOW = 0x4 + AT_EACCESS = 0x1 AT_SYMLINK_NOFOLLOW = 0x2 + AT_SYMLINK_FOLLOW = 0x4 + AT_REMOVEDIR = 0x8 ) type PollFd struct { diff --git a/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm64.go b/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm64.go index ea8f1a0d9..ff3aecaee 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm64.go @@ -432,8 +432,10 @@ type Winsize struct { const ( AT_FDCWD = -0x64 - AT_SYMLINK_FOLLOW = 0x4 + AT_EACCESS = 0x1 AT_SYMLINK_NOFOLLOW = 0x2 + AT_SYMLINK_FOLLOW = 0x4 + AT_REMOVEDIR = 0x8 ) type PollFd struct { diff --git a/vendor/golang.org/x/sys/unix/ztypes_openbsd_mips64.go b/vendor/golang.org/x/sys/unix/ztypes_openbsd_mips64.go index ec6e8bc3f..9ecda6917 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_openbsd_mips64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_openbsd_mips64.go @@ -432,8 +432,10 @@ type Winsize struct { const ( AT_FDCWD = -0x64 - AT_SYMLINK_FOLLOW = 0x4 + AT_EACCESS = 0x1 AT_SYMLINK_NOFOLLOW = 0x2 + AT_SYMLINK_FOLLOW = 0x4 + AT_REMOVEDIR = 0x8 ) type PollFd struct { diff --git a/vendor/golang.org/x/sys/windows/exec_windows.go b/vendor/golang.org/x/sys/windows/exec_windows.go index a020caeef..7a11e83b7 100644 --- a/vendor/golang.org/x/sys/windows/exec_windows.go +++ b/vendor/golang.org/x/sys/windows/exec_windows.go @@ -9,6 +9,8 @@ package windows import ( errorspkg "errors" "unsafe" + + "golang.org/x/sys/internal/unsafeheader" ) // EscapeArg rewrites command line argument s as prescribed @@ -135,8 +137,8 @@ func FullPath(name string) (path string, err error) { } } -// NewProcThreadAttributeList allocates a new ProcThreadAttributeList, with the requested maximum number of attributes. -func NewProcThreadAttributeList(maxAttrCount uint32) (*ProcThreadAttributeList, error) { +// NewProcThreadAttributeList allocates a new ProcThreadAttributeListContainer, with the requested maximum number of attributes. +func NewProcThreadAttributeList(maxAttrCount uint32) (*ProcThreadAttributeListContainer, error) { var size uintptr err := initializeProcThreadAttributeList(nil, maxAttrCount, 0, &size) if err != ERROR_INSUFFICIENT_BUFFER { @@ -145,10 +147,9 @@ func NewProcThreadAttributeList(maxAttrCount uint32) (*ProcThreadAttributeList, } return nil, err } - const psize = unsafe.Sizeof(uintptr(0)) // size is guaranteed to be ≥1 by InitializeProcThreadAttributeList. 
- al := (*ProcThreadAttributeList)(unsafe.Pointer(&make([]unsafe.Pointer, (size+psize-1)/psize)[0])) - err = initializeProcThreadAttributeList(al, maxAttrCount, 0, &size) + al := &ProcThreadAttributeListContainer{data: (*ProcThreadAttributeList)(unsafe.Pointer(&make([]byte, size)[0]))} + err = initializeProcThreadAttributeList(al.data, maxAttrCount, 0, &size) if err != nil { return nil, err } @@ -156,11 +157,39 @@ func NewProcThreadAttributeList(maxAttrCount uint32) (*ProcThreadAttributeList, } // Update modifies the ProcThreadAttributeList using UpdateProcThreadAttribute. -func (al *ProcThreadAttributeList) Update(attribute uintptr, flags uint32, value unsafe.Pointer, size uintptr, prevValue unsafe.Pointer, returnedSize *uintptr) error { - return updateProcThreadAttribute(al, flags, attribute, value, size, prevValue, returnedSize) +// Note that the value passed to this function will be copied into memory +// allocated by LocalAlloc, the contents of which should not contain any +// Go-managed pointers, even if the passed value itself is a Go-managed +// pointer. +func (al *ProcThreadAttributeListContainer) Update(attribute uintptr, value unsafe.Pointer, size uintptr) error { + alloc, err := LocalAlloc(LMEM_FIXED, uint32(size)) + if err != nil { + return err + } + var src, dst []byte + hdr := (*unsafeheader.Slice)(unsafe.Pointer(&src)) + hdr.Data = value + hdr.Cap = int(size) + hdr.Len = int(size) + hdr = (*unsafeheader.Slice)(unsafe.Pointer(&dst)) + hdr.Data = unsafe.Pointer(alloc) + hdr.Cap = int(size) + hdr.Len = int(size) + copy(dst, src) + al.heapAllocations = append(al.heapAllocations, alloc) + return updateProcThreadAttribute(al.data, 0, attribute, unsafe.Pointer(alloc), size, nil, nil) } // Delete frees ProcThreadAttributeList's resources. -func (al *ProcThreadAttributeList) Delete() { - deleteProcThreadAttributeList(al) +func (al *ProcThreadAttributeListContainer) Delete() { + deleteProcThreadAttributeList(al.data) + for i := range al.heapAllocations { + LocalFree(Handle(al.heapAllocations[i])) + } + al.heapAllocations = nil +} + +// List returns the actual ProcThreadAttributeList to be passed to StartupInfoEx. 
+func (al *ProcThreadAttributeListContainer) List() *ProcThreadAttributeList { + return al.data } diff --git a/vendor/golang.org/x/sys/windows/syscall_windows.go b/vendor/golang.org/x/sys/windows/syscall_windows.go index bb6aaf89e..1215b2ae2 100644 --- a/vendor/golang.org/x/sys/windows/syscall_windows.go +++ b/vendor/golang.org/x/sys/windows/syscall_windows.go @@ -220,6 +220,7 @@ func NewCallbackCDecl(fn interface{}) uintptr { //sys CancelIo(s Handle) (err error) //sys CancelIoEx(s Handle, o *Overlapped) (err error) //sys CreateProcess(appName *uint16, commandLine *uint16, procSecurity *SecurityAttributes, threadSecurity *SecurityAttributes, inheritHandles bool, creationFlags uint32, env *uint16, currentDir *uint16, startupInfo *StartupInfo, outProcInfo *ProcessInformation) (err error) = CreateProcessW +//sys CreateProcessAsUser(token Token, appName *uint16, commandLine *uint16, procSecurity *SecurityAttributes, threadSecurity *SecurityAttributes, inheritHandles bool, creationFlags uint32, env *uint16, currentDir *uint16, startupInfo *StartupInfo, outProcInfo *ProcessInformation) (err error) = advapi32.CreateProcessAsUserW //sys initializeProcThreadAttributeList(attrlist *ProcThreadAttributeList, attrcount uint32, flags uint32, size *uintptr) (err error) = InitializeProcThreadAttributeList //sys deleteProcThreadAttributeList(attrlist *ProcThreadAttributeList) = DeleteProcThreadAttributeList //sys updateProcThreadAttribute(attrlist *ProcThreadAttributeList, flags uint32, attr uintptr, value unsafe.Pointer, size uintptr, prevvalue unsafe.Pointer, returnedsize *uintptr) (err error) = UpdateProcThreadAttribute diff --git a/vendor/golang.org/x/sys/windows/types_windows.go b/vendor/golang.org/x/sys/windows/types_windows.go index 23fe18ece..1f733398e 100644 --- a/vendor/golang.org/x/sys/windows/types_windows.go +++ b/vendor/golang.org/x/sys/windows/types_windows.go @@ -909,14 +909,15 @@ type StartupInfoEx struct { // ProcThreadAttributeList is a placeholder type to represent a PROC_THREAD_ATTRIBUTE_LIST. // -// To create a *ProcThreadAttributeList, use NewProcThreadAttributeList, and -// free its memory using ProcThreadAttributeList.Delete. -type ProcThreadAttributeList struct { - // This is of type unsafe.Pointer, not of type byte or uintptr, because - // the contents of it is mostly a list of pointers, and in most cases, - // that's a list of pointers to Go-allocated objects. In order to keep - // the GC from collecting these objects, we declare this as unsafe.Pointer. - _ [1]unsafe.Pointer +// To create a *ProcThreadAttributeList, use NewProcThreadAttributeList, update +// it with ProcThreadAttributeListContainer.Update, free its memory using +// ProcThreadAttributeListContainer.Delete, and access the list itself using +// ProcThreadAttributeListContainer.List. 
+type ProcThreadAttributeList struct{} + +type ProcThreadAttributeListContainer struct { + data *ProcThreadAttributeList + heapAllocations []uintptr } type ProcessInformation struct { diff --git a/vendor/golang.org/x/sys/windows/zsyscall_windows.go b/vendor/golang.org/x/sys/windows/zsyscall_windows.go index 559bc845c..148de0ffb 100644 --- a/vendor/golang.org/x/sys/windows/zsyscall_windows.go +++ b/vendor/golang.org/x/sys/windows/zsyscall_windows.go @@ -69,6 +69,7 @@ var ( procConvertStringSecurityDescriptorToSecurityDescriptorW = modadvapi32.NewProc("ConvertStringSecurityDescriptorToSecurityDescriptorW") procConvertStringSidToSidW = modadvapi32.NewProc("ConvertStringSidToSidW") procCopySid = modadvapi32.NewProc("CopySid") + procCreateProcessAsUserW = modadvapi32.NewProc("CreateProcessAsUserW") procCreateServiceW = modadvapi32.NewProc("CreateServiceW") procCreateWellKnownSid = modadvapi32.NewProc("CreateWellKnownSid") procCryptAcquireContextW = modadvapi32.NewProc("CryptAcquireContextW") @@ -553,6 +554,18 @@ func CopySid(destSidLen uint32, destSid *SID, srcSid *SID) (err error) { return } +func CreateProcessAsUser(token Token, appName *uint16, commandLine *uint16, procSecurity *SecurityAttributes, threadSecurity *SecurityAttributes, inheritHandles bool, creationFlags uint32, env *uint16, currentDir *uint16, startupInfo *StartupInfo, outProcInfo *ProcessInformation) (err error) { + var _p0 uint32 + if inheritHandles { + _p0 = 1 + } + r1, _, e1 := syscall.Syscall12(procCreateProcessAsUserW.Addr(), 11, uintptr(token), uintptr(unsafe.Pointer(appName)), uintptr(unsafe.Pointer(commandLine)), uintptr(unsafe.Pointer(procSecurity)), uintptr(unsafe.Pointer(threadSecurity)), uintptr(_p0), uintptr(creationFlags), uintptr(unsafe.Pointer(env)), uintptr(unsafe.Pointer(currentDir)), uintptr(unsafe.Pointer(startupInfo)), uintptr(unsafe.Pointer(outProcInfo)), 0) + if r1 == 0 { + err = errnoErr(e1) + } + return +} + func CreateService(mgr Handle, serviceName *uint16, displayName *uint16, access uint32, srvType uint32, startType uint32, errCtl uint32, pathName *uint16, loadOrderGroup *uint16, tagId *uint32, dependencies *uint16, serviceStartName *uint16, password *uint16) (handle Handle, err error) { r0, _, e1 := syscall.Syscall15(procCreateServiceW.Addr(), 13, uintptr(mgr), uintptr(unsafe.Pointer(serviceName)), uintptr(unsafe.Pointer(displayName)), uintptr(access), uintptr(srvType), uintptr(startType), uintptr(errCtl), uintptr(unsafe.Pointer(pathName)), uintptr(unsafe.Pointer(loadOrderGroup)), uintptr(unsafe.Pointer(tagId)), uintptr(unsafe.Pointer(dependencies)), uintptr(unsafe.Pointer(serviceStartName)), uintptr(unsafe.Pointer(password)), 0, 0) handle = Handle(r0) diff --git a/vendor/google.golang.org/api/storage/v1/storage-gen.go b/vendor/google.golang.org/api/storage/v1/storage-gen.go index c1e5ae289..6b042b6ad 100644 --- a/vendor/google.golang.org/api/storage/v1/storage-gen.go +++ b/vendor/google.golang.org/api/storage/v1/storage-gen.go @@ -2454,7 +2454,7 @@ func (c *BucketAccessControlsDeleteCall) Header() http.Header { func (c *BucketAccessControlsDeleteCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -2607,7 +2607,7 @@ func (c *BucketAccessControlsGetCall) Header() http.Header { func (c *BucketAccessControlsGetCall) doRequest(alt string) 
(*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -2776,7 +2776,7 @@ func (c *BucketAccessControlsInsertCall) Header() http.Header { func (c *BucketAccessControlsInsertCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -2951,7 +2951,7 @@ func (c *BucketAccessControlsListCall) Header() http.Header { func (c *BucketAccessControlsListCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -3117,7 +3117,7 @@ func (c *BucketAccessControlsPatchCall) Header() http.Header { func (c *BucketAccessControlsPatchCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -3296,7 +3296,7 @@ func (c *BucketAccessControlsUpdateCall) Header() http.Header { func (c *BucketAccessControlsUpdateCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -3484,7 +3484,7 @@ func (c *BucketsDeleteCall) Header() http.Header { func (c *BucketsDeleteCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -3665,7 +3665,7 @@ func (c *BucketsGetCall) Header() http.Header { func (c *BucketsGetCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -3873,7 +3873,7 @@ func (c *BucketsGetIamPolicyCall) Header() http.Header { func (c *BucketsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -4092,7 +4092,7 @@ func (c *BucketsInsertCall) Header() http.Header { func (c *BucketsInsertCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", 
"gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -4351,7 +4351,7 @@ func (c *BucketsListCall) Header() http.Header { func (c *BucketsListCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -4565,7 +4565,7 @@ func (c *BucketsLockRetentionPolicyCall) Header() http.Header { func (c *BucketsLockRetentionPolicyCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -4802,7 +4802,7 @@ func (c *BucketsPatchCall) Header() http.Header { func (c *BucketsPatchCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -5033,7 +5033,7 @@ func (c *BucketsSetIamPolicyCall) Header() http.Header { func (c *BucketsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -5211,7 +5211,7 @@ func (c *BucketsTestIamPermissionsCall) Header() http.Header { func (c *BucketsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -5453,7 +5453,7 @@ func (c *BucketsUpdateCall) Header() http.Header { func (c *BucketsUpdateCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -5665,7 +5665,7 @@ func (c *ChannelsStopCall) Header() http.Header { func (c *ChannelsStopCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -5787,7 +5787,7 @@ func (c *DefaultObjectAccessControlsDeleteCall) Header() http.Header { func (c *DefaultObjectAccessControlsDeleteCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -5940,7 +5940,7 @@ func (c *DefaultObjectAccessControlsGetCall) Header() http.Header { func (c 
*DefaultObjectAccessControlsGetCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -6110,7 +6110,7 @@ func (c *DefaultObjectAccessControlsInsertCall) Header() http.Header { func (c *DefaultObjectAccessControlsInsertCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -6302,7 +6302,7 @@ func (c *DefaultObjectAccessControlsListCall) Header() http.Header { func (c *DefaultObjectAccessControlsListCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -6480,7 +6480,7 @@ func (c *DefaultObjectAccessControlsPatchCall) Header() http.Header { func (c *DefaultObjectAccessControlsPatchCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -6659,7 +6659,7 @@ func (c *DefaultObjectAccessControlsUpdateCall) Header() http.Header { func (c *DefaultObjectAccessControlsUpdateCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -6834,7 +6834,7 @@ func (c *NotificationsDeleteCall) Header() http.Header { func (c *NotificationsDeleteCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -6985,7 +6985,7 @@ func (c *NotificationsGetCall) Header() http.Header { func (c *NotificationsGetCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -7157,7 +7157,7 @@ func (c *NotificationsInsertCall) Header() http.Header { func (c *NotificationsInsertCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -7334,7 +7334,7 @@ func (c *NotificationsListCall) Header() http.Header { func (c *NotificationsListCall) doRequest(alt string) (*http.Response, error) { reqHeaders := 
make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -7514,7 +7514,7 @@ func (c *ObjectAccessControlsDeleteCall) Header() http.Header { func (c *ObjectAccessControlsDeleteCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -7693,7 +7693,7 @@ func (c *ObjectAccessControlsGetCall) Header() http.Header { func (c *ObjectAccessControlsGetCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -7888,7 +7888,7 @@ func (c *ObjectAccessControlsInsertCall) Header() http.Header { func (c *ObjectAccessControlsInsertCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -8089,7 +8089,7 @@ func (c *ObjectAccessControlsListCall) Header() http.Header { func (c *ObjectAccessControlsListCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -8281,7 +8281,7 @@ func (c *ObjectAccessControlsPatchCall) Header() http.Header { func (c *ObjectAccessControlsPatchCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -8486,7 +8486,7 @@ func (c *ObjectAccessControlsUpdateCall) Header() http.Header { func (c *ObjectAccessControlsUpdateCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -8729,7 +8729,7 @@ func (c *ObjectsComposeCall) Header() http.Header { func (c *ObjectsComposeCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -9085,7 +9085,7 @@ func (c *ObjectsCopyCall) Header() http.Header { func (c *ObjectsCopyCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", 
"gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -9417,7 +9417,7 @@ func (c *ObjectsDeleteCall) Header() http.Header { func (c *ObjectsDeleteCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -9654,7 +9654,7 @@ func (c *ObjectsGetCall) Header() http.Header { func (c *ObjectsGetCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -9908,7 +9908,7 @@ func (c *ObjectsGetIamPolicyCall) Header() http.Header { func (c *ObjectsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -10228,7 +10228,7 @@ func (c *ObjectsInsertCall) Header() http.Header { func (c *ObjectsInsertCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -10603,7 +10603,7 @@ func (c *ObjectsListCall) Header() http.Header { func (c *ObjectsListCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -10924,7 +10924,7 @@ func (c *ObjectsPatchCall) Header() http.Header { func (c *ObjectsPatchCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -11329,7 +11329,7 @@ func (c *ObjectsRewriteCall) Header() http.Header { func (c *ObjectsRewriteCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -11636,7 +11636,7 @@ func (c *ObjectsSetIamPolicyCall) Header() http.Header { func (c *ObjectsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -11841,7 +11841,7 @@ func (c *ObjectsTestIamPermissionsCall) Header() http.Header { func (c *ObjectsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) { 
reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -12106,7 +12106,7 @@ func (c *ObjectsUpdateCall) Header() http.Header { func (c *ObjectsUpdateCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -12426,7 +12426,7 @@ func (c *ObjectsWatchAllCall) Header() http.Header { func (c *ObjectsWatchAllCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -12645,7 +12645,7 @@ func (c *ProjectsHmacKeysCreateCall) Header() http.Header { func (c *ProjectsHmacKeysCreateCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -12798,7 +12798,7 @@ func (c *ProjectsHmacKeysDeleteCall) Header() http.Header { func (c *ProjectsHmacKeysDeleteCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -12937,7 +12937,7 @@ func (c *ProjectsHmacKeysGetCall) Header() http.Header { func (c *ProjectsHmacKeysGetCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -13139,7 +13139,7 @@ func (c *ProjectsHmacKeysListCall) Header() http.Header { func (c *ProjectsHmacKeysListCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -13338,7 +13338,7 @@ func (c *ProjectsHmacKeysUpdateCall) Header() http.Header { func (c *ProjectsHmacKeysUpdateCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } @@ -13517,7 +13517,7 @@ func (c *ProjectsServiceAccountGetCall) Header() http.Header { func (c *ProjectsServiceAccountGetCall) doRequest(alt string) (*http.Response, error) { reqHeaders := make(http.Header) - reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210518") + reqHeaders.Set("x-goog-api-client", 
"gl-go/"+gensupport.GoVersion()+" gdcl/20210606") for k, v := range c.header_ { reqHeaders[k] = v } diff --git a/vendor/google.golang.org/api/transport/internal/dca/dca.go b/vendor/google.golang.org/api/transport/internal/dca/dca.go index b3be7e4e3..071586e94 100644 --- a/vendor/google.golang.org/api/transport/internal/dca/dca.go +++ b/vendor/google.golang.org/api/transport/internal/dca/dca.go @@ -68,8 +68,6 @@ func GetClientCertificateSourceAndEndpoint(settings *internal.DialSettings) (cer func getClientCertificateSource(settings *internal.DialSettings) (cert.Source, error) { if !isClientCertificateEnabled() { return nil, nil - } else if settings.HTTPClient != nil { - return nil, nil // HTTPClient is incompatible with ClientCertificateSource } else if settings.ClientCertSource != nil { return settings.ClientCertSource, nil } else { diff --git a/vendor/modules.txt b/vendor/modules.txt index d05079a40..7861ce5d8 100644 --- a/vendor/modules.txt +++ b/vendor/modules.txt @@ -1,5 +1,4 @@ -# cloud.google.com/go v0.82.0 -## explicit +# cloud.google.com/go v0.83.0 cloud.google.com/go cloud.google.com/go/compute/metadata cloud.google.com/go/iam @@ -10,7 +9,7 @@ cloud.google.com/go/internal/version # cloud.google.com/go/storage v1.15.0 ## explicit cloud.google.com/go/storage -# github.com/VictoriaMetrics/fastcache v1.5.8 +# github.com/VictoriaMetrics/fastcache v1.6.0 ## explicit github.com/VictoriaMetrics/fastcache # github.com/VictoriaMetrics/fasthttp v1.0.15 @@ -28,7 +27,7 @@ github.com/VictoriaMetrics/metricsql/binaryop # github.com/VividCortex/ewma v1.2.0 ## explicit github.com/VividCortex/ewma -# github.com/aws/aws-sdk-go v1.38.43 +# github.com/aws/aws-sdk-go v1.38.56 ## explicit github.com/aws/aws-sdk-go/aws github.com/aws/aws-sdk-go/aws/arn @@ -93,7 +92,8 @@ github.com/cheggaaa/pb/v3/termutil # github.com/cpuguy83/go-md2man/v2 v2.0.0 ## explicit github.com/cpuguy83/go-md2man/v2/md2man -# github.com/fatih/color v1.10.0 +# github.com/fatih/color v1.12.0 +## explicit github.com/fatih/color # github.com/go-kit/kit v0.10.0 ## explicit @@ -117,7 +117,7 @@ github.com/golang/protobuf/ptypes/timestamp github.com/golang/snappy # github.com/googleapis/gax-go/v2 v2.0.5 github.com/googleapis/gax-go/v2 -# github.com/influxdata/influxdb v1.9.0 +# github.com/influxdata/influxdb v1.9.1 ## explicit github.com/influxdata/influxdb/client/v2 github.com/influxdata/influxdb/models @@ -128,7 +128,7 @@ github.com/jmespath/go-jmespath github.com/jstemmer/go-junit-report github.com/jstemmer/go-junit-report/formatter github.com/jstemmer/go-junit-report/parser -# github.com/klauspost/compress v1.12.2 +# github.com/klauspost/compress v1.13.0 ## explicit github.com/klauspost/compress/flate github.com/klauspost/compress/fse @@ -139,9 +139,11 @@ github.com/klauspost/compress/zstd github.com/klauspost/compress/zstd/internal/xxhash # github.com/mattn/go-colorable v0.1.8 github.com/mattn/go-colorable -# github.com/mattn/go-isatty v0.0.12 +# github.com/mattn/go-isatty v0.0.13 +## explicit github.com/mattn/go-isatty -# github.com/mattn/go-runewidth v0.0.12 +# github.com/mattn/go-runewidth v0.0.13 +## explicit github.com/mattn/go-runewidth # github.com/matttproud/golang_protobuf_extensions v1.0.1 github.com/matttproud/golang_protobuf_extensions/pbutil @@ -150,13 +152,12 @@ github.com/matttproud/golang_protobuf_extensions/pbutil github.com/oklog/ulid # github.com/pkg/errors v0.9.1 github.com/pkg/errors -# github.com/prometheus/client_golang v1.10.0 -## explicit +# github.com/prometheus/client_golang v1.11.0 
github.com/prometheus/client_golang/prometheus github.com/prometheus/client_golang/prometheus/internal # github.com/prometheus/client_model v0.2.0 github.com/prometheus/client_model/go -# github.com/prometheus/common v0.25.0 +# github.com/prometheus/common v0.28.0 ## explicit github.com/prometheus/common/expfmt github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg @@ -236,7 +237,7 @@ golang.org/x/lint/golint # golang.org/x/mod v0.4.2 golang.org/x/mod/module golang.org/x/mod/semver -# golang.org/x/net v0.0.0-20210520170846-37e1c6afe023 +# golang.org/x/net v0.0.0-20210525063256-abc453219eb5 ## explicit golang.org/x/net/context golang.org/x/net/context/ctxhttp @@ -260,7 +261,7 @@ golang.org/x/oauth2/jws golang.org/x/oauth2/jwt # golang.org/x/sync v0.0.0-20210220032951-036812b2e83c golang.org/x/sync/errgroup -# golang.org/x/sys v0.0.0-20210514084401-e8d321eab015 +# golang.org/x/sys v0.0.0-20210608053332-aa57babbf139 ## explicit golang.org/x/sys/execabs golang.org/x/sys/internal/unsafeheader @@ -271,7 +272,7 @@ golang.org/x/text/secure/bidirule golang.org/x/text/transform golang.org/x/text/unicode/bidi golang.org/x/text/unicode/norm -# golang.org/x/tools v0.1.1 +# golang.org/x/tools v0.1.2 golang.org/x/tools/cmd/goimports golang.org/x/tools/go/ast/astutil golang.org/x/tools/go/gcexportdata @@ -287,7 +288,7 @@ golang.org/x/tools/internal/imports # golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 golang.org/x/xerrors golang.org/x/xerrors/internal -# google.golang.org/api v0.47.0 +# google.golang.org/api v0.48.0 ## explicit google.golang.org/api/googleapi google.golang.org/api/googleapi/transport @@ -314,7 +315,7 @@ google.golang.org/appengine/internal/modules google.golang.org/appengine/internal/remote_api google.golang.org/appengine/internal/urlfetch google.golang.org/appengine/urlfetch -# google.golang.org/genproto v0.0.0-20210518161634-ec7691c0a37d +# google.golang.org/genproto v0.0.0-20210607140030-00d4fb20b1ae ## explicit google.golang.org/genproto/googleapis/api/annotations google.golang.org/genproto/googleapis/iam/v1 @@ -322,7 +323,6 @@ google.golang.org/genproto/googleapis/rpc/code google.golang.org/genproto/googleapis/rpc/status google.golang.org/genproto/googleapis/type/expr # google.golang.org/grpc v1.38.0 -## explicit google.golang.org/grpc google.golang.org/grpc/attributes google.golang.org/grpc/backoff
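
The MTD hunks above vendor new memory-technology-device ioctl request constants (`MEMGETINFO`, `MEMERASE`, the `OTP*` requests, and friends) plus the matching `MtdInfo`/`EraseInfo` types into `golang.org/x/sys/unix`. A minimal sketch of how they might be used follows — illustrative only, not part of the patch; `/dev/mtd0` is a placeholder device and error handling is abbreviated.

```go
package main

import (
	"fmt"
	"os"
	"unsafe"

	"golang.org/x/sys/unix"
)

func main() {
	// Hypothetical raw MTD character device; read-only access is enough
	// for MEMGETINFO.
	f, err := os.Open("/dev/mtd0")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// MEMGETINFO fills MtdInfo with the flash type, total size,
	// erase-block size, write unit and OOB size of the device.
	var info unix.MtdInfo
	_, _, errno := unix.Syscall(unix.SYS_IOCTL, f.Fd(),
		uintptr(unix.MEMGETINFO), uintptr(unsafe.Pointer(&info)))
	if errno != 0 {
		fmt.Fprintln(os.Stderr, errno)
		os.Exit(1)
	}
	fmt.Printf("type=%d size=%d erasesize=%d writesize=%d\n",
		info.Type, info.Size, info.Erasesize, info.Writesize)
	// Erasing a block would pass unix.MEMERASE with an
	// EraseInfo{Start, Length} the same way, on a device opened O_RDWR.
}
```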
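
The `exec_windows.go`/`types_windows.go` hunks above replace the raw `ProcThreadAttributeList` methods with a `ProcThreadAttributeListContainer` whose `Update` copies attribute values into `LocalAlloc`-backed memory and whose `Delete` frees those copies. A minimal lifecycle sketch follows, assuming the Win32 `PROC_THREAD_ATTRIBUTE_HANDLE_LIST` value (`0x20002`), which is not provided by this diff:

```go
//go:build windows

package procattr

import (
	"unsafe"

	"golang.org/x/sys/windows"
)

// Assumed Win32 value of PROC_THREAD_ATTRIBUTE_HANDLE_LIST; not part of the diff.
const procThreadAttributeHandleList = 0x20002

// attributeListFor builds a one-attribute list restricting handle inheritance
// to the given (non-empty) handle slice. The caller invokes the returned
// cleanup func once the new process has been created.
func attributeListFor(handles []windows.Handle) (*windows.ProcThreadAttributeList, func(), error) {
	al, err := windows.NewProcThreadAttributeList(1)
	if err != nil {
		return nil, nil, err
	}
	// Update copies the handle values into C-allocated memory, so the list
	// never holds Go-managed pointers (see the doc comment added above).
	if err := al.Update(procThreadAttributeHandleList,
		unsafe.Pointer(&handles[0]),
		uintptr(len(handles))*unsafe.Sizeof(handles[0])); err != nil {
		al.Delete()
		return nil, nil, err
	}
	// List yields the raw PROC_THREAD_ATTRIBUTE_LIST to place in
	// StartupInfoEx.ProcThreadAttributeList; Delete frees both the list
	// and the LocalAlloc copies made by Update.
	return al.List(), al.Delete, nil
}
```

Compared with the old API, `Update` no longer takes `flags`, `prevValue`, or `returnedSize`; the container tracks its `LocalAlloc` allocations in `heapAllocations` and releases them all in `Delete`.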