Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

This commit is contained in:
Aliaksandr Valialkin 2021-06-09 19:10:11 +03:00
commit fae6e4fc85
189 changed files with 4819 additions and 1560 deletions

View file

@@ -4,12 +4,12 @@ about: Create a report to help us improve
 title: ''
 labels: ''
 assignees: ''
 ---
 **Describe the bug**
 A clear and concise description of what the bug is.
-It would be a great [upgrading](https://docs.victoriametrics.com/#how-to-upgrade) to [the latest avaialble release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases)
+It would be a great [upgrading](https://docs.victoriametrics.com/#how-to-upgrade)
+to [the latest available release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases)
 and verifying whether the bug is reproducible there.
 It is also recommended reading [troubleshooting docs](https://docs.victoriametrics.com/#troubleshooting).
@@ -19,9 +19,22 @@ Steps to reproduce the behavior.
 **Expected behavior**
 A clear and concise description of what you expected to happen.
+**Logs**
+Check if any warnings or errors were logged by VictoriaMetrics components
+or components in communication with VictoriaMetrics (e.g. Prometheus, Grafana).
 **Screenshots**
 If applicable, add screenshots to help explain your problem.
+For VictoriaMetrics health-state issues please provide full-length screenshots
+of Grafana dashboards if possible:
+* [Grafana dashboard for single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
+* [Grafana dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176)
+See how to set up monitoring here:
+* [monitoring for single-node VictoriaMetrics](https://docs.victoriametrics.com/#monitoring)
+* [monitoring for VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#monitoring)
 **Version**
 The line returned when passing `--version` command line flag to binary. For example:
 ```
@@ -30,15 +43,5 @@ victoria-metrics-20190730-121249-heads-single-node-0-g671d9e55
 ```
 **Used command-line flags**
-Command-line flags are listed as `flag{name="httpListenAddr", value=":443"} 1` lines at the `/metrics` page.
-See the following docs for details:
-* [monitoring for single-node VictoriaMetrics](https://docs.victoriametrics.com/#monitoring)
-* [montioring for VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#monitoring)
-**Additional context**
-Add any other context about the problem here such as error logs from VictoriaMetrics and Prometheus,
-`/metrics` output, screenshots from the official Grafana dashboards for VictoriaMetrics:
-* [Grafana dashboard for single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
-* [Grafana dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176)
+Please provide applied command-line flags used for running VictoriaMetrics and its components.
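As the removed text above notes, every applied command-line flag is exported as a `flag{name=..., value=...}` metric on the `/metrics` page. A minimal sketch for collecting them, assuming a single-node VictoriaMetrics on the default `:8428`:

```
# List all applied command-line flags of a running instance
curl -s http://localhost:8428/metrics | grep '^flag'
```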

View file

@@ -60,7 +60,7 @@ jobs:
 GOOS=darwin go build -mod=vendor ./app/vmctl
 CGO_ENABLED=0 GOOS=windows go build -mod=vendor ./app/vmagent
 - name: Publish coverage
-  uses: codecov/codecov-action@v1.5.0
+  uses: codecov/codecov-action@v1.5.2
 with:
 file: ./coverage.txt

View file

@@ -278,11 +278,11 @@ copy-docs:
 # For The rest of docs is ordered manually.t
 docs-sync:
 	SRC=README.md DST=docs/Single-server-VictoriaMetrics.md ORDER=1 $(MAKE) copy-docs
-	SRC=app/vmagent/README.md DST=docs/vmagent.md ORDER=2 $(MAKE) copy-docs
+	SRC=app/vmagent/README.md DST=docs/vmagent.md ORDER=3 $(MAKE) copy-docs
-	SRC=app/vmalert/README.md DST=docs/vmalert.md ORDER=3 $(MAKE) copy-docs
+	SRC=app/vmalert/README.md DST=docs/vmalert.md ORDER=4 $(MAKE) copy-docs
-	SRC=app/vmauth/README.md DST=docs/vmauth.md ORDER=4 $(MAKE) copy-docs
+	SRC=app/vmauth/README.md DST=docs/vmauth.md ORDER=5 $(MAKE) copy-docs
-	SRC=app/vmbackup/README.md DST=docs/vmbackup.md ORDER=5 $(MAKE) copy-docs
+	SRC=app/vmbackup/README.md DST=docs/vmbackup.md ORDER=6 $(MAKE) copy-docs
-	SRC=app/vmrestore/README.md DST=docs/vmrestore.md ORDER=6 $(MAKE) copy-docs
+	SRC=app/vmrestore/README.md DST=docs/vmrestore.md ORDER=7 $(MAKE) copy-docs
-	SRC=app/vmctl/README.md DST=docs/vmctl.md ORDER=7 $(MAKE) copy-docs
+	SRC=app/vmctl/README.md DST=docs/vmctl.md ORDER=8 $(MAKE) copy-docs
-	SRC=app/vmgateway/README.md DST=docs/vmgateway.md ORDER=8 $(MAKE) copy-docs
+	SRC=app/vmgateway/README.md DST=docs/vmgateway.md ORDER=9 $(MAKE) copy-docs
-	SRC=app/vmbackupmanager/README.md DST=docs/vmbackupmanager.md ORDER=9 $(MAKE) copy-docs
+	SRC=app/vmbackupmanager/README.md DST=docs/vmbackupmanager.md ORDER=10 $(MAKE) copy-docs

View file

@@ -459,11 +459,7 @@ The `/api/v1/export` endpoint should return the following response:
 Data sent to VictoriaMetrics via `Graphite plaintext protocol` may be read via the following APIs:
 * [Graphite API](#graphite-api-usage)
-* [Prometheus querying API](#prometheus-querying-api-usage). Graphite metric names may special chars such as `-`, which may clash
-with [MetricsQL operations](https://docs.victoriametrics.com/MetricsQL.html). Such metrics can be queries via `{__name__="foo-bar.baz"}`.
-VictoriaMetrics supports `__graphite__` pseudo-label for selecting time series with Graphite-compatible filters in [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html).
-For example, `{__graphite__="foo.*.bar"}` is equivalent to `{__name__=~"foo[.][^.]*[.]bar"}`, but it works faster
-and it is easier to use when migrating from Graphite to VictoriaMetrics.
+* [Prometheus querying API](#prometheus-querying-api-usage). VictoriaMetrics supports `__graphite__` pseudo-label for selecting time series with Graphite-compatible filters in [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html). For example, `{__graphite__="foo.*.bar"}` is equivalent to `{__name__=~"foo[.][^.]*[.]bar"}`, but it works faster and it is easier to use when migrating from Graphite to VictoriaMetrics.
 * [go-graphite/carbonapi](https://github.com/go-graphite/carbonapi/blob/main/cmd/carbonapi/carbonapi.example.victoriametrics.yaml)
 ## How to send data from OpenTSDB-compatible agents
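The `__graphite__` pseudo-label above can be exercised directly against the Prometheus querying API. A hedged sketch, assuming a single-node instance on the default `:8428`:

```
# Select series matching the Graphite-style filter foo.*.bar
curl -s 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query={__graphite__="foo.*.bar"}'
```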
@@ -1766,6 +1762,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-line flags
 Whether to suppress scrape errors logging. The last error for each target is always available at '/targets' page even if scrape errors logging is suppressed
 -relabelConfig string
 Optional path to a file with relabeling rules, which are applied to all the ingested metrics. See https://docs.victoriametrics.com/#relabeling for details
+-relabelDebug
+Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, then the metrics aren't sent to storage. This is useful for debugging the relabeling configs
 -retentionPeriod value
 Data with timestamps outside the retentionPeriod is automatically deleted
 The following optional suffixes are supported: h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 1)
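A hedged sketch of the new `-relabelDebug` flag in action (binary path and config file name are illustrative):

```
# Metrics are logged before and after relabeling with -relabelConfig,
# then dropped instead of being stored
/path/to/victoria-metrics-prod -relabelConfig=relabel.yml -relabelDebug
```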

View file

@@ -219,10 +219,10 @@ and also provides the following actions:
 The relabeling can be defined in the following places:
-* At the `scrape_config -> relabel_configs` section in `-promscrape.config` file. This relabeling is applied to target labels.
+* At the `scrape_config -> relabel_configs` section in `-promscrape.config` file. This relabeling is applied to target labels. This relabeling can be debugged by passing `relabel_debug: true` option to the corresponding `scrape_config` section. In this case `vmagent` logs target labels before and after the relabeling and then drops the logged target.
-* At the `scrape_config -> metric_relabel_configs` section in `-promscrape.config` file. This relabeling is applied to all the scraped metrics in the given `scrape_config`.
+* At the `scrape_config -> metric_relabel_configs` section in `-promscrape.config` file. This relabeling is applied to all the scraped metrics in the given `scrape_config`. This relabeling can be debugged by passing `metric_relabel_debug: true` option to the corresponding `scrape_config` section. In this case `vmagent` logs metrics before and after the relabeling and then drops the logged metrics.
-* At the `-remoteWrite.relabelConfig` file. This relabeling is aplied to all the collected metrics before sending them to remote storage.
+* At the `-remoteWrite.relabelConfig` file. This relabeling is applied to all the collected metrics before sending them to remote storage. This relabeling can be debugged by passing `-remoteWrite.relabelDebug` command-line option to `vmagent`. In this case `vmagent` logs metrics before and after the relabeling and then drops all the logged metrics instead of sending them to remote storage.
-* At the `-remoteWrite.urlRelabelConfig` files. This relabeling is applied to metrics before sending them to the corresponding `-remoteWrite.url`.
+* At the `-remoteWrite.urlRelabelConfig` files. This relabeling is applied to metrics before sending them to the corresponding `-remoteWrite.url`. This relabeling can be debugged by passing `-remoteWrite.urlRelabelDebug` command-line options to `vmagent`. In this case `vmagent` logs metrics before and after the relabeling and then drops all the logged metrics instead of sending them to the corresponding `-remoteWrite.url`.
 You can read more about relabeling in the following articles:
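A minimal sketch of the scrape-time debug options described in the list above, assuming a locally built `vmagent` and illustrative file names:

```
cat > /tmp/promscrape.yml <<'EOF'
scrape_configs:
  - job_name: debug-demo
    static_configs:
      - targets: ['localhost:8428']
    metric_relabel_debug: true   # log metrics before/after relabeling, then drop them
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: 'go_.*'
        action: drop
EOF
./bin/vmagent -promscrape.config=/tmp/promscrape.yml \
  -remoteWrite.url=http://localhost:8428/api/v1/write
```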
@@ -252,13 +252,13 @@ By default `vmagent` reads the full response from scrape target into memory, the
 'match[]': ['{__name__!=""}']
 ```
-Note that `sample_limit` option doesn't work if stream parsing is enabled because the parsed data is pushed to remote storage as soon as it is parsed. Therefore the `sample_limit` option doesn't make sense during stream parsing.
+Note that the `sample_limit` option doesn't prevent data from being pushed to remote storage if stream parsing is enabled, because the parsed data is pushed to remote storage as soon as it is parsed.
 ## Scraping big number of targets
 A single `vmagent` instance can scrape tens of thousands of scrape targets. Sometimes this isn't enough due to limitations on CPU, network, RAM, etc.
-In this case scrape targets can be split among multiple `vmagent` instances (aka `vmagent` horizontal scaling and clustering).
+In this case scrape targets can be split among multiple `vmagent` instances (aka `vmagent` horizontal scaling, sharding and clustering).
 Each `vmagent` instance in the cluster must use identical `-promscrape.config` files with distinct `-promscrape.cluster.memberNum` values.
 The flag value must be in the range `0 ... N-1`, where `N` is the number of `vmagent` instances in the cluster.
 The number of `vmagent` instances in the cluster must be passed to `-promscrape.cluster.membersCount` command-line flag. For example, the following commands
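For illustration, a minimal sketch of such a two-instance setup consistent with the flags described above (binary and config paths are placeholders):

```
# Instance 0 scrapes its share of the targets
/path/to/vmagent -promscrape.cluster.membersCount=2 -promscrape.cluster.memberNum=0 -promscrape.config=/path/to/config.yml
# Instance 1 scrapes the remaining targets
/path/to/vmagent -promscrape.cluster.membersCount=2 -promscrape.cluster.memberNum=1 -promscrape.config=/path/to/config.yml
```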
@@ -721,6 +721,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
 Supports array of values separated by comma or specified via multiple flags.
 -remoteWrite.relabelConfig string
 Optional path to file with relabel_config entries. These entries are applied to all the metrics before sending them to -remoteWrite.url. See https://docs.victoriametrics.com/vmagent.html#relabeling for details
+-remoteWrite.relabelDebug
+Whether to log metrics before and after relabeling with -remoteWrite.relabelConfig. If the -remoteWrite.relabelDebug is enabled, then the metrics aren't sent to remote storage. This is useful for debugging the relabeling configs
 -remoteWrite.roundDigits array
 Round metric values to this number of decimal digits after the point before writing them to remote storage. Examples: -remoteWrite.roundDigits=2 would round 1.236 to 1.24, while -remoteWrite.roundDigits=-1 would round 126.78 to 130. By default digits rounding is disabled. Set it to 100 for disabling it for a particular remote storage. This option may be used for improving data compression for the stored metrics
 Supports array of values separated by comma or specified via multiple flags.
@@ -755,6 +757,9 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
 -remoteWrite.urlRelabelConfig array
 Optional path to relabel config for the corresponding -remoteWrite.url
 Supports an array of values separated by comma or specified via multiple flags.
+-remoteWrite.urlRelabelDebug array
+Whether to log metrics before and after relabeling with -remoteWrite.urlRelabelConfig. If the -remoteWrite.urlRelabelDebug is enabled, then the metrics aren't sent to the corresponding -remoteWrite.url. This is useful for debugging the relabeling configs
+Supports array of values separated by comma or specified via multiple flags.
 -sortLabels
 Whether to sort labels for incoming samples before writing them to all the configured remote storage systems. This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}. Enabled sorting for labels can slow down ingestion performance a bit
 -tls

View file

@@ -17,7 +17,12 @@ var (
 		"Pass multiple -remoteWrite.label flags in order to add multiple labels to metrics before sending them to remote storage")
 	relabelConfigPathGlobal = flag.String("remoteWrite.relabelConfig", "", "Optional path to file with relabel_config entries. These entries are applied to all the metrics "+
 		"before sending them to -remoteWrite.url. See https://docs.victoriametrics.com/vmagent.html#relabeling for details")
+	relabelDebugGlobal = flag.Bool("remoteWrite.relabelDebug", false, "Whether to log metrics before and after relabeling with -remoteWrite.relabelConfig. "+
+		"If the -remoteWrite.relabelDebug is enabled, then the metrics aren't sent to remote storage. This is useful for debugging the relabeling configs")
 	relabelConfigPaths = flagutil.NewArray("remoteWrite.urlRelabelConfig", "Optional path to relabel config for the corresponding -remoteWrite.url")
+	relabelDebug       = flagutil.NewArrayBool("remoteWrite.urlRelabelDebug", "Whether to log metrics before and after relabeling with -remoteWrite.urlRelabelConfig. "+
+		"If the -remoteWrite.urlRelabelDebug is enabled, then the metrics aren't sent to the corresponding -remoteWrite.url. "+
+		"This is useful for debugging the relabeling configs")
 )
 
 var labelsGlobal []prompbmarshal.Label
@@ -31,7 +36,7 @@ func CheckRelabelConfigs() error {
 func loadRelabelConfigs() (*relabelConfigs, error) {
 	var rcs relabelConfigs
 	if *relabelConfigPathGlobal != "" {
-		global, err := promrelabel.LoadRelabelConfigs(*relabelConfigPathGlobal)
+		global, err := promrelabel.LoadRelabelConfigs(*relabelConfigPathGlobal, *relabelDebugGlobal)
 		if err != nil {
 			return nil, fmt.Errorf("cannot load -remoteWrite.relabelConfig=%q: %w", *relabelConfigPathGlobal, err)
 		}
@@ -47,7 +52,7 @@ func loadRelabelConfigs() (*relabelConfigs, error) {
 			// Skip empty relabel config.
 			continue
 		}
-		prc, err := promrelabel.LoadRelabelConfigs(path)
+		prc, err := promrelabel.LoadRelabelConfigs(path, relabelDebug.GetOptionalArg(i))
 		if err != nil {
 			return nil, fmt.Errorf("cannot load relabel configs from -remoteWrite.urlRelabelConfig=%q: %w", path, err)
 		}
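`relabelDebug.GetOptionalArg(i)` pairs each `-remoteWrite.urlRelabelDebug` value with the `-remoteWrite.urlRelabelConfig` entry at the same index. A hedged usage sketch (URLs and file names are illustrative):

```
./bin/vmagent \
  -remoteWrite.url=http://storage-1:8428/api/v1/write \
  -remoteWrite.url=http://storage-2:8428/api/v1/write \
  -remoteWrite.urlRelabelConfig=relabel-1.yml,relabel-2.yml \
  -remoteWrite.urlRelabelDebug=true,false
```

Here only the first URL's relabeling would be logged and dropped, while the second URL keeps receiving data normally.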

View file

@@ -66,7 +66,17 @@ run-vmalert: vmalert
 		-remoteRead.url=http://localhost:8428 \
 		-external.label=cluster=east-1 \
 		-external.label=replica=a \
-		-evaluationInterval=3s
+		-evaluationInterval=3s \
+		-rule.configCheckInterval=10s
+
+replay-vmalert: vmalert
+	./bin/vmalert -rule=app/vmalert/config/testdata/rules-replay-good.rules \
+		-datasource.url=http://localhost:8428 \
+		-remoteWrite.url=http://localhost:8428 \
+		-external.label=cluster=east-1 \
+		-external.label=replica=a \
+		-replay.timeFrom=2021-05-11T07:21:43Z \
+		-replay.timeTo=2021-05-29T18:40:43Z
 
 vmalert-amd64:
 	CGO_ENABLED=1 GOARCH=amd64 $(MAKE) vmalert-local-with-goarch

View file

@@ -12,7 +12,8 @@ rules against configured address.
   support;
 * Integration with [Alertmanager](https://github.com/prometheus/alertmanager);
 * Keeps the alerts [state on restarts](#alerts-state-on-restarts);
-* Graphite datasource can be used for alerting and recording rules. See [these docs](#graphite) for details.
+* Graphite datasource can be used for alerting and recording rules. See [these docs](#graphite);
+* Recording and Alerting rules backfilling (aka `replay`). See [these docs](#rules-backfilling);
 * Lightweight without extra dependencies.
 ## Limitations
@@ -227,194 +228,296 @@ implements [Graphite Render API](https://graphite.readthedocs.io/en/stable/rende
 When using vmalert with both `graphite` and `prometheus` rules configured against cluster version of VM do not forget
 to set `-datasource.appendTypePrefix` flag to `true`, so vmalert can adjust URL prefix automatically based on query type.
+## Rules backfilling
+vmalert supports alerting and recording rules backfilling (aka `replay`). In replay mode vmalert
+can read the same rules configuration as usual, evaluate them on the given time range and backfill
+results via remote write to the configured storage. vmalert supports any PromQL/MetricsQL compatible
+data source for backfilling.
+
+### How it works
+In `replay` mode vmalert works as a CLI tool and exits immediately after the work is done.
+To run vmalert in `replay` mode:
+```
+./bin/vmalert -rule=path/to/your.rules \ # path to files with rules you usually use with vmalert
+    -datasource.url=http://localhost:8428 \ # PromQL/MetricsQL compatible datasource
+    -remoteWrite.url=http://localhost:8428 \ # remote write compatible storage to persist results
+    -replay.timeFrom=2021-05-11T07:21:43Z \ # time to begin replay
+    -replay.timeTo=2021-05-29T18:40:43Z # time to finish replay
+```
+The output of the command will look like the following:
+```
+Replay mode:
+from: 2021-05-11 07:21:43 +0000 UTC # set by -replay.timeFrom
+to: 2021-05-29 18:40:43 +0000 UTC # set by -replay.timeTo
+max data points per request: 1000 # set by -replay.maxDatapointsPerQuery
+
+Group "ReplayGroup"
+interval: 1m0s
+requests to make: 27
+max range per request: 16h40m0s
+> Rule "type:vm_cache_entries:rate5m" (ID: 1792509946081842725)
+27 / 27 [----------------------------------------------------------------------------------------------------] 100.00% 78 p/s
+> Rule "go_cgo_calls_count:rate5m" (ID: 17958425467471411582)
+27 / 27 [-----------------------------------------------------------------------------------------------------] 100.00% ? p/s
+
+Group "vmsingleReplay"
+interval: 30s
+requests to make: 54
+max range per request: 8h20m0s
+> Rule "RequestErrorsToAPI" (ID: 17645863024999990222)
+54 / 54 [-----------------------------------------------------------------------------------------------------] 100.00% ? p/s
+> Rule "TooManyLogs" (ID: 9042195394653477652)
+54 / 54 [-----------------------------------------------------------------------------------------------------] 100.00% ? p/s
+2021-06-07T09:59:12.098Z info app/vmalert/replay.go:68 replay finished! Imported 511734 samples
+```
+In `replay` mode all groups are executed sequentially one-by-one. Rules within a group are
+executed sequentially as well (the `concurrency` setting is ignored). vmalert sends the rule's expression
+to the [/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries) endpoint
+of the configured `-datasource.url`. The returned data is then processed according to the rule type and
+backfilled to `-remoteWrite.url` via the [Remote Write protocol](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations).
+vmalert respects the `evaluationInterval` value set by flag or per-group during the replay.
+
+#### Recording rules
+The result of recording rules `replay` should match the results of normal rules evaluation.
+
+#### Alerting rules
+The result of alerting rules `replay` is time series reflecting the [alert's state](#alerts-state-on-restarts).
+To see if a `replayed` alert has fired in the past, use the following PromQL/MetricsQL expression:
+```
+ALERTS{alertname="your_alertname", alertstate="firing"}
+```
+Execute the query against the storage which was used for `-remoteWrite.url` during the `replay`.
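A hedged sketch of running this check over HTTP, assuming the replay results were written to a single-node VictoriaMetrics on the default `:8428`:

```
curl -s 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=ALERTS{alertname="your_alertname", alertstate="firing"}'
```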
+
+### Additional configuration
+There are the following non-required `replay` flags:
+* `-replay.maxDatapointsPerQuery` - the max number of data points expected to receive in one request.
+In short, it affects the max time range for every `/query_range` request. The higher the value,
+the fewer requests will be issued during `replay`.
+* `-replay.ruleRetryAttempts` - when the datasource fails to respond, vmalert will make this number of retries
+per rule before giving up.
+* `-replay.rulesDelay` - delay between sequential rules execution. Important when there are chained rules
+(rules which depend on each other). It is expected that remote storage will be able to persist
+previously accepted data during the delay, so the data will be available for the subsequent queries.
+Keep it equal to or bigger than `-remoteWrite.flushInterval`.
+
+See the full description for these flags in `./vmalert --help`.
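For illustration, a hedged sketch combining these flags with the basic replay invocation shown earlier (rule path and time range are placeholders):

```
./bin/vmalert -rule=path/to/your.rules \
    -datasource.url=http://localhost:8428 \
    -remoteWrite.url=http://localhost:8428 \
    -replay.timeFrom=2021-05-11T07:21:43Z \
    -replay.timeTo=2021-05-29T18:40:43Z \
    -replay.maxDatapointsPerQuery=2000 \
    -replay.ruleRetryAttempts=3 \
    -replay.rulesDelay=5s
```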
+
+### Limitations
+* Graphite engine isn't supported yet;
+* `query` template function is disabled for performance reasons (might be changed in future);
 ## Configuration
 The shortlist of configuration flags is the following:
 ```
 -datasource.appendTypePrefix
 Whether to add type prefix to -datasource.url based on the query type. Set to true if sending different query types to the vmselect URL.
 -datasource.basicAuth.password string
 Optional basic auth password for -datasource.url
 -datasource.basicAuth.username string
 Optional basic auth username for -datasource.url
 -datasource.lookback duration
 Lookback defines how far into the past to look when evaluating queries. For example, if the datasource.lookback=5m then param "time" with value now()-5m will be added to every query.
 -datasource.maxIdleConnections int
 Defines the number of idle (keep-alive connections) to each configured datasource. Consider setting this value equal to the value: groups_total * group.concurrency. Too low a value may result in a high number of sockets in TIME_WAIT state. (default 100)
 -datasource.queryStep duration
 queryStep defines how far a value can fallback to when evaluating queries. For example, if datasource.queryStep=15s then param "step" with value "15s" will be added to every query. If queryStep isn't specified, rule's evaluationInterval will be used instead.
 -datasource.roundDigits int
 Adds "round_digits" GET param to datasource requests. In VM "round_digits" limits the number of digits after the decimal point in response values.
 -datasource.tlsCAFile string
 Optional path to TLS CA file to use for verifying connections to -datasource.url. By default, system CA is used
 -datasource.tlsCertFile string
 Optional path to client-side TLS certificate file to use when connecting to -datasource.url
 -datasource.tlsInsecureSkipVerify
 Whether to skip tls verification when connecting to -datasource.url
 -datasource.tlsKeyFile string
 Optional path to client-side TLS certificate key to use when connecting to -datasource.url
 -datasource.tlsServerName string
 Optional TLS server name to use for connections to -datasource.url. By default, the server name from -datasource.url is used
 -datasource.url string
 VictoriaMetrics or vmselect url. Required parameter. E.g. http://127.0.0.1:8428
 -dryRun -rule
 Whether to check only config files without running vmalert. The rule files are validated. The -rule flag must be specified.
 -enableTCP6
 Whether to enable IPv6 for listening and dialing. By default only IPv4 TCP and UDP is used
 -envflag.enable
 Whether to enable reading flags from environment variables additionally to command line. Command line flag values have priority over values from environment vars. Flags are read only from command line if this flag isn't set
 -envflag.prefix string
 Prefix for environment variables if -envflag.enable is set
 -evaluationInterval duration
 How often to evaluate the rules (default 1m0s)
 -external.alert.source string
 External Alert Source allows to override the Source link for alerts sent to AlertManager for cases where you want to build a custom link to Grafana, Prometheus or any other service.
 eg. 'explore?orgId=1&left=[\"now-1h\",\"now\",\"VictoriaMetrics\",{\"expr\": \"{{$expr|quotesEscape|crlfEscape|queryEscape}}\"},{\"mode\":\"Metrics\"},{\"ui\":[true,true,true,\"none\"]}]'. If empty '/api/v1/:groupID/alertID/status' is used
 -external.label array
 Optional label in the form 'name=value' to add to all generated recording rules and alerts. Pass multiple -label flags in order to add multiple label sets.
 Supports an array of values separated by comma or specified via multiple flags.
 -external.url string
 External URL is used as alert's source for sent alerts to the notifier
 -fs.disableMmap
 Whether to use pread() instead of mmap() for reading data files. By default mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread()
 -http.connTimeout duration
 Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s)
 -http.disableResponseCompression
 Disable compression of HTTP responses to save CPU resources. By default compression is enabled to save network bandwidth
 -http.idleConnTimeout duration
 Timeout for incoming idle http connections (default 1m0s)
 -http.maxGracefulShutdownDuration duration
 The maximum duration for a graceful shutdown of the HTTP server. A highly loaded server may require increased value for a graceful shutdown (default 7s)
 -http.pathPrefix string
 An optional prefix to add to all the paths handled by http server. For example, if '-http.pathPrefix=/foo/bar' is set, then all the http requests will be handled on '/foo/bar/*' paths. This may be useful for proxied requests. See https://www.robustperception.io/using-external-urls-and-proxies-with-prometheus
 -http.shutdownDelay duration
 Optional delay before http server shutdown. During this delay, the server returns non-OK responses from /health page, so load balancers can route new requests to other servers
 -httpAuth.password string
 Password for HTTP Basic Auth. The authentication is disabled if -httpAuth.username is empty
 -httpAuth.username string
 Username for HTTP Basic Auth. The authentication is disabled if empty. See also -httpAuth.password
 -httpListenAddr string
 Address to listen for http connections (default ":8880")
 -loggerDisableTimestamps
 Whether to disable writing timestamps in logs
 -loggerErrorsPerSecondLimit int
 Per-second limit on the number of ERROR messages. If more than the given number of errors are emitted per second, the remaining errors are suppressed. Zero values disable the rate limit
 -loggerFormat string
 Format for logs. Possible values: default, json (default "default")
 -loggerLevel string
 Minimum level of errors to log. Possible values: INFO, WARN, ERROR, FATAL, PANIC (default "INFO")
 -loggerOutput string
 Output for the logs. Supported values: stderr, stdout (default "stderr")
 -loggerTimezone string
 Timezone to use for timestamps in logs. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 or Local (default "UTC")
 -loggerWarnsPerSecondLimit int
 Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit
 -memory.allowedBytes size
 Allowed size of system memory VictoriaMetrics caches may occupy. This option overrides -memory.allowedPercent if set to a non-zero value. Too low a value may increase the cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache resulting in higher disk IO usage
 Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
 -memory.allowedPercent float
 Allowed percent of system memory VictoriaMetrics caches may occupy. See also -memory.allowedBytes. Too low a value may increase cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache which will result in higher disk IO usage (default 60)
 -metricsAuthKey string
 Auth key for /metrics. It overrides httpAuth settings
 -notifier.basicAuth.password array
 Optional basic auth password for -notifier.url
 Supports an array of values separated by comma or specified via multiple flags.
 -notifier.basicAuth.username array
 Optional basic auth username for -notifier.url
 Supports an array of values separated by comma or specified via multiple flags.
 -notifier.tlsCAFile array
 Optional path to TLS CA file to use for verifying connections to -notifier.url. By default system CA is used
 Supports an array of values separated by comma or specified via multiple flags.
 -notifier.tlsCertFile array
 Optional path to client-side TLS certificate file to use when connecting to -notifier.url
 Supports an array of values separated by comma or specified via multiple flags.
 -notifier.tlsInsecureSkipVerify array
 Whether to skip tls verification when connecting to -notifier.url
 Supports array of values separated by comma or specified via multiple flags.
 -notifier.tlsKeyFile array
 Optional path to client-side TLS certificate key to use when connecting to -notifier.url
 Supports an array of values separated by comma or specified via multiple flags.
 -notifier.tlsServerName array
 Optional TLS server name to use for connections to -notifier.url. By default the server name from -notifier.url is used
 Supports an array of values separated by comma or specified via multiple flags.
 -notifier.url array
 Prometheus alertmanager URL. Required parameter. e.g. http://127.0.0.1:9093
 Supports an array of values separated by comma or specified via multiple flags.
 -pprofAuthKey string
 Auth key for /debug/pprof. It overrides httpAuth settings
 -remoteRead.basicAuth.password string
 Optional basic auth password for -remoteRead.url
 -remoteRead.basicAuth.username string
 Optional basic auth username for -remoteRead.url
 -remoteRead.ignoreRestoreErrors
 Whether to ignore errors from remote storage when restoring alerts state on startup. (default true)
 -remoteRead.lookback duration
 Lookback defines how far to look into past for alerts timeseries. For example, if lookback=1h then range from now() to now()-1h will be scanned. (default 1h0m0s)
 -remoteRead.tlsCAFile string
 Optional path to TLS CA file to use for verifying connections to -remoteRead.url. By default system CA is used
 -remoteRead.tlsCertFile string
 Optional path to client-side TLS certificate file to use when connecting to -remoteRead.url
 -remoteRead.tlsInsecureSkipVerify
 Whether to skip tls verification when connecting to -remoteRead.url
 -remoteRead.tlsKeyFile string
 Optional path to client-side TLS certificate key to use when connecting to -remoteRead.url
 -remoteRead.tlsServerName string
 Optional TLS server name to use for connections to -remoteRead.url. By default the server name from -remoteRead.url is used
 -remoteRead.url vmalert
 Optional URL to VictoriaMetrics or vmselect that will be used to restore alerts state. This configuration makes sense only if vmalert was configured with `remoteWrite.url` before and has been successfully persisted its state. E.g. http://127.0.0.1:8428
 -remoteWrite.basicAuth.password string
 Optional basic auth password for -remoteWrite.url
 -remoteWrite.basicAuth.username string
 Optional basic auth username for -remoteWrite.url
 -remoteWrite.concurrency int
 Defines number of writers for concurrent writing into remote querier (default 1)
 -remoteWrite.flushInterval duration
 Defines interval of flushes to remote write endpoint (default 5s)
 -remoteWrite.maxBatchSize int
 Defines max number of timeseries to be flushed at once (default 1000)
 -remoteWrite.maxQueueSize int
 Defines the max number of pending datapoints to remote write endpoint (default 100000)
 -remoteWrite.tlsCAFile string
 Optional path to TLS CA file to use for verifying connections to -remoteWrite.url. By default system CA is used
 -remoteWrite.tlsCertFile string
 Optional path to client-side TLS certificate file to use when connecting to -remoteWrite.url
 -remoteWrite.tlsInsecureSkipVerify
 Whether to skip tls verification when connecting to -remoteWrite.url
 -remoteWrite.tlsKeyFile string
 Optional path to client-side TLS certificate key to use when connecting to -remoteWrite.url
 -remoteWrite.tlsServerName string
 Optional TLS server name to use for connections to -remoteWrite.url. By default the server name from -remoteWrite.url is used
 -remoteWrite.url string
 Optional URL to VictoriaMetrics or vminsert where to persist alerts state and recording rules results in form of timeseries. E.g. http://127.0.0.1:8428
+-replay.maxDatapointsPerQuery int
+Max number of data points expected in one request. The higher the value, the less requests will be made during replay. (default 1000)
+-replay.ruleRetryAttempts int
+Defines how many retries to make before giving up on rule if request for it returns an error. (default 5)
+-replay.rulesDelay duration
+Delay between rules evaluation within the group. Could be important if there are chained rules inside the group and processing needs to wait for previous rule results to be persisted by remote storage before evaluating the next rule. Keep it equal or bigger than -remoteWrite.flushInterval. (default 1s)
+-replay.timeFrom string
+The time filter in RFC3339 format to select time series with timestamp equal or higher than provided value. E.g. '2020-01-01T20:07:00Z'
+-replay.timeTo string
+The time filter in RFC3339 format to select timeseries with timestamp equal or lower than provided value. E.g. '2020-01-01T20:07:00Z'
 -rule array
 Path to the file with alert rules.
 Supports patterns. Flag can be specified multiple times.
 Examples:
 -rule="/path/to/file". Path to a single file with alerting rules
 -rule="dir/*.yaml" -rule="/*.yaml". Relative path to all .yaml files in "dir" folder,
 absolute path to all .yaml files in root.
 Rule files may contain %{ENV_VAR} placeholders, which are substituted by the corresponding env vars.
 Supports an array of values separated by comma or specified via multiple flags.
+-rule.configCheckInterval duration
+Interval for checking for changes in '-rule' files. By default the checking is disabled. Send SIGHUP signal in order to force config check for changes
 -rule.validateExpressions
 Whether to validate rules expressions via MetricsQL engine (default true)
 -rule.validateTemplates
 Whether to validate annotation and label templates (default true)
 -tls
 Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
 -tlsCertFile string
 Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
 -tlsKeyFile string
 Path to file with TLS key. Used only if -tls is set
 -version
 Show VictoriaMetrics version
 ```
 Pass `-help` to `vmalert` in order to see the full list of supported
 command-line flags with their descriptions.
-To reload configuration without `vmalert` restart send SIGHUP signal
-or send GET request to `/-/reload` endpoint.
+`vmalert` supports "hot" config reload via the following methods:
+* send SIGHUP signal to `vmalert` process;
+* send GET request to `/-/reload` endpoint;
+* configure `-rule.configCheckInterval` flag for periodic reload
+on config change.
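A minimal sketch of the first two reload methods, assuming `vmalert` listens on the default `:8880`:

```
# Trigger config reload via HTTP
curl http://localhost:8880/-/reload
# Or via SIGHUP
kill -HUP "$(pidof vmalert)"
```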
 ## Contributing

View file

@@ -19,15 +19,16 @@ import (
 // AlertingRule is basic alert entity
 type AlertingRule struct {
 	Type         datasource.Type
 	RuleID       uint64
 	Name         string
 	Expr         string
 	For          time.Duration
 	Labels       map[string]string
 	Annotations  map[string]string
 	GroupID      uint64
 	GroupName    string
+	EvalInterval time.Duration
 
 	q datasource.Querier
@@ -53,15 +54,16 @@ type alertingRuleMetrics struct {
 func newAlertingRule(qb datasource.QuerierBuilder, group *Group, cfg config.Rule) *AlertingRule {
 	ar := &AlertingRule{
 		Type:         cfg.Type,
 		RuleID:       cfg.ID,
 		Name:         cfg.Alert,
 		Expr:         cfg.Expr,
 		For:          cfg.For.Duration(),
 		Labels:       cfg.Labels,
 		Annotations:  cfg.Annotations,
 		GroupID:      group.ID(),
 		GroupName:    group.Name,
+		EvalInterval: group.Interval,
 		q: qb.BuildWithParams(datasource.QuerierParams{
 			DataSourceType:     &cfg.Type,
 			EvaluationInterval: group.Interval,
@ -126,9 +128,66 @@ func (ar *AlertingRule) ID() uint64 {
return ar.RuleID return ar.RuleID
} }
// ExecRange executes alerting rule on the given time range similarly to Exec.
// It doesn't update internal states of the Rule and meant to be used just
// to get time series for backfilling.
// It returns ALERT and ALERT_FOR_STATE time series as result.
func (ar *AlertingRule) ExecRange(ctx context.Context, start, end time.Time) ([]prompbmarshal.TimeSeries, error) {
series, err := ar.q.QueryRange(ctx, ar.Expr, start, end)
if err != nil {
return nil, err
}
var result []prompbmarshal.TimeSeries
qFn := func(query string) ([]datasource.Metric, error) {
return nil, fmt.Errorf("`query` template isn't supported in replay mode")
}
for _, s := range series {
// extra labels could contain templates, so we expand them first
labels, err := expandLabels(s, qFn, ar)
if err != nil {
return nil, fmt.Errorf("failed to expand labels: %s", err)
}
for k, v := range labels {
// apply extra labels to datasource
// so the hash key will be consistent on restore
s.SetLabel(k, v)
}
a, err := ar.newAlert(s, time.Time{}, qFn) // initial alert
if err != nil {
return nil, fmt.Errorf("failed to create alert: %s", err)
}
if ar.For == 0 { // if alert is instant
a.State = notifier.StateFiring
for i := range s.Values {
result = append(result, ar.alertToTimeSeries(a, s.Timestamps[i])...)
}
continue
}
// if alert with For > 0
prevT := time.Time{}
//activeAt := time.Time{}
for i := range s.Values {
at := time.Unix(s.Timestamps[i], 0)
if at.Sub(prevT) > ar.EvalInterval {
// reset to Pending if the gap between data points exceeds EvalInterval
a.State = notifier.StatePending
a.Start = at
} else if at.Sub(a.Start) >= ar.For {
a.State = notifier.StateFiring
}
prevT = at
result = append(result, ar.alertToTimeSeries(a, s.Timestamps[i])...)
}
}
return result, nil
}
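To make the intent of ExecRange concrete, here is a hedged sketch of how a replay driver might use it; `replayRule` and the `push` callback are hypothetical names, and rule construction is elided:

```go
// replayRule backfills one rule over [start, end] (hypothetical helper).
// ExecRange returns ready-to-write ALERT and ALERT_FOR_STATE samples,
// so the driver only needs to forward them to remote write.
func replayRule(ctx context.Context, ar *AlertingRule, start, end time.Time,
	push func(prompbmarshal.TimeSeries) error) error {
	tss, err := ar.ExecRange(ctx, start, end)
	if err != nil {
		return fmt.Errorf("rule %q: ExecRange failed: %w", ar.Name, err)
	}
	for _, ts := range tss {
		if err := push(ts); err != nil {
			return err
		}
	}
	return nil
}
```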
// Exec executes AlertingRule expression via the given Querier. // Exec executes AlertingRule expression via the given Querier.
// Based on the Querier results AlertingRule maintains notifier.Alerts // Based on the Querier results AlertingRule maintains notifier.Alerts
func (ar *AlertingRule) Exec(ctx context.Context, series bool) ([]prompbmarshal.TimeSeries, error) { func (ar *AlertingRule) Exec(ctx context.Context) ([]prompbmarshal.TimeSeries, error) {
qMetrics, err := ar.q.Query(ctx, ar.Expr) qMetrics, err := ar.q.Query(ctx, ar.Expr)
ar.mu.Lock() ar.mu.Lock()
defer ar.mu.Unlock() defer ar.mu.Unlock()
@ -168,9 +227,9 @@ func (ar *AlertingRule) Exec(ctx context.Context, series bool) ([]prompbmarshal.
} }
updated[h] = struct{}{} updated[h] = struct{}{}
if a, ok := ar.alerts[h]; ok { if a, ok := ar.alerts[h]; ok {
if a.Value != m.Value { if a.Value != m.Values[0] {
// update Value field with latest value // update Value field with latest value
a.Value = m.Value a.Value = m.Values[0]
// and re-exec template since Value can be used // and re-exec template since Value can be used
// in annotations // in annotations
a.Annotations, err = a.ExecTemplate(qFn, ar.Annotations) a.Annotations, err = a.ExecTemplate(qFn, ar.Annotations)
@ -208,10 +267,7 @@ func (ar *AlertingRule) Exec(ctx context.Context, series bool) ([]prompbmarshal.
alertsFired.Inc() alertsFired.Inc()
} }
} }
if series { return ar.toTimeSeries(ar.lastExecTime.Unix()), nil
return ar.toTimeSeries(ar.lastExecTime), nil
}
return nil, nil
} }
func expandLabels(m datasource.Metric, q notifier.QueryFn, ar *AlertingRule) (map[string]string, error) { func expandLabels(m datasource.Metric, q notifier.QueryFn, ar *AlertingRule) (map[string]string, error) {
@ -221,13 +277,13 @@ func expandLabels(m datasource.Metric, q notifier.QueryFn, ar *AlertingRule) (ma
} }
tpl := notifier.AlertTplData{ tpl := notifier.AlertTplData{
Labels: metricLabels, Labels: metricLabels,
Value: m.Value, Value: m.Values[0],
Expr: ar.Expr, Expr: ar.Expr,
} }
return notifier.ExecTemplate(q, ar.Labels, tpl) return notifier.ExecTemplate(q, ar.Labels, tpl)
} }
func (ar *AlertingRule) toTimeSeries(timestamp time.Time) []prompbmarshal.TimeSeries { func (ar *AlertingRule) toTimeSeries(timestamp int64) []prompbmarshal.TimeSeries {
var tss []prompbmarshal.TimeSeries var tss []prompbmarshal.TimeSeries
for _, a := range ar.alerts { for _, a := range ar.alerts {
if a.State == notifier.StateInactive { if a.State == notifier.StateInactive {
@ -251,6 +307,7 @@ func (ar *AlertingRule) UpdateWith(r Rule) error {
ar.For = nr.For ar.For = nr.For
ar.Labels = nr.Labels ar.Labels = nr.Labels
ar.Annotations = nr.Annotations ar.Annotations = nr.Annotations
ar.EvalInterval = nr.EvalInterval
ar.q = nr.q ar.q = nr.q
return nil return nil
} }
@ -279,13 +336,15 @@ func (ar *AlertingRule) newAlert(m datasource.Metric, start time.Time, qFn notif
GroupID: ar.GroupID, GroupID: ar.GroupID,
Name: ar.Name, Name: ar.Name,
Labels: map[string]string{}, Labels: map[string]string{},
Value: m.Value, Value: m.Values[0],
Start: start, Start: start,
Expr: ar.Expr, Expr: ar.Expr,
} }
// label defined here to make override possible by // label defined here to make override possible by
// time series labels. // time series labels.
a.Labels[alertGroupNameLabel] = ar.GroupName if ar.GroupName != "" {
a.Labels[alertGroupNameLabel] = ar.GroupName
}
for _, l := range m.Labels { for _, l := range m.Labels {
// drop __name__ to be consistent with Prometheus alerting // drop __name__ to be consistent with Prometheus alerting
if l.Name == "__name__" { if l.Name == "__name__" {
@ -374,7 +433,7 @@ const (
) )
// alertToTimeSeries converts the given alert with the given timestamp to timeseries // alertToTimeSeries converts the given alert with the given timestamp to timeseries
func (ar *AlertingRule) alertToTimeSeries(a *notifier.Alert, timestamp time.Time) []prompbmarshal.TimeSeries { func (ar *AlertingRule) alertToTimeSeries(a *notifier.Alert, timestamp int64) []prompbmarshal.TimeSeries {
var tss []prompbmarshal.TimeSeries var tss []prompbmarshal.TimeSeries
tss = append(tss, alertToTimeSeries(ar.Name, a, timestamp)) tss = append(tss, alertToTimeSeries(ar.Name, a, timestamp))
if ar.For > 0 { if ar.For > 0 {
@ -383,7 +442,7 @@ func (ar *AlertingRule) alertToTimeSeries(a *notifier.Alert, timestamp time.Time
return tss return tss
} }
func alertToTimeSeries(name string, a *notifier.Alert, timestamp time.Time) prompbmarshal.TimeSeries { func alertToTimeSeries(name string, a *notifier.Alert, timestamp int64) prompbmarshal.TimeSeries {
labels := make(map[string]string) labels := make(map[string]string)
for k, v := range a.Labels { for k, v := range a.Labels {
labels[k] = v labels[k] = v
@ -391,19 +450,19 @@ func alertToTimeSeries(name string, a *notifier.Alert, timestamp time.Time) prom
labels["__name__"] = alertMetricName labels["__name__"] = alertMetricName
labels[alertNameLabel] = name labels[alertNameLabel] = name
labels[alertStateLabel] = a.State.String() labels[alertStateLabel] = a.State.String()
return newTimeSeries(1, labels, timestamp) return newTimeSeries([]float64{1}, []int64{timestamp}, labels)
} }
// alertForToTimeSeries returns a timeseries that represents // alertForToTimeSeries returns a timeseries that represents
// state of active alerts, where value is the time when the alert became active // state of active alerts, where value is the time when the alert became active
func alertForToTimeSeries(name string, a *notifier.Alert, timestamp time.Time) prompbmarshal.TimeSeries { func alertForToTimeSeries(name string, a *notifier.Alert, timestamp int64) prompbmarshal.TimeSeries {
labels := make(map[string]string) labels := make(map[string]string)
for k, v := range a.Labels { for k, v := range a.Labels {
labels[k] = v labels[k] = v
} }
labels["__name__"] = alertForStateMetricName labels["__name__"] = alertForStateMetricName
labels[alertNameLabel] = name labels[alertNameLabel] = name
return newTimeSeries(float64(a.Start.Unix()), labels, timestamp) return newTimeSeries([]float64{float64(a.Start.Unix())}, []int64{timestamp}, labels)
} }
// Restore restores the state of active alerts based on previously written timeseries. // Restore restores the state of active alerts based on previously written timeseries.
@ -445,7 +504,7 @@ func (ar *AlertingRule) Restore(ctx context.Context, q datasource.Querier, lookb
m.Labels = append(m.Labels, l) m.Labels = append(m.Labels, l)
} }
a, err := ar.newAlert(m, time.Unix(int64(m.Value), 0), qFn) a, err := ar.newAlert(m, time.Unix(int64(m.Values[0]), 0), qFn)
if err != nil { if err != nil {
return fmt.Errorf("failed to create alert: %w", err) return fmt.Errorf("failed to create alert: %w", err)
} }

View file

@ -24,11 +24,11 @@ func TestAlertingRule_ToTimeSeries(t *testing.T) {
newTestAlertingRule("instant", 0), newTestAlertingRule("instant", 0),
&notifier.Alert{State: notifier.StateFiring}, &notifier.Alert{State: notifier.StateFiring},
[]prompbmarshal.TimeSeries{ []prompbmarshal.TimeSeries{
newTimeSeries(1, map[string]string{ newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": alertMetricName, "__name__": alertMetricName,
alertStateLabel: notifier.StateFiring.String(), alertStateLabel: notifier.StateFiring.String(),
alertNameLabel: "instant", alertNameLabel: "instant",
}, timestamp), }),
}, },
}, },
{ {
@ -38,13 +38,13 @@ func TestAlertingRule_ToTimeSeries(t *testing.T) {
"instance": "bar", "instance": "bar",
}}, }},
[]prompbmarshal.TimeSeries{ []prompbmarshal.TimeSeries{
newTimeSeries(1, map[string]string{ newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": alertMetricName, "__name__": alertMetricName,
alertStateLabel: notifier.StateFiring.String(), alertStateLabel: notifier.StateFiring.String(),
alertNameLabel: "instant extra labels", alertNameLabel: "instant extra labels",
"job": "foo", "job": "foo",
"instance": "bar", "instance": "bar",
}, timestamp), }),
}, },
}, },
{ {
@ -54,48 +54,52 @@ func TestAlertingRule_ToTimeSeries(t *testing.T) {
"__name__": "bar", "__name__": "bar",
}}, }},
[]prompbmarshal.TimeSeries{ []prompbmarshal.TimeSeries{
newTimeSeries(1, map[string]string{ newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": alertMetricName, "__name__": alertMetricName,
alertStateLabel: notifier.StateFiring.String(), alertStateLabel: notifier.StateFiring.String(),
alertNameLabel: "instant labels override", alertNameLabel: "instant labels override",
}, timestamp), }),
}, },
}, },
{ {
newTestAlertingRule("for", time.Second), newTestAlertingRule("for", time.Second),
&notifier.Alert{State: notifier.StateFiring, Start: timestamp.Add(time.Second)}, &notifier.Alert{State: notifier.StateFiring, Start: timestamp.Add(time.Second)},
[]prompbmarshal.TimeSeries{ []prompbmarshal.TimeSeries{
newTimeSeries(1, map[string]string{ newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": alertMetricName, "__name__": alertMetricName,
alertStateLabel: notifier.StateFiring.String(), alertStateLabel: notifier.StateFiring.String(),
alertNameLabel: "for", alertNameLabel: "for",
}, timestamp), }),
newTimeSeries(float64(timestamp.Add(time.Second).Unix()), map[string]string{ newTimeSeries([]float64{float64(timestamp.Add(time.Second).Unix())},
"__name__": alertForStateMetricName, []int64{timestamp.UnixNano()},
alertNameLabel: "for", map[string]string{
}, timestamp), "__name__": alertForStateMetricName,
alertNameLabel: "for",
}),
}, },
}, },
{ {
newTestAlertingRule("for pending", 10*time.Second), newTestAlertingRule("for pending", 10*time.Second),
&notifier.Alert{State: notifier.StatePending, Start: timestamp.Add(time.Second)}, &notifier.Alert{State: notifier.StatePending, Start: timestamp.Add(time.Second)},
[]prompbmarshal.TimeSeries{ []prompbmarshal.TimeSeries{
newTimeSeries(1, map[string]string{ newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": alertMetricName, "__name__": alertMetricName,
alertStateLabel: notifier.StatePending.String(), alertStateLabel: notifier.StatePending.String(),
alertNameLabel: "for pending", alertNameLabel: "for pending",
}, timestamp), }),
newTimeSeries(float64(timestamp.Add(time.Second).Unix()), map[string]string{ newTimeSeries([]float64{float64(timestamp.Add(time.Second).Unix())},
"__name__": alertForStateMetricName, []int64{timestamp.UnixNano()},
alertNameLabel: "for pending", map[string]string{
}, timestamp), "__name__": alertForStateMetricName,
alertNameLabel: "for pending",
}),
}, },
}, },
} }
for _, tc := range testCases { for _, tc := range testCases {
t.Run(tc.rule.Name, func(t *testing.T) { t.Run(tc.rule.Name, func(t *testing.T) {
tc.rule.alerts[tc.alert.ID] = tc.alert tc.rule.alerts[tc.alert.ID] = tc.alert
tss := tc.rule.toTimeSeries(timestamp) tss := tc.rule.toTimeSeries(timestamp.Unix())
if err := compareTimeSeries(t, tc.expTS, tss); err != nil { if err := compareTimeSeries(t, tc.expTS, tss); err != nil {
t.Fatalf("timeseries missmatch: %s", err) t.Fatalf("timeseries missmatch: %s", err)
} }
@ -118,7 +122,7 @@ func TestAlertingRule_Exec(t *testing.T) {
{ {
newTestAlertingRule("empty labels", 0), newTestAlertingRule("empty labels", 0),
[][]datasource.Metric{ [][]datasource.Metric{
{datasource.Metric{}}, {datasource.Metric{Values: []float64{1}, Timestamps: []int64{1}}},
}, },
map[uint64]*notifier.Alert{ map[uint64]*notifier.Alert{
hash(datasource.Metric{}): {State: notifier.StateFiring}, hash(datasource.Metric{}): {State: notifier.StateFiring},
@ -299,7 +303,7 @@ func TestAlertingRule_Exec(t *testing.T) {
for _, step := range tc.steps { for _, step := range tc.steps {
fq.reset() fq.reset()
fq.add(step...) fq.add(step...)
if _, err := tc.rule.Exec(context.TODO(), false); err != nil { if _, err := tc.rule.Exec(context.TODO()); err != nil {
t.Fatalf("unexpected err: %s", err) t.Fatalf("unexpected err: %s", err)
} }
// artificial delay between applying steps // artificial delay between applying steps
@ -321,6 +325,166 @@ func TestAlertingRule_Exec(t *testing.T) {
} }
} }
func TestAlertingRule_ExecRange(t *testing.T) {
testCases := []struct {
rule *AlertingRule
data []datasource.Metric
expAlerts []*notifier.Alert
}{
{
newTestAlertingRule("empty", 0),
[]datasource.Metric{},
nil,
},
{
newTestAlertingRule("empty labels", 0),
[]datasource.Metric{
{Values: []float64{1}, Timestamps: []int64{1}},
},
[]*notifier.Alert{
{State: notifier.StateFiring},
},
},
{
newTestAlertingRule("single-firing", 0),
[]datasource.Metric{
metricWithLabels(t, "name", "foo"),
},
[]*notifier.Alert{
{
Labels: map[string]string{"name": "foo"},
State: notifier.StateFiring,
},
},
},
{
newTestAlertingRule("single-firing-on-range", 0),
[]datasource.Metric{
{Values: []float64{1, 1, 1}, Timestamps: []int64{1e3, 2e3, 3e3}},
},
[]*notifier.Alert{
{State: notifier.StateFiring},
{State: notifier.StateFiring},
{State: notifier.StateFiring},
},
},
{
newTestAlertingRule("for-pending", time.Second),
[]datasource.Metric{
{Values: []float64{1, 1, 1}, Timestamps: []int64{1, 3, 5}},
},
[]*notifier.Alert{
{State: notifier.StatePending, Start: time.Unix(1, 0)},
{State: notifier.StatePending, Start: time.Unix(3, 0)},
{State: notifier.StatePending, Start: time.Unix(5, 0)},
},
},
{
newTestAlertingRule("for-firing", 3*time.Second),
[]datasource.Metric{
{Values: []float64{1, 1, 1}, Timestamps: []int64{1, 3, 5}},
},
[]*notifier.Alert{
{State: notifier.StatePending, Start: time.Unix(1, 0)},
{State: notifier.StatePending, Start: time.Unix(1, 0)},
{State: notifier.StateFiring, Start: time.Unix(1, 0)},
},
},
{
newTestAlertingRule("for=>pending=>firing=>pending=>firing=>pending", time.Second),
[]datasource.Metric{
{Values: []float64{1, 1, 1, 1, 1}, Timestamps: []int64{1, 2, 5, 6, 20}},
},
[]*notifier.Alert{
{State: notifier.StatePending, Start: time.Unix(1, 0)},
{State: notifier.StateFiring, Start: time.Unix(1, 0)},
{State: notifier.StatePending, Start: time.Unix(5, 0)},
{State: notifier.StateFiring, Start: time.Unix(5, 0)},
{State: notifier.StatePending, Start: time.Unix(20, 0)},
},
},
{
newTestAlertingRule("multi-series-for=>pending=>pending=>firing", 3*time.Second),
[]datasource.Metric{
{Values: []float64{1, 1, 1}, Timestamps: []int64{1, 3, 5}},
{Values: []float64{1, 1}, Timestamps: []int64{1, 5},
Labels: []datasource.Label{{Name: "foo", Value: "bar"}},
},
},
[]*notifier.Alert{
{State: notifier.StatePending, Start: time.Unix(1, 0)},
{State: notifier.StatePending, Start: time.Unix(1, 0)},
{State: notifier.StateFiring, Start: time.Unix(1, 0)},
//
{State: notifier.StatePending, Start: time.Unix(1, 0),
Labels: map[string]string{
"foo": "bar",
}},
{State: notifier.StatePending, Start: time.Unix(5, 0),
Labels: map[string]string{
"foo": "bar",
}},
},
},
{
newTestRuleWithLabels("multi-series-firing", "source", "vm"),
[]datasource.Metric{
{Values: []float64{1, 1}, Timestamps: []int64{1, 100}},
{Values: []float64{1, 1}, Timestamps: []int64{1, 5},
Labels: []datasource.Label{{Name: "foo", Value: "bar"}},
},
},
[]*notifier.Alert{
{State: notifier.StateFiring, Labels: map[string]string{
"source": "vm",
}},
{State: notifier.StateFiring, Labels: map[string]string{
"source": "vm",
}},
//
{State: notifier.StateFiring, Labels: map[string]string{
"foo": "bar",
"source": "vm",
}},
{State: notifier.StateFiring, Labels: map[string]string{
"foo": "bar",
"source": "vm",
}},
},
},
}
fakeGroup := Group{Name: "TestRule_ExecRange"}
for _, tc := range testCases {
t.Run(tc.rule.Name, func(t *testing.T) {
fq := &fakeQuerier{}
tc.rule.q = fq
tc.rule.GroupID = fakeGroup.ID()
fq.add(tc.data...)
gotTS, err := tc.rule.ExecRange(context.TODO(), time.Now(), time.Now())
if err != nil {
t.Fatalf("unexpected err: %s", err)
}
var expTS []prompbmarshal.TimeSeries
var j int
for _, series := range tc.data {
for _, timestamp := range series.Timestamps {
expTS = append(expTS, tc.rule.alertToTimeSeries(tc.expAlerts[j], timestamp)...)
j++
}
}
if len(gotTS) != len(expTS) {
t.Fatalf("expected %d time series; got %d", len(expTS), len(gotTS))
}
for i := range expTS {
got, exp := gotTS[i], expTS[i]
if !reflect.DeepEqual(got, exp) {
t.Fatalf("%d: expected \n%v but got \n%v", i, exp, got)
}
}
})
}
}
func TestAlertingRule_Restore(t *testing.T) { func TestAlertingRule_Restore(t *testing.T) {
testCases := []struct { testCases := []struct {
rule *AlertingRule rule *AlertingRule
@ -443,14 +607,14 @@ func TestAlertingRule_Exec_Negative(t *testing.T) {
// successful attempt // successful attempt
fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "bar")) fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "bar"))
_, err := ar.Exec(context.TODO(), false) _, err := ar.Exec(context.TODO())
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
// label `job` will collide with rule extra label and will make both time series equal // label `job` will collide with rule extra label and will make both time series equal
fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "baz")) fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "baz"))
_, err = ar.Exec(context.TODO(), false) _, err = ar.Exec(context.TODO())
if !errors.Is(err, errDuplicate) { if !errors.Is(err, errDuplicate) {
t.Fatalf("expected to have %s error; got %s", errDuplicate, err) t.Fatalf("expected to have %s error; got %s", errDuplicate, err)
} }
@ -459,7 +623,7 @@ func TestAlertingRule_Exec_Negative(t *testing.T) {
expErr := "connection reset by peer" expErr := "connection reset by peer"
fq.setErr(errors.New(expErr)) fq.setErr(errors.New(expErr))
_, err = ar.Exec(context.TODO(), false) _, err = ar.Exec(context.TODO())
if err == nil { if err == nil {
t.Fatalf("expected to get err; got nil") t.Fatalf("expected to get err; got nil")
} }
@ -484,17 +648,15 @@ func TestAlertingRule_Template(t *testing.T) {
hash(metricWithLabels(t, "region", "east", "instance", "foo")): { hash(metricWithLabels(t, "region", "east", "instance", "foo")): {
Annotations: map[string]string{}, Annotations: map[string]string{},
Labels: map[string]string{ Labels: map[string]string{
alertGroupNameLabel: "", "region": "east",
"region": "east", "instance": "foo",
"instance": "foo",
}, },
}, },
hash(metricWithLabels(t, "region", "east", "instance", "bar")): { hash(metricWithLabels(t, "region", "east", "instance", "bar")): {
Annotations: map[string]string{}, Annotations: map[string]string{},
Labels: map[string]string{ Labels: map[string]string{
alertGroupNameLabel: "", "region": "east",
"region": "east", "instance": "bar",
"instance": "bar",
}, },
}, },
}, },
@ -519,9 +681,8 @@ func TestAlertingRule_Template(t *testing.T) {
map[uint64]*notifier.Alert{ map[uint64]*notifier.Alert{
hash(metricWithLabels(t, "region", "east", "instance", "foo")): { hash(metricWithLabels(t, "region", "east", "instance", "foo")): {
Labels: map[string]string{ Labels: map[string]string{
alertGroupNameLabel: "", "instance": "foo",
"instance": "foo", "region": "east",
"region": "east",
}, },
Annotations: map[string]string{ Annotations: map[string]string{
"summary": `Too high connection number for "foo" for region east`, "summary": `Too high connection number for "foo" for region east`,
@ -530,9 +691,8 @@ func TestAlertingRule_Template(t *testing.T) {
}, },
hash(metricWithLabels(t, "region", "east", "instance", "bar")): { hash(metricWithLabels(t, "region", "east", "instance", "bar")): {
Labels: map[string]string{ Labels: map[string]string{
alertGroupNameLabel: "", "instance": "bar",
"instance": "bar", "region": "east",
"region": "east",
}, },
Annotations: map[string]string{ Annotations: map[string]string{
"summary": `Too high connection number for "bar" for region east`, "summary": `Too high connection number for "bar" for region east`,
@ -549,7 +709,7 @@ func TestAlertingRule_Template(t *testing.T) {
tc.rule.GroupID = fakeGroup.ID() tc.rule.GroupID = fakeGroup.ID()
tc.rule.q = fq tc.rule.q = fq
fq.add(tc.metrics...) fq.add(tc.metrics...)
if _, err := tc.rule.Exec(context.TODO(), false); err != nil { if _, err := tc.rule.Exec(context.TODO()); err != nil {
t.Fatalf("unexpected err: %s", err) t.Fatalf("unexpected err: %s", err)
} }
for hash, expAlert := range tc.expAlerts { for hash, expAlert := range tc.expAlerts {
@ -579,5 +739,5 @@ func newTestRuleWithLabels(name string, labels ...string) *AlertingRule {
} }
func newTestAlertingRule(name string, waitFor time.Duration) *AlertingRule { func newTestAlertingRule(name string, waitFor time.Duration) *AlertingRule {
return &AlertingRule{Name: name, alerts: make(map[uint64]*notifier.Alert), For: waitFor} return &AlertingRule{Name: name, alerts: make(map[uint64]*notifier.Alert), For: waitFor, EvalInterval: waitFor}
} }

View file

@ -0,0 +1,39 @@
groups:
- name: ReplayGroup
interval: 1m
concurrency: 1
rules:
- record: type:vm_cache_entries:rate5m
expr: sum(rate(vm_cache_entries[5m])) by (type)
labels:
recording: true
- record: go_cgo_calls_count:rate5m
expr: rate(go_cgo_calls_count{job="vmdb"}[5m])
labels:
recording: true
- name: vmsingleReplay
interval: 30s
concurrency: 2
rules:
- alert: RequestErrorsToAPI
expr: increase(vm_http_request_errors_total[5m]) > 0
for: 15m
labels:
severity: warning
annotations:
dashboard: "http://localhost:3000/d/wNf0q_kZk?viewPanel=35&var-instance={{ $labels.instance }}"
summary: "Too many errors served for path {{ $labels.path }} (instance {{ $labels.instance }})"
description: "Requests to path {{ $labels.path }} are receiving errors.
Please verify if clients are sending correct requests."
- alert: TooManyLogs
expr: sum(increase(vm_log_messages_total{level!="info"}[5m])) by (job, instance) > 0
for: 15m
labels:
severity: warning
annotations:
dashboard: "http://localhost:3000/d/wNf0q_kZk?viewPanel=67&var-instance={{ $labels.instance }}"
summary: "Too many logs printed for job \"{{ $labels.job }}\" ({{ $labels.instance }})"
description: "Logging rate for job \"{{ $labels.job }}\" ({{ $labels.instance }}) is {{ $value }} for last 15m.\n
Worth to check logs for specific error messages."

View file

@ -2,26 +2,33 @@ package datasource
import ( import (
"context" "context"
"time"
) )
// Querier interface wraps Query and QueryRange methods
type Querier interface {
Query(ctx context.Context, query string) ([]Metric, error)
QueryRange(ctx context.Context, query string, from, to time.Time) ([]Metric, error)
}
// QuerierBuilder builds Querier with given params. // QuerierBuilder builds Querier with given params.
type QuerierBuilder interface { type QuerierBuilder interface {
BuildWithParams(params QuerierParams) Querier BuildWithParams(params QuerierParams) Querier
} }
// Querier interface wraps Query method which
// executes given query and returns list of Metrics
// as result
type Querier interface {
Query(ctx context.Context, query string) ([]Metric, error)
}
// QuerierParams params for Querier.
type QuerierParams struct {
DataSourceType *Type
EvaluationInterval time.Duration
// see https://docs.victoriametrics.com/#prometheus-querying-api-enhancements
ExtraLabels map[string]string
}
// Metric is the basic entity which should be returned by datasource // Metric is the basic entity which should be returned by datasource
// It represents single data point with full list of labels
type Metric struct { type Metric struct {
Labels []Label Labels []Label
Timestamp int64 Timestamps []int64
Value float64 Values []float64
} }
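For clarity, a small sketch of the reworked multi-datapoint Metric in use; the label values are illustrative:

```go
// one labeled series carrying three datapoints from a range query
m := Metric{
	Values:     []float64{1, 2, 3},
	Timestamps: []int64{1600000000, 1600000015, 1600000030},
}
m.AddLabel("__name__", "vm_rows")
m.SetLabel("job", "vmstorage") // adds the label since it isn't set yet
_ = m.Label("job")             // "vmstorage"
```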
// SetLabel adds a new label or updates the existing one // SetLabel adds a new label or updates the existing one

View file

@ -0,0 +1,18 @@
package datasource
import "testing"
func TestMetric_Label(t *testing.T) {
m := &Metric{}
m.AddLabel("foo", "bar")
checkEqualString(t, "bar", m.Label("foo"))
m.SetLabel("foo", "baz")
checkEqualString(t, "baz", m.Label("foo"))
m.SetLabel("qux", "quux")
checkEqualString(t, "quux", m.Label("qux"))
checkEqualString(t, "", m.Label("non-existing"))
}

View file

@ -2,76 +2,13 @@ package datasource
import ( import (
"context" "context"
"encoding/json"
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"net/http" "net/http"
"strconv"
"strings" "strings"
"time" "time"
) )
type response struct {
Status string `json:"status"`
Data struct {
ResultType string `json:"resultType"`
Result []struct {
Labels map[string]string `json:"metric"`
TV [2]interface{} `json:"value"`
} `json:"result"`
} `json:"data"`
ErrorType string `json:"errorType"`
Error string `json:"error"`
}
func (r response) metrics() ([]Metric, error) {
var ms []Metric
var m Metric
var f float64
var err error
for i, res := range r.Data.Result {
f, err = strconv.ParseFloat(res.TV[1].(string), 64)
if err != nil {
return nil, fmt.Errorf("metric %v, unable to parse float64 from %s: %w", res, res.TV[1], err)
}
m.Labels = nil
for k, v := range r.Data.Result[i].Labels {
m.AddLabel(k, v)
}
m.Timestamp = int64(res.TV[0].(float64))
m.Value = f
ms = append(ms, m)
}
return ms, nil
}
type graphiteResponse []graphiteResponseTarget
type graphiteResponseTarget struct {
Target string `json:"target"`
Tags map[string]string `json:"tags"`
DataPoints [][2]float64 `json:"datapoints"`
}
func (r graphiteResponse) metrics() []Metric {
var ms []Metric
for _, res := range r {
if len(res.DataPoints) < 1 {
continue
}
var m Metric
// add only last value to the result.
last := res.DataPoints[len(res.DataPoints)-1]
m.Value = last[0]
m.Timestamp = int64(last[1])
for k, v := range res.Tags {
m.AddLabel(k, v)
}
ms = append(ms, m)
}
return ms
}
// VMStorage represents vmstorage entity with ability to read and write metrics // VMStorage represents vmstorage entity with ability to read and write metrics
type VMStorage struct { type VMStorage struct {
c *http.Client c *http.Client
@ -88,20 +25,6 @@ type VMStorage struct {
extraLabels []string extraLabels []string
} }
const queryPath = "/api/v1/query"
const graphitePath = "/render"
const prometheusPrefix = "/prometheus"
const graphitePrefix = "/graphite"
// QuerierParams params for Querier.
type QuerierParams struct {
DataSourceType *Type
EvaluationInterval time.Duration
// see https://docs.victoriametrics.com/#prometheus-querying-api-enhancements
ExtraLabels map[string]string
}
// Clone makes clone of VMStorage, shares http client. // Clone makes clone of VMStorage, shares http client.
func (s *VMStorage) Clone() *VMStorage { func (s *VMStorage) Clone() *VMStorage {
return &VMStorage{ return &VMStorage{
@ -149,11 +72,21 @@ func NewVMStorage(baseURL, basicAuthUser, basicAuthPass string, lookBack time.Du
// Query executes the given query and returns parsed response // Query executes the given query and returns parsed response
func (s *VMStorage) Query(ctx context.Context, query string) ([]Metric, error) { func (s *VMStorage) Query(ctx context.Context, query string) ([]Metric, error) {
req, err := s.prepareReq(query, time.Now()) req, err := s.newRequestPOST()
if err != nil { if err != nil {
return nil, err return nil, err
} }
ts := time.Now()
switch s.dataSourceType.name {
case "", prometheusType:
s.setPrometheusInstantReqParams(req, query, ts)
case graphiteType:
s.setGraphiteReqParams(req, query, ts)
default:
return nil, fmt.Errorf("engine not found: %q", s.dataSourceType.name)
}
resp, err := s.do(ctx, req) resp, err := s.do(ctx, req)
if err != nil { if err != nil {
return nil, err return nil, err
@ -169,25 +102,32 @@ func (s *VMStorage) Query(ctx context.Context, query string) ([]Metric, error) {
return parseFn(req, resp) return parseFn(req, resp)
} }
func (s *VMStorage) prepareReq(query string, timestamp time.Time) (*http.Request, error) {
req, err := http.NewRequest("POST", s.datasourceURL, nil)
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", "application/json; charset=utf-8")
if s.basicAuthPass != "" {
req.SetBasicAuth(s.basicAuthUser, s.basicAuthPass)
}
switch s.dataSourceType.name {
case "", prometheusType:
s.setPrometheusReqParams(req, query, timestamp)
case graphiteType:
s.setGraphiteReqParams(req, query, timestamp)
default:
return nil, fmt.Errorf("engine not found: %q", s.dataSourceType.name)
}
return req, nil
}
// QueryRange executes the given query on the given time range.
// For Prometheus type see https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries
// Graphite type isn't supported.
func (s *VMStorage) QueryRange(ctx context.Context, query string, start, end time.Time) ([]Metric, error) {
if s.dataSourceType.name != prometheusType {
return nil, fmt.Errorf("%q is not supported for QueryRange", s.dataSourceType.name)
}
req, err := s.newRequestPOST()
if err != nil {
return nil, err
}
if start.IsZero() {
return nil, fmt.Errorf("start param is missing")
}
if end.IsZero() {
return nil, fmt.Errorf("end param is missing")
}
s.setPrometheusRangeReqParams(req, query, start, end)
resp, err := s.do(ctx, req)
if err != nil {
return nil, err
}
defer func() {
_ = resp.Body.Close()
}()
return parsePrometheusResponse(req, resp)
}
func (s *VMStorage) do(ctx context.Context, req *http.Request) (*http.Response, error) { func (s *VMStorage) do(ctx context.Context, req *http.Request) (*http.Response, error) {
@ -203,80 +143,14 @@ func (s *VMStorage) do(ctx context.Context, req *http.Request) (*http.Response,
return resp, nil return resp, nil
} }
func (s *VMStorage) newRequestPOST() (*http.Request, error) {
req, err := http.NewRequest("POST", s.datasourceURL, nil)
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", "application/json; charset=utf-8")
if s.basicAuthPass != "" {
req.SetBasicAuth(s.basicAuthUser, s.basicAuthPass)
}
return req, nil
}
func (s *VMStorage) setPrometheusReqParams(r *http.Request, query string, timestamp time.Time) {
if s.appendTypePrefix {
r.URL.Path += prometheusPrefix
}
r.URL.Path += queryPath
q := r.URL.Query()
q.Set("query", query)
if s.lookBack > 0 {
timestamp = timestamp.Add(-s.lookBack)
}
if s.evaluationInterval > 0 {
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1232
timestamp = timestamp.Truncate(s.evaluationInterval)
// set step as evaluationInterval by default
q.Set("step", s.evaluationInterval.String())
}
q.Set("time", fmt.Sprintf("%d", timestamp.Unix()))
if s.queryStep > 0 {
// override step with user-specified value
q.Set("step", s.queryStep.String())
}
if s.roundDigits != "" {
q.Set("round_digits", s.roundDigits)
}
for _, l := range s.extraLabels {
q.Add("extra_label", l)
}
r.URL.RawQuery = q.Encode()
}
func (s *VMStorage) setGraphiteReqParams(r *http.Request, query string, timestamp time.Time) {
if s.appendTypePrefix {
r.URL.Path += graphitePrefix
}
r.URL.Path += graphitePath
q := r.URL.Query()
q.Set("format", "json")
q.Set("target", query)
from := "-5min"
if s.lookBack > 0 {
lookBack := timestamp.Add(-s.lookBack)
from = strconv.FormatInt(lookBack.Unix(), 10)
}
q.Set("from", from)
q.Set("until", "now")
r.URL.RawQuery = q.Encode()
}
const (
statusSuccess, statusError, rtVector = "success", "error", "vector"
)
func parsePrometheusResponse(req *http.Request, resp *http.Response) ([]Metric, error) {
r := &response{}
if err := json.NewDecoder(resp.Body).Decode(r); err != nil {
return nil, fmt.Errorf("error parsing prometheus metrics for %s: %w", req.URL, err)
}
if r.Status == statusError {
return nil, fmt.Errorf("response error, query: %s, errorType: %s, error: %s", req.URL, r.ErrorType, r.Error)
}
if r.Status != statusSuccess {
return nil, fmt.Errorf("unknown status: %s, Expected success or error ", r.Status)
}
if r.Data.ResultType != rtVector {
return nil, fmt.Errorf("unknown result type:%s. Expected vector", r.Data.ResultType)
}
return r.metrics()
}
func parseGraphiteResponse(req *http.Request, resp *http.Response) ([]Metric, error) {
r := &graphiteResponse{}
if err := json.NewDecoder(resp.Body).Decode(r); err != nil {
return nil, fmt.Errorf("error parsing graphite metrics for %s: %w", req.URL, err)
}
return r.metrics(), nil
} }

View file

@ -0,0 +1,67 @@
package datasource
import (
"encoding/json"
"fmt"
"net/http"
"strconv"
"time"
)
type graphiteResponse []graphiteResponseTarget
type graphiteResponseTarget struct {
Target string `json:"target"`
Tags map[string]string `json:"tags"`
DataPoints [][2]float64 `json:"datapoints"`
}
func (r graphiteResponse) metrics() []Metric {
var ms []Metric
for _, res := range r {
if len(res.DataPoints) < 1 {
continue
}
var m Metric
// add only last value to the result.
last := res.DataPoints[len(res.DataPoints)-1]
m.Values = append(m.Values, last[0])
m.Timestamps = append(m.Timestamps, int64(last[1]))
for k, v := range res.Tags {
m.AddLabel(k, v)
}
ms = append(ms, m)
}
return ms
}
func parseGraphiteResponse(req *http.Request, resp *http.Response) ([]Metric, error) {
r := &graphiteResponse{}
if err := json.NewDecoder(resp.Body).Decode(r); err != nil {
return nil, fmt.Errorf("error parsing graphite metrics for %s: %w", req.URL, err)
}
return r.metrics(), nil
}
const (
graphitePath = "/render"
graphitePrefix = "/graphite"
)
func (s *VMStorage) setGraphiteReqParams(r *http.Request, query string, timestamp time.Time) {
if s.appendTypePrefix {
r.URL.Path += graphitePrefix
}
r.URL.Path += graphitePath
q := r.URL.Query()
q.Set("format", "json")
q.Set("target", query)
from := "-5min"
if s.lookBack > 0 {
lookBack := timestamp.Add(-s.lookBack)
from = strconv.FormatInt(lookBack.Unix(), 10)
}
q.Set("from", from)
q.Set("until", "now")
r.URL.RawQuery = q.Encode()
}
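A short sketch of the request this produces for a Graphite target, assuming `appendTypePrefix` is false and a 1m lookback:

```go
s := &VMStorage{dataSourceType: NewGraphiteType(), lookBack: time.Minute}
req, _ := s.newRequestPOST()
s.setGraphiteReqParams(req, "constantLine(10)", time.Now())
// req.URL.Path now ends with "/render" and the query string carries
// format=json, from=<unix(now-1m)>, target=<encoded query>, until=now
fmt.Println(req.URL.String())
```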

View file

@ -0,0 +1,170 @@
package datasource
import (
"encoding/json"
"fmt"
"net/http"
"strconv"
"time"
)
type promResponse struct {
Status string `json:"status"`
ErrorType string `json:"errorType"`
Error string `json:"error"`
Data struct {
ResultType string `json:"resultType"`
Result json.RawMessage `json:"result"`
} `json:"data"`
}
type promInstant struct {
Result []struct {
Labels map[string]string `json:"metric"`
TV [2]interface{} `json:"value"`
} `json:"result"`
}
type promRange struct {
Result []struct {
Labels map[string]string `json:"metric"`
TVs [][2]interface{} `json:"values"`
} `json:"result"`
}
func (r promInstant) metrics() ([]Metric, error) {
var result []Metric
for i, res := range r.Result {
f, err := strconv.ParseFloat(res.TV[1].(string), 64)
if err != nil {
return nil, fmt.Errorf("metric %v, unable to parse float64 from %s: %w", res, res.TV[1], err)
}
// use a fresh Metric per result so the returned metrics
// don't share slice backing arrays
var m Metric
for k, v := range r.Result[i].Labels {
m.AddLabel(k, v)
}
m.Timestamps = append(m.Timestamps, int64(res.TV[0].(float64)))
m.Values = append(m.Values, f)
result = append(result, m)
}
return result, nil
}
func (r promRange) metrics() ([]Metric, error) {
var result []Metric
for i, res := range r.Result {
var m Metric
for _, tv := range res.TVs {
f, err := strconv.ParseFloat(tv[1].(string), 64)
if err != nil {
return nil, fmt.Errorf("metric %v, unable to parse float64 from %s: %w", res, tv[1], err)
}
m.Values = append(m.Values, f)
m.Timestamps = append(m.Timestamps, int64(tv[0].(float64)))
}
if len(m.Values) < 1 || len(m.Timestamps) < 1 {
return nil, fmt.Errorf("metric %v contains no values", res)
}
m.Labels = nil
for k, v := range r.Result[i].Labels {
m.AddLabel(k, v)
}
result = append(result, m)
}
return result, nil
}
const (
statusSuccess, statusError = "success", "error"
rtVector, rtMatrix = "vector", "matrix"
)
func parsePrometheusResponse(req *http.Request, resp *http.Response) ([]Metric, error) {
r := &promResponse{}
if err := json.NewDecoder(resp.Body).Decode(r); err != nil {
return nil, fmt.Errorf("error parsing prometheus metrics for %s: %w", req.URL, err)
}
if r.Status == statusError {
return nil, fmt.Errorf("response error, query: %s, errorType: %s, error: %s", req.URL, r.ErrorType, r.Error)
}
if r.Status != statusSuccess {
return nil, fmt.Errorf("unknown status: %s, Expected success or error ", r.Status)
}
switch r.Data.ResultType {
case rtVector:
var pi promInstant
if err := json.Unmarshal(r.Data.Result, &pi.Result); err != nil {
return nil, fmt.Errorf("umarshal err %s; \n %#v", err, string(r.Data.Result))
}
return pi.metrics()
case rtMatrix:
var pr promRange
if err := json.Unmarshal(r.Data.Result, &pr.Result); err != nil {
return nil, err
}
return pr.metrics()
default:
return nil, fmt.Errorf("unknown result type %q", r.Data.ResultType)
}
}
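To show the shape of the payloads this parser accepts, a hedged sketch feeding it a canned matrix response (the same payload used by the range-query test added in this commit); the request URL is a placeholder:

```go
body := `{"status":"success","data":{"resultType":"matrix","result":[{"metric":{"__name__":"vm_rows"},"values":[[1583786142,"13763"]]}]}}`
req, _ := http.NewRequest("POST", "http://localhost:8428/api/v1/query_range", nil)
resp := &http.Response{Body: ioutil.NopCloser(strings.NewReader(body))}
ms, err := parsePrometheusResponse(req, resp)
if err != nil {
	log.Fatalf("unexpected parse error: %s", err)
}
// ms[0].Timestamps == []int64{1583786142}; ms[0].Values == []float64{13763}
```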
const (
prometheusInstantPath = "/api/v1/query"
prometheusRangePath = "/api/v1/query_range"
prometheusPrefix = "/prometheus"
)
func (s *VMStorage) setPrometheusInstantReqParams(r *http.Request, query string, timestamp time.Time) {
if s.appendTypePrefix {
r.URL.Path += prometheusPrefix
}
r.URL.Path += prometheusInstantPath
q := r.URL.Query()
if s.lookBack > 0 {
timestamp = timestamp.Add(-s.lookBack)
}
if s.evaluationInterval > 0 {
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1232
timestamp = timestamp.Truncate(s.evaluationInterval)
}
q.Set("time", fmt.Sprintf("%d", timestamp.Unix()))
r.URL.RawQuery = q.Encode()
s.setPrometheusReqParams(r, query)
}
func (s *VMStorage) setPrometheusRangeReqParams(r *http.Request, query string, start, end time.Time) {
if s.appendTypePrefix {
r.URL.Path += prometheusPrefix
}
r.URL.Path += prometheusRangePath
q := r.URL.Query()
q.Add("start", fmt.Sprintf("%d", start.Unix()))
q.Add("end", fmt.Sprintf("%d", end.Unix()))
r.URL.RawQuery = q.Encode()
s.setPrometheusReqParams(r, query)
}
func (s *VMStorage) setPrometheusReqParams(r *http.Request, query string) {
q := r.URL.Query()
q.Set("query", query)
if s.evaluationInterval > 0 {
// set step as evaluationInterval by default
q.Set("step", s.evaluationInterval.String())
}
if s.queryStep > 0 {
// override step with user-specified value
q.Set("step", s.queryStep.String())
}
if s.roundDigits != "" {
q.Set("round_digits", s.roundDigits)
}
for _, l := range s.extraLabels {
q.Add("extra_label", l)
}
r.URL.RawQuery = q.Encode()
}

View file

@ -7,6 +7,7 @@ import (
"net/http/httptest" "net/http/httptest"
"reflect" "reflect"
"strconv" "strconv"
"strings"
"testing" "testing"
"time" "time"
) )
@ -19,7 +20,7 @@ var (
queryRender = "constantLine(10)" queryRender = "constantLine(10)"
) )
func TestVMSelectQuery(t *testing.T) { func TestVMInstantQuery(t *testing.T) {
mux := http.NewServeMux() mux := http.NewServeMux()
mux.HandleFunc("/", func(_ http.ResponseWriter, _ *http.Request) { mux.HandleFunc("/", func(_ http.ResponseWriter, _ *http.Request) {
t.Errorf("should not be called") t.Errorf("should not be called")
@ -103,9 +104,9 @@ func TestVMSelectQuery(t *testing.T) {
t.Fatalf("expected 1 metric got %d in %+v", len(m), m) t.Fatalf("expected 1 metric got %d in %+v", len(m), m)
} }
expected := Metric{ expected := Metric{
Labels: []Label{{Value: "vm_rows", Name: "__name__"}}, Labels: []Label{{Value: "vm_rows", Name: "__name__"}},
Timestamp: 1583786142, Timestamps: []int64{1583786142},
Value: 13763, Values: []float64{13763},
} }
if !reflect.DeepEqual(m[0], expected) { if !reflect.DeepEqual(m[0], expected) {
t.Fatalf("unexpected metric %+v want %+v", m[0], expected) t.Fatalf("unexpected metric %+v want %+v", m[0], expected)
@ -122,44 +123,145 @@ func TestVMSelectQuery(t *testing.T) {
t.Fatalf("expected 1 metric got %d in %+v", len(m), m) t.Fatalf("expected 1 metric got %d in %+v", len(m), m)
} }
expected = Metric{ expected = Metric{
Labels: []Label{{Value: "constantLine(10)", Name: "name"}}, Labels: []Label{{Value: "constantLine(10)", Name: "name"}},
Timestamp: 1611758403, Timestamps: []int64{1611758403},
Value: 10, Values: []float64{10},
} }
if !reflect.DeepEqual(m[0], expected) { if !reflect.DeepEqual(m[0], expected) {
t.Fatalf("unexpected metric %+v want %+v", m[0], expected) t.Fatalf("unexpected metric %+v want %+v", m[0], expected)
} }
} }
func TestPrepareReq(t *testing.T) { func TestVMRangeQuery(t *testing.T) {
mux := http.NewServeMux()
mux.HandleFunc("/", func(_ http.ResponseWriter, _ *http.Request) {
t.Errorf("should not be called")
})
c := -1
mux.HandleFunc("/api/v1/query_range", func(w http.ResponseWriter, r *http.Request) {
c++
if r.Method != http.MethodPost {
t.Errorf("expected POST method got %s", r.Method)
}
if name, pass, _ := r.BasicAuth(); name != basicAuthName || pass != basicAuthPass {
t.Errorf("expected %s:%s as basic auth got %s:%s", basicAuthName, basicAuthPass, name, pass)
}
if r.URL.Query().Get("query") != query {
t.Errorf("expected %s in query param, got %s", query, r.URL.Query().Get("query"))
}
startTS := r.URL.Query().Get("start")
if startTS == "" {
t.Errorf("expected 'start' in query param, got nil instead")
}
if _, err := strconv.ParseInt(startTS, 10, 64); err != nil {
t.Errorf("failed to parse 'start' query param: %s", err)
}
endTS := r.URL.Query().Get("end")
if endTS == "" {
t.Errorf("expected 'end' in query param, got nil instead")
}
if _, err := strconv.ParseInt(endTS, 10, 64); err != nil {
t.Errorf("failed to parse 'end' query param: %s", err)
}
switch c {
case 0:
w.Write([]byte(`{"status":"success","data":{"resultType":"matrix","result":[{"metric":{"__name__":"vm_rows"},"values":[[1583786142,"13763"]]}]}}`))
}
})
srv := httptest.NewServer(mux)
defer srv.Close()
s := NewVMStorage(srv.URL, basicAuthName, basicAuthPass, time.Minute, 0, false, srv.Client())
p := NewPrometheusType()
pq := s.BuildWithParams(QuerierParams{DataSourceType: &p, EvaluationInterval: 15 * time.Second})
_, err := pq.QueryRange(ctx, query, time.Now(), time.Time{})
expectError(t, err, "is missing")
_, err = pq.QueryRange(ctx, query, time.Time{}, time.Now())
expectError(t, err, "is missing")
start, end := time.Now().Add(-time.Minute), time.Now()
m, err := pq.QueryRange(ctx, query, start, end)
if err != nil {
t.Fatalf("unexpected %s", err)
}
if len(m) != 1 {
t.Fatalf("expected 1 metric got %d in %+v", len(m), m)
}
expected := Metric{
Labels: []Label{{Value: "vm_rows", Name: "__name__"}},
Timestamps: []int64{1583786142},
Values: []float64{13763},
}
if !reflect.DeepEqual(m[0], expected) {
t.Fatalf("unexpected metric %+v want %+v", m[0], expected)
}
g := NewGraphiteType()
gq := s.BuildWithParams(QuerierParams{DataSourceType: &g})
_, err = gq.QueryRange(ctx, queryRender, start, end)
expectError(t, err, "is not supported")
}
func TestRequestParams(t *testing.T) {
query := "up" query := "up"
timestamp := time.Date(2001, 2, 3, 4, 5, 6, 0, time.UTC) timestamp := time.Date(2001, 2, 3, 4, 5, 6, 0, time.UTC)
testCases := []struct { testCases := []struct {
name string name string
vm *VMStorage queryRange bool
checkFn func(t *testing.T, r *http.Request) vm *VMStorage
checkFn func(t *testing.T, r *http.Request)
}{ }{
{ {
"prometheus path", "prometheus path",
false,
&VMStorage{ &VMStorage{
dataSourceType: NewPrometheusType(), dataSourceType: NewPrometheusType(),
}, },
func(t *testing.T, r *http.Request) { func(t *testing.T, r *http.Request) {
checkEqualString(t, queryPath, r.URL.Path) checkEqualString(t, prometheusInstantPath, r.URL.Path)
}, },
}, },
{ {
"prometheus prefix", "prometheus prefix",
false,
&VMStorage{ &VMStorage{
dataSourceType: NewPrometheusType(), dataSourceType: NewPrometheusType(),
appendTypePrefix: true, appendTypePrefix: true,
}, },
func(t *testing.T, r *http.Request) { func(t *testing.T, r *http.Request) {
checkEqualString(t, prometheusPrefix+queryPath, r.URL.Path) checkEqualString(t, prometheusPrefix+prometheusInstantPath, r.URL.Path)
},
},
{
"prometheus range path",
true,
&VMStorage{
dataSourceType: NewPrometheusType(),
},
func(t *testing.T, r *http.Request) {
checkEqualString(t, prometheusRangePath, r.URL.Path)
},
},
{
"prometheus range prefix",
true,
&VMStorage{
dataSourceType: NewPrometheusType(),
appendTypePrefix: true,
},
func(t *testing.T, r *http.Request) {
checkEqualString(t, prometheusPrefix+prometheusRangePath, r.URL.Path)
}, },
}, },
{ {
"graphite path", "graphite path",
false,
&VMStorage{ &VMStorage{
dataSourceType: NewGraphiteType(), dataSourceType: NewGraphiteType(),
}, },
@ -169,6 +271,7 @@ func TestPrepareReq(t *testing.T) {
}, },
{ {
"graphite prefix", "graphite prefix",
false,
&VMStorage{ &VMStorage{
dataSourceType: NewGraphiteType(), dataSourceType: NewGraphiteType(),
appendTypePrefix: true, appendTypePrefix: true,
@ -179,14 +282,38 @@ func TestPrepareReq(t *testing.T) {
}, },
{ {
"default params", "default params",
false,
&VMStorage{}, &VMStorage{},
func(t *testing.T, r *http.Request) { func(t *testing.T, r *http.Request) {
exp := fmt.Sprintf("query=%s&time=%d", query, timestamp.Unix()) exp := fmt.Sprintf("query=%s&time=%d", query, timestamp.Unix())
checkEqualString(t, exp, r.URL.RawQuery) checkEqualString(t, exp, r.URL.RawQuery)
}, },
}, },
{
"default range params",
true,
&VMStorage{},
func(t *testing.T, r *http.Request) {
exp := fmt.Sprintf("end=%d&query=%s&start=%d", timestamp.Unix(), query, timestamp.Unix())
checkEqualString(t, exp, r.URL.RawQuery)
},
},
{ {
"basic auth", "basic auth",
false,
&VMStorage{
basicAuthUser: "foo",
basicAuthPass: "bar",
},
func(t *testing.T, r *http.Request) {
u, p, _ := r.BasicAuth()
checkEqualString(t, "foo", u)
checkEqualString(t, "bar", p)
},
},
{
"basic auth range",
true,
&VMStorage{ &VMStorage{
basicAuthUser: "foo", basicAuthUser: "foo",
basicAuthPass: "bar", basicAuthPass: "bar",
@ -199,6 +326,7 @@ func TestPrepareReq(t *testing.T) {
}, },
{ {
"lookback", "lookback",
false,
&VMStorage{ &VMStorage{
lookBack: time.Minute, lookBack: time.Minute,
}, },
@ -209,6 +337,7 @@ func TestPrepareReq(t *testing.T) {
}, },
{ {
"evaluation interval", "evaluation interval",
false,
&VMStorage{ &VMStorage{
evaluationInterval: 15 * time.Second, evaluationInterval: 15 * time.Second,
}, },
@ -221,6 +350,7 @@ func TestPrepareReq(t *testing.T) {
}, },
{ {
"lookback + evaluation interval", "lookback + evaluation interval",
false,
&VMStorage{ &VMStorage{
lookBack: time.Minute, lookBack: time.Minute,
evaluationInterval: 15 * time.Second, evaluationInterval: 15 * time.Second,
@ -235,6 +365,7 @@ func TestPrepareReq(t *testing.T) {
}, },
{ {
"step override", "step override",
false,
&VMStorage{ &VMStorage{
queryStep: time.Minute, queryStep: time.Minute,
}, },
@ -245,6 +376,7 @@ func TestPrepareReq(t *testing.T) {
}, },
{ {
"round digits", "round digits",
false,
&VMStorage{ &VMStorage{
roundDigits: "10", roundDigits: "10",
}, },
@ -255,6 +387,7 @@ func TestPrepareReq(t *testing.T) {
}, },
{ {
"extra labels", "extra labels",
false,
&VMStorage{ &VMStorage{
extraLabels: []string{ extraLabels: []string{
"env=prod", "env=prod",
@ -266,14 +399,39 @@ func TestPrepareReq(t *testing.T) {
checkEqualString(t, exp, r.URL.RawQuery) checkEqualString(t, exp, r.URL.RawQuery)
}, },
}, },
{
"extra labels range",
true,
&VMStorage{
extraLabels: []string{
"env=prod",
"query=es=cape",
},
},
func(t *testing.T, r *http.Request) {
exp := fmt.Sprintf("end=%d&extra_label=env%%3Dprod&extra_label=query%%3Des%%3Dcape&query=%s&start=%d",
timestamp.Unix(), query, timestamp.Unix())
checkEqualString(t, exp, r.URL.RawQuery)
},
},
} }
for _, tc := range testCases { for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) { t.Run(tc.name, func(t *testing.T) {
req, err := tc.vm.prepareReq(query, timestamp) req, err := tc.vm.newRequestPOST()
if err != nil { if err != nil {
t.Fatalf("unexpected error: %s", err) t.Fatalf("unexpected error: %s", err)
} }
switch tc.vm.dataSourceType.name {
case "", prometheusType:
if tc.queryRange {
tc.vm.setPrometheusRangeReqParams(req, query, timestamp, timestamp)
} else {
tc.vm.setPrometheusInstantReqParams(req, query, timestamp)
}
case graphiteType:
tc.vm.setGraphiteReqParams(req, query, timestamp)
}
tc.checkFn(t, req) tc.checkFn(t, req)
}) })
} }
@ -285,3 +443,13 @@ func checkEqualString(t *testing.T, exp, got string) {
t.Errorf("expected to get %q; got %q", exp, got) t.Errorf("expected to get %q; got %q", exp, got)
} }
} }
func expectError(t *testing.T, err error, exp string) {
t.Helper()
if err == nil {
t.Errorf("expected non-nil error")
}
if !strings.Contains(err.Error(), exp) {
t.Errorf("expected error %q to contain %q", err, exp)
}
}

View file

@ -269,15 +269,10 @@ type executor struct {
func (e *executor) execConcurrently(ctx context.Context, rules []Rule, concurrency int, interval time.Duration) chan error { func (e *executor) execConcurrently(ctx context.Context, rules []Rule, concurrency int, interval time.Duration) chan error {
res := make(chan error, len(rules)) res := make(chan error, len(rules))
var returnSeries bool
if e.rw != nil {
returnSeries = true
}
if concurrency == 1 { if concurrency == 1 {
// fast path // fast path
for _, rule := range rules { for _, rule := range rules {
res <- e.exec(ctx, rule, returnSeries, interval) res <- e.exec(ctx, rule, interval)
} }
close(res) close(res)
return res return res
@ -290,7 +285,7 @@ func (e *executor) execConcurrently(ctx context.Context, rules []Rule, concurren
sem <- struct{}{} sem <- struct{}{}
wg.Add(1) wg.Add(1)
go func(r Rule) { go func(r Rule) {
res <- e.exec(ctx, r, returnSeries, interval) res <- e.exec(ctx, r, interval)
<-sem <-sem
wg.Done() wg.Done()
}(rule) }(rule)
@ -309,14 +304,14 @@ var (
remoteWriteErrors = metrics.NewCounter(`vmalert_remotewrite_errors_total`) remoteWriteErrors = metrics.NewCounter(`vmalert_remotewrite_errors_total`)
) )
func (e *executor) exec(ctx context.Context, rule Rule, returnSeries bool, interval time.Duration) error { func (e *executor) exec(ctx context.Context, rule Rule, interval time.Duration) error {
execTotal.Inc() execTotal.Inc()
execStart := time.Now() execStart := time.Now()
defer func() { defer func() {
execDuration.UpdateDuration(execStart) execDuration.UpdateDuration(execStart)
}() }()
tss, err := rule.Exec(ctx, returnSeries) tss, err := rule.Exec(ctx)
if err != nil { if err != nil {
execErrors.Inc() execErrors.Inc()
return fmt.Errorf("rule %q: failed to execute: %w", rule, err) return fmt.Errorf("rule %q: failed to execute: %w", rule, err)

View file

@ -7,6 +7,7 @@ import (
"sort" "sort"
"sync" "sync"
"testing" "testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier"
@ -42,6 +43,10 @@ func (fq *fakeQuerier) BuildWithParams(_ datasource.QuerierParams) datasource.Qu
return fq return fq
} }
func (fq *fakeQuerier) QueryRange(ctx context.Context, q string, _, _ time.Time) ([]datasource.Metric, error) {
return fq.Query(ctx, q)
}
func (fq *fakeQuerier) Query(_ context.Context, _ string) ([]datasource.Metric, error) { func (fq *fakeQuerier) Query(_ context.Context, _ string) ([]datasource.Metric, error) {
fq.Lock() fq.Lock()
defer fq.Unlock() defer fq.Unlock()
@ -72,9 +77,16 @@ func (fn *fakeNotifier) getAlerts() []notifier.Alert {
} }
func metricWithValueAndLabels(t *testing.T, value float64, labels ...string) datasource.Metric { func metricWithValueAndLabels(t *testing.T, value float64, labels ...string) datasource.Metric {
return metricWithValuesAndLabels(t, []float64{value}, labels...)
}
func metricWithValuesAndLabels(t *testing.T, values []float64, labels ...string) datasource.Metric {
t.Helper() t.Helper()
m := metricWithLabels(t, labels...) m := metricWithLabels(t, labels...)
m.Value = value m.Values = values
for i := range values {
m.Timestamps = append(m.Timestamps, int64(i))
}
return m return m
} }
@ -83,7 +95,7 @@ func metricWithLabels(t *testing.T, labels ...string) datasource.Metric {
if len(labels) == 0 || len(labels)%2 != 0 { if len(labels) == 0 || len(labels)%2 != 0 {
t.Fatalf("expected to get even number of labels") t.Fatalf("expected to get even number of labels")
} }
m := datasource.Metric{} m := datasource.Metric{Values: []float64{1}, Timestamps: []int64{1}}
for i := 0; i < len(labels); i += 2 { for i := 0; i < len(labels); i += 2 {
m.Labels = append(m.Labels, datasource.Label{ m.Labels = append(m.Labels, datasource.Label{
Name: labels[i], Name: labels[i],

View file

@ -34,6 +34,9 @@ Examples:
absolute path to all .yaml files in root. absolute path to all .yaml files in root.
Rule files may contain %{ENV_VAR} placeholders, which are substituted by the corresponding env vars.`) Rule files may contain %{ENV_VAR} placeholders, which are substituted by the corresponding env vars.`)
rulesCheckInterval = flag.Duration("rule.configCheckInterval", 0, "Interval for checking for changes in '-rule' files. "+
"By default the checking is disabled. Send SIGHUP signal in order to force config check for changes")
httpListenAddr = flag.String("httpListenAddr", ":8880", "Address to listen for http connections") httpListenAddr = flag.String("httpListenAddr", ":8880", "Address to listen for http connections")
evaluationInterval = flag.Duration("evaluationInterval", time.Minute, "How often to evaluate the rules") evaluationInterval = flag.Duration("evaluationInterval", time.Minute, "How often to evaluate the rules")
@ -65,47 +68,54 @@ func main() {
notifier.InitTemplateFunc(u) notifier.InitTemplateFunc(u)
groups, err := config.Parse(*rulePath, true, true) groups, err := config.Parse(*rulePath, true, true)
if err != nil { if err != nil {
logger.Fatalf(err.Error()) logger.Fatalf("failed to parse %q: %s", *rulePath, err)
} }
if len(groups) == 0 { if len(groups) == 0 {
logger.Fatalf("No rules for validation. Please specify path to file(s) with alerting and/or recording rules using `-rule` flag") logger.Fatalf("No rules for validation. Please specify path to file(s) with alerting and/or recording rules using `-rule` flag")
} }
return return
} }
if *replayFrom != "" || *replayTo != "" {
rw, err := remotewrite.Init(context.Background())
if err != nil {
logger.Fatalf("failed to init remoteWrite: %s", err)
}
eu, err := getExternalURL(*externalURL, *httpListenAddr, httpserver.IsTLS())
if err != nil {
logger.Fatalf("failed to init `external.url`: %s", err)
}
notifier.InitTemplateFunc(eu)
groupsCfg, err := config.Parse(*rulePath, *validateTemplates, *validateExpressions)
if err != nil {
logger.Fatalf("cannot parse configuration file: %s", err)
}
q, err := datasource.Init()
if err != nil {
logger.Fatalf("failed to init datasource: %s", err)
}
if err := replay(groupsCfg, q, rw); err != nil {
logger.Fatalf("replay failed: %s", err)
}
return
}
ctx, cancel := context.WithCancel(context.Background()) ctx, cancel := context.WithCancel(context.Background())
manager, err := newManager(ctx) manager, err := newManager(ctx)
if err != nil { if err != nil {
logger.Fatalf("failed to init: %s", err) logger.Fatalf("failed to init: %s", err)
} }
// Register SIGHUP handler for config re-read just before manager.start call.
// This guarantees that the config will be re-read if the signal arrives during manager.start call.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1240
sighupCh := procutil.NewSighupChan()
logger.Infof("reading rules configuration file from %q", strings.Join(*rulePath, ";"))
groupsCfg, err := config.Parse(*rulePath, *validateTemplates, *validateExpressions)
if err != nil {
logger.Fatalf("cannot parse configuration file: %s", err)
}
if err := manager.start(ctx, *rulePath, *validateTemplates, *validateExpressions); err != nil { if err := manager.start(ctx, groupsCfg); err != nil {
logger.Fatalf("failed to start: %s", err) logger.Fatalf("failed to start: %s", err)
} }
go func() { go configReload(ctx, manager, groupsCfg)
// init reload metrics with positive values to improve alerting conditions
configSuccess.Set(1)
configTimestamp.Set(fasttime.UnixTimestamp())
for {
<-sighupCh
configReloads.Inc()
logger.Infof("SIGHUP received. Going to reload rules %q ...", *rulePath)
if err := manager.update(ctx, *rulePath, *validateTemplates, *validateExpressions, false); err != nil {
configReloadErrors.Inc()
configSuccess.Set(0)
logger.Errorf("error while reloading rules: %s", err)
continue
}
configSuccess.Set(1)
configTimestamp.Set(fasttime.UnixTimestamp())
logger.Infof("Rules reloaded successfully from %q", *rulePath)
}
}()
rh := &requestHandler{m: manager} rh := &requestHandler{m: manager}
go httpserver.Serve(*httpListenAddr, rh.handler) go httpserver.Serve(*httpListenAddr, rh.handler)
@ -228,3 +238,62 @@ See the docs at https://docs.victoriametrics.com/vmalert.html .
` `
flagutil.Usage(s) flagutil.Usage(s)
} }
func configReload(ctx context.Context, m *manager, groupsCfg []config.Group) {
// Register SIGHUP handler for config re-read just before manager.start call.
// This guarantees that the config will be re-read if the signal arrives during manager.start call.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1240
sighupCh := procutil.NewSighupChan()
var configCheckCh <-chan time.Time
if *rulesCheckInterval > 0 {
ticker := time.NewTicker(*rulesCheckInterval)
configCheckCh = ticker.C
defer ticker.Stop()
}
// init reload metrics with positive values to improve alerting conditions
configSuccess.Set(1)
configTimestamp.Set(fasttime.UnixTimestamp())
for {
select {
case <-ctx.Done():
return
case <-sighupCh:
logger.Infof("SIGHUP received. Going to reload rules %q ...", *rulePath)
configReloads.Inc()
case <-configCheckCh:
}
newGroupsCfg, err := config.Parse(*rulePath, *validateTemplates, *validateExpressions)
if err != nil {
logger.Errorf("cannot parse configuration file: %s", err)
continue
}
if configsEqual(newGroupsCfg, groupsCfg) {
// config didn't change - skip it
continue
}
groupsCfg = newGroupsCfg
if err := m.update(ctx, groupsCfg, false); err != nil {
configReloadErrors.Inc()
configSuccess.Set(0)
logger.Errorf("error while reloading rules: %s", err)
continue
}
configSuccess.Set(1)
configTimestamp.Set(fasttime.UnixTimestamp())
logger.Infof("Rules reloaded successfully from %q", *rulePath)
}
}
func configsEqual(a, b []config.Group) bool {
if len(a) != len(b) {
return false
}
for i := range a {
if a[i].Checksum != b[i].Checksum {
return false
}
}
return true
}
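// Note: configsEqual compares groups via the Checksum computed at parse time,
// so the configReload loop above re-applies configuration only when the rule
// files actually change.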

View file

@ -1,12 +1,16 @@
package main package main
import ( import (
"context"
"fmt" "fmt"
"io/ioutil"
"net/url" "net/url"
"os" "os"
"testing" "testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
) )
func TestGetExternalURL(t *testing.T) { func TestGetExternalURL(t *testing.T) {
@ -51,3 +55,95 @@ func TestGetAlertURLGenerator(t *testing.T) {
t.Errorf("unexpected url want %s, got %s", exp, fn(testAlert)) t.Errorf("unexpected url want %s, got %s", exp, fn(testAlert))
} }
} }
func TestConfigReload(t *testing.T) {
originalRulePath := *rulePath
defer func() {
*rulePath = originalRulePath
}()
const (
rules1 = `
groups:
- name: group-1
rules:
- alert: ExampleAlertAlwaysFiring
expr: sum by(job) (up == 1)
- record: handler:requests:rate5m
expr: sum(rate(prometheus_http_requests_total[5m])) by (handler)
`
rules2 = `
groups:
- name: group-1
rules:
- alert: ExampleAlertAlwaysFiring
expr: sum by(job) (up == 1)
- name: group-2
rules:
- record: handler:requests:rate5m
expr: sum(rate(prometheus_http_requests_total[5m])) by (handler)
`
)
f, err := ioutil.TempFile("", "")
if err != nil {
t.Fatal(err)
}
writeToFile(t, f.Name(), rules1)
*rulesCheckInterval = 200 * time.Millisecond
*rulePath = []string{f.Name()}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
m := &manager{
querierBuilder: &fakeQuerier{},
groups: make(map[uint64]*Group),
labels: map[string]string{},
}
go configReload(ctx, m, nil)
lenLocked := func(m *manager) int {
m.groupsMu.RLock()
defer m.groupsMu.RUnlock()
return len(m.groups)
}
time.Sleep(*rulesCheckInterval * 2)
groupsLen := lenLocked(m)
if groupsLen != 1 {
t.Fatalf("expected to have exactly 1 group loaded; got %d", groupsLen)
}
writeToFile(t, f.Name(), rules2)
time.Sleep(*rulesCheckInterval * 2)
groupsLen = lenLocked(m)
if groupsLen != 2 {
fmt.Println(m.groups)
t.Fatalf("expected to have exactly 2 groups loaded; got %d", groupsLen)
}
writeToFile(t, f.Name(), rules1)
procutil.SelfSIGHUP()
time.Sleep(*rulesCheckInterval / 2)
groupsLen = lenLocked(m)
if groupsLen != 1 {
t.Fatalf("expected to have exactly 1 group loaded; got %d", groupsLen)
}
writeToFile(t, f.Name(), `corrupted`)
procutil.SelfSIGHUP()
time.Sleep(*rulesCheckInterval / 2)
groupsLen = lenLocked(m)
if groupsLen != 1 { // should remain unchanged
t.Fatalf("expected to have exactly 1 group loaded; got %d", groupsLen)
}
}
func writeToFile(t *testing.T, file, b string) {
t.Helper()
err := ioutil.WriteFile(file, []byte(b), 0644)
if err != nil {
t.Fatal(err)
}
}

View file

@ -3,7 +3,6 @@ package main
import ( import (
"context" "context"
"fmt" "fmt"
"strings"
"sync" "sync"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
@ -50,8 +49,8 @@ func (m *manager) AlertAPI(gID, aID uint64) (*APIAlert, error) {
return nil, fmt.Errorf("can't find alert with id %q in group %q", aID, g.Name) return nil, fmt.Errorf("can't find alert with id %q in group %q", aID, g.Name)
} }
func (m *manager) start(ctx context.Context, path []string, validateTpl, validateExpr bool) error { func (m *manager) start(ctx context.Context, groupsCfg []config.Group) error {
return m.update(ctx, path, validateTpl, validateExpr, true) return m.update(ctx, groupsCfg, true)
} }
func (m *manager) close() { func (m *manager) close() {
@ -85,13 +84,7 @@ func (m *manager) startGroup(ctx context.Context, group *Group, restore bool) er
return nil return nil
} }
func (m *manager) update(ctx context.Context, path []string, validateTpl, validateExpr, restore bool) error { func (m *manager) update(ctx context.Context, groupsCfg []config.Group, restore bool) error {
logger.Infof("reading rules configuration file from %q", strings.Join(path, ";"))
groupsCfg, err := config.Parse(path, validateTpl, validateExpr)
if err != nil {
return fmt.Errorf("cannot parse configuration file: %w", err)
}
groupsRegistry := make(map[uint64]*Group) groupsRegistry := make(map[uint64]*Group)
for _, cfg := range groupsCfg { for _, cfg := range groupsCfg {
ng := newGroup(cfg, m.querierBuilder, *evaluationInterval, m.labels) ng := newGroup(cfg, m.querierBuilder, *evaluationInterval, m.labels)

View file

@ -9,8 +9,8 @@ import (
"testing" "testing"
"time" "time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier"
) )
@ -25,9 +25,8 @@ func TestMain(m *testing.M) {
// starting with empty rules folder // starting with empty rules folder
func TestManagerEmptyRulesDir(t *testing.T) { func TestManagerEmptyRulesDir(t *testing.T) {
m := &manager{groups: make(map[uint64]*Group)} m := &manager{groups: make(map[uint64]*Group)}
path := []string{"foo/bar"} cfg := loadCfg(t, []string{"foo/bar"}, true, true)
err := m.update(context.Background(), path, true, true, false) if err := m.update(context.Background(), cfg, false); err != nil {
if err != nil {
t.Fatalf("expected to load succesfully with empty rules dir; got err instead: %v", err) t.Fatalf("expected to load succesfully with empty rules dir; got err instead: %v", err)
} }
} }
@ -50,8 +49,11 @@ func TestManagerUpdateConcurrent(t *testing.T) {
"config/testdata/rules1-good.rules", "config/testdata/rules1-good.rules",
"config/testdata/rules2-good.rules", "config/testdata/rules2-good.rules",
} }
evalInterval := *evaluationInterval
defer func() { *evaluationInterval = evalInterval }()
*evaluationInterval = time.Millisecond *evaluationInterval = time.Millisecond
if err := m.start(context.Background(), []string{paths[0]}, true, true); err != nil { cfg := loadCfg(t, []string{paths[0]}, true, true)
if err := m.start(context.Background(), cfg); err != nil {
t.Fatalf("failed to start: %s", err) t.Fatalf("failed to start: %s", err)
} }
@ -64,8 +66,11 @@ func TestManagerUpdateConcurrent(t *testing.T) {
defer wg.Done() defer wg.Done()
for i := 0; i < iterations; i++ { for i := 0; i < iterations; i++ {
rnd := rand.Intn(len(paths)) rnd := rand.Intn(len(paths))
path := []string{paths[rnd]} cfg, err := config.Parse([]string{paths[rnd]}, true, true)
_ = m.update(context.Background(), path, true, true, false) if err != nil { // update can fail and this is expected
continue
}
_ = m.update(context.Background(), cfg, false)
} }
}() }()
} }
@ -243,13 +248,16 @@ func TestManagerUpdate(t *testing.T) {
t.Run(tc.name, func(t *testing.T) { t.Run(tc.name, func(t *testing.T) {
ctx, cancel := context.WithCancel(context.TODO()) ctx, cancel := context.WithCancel(context.TODO())
m := &manager{groups: make(map[uint64]*Group), querierBuilder: &fakeQuerier{}} m := &manager{groups: make(map[uint64]*Group), querierBuilder: &fakeQuerier{}}
path := []string{tc.initPath}
if err := m.update(ctx, path, true, true, false); err != nil { cfgInit := loadCfg(t, []string{tc.initPath}, true, true)
if err := m.update(ctx, cfgInit, false); err != nil {
t.Fatalf("failed to complete initial rules update: %s", err) t.Fatalf("failed to complete initial rules update: %s", err)
} }
path = []string{tc.updatePath} cfgUpdate, err := config.Parse([]string{tc.updatePath}, true, true)
_ = m.update(ctx, path, true, true, false) if err == nil { // update can fail and that's expected
_ = m.update(ctx, cfgUpdate, false)
}
if len(tc.want) != len(m.groups) { if len(tc.want) != len(m.groups) {
t.Fatalf("\nwant number of groups: %d;\ngot: %d ", len(tc.want), len(m.groups)) t.Fatalf("\nwant number of groups: %d;\ngot: %d ", len(tc.want), len(m.groups))
} }
@ -267,3 +275,12 @@ func TestManagerUpdate(t *testing.T) {
}) })
} }
} }
func loadCfg(t *testing.T, path []string, validateAnnotations, validateExpressions bool) []config.Group {
t.Helper()
cfg, err := config.Parse(path, validateAnnotations, validateExpressions)
if err != nil {
t.Fatal(err)
}
return cfg
}

View file

@ -83,14 +83,16 @@ func TestAlert_ExecTemplate(t *testing.T) {
{Name: "foo", Value: "bar"}, {Name: "foo", Value: "bar"},
{Name: "baz", Value: "qux"}, {Name: "baz", Value: "qux"},
}, },
Value: 1, Values: []float64{1},
Timestamps: []int64{1},
}, },
{ {
Labels: []datasource.Label{ Labels: []datasource.Label{
{Name: "foo", Value: "garply"}, {Name: "foo", Value: "garply"},
{Name: "baz", Value: "fred"}, {Name: "baz", Value: "fred"},
}, },
Value: 2, Values: []float64{2},
Timestamps: []int64{1},
}, },
}, nil }, nil
} }

View file

@ -47,8 +47,8 @@ func datasourceMetricsToTemplateMetrics(ms []datasource.Metric) []metric {
} }
mss = append(mss, metric{ mss = append(mss, metric{
Labels: labelsMap, Labels: labelsMap,
Timestamp: m.Timestamp, Timestamp: m.Timestamps[0],
Value: m.Value}) Value: m.Values[0]})
} }
return mss return mss
} }

View file

@ -88,12 +88,30 @@ func (rr *RecordingRule) Close() {
metrics.UnregisterMetric(rr.metrics.errors.name) metrics.UnregisterMetric(rr.metrics.errors.name)
} }
// Exec executes RecordingRule expression via the given Querier. // ExecRange executes the recording rule on the given time range similarly to Exec.
func (rr *RecordingRule) Exec(ctx context.Context, series bool) ([]prompbmarshal.TimeSeries, error) { // It doesn't update internal states of the Rule and is meant to be used just
if !series { // to get time series for backfilling.
return nil, nil func (rr *RecordingRule) ExecRange(ctx context.Context, start, end time.Time) ([]prompbmarshal.TimeSeries, error) {
series, err := rr.q.QueryRange(ctx, rr.Expr, start, end)
if err != nil {
return nil, err
} }
duplicates := make(map[string]struct{}, len(series))
var tss []prompbmarshal.TimeSeries
for _, s := range series {
ts := rr.toTimeSeries(s)
key := stringifyLabels(ts)
if _, ok := duplicates[key]; ok {
return nil, fmt.Errorf("original metric %v; resulting labels %q: %w", s.Labels, key, errDuplicate)
}
duplicates[key] = struct{}{}
tss = append(tss, ts)
}
return tss, nil
}
// Exec executes RecordingRule expression via the given Querier.
func (rr *RecordingRule) Exec(ctx context.Context) ([]prompbmarshal.TimeSeries, error) {
qMetrics, err := rr.q.Query(ctx, rr.Expr) qMetrics, err := rr.q.Query(ctx, rr.Expr)
rr.mu.Lock() rr.mu.Lock()
defer rr.mu.Unlock() defer rr.mu.Unlock()
@ -107,7 +125,7 @@ func (rr *RecordingRule) Exec(ctx context.Context, series bool) ([]prompbmarshal
duplicates := make(map[string]struct{}, len(qMetrics)) duplicates := make(map[string]struct{}, len(qMetrics))
var tss []prompbmarshal.TimeSeries var tss []prompbmarshal.TimeSeries
for _, r := range qMetrics { for _, r := range qMetrics {
ts := rr.toTimeSeries(r, time.Unix(r.Timestamp, 0)) ts := rr.toTimeSeries(r)
key := stringifyLabels(ts) key := stringifyLabels(ts)
if _, ok := duplicates[key]; ok { if _, ok := duplicates[key]; ok {
rr.lastExecError = errDuplicate rr.lastExecError = errDuplicate
@ -138,7 +156,7 @@ func stringifyLabels(ts prompbmarshal.TimeSeries) string {
return b.String() return b.String()
} }
func (rr *RecordingRule) toTimeSeries(m datasource.Metric, timestamp time.Time) prompbmarshal.TimeSeries { func (rr *RecordingRule) toTimeSeries(m datasource.Metric) prompbmarshal.TimeSeries {
labels := make(map[string]string) labels := make(map[string]string)
for _, l := range m.Labels { for _, l := range m.Labels {
labels[l.Name] = l.Value labels[l.Name] = l.Value
@ -148,7 +166,7 @@ func (rr *RecordingRule) toTimeSeries(m datasource.Metric, timestamp time.Time)
for k, v := range rr.Labels { for k, v := range rr.Labels {
labels[k] = v labels[k] = v
} }
return newTimeSeries(m.Value, labels, timestamp) return newTimeSeries(m.Values, m.Timestamps, labels)
} }
// UpdateWith copies all significant fields. // UpdateWith copies all significant fields.

View file

@ -11,7 +11,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal" "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
) )
func TestRecoridngRule_ToTimeSeries(t *testing.T) { func TestRecoridngRule_Exec(t *testing.T) {
timestamp := time.Now() timestamp := time.Now()
testCases := []struct { testCases := []struct {
rule *RecordingRule rule *RecordingRule
@ -24,9 +24,9 @@ func TestRecoridngRule_ToTimeSeries(t *testing.T) {
"__name__", "bar", "__name__", "bar",
)}, )},
[]prompbmarshal.TimeSeries{ []prompbmarshal.TimeSeries{
newTimeSeries(10, map[string]string{ newTimeSeries([]float64{10}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": "foo", "__name__": "foo",
}, timestamp), }),
}, },
}, },
{ {
@ -37,18 +37,18 @@ func TestRecoridngRule_ToTimeSeries(t *testing.T) {
metricWithValueAndLabels(t, 3, "__name__", "baz", "job", "baz"), metricWithValueAndLabels(t, 3, "__name__", "baz", "job", "baz"),
}, },
[]prompbmarshal.TimeSeries{ []prompbmarshal.TimeSeries{
newTimeSeries(1, map[string]string{ newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": "foobarbaz", "__name__": "foobarbaz",
"job": "foo", "job": "foo",
}, timestamp), }),
newTimeSeries(2, map[string]string{ newTimeSeries([]float64{2}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": "foobarbaz", "__name__": "foobarbaz",
"job": "bar", "job": "bar",
}, timestamp), }),
newTimeSeries(3, map[string]string{ newTimeSeries([]float64{3}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": "foobarbaz", "__name__": "foobarbaz",
"job": "baz", "job": "baz",
}, timestamp), }),
}, },
}, },
{ {
@ -59,16 +59,16 @@ func TestRecoridngRule_ToTimeSeries(t *testing.T) {
metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "foo"), metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "foo"),
metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar")}, metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar")},
[]prompbmarshal.TimeSeries{ []prompbmarshal.TimeSeries{
newTimeSeries(2, map[string]string{ newTimeSeries([]float64{2}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": "job:foo", "__name__": "job:foo",
"job": "foo", "job": "foo",
"source": "test", "source": "test",
}, timestamp), }),
newTimeSeries(1, map[string]string{ newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": "job:foo", "__name__": "job:foo",
"job": "bar", "job": "bar",
"source": "test", "source": "test",
}, timestamp), }),
}, },
}, },
} }
@ -77,7 +77,7 @@ func TestRecoridngRule_ToTimeSeries(t *testing.T) {
fq := &fakeQuerier{} fq := &fakeQuerier{}
fq.add(tc.metrics...) fq.add(tc.metrics...)
tc.rule.q = fq tc.rule.q = fq
tss, err := tc.rule.Exec(context.TODO(), true) tss, err := tc.rule.Exec(context.TODO())
if err != nil { if err != nil {
t.Fatalf("unexpected Exec err: %s", err) t.Fatalf("unexpected Exec err: %s", err)
} }
@ -88,7 +88,88 @@ func TestRecoridngRule_ToTimeSeries(t *testing.T) {
} }
} }
func TestRecoridngRule_ToTimeSeriesNegative(t *testing.T) { func TestRecoridngRule_ExecRange(t *testing.T) {
timestamp := time.Now()
testCases := []struct {
rule *RecordingRule
metrics []datasource.Metric
expTS []prompbmarshal.TimeSeries
}{
{
&RecordingRule{Name: "foo"},
[]datasource.Metric{metricWithValuesAndLabels(t, []float64{10, 20, 30},
"__name__", "bar",
)},
[]prompbmarshal.TimeSeries{
newTimeSeries([]float64{10, 20, 30},
[]int64{timestamp.UnixNano(), timestamp.UnixNano(), timestamp.UnixNano()},
map[string]string{
"__name__": "foo",
}),
},
},
{
&RecordingRule{Name: "foobarbaz"},
[]datasource.Metric{
metricWithValuesAndLabels(t, []float64{1}, "__name__", "foo", "job", "foo"),
metricWithValuesAndLabels(t, []float64{2, 3}, "__name__", "bar", "job", "bar"),
metricWithValuesAndLabels(t, []float64{4, 5, 6}, "__name__", "baz", "job", "baz"),
},
[]prompbmarshal.TimeSeries{
newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": "foobarbaz",
"job": "foo",
}),
newTimeSeries([]float64{2, 3}, []int64{timestamp.UnixNano(), timestamp.UnixNano()}, map[string]string{
"__name__": "foobarbaz",
"job": "bar",
}),
newTimeSeries([]float64{4, 5, 6},
[]int64{timestamp.UnixNano(), timestamp.UnixNano(), timestamp.UnixNano()},
map[string]string{
"__name__": "foobarbaz",
"job": "baz",
}),
},
},
{
&RecordingRule{Name: "job:foo", Labels: map[string]string{
"source": "test",
}},
[]datasource.Metric{
metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "foo"),
metricWithValueAndLabels(t, 1, "__name__", "bar", "job", "bar")},
[]prompbmarshal.TimeSeries{
newTimeSeries([]float64{2}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": "job:foo",
"job": "foo",
"source": "test",
}),
newTimeSeries([]float64{1}, []int64{timestamp.UnixNano()}, map[string]string{
"__name__": "job:foo",
"job": "bar",
"source": "test",
}),
},
},
}
for _, tc := range testCases {
t.Run(tc.rule.Name, func(t *testing.T) {
fq := &fakeQuerier{}
fq.add(tc.metrics...)
tc.rule.q = fq
tss, err := tc.rule.ExecRange(context.TODO(), time.Now(), time.Now())
if err != nil {
t.Fatalf("unexpected Exec err: %s", err)
}
if err := compareTimeSeries(t, tc.expTS, tss); err != nil {
t.Fatalf("timeseries missmatch: %s", err)
}
})
}
}
func TestRecoridngRule_ExecNegative(t *testing.T) {
rr := &RecordingRule{Name: "job:foo", Labels: map[string]string{ rr := &RecordingRule{Name: "job:foo", Labels: map[string]string{
"job": "test", "job": "test",
}} }}
@ -97,7 +178,7 @@ func TestRecoridngRule_ToTimeSeriesNegative(t *testing.T) {
expErr := "connection reset by peer" expErr := "connection reset by peer"
fq.setErr(errors.New(expErr)) fq.setErr(errors.New(expErr))
rr.q = fq rr.q = fq
_, err := rr.Exec(context.TODO(), true) _, err := rr.Exec(context.TODO())
if err == nil { if err == nil {
t.Fatalf("expected to get err; got nil") t.Fatalf("expected to get err; got nil")
} }
@ -112,7 +193,7 @@ func TestRecoridngRule_ToTimeSeriesNegative(t *testing.T) {
fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "foo")) fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "foo"))
fq.add(metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "bar")) fq.add(metricWithValueAndLabels(t, 2, "__name__", "foo", "job", "bar"))
_, err = rr.Exec(context.TODO(), true) _, err = rr.Exec(context.TODO())
if err == nil { if err == nil {
t.Fatalf("expected to get err; got nil") t.Fatalf("expected to get err; got nil")
} }

160
app/vmalert/replay.go Normal file
View file

@ -0,0 +1,160 @@
package main
import (
"context"
"flag"
"fmt"
"strings"
"time"
"github.com/cheggaaa/pb/v3"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
)
var (
replayFrom = flag.String("replay.timeFrom", "",
"The time filter in RFC3339 format to select time series with timestamp equal or higher than provided value. E.g. '2020-01-01T20:07:00Z'")
replayTo = flag.String("replay.timeTo", "",
"The time filter in RFC3339 format to select timeseries with timestamp equal or lower than provided value. E.g. '2020-01-01T20:07:00Z'")
replayRulesDelay = flag.Duration("replay.rulesDelay", time.Second,
"Delay between rules evaluation within the group. Could be important if there are chained rules inside of the group"+
"and processing need to wait for previous rule results to be persisted by remote storage before evaluating the next rule."+
"Keep it equal or bigger than -remoteWrite.flushInterval.")
replayMaxDatapoints = flag.Int("replay.maxDatapointsPerQuery", 1e3,
"Max number of data points expected in one request. The higher the value, the less requests will be made during replay.")
replayRuleRetryAttempts = flag.Int("replay.ruleRetryAttempts", 5,
"Defines how many retries to make before giving up on rule if request for it returns an error.")
)
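// A hypothetical invocation of the replay mode, combining the flags above
// with vmalert's datasource and remote write flags (flag names outside of
// replay.* are assumptions, not defined in this file):
//
//   ./vmalert -rule=rules.yml \
//     -datasource.url=http://victoriametrics:8428 \
//     -remoteWrite.url=http://victoriametrics:8428 \
//     -replay.timeFrom=2021-05-11T07:21:43Z \
//     -replay.timeTo=2021-05-29T18:40:43Z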
func replay(groupsCfg []config.Group, qb datasource.QuerierBuilder, rw *remotewrite.Client) error {
if *replayMaxDatapoints < 1 {
return fmt.Errorf("replay.maxDatapointsPerQuery can't be lower than 1")
}
tFrom, err := time.Parse(time.RFC3339, *replayFrom)
if err != nil {
return fmt.Errorf("failed to parse %q: %s", *replayFrom, err)
}
tTo, err := time.Parse(time.RFC3339, *replayTo)
if err != nil {
return fmt.Errorf("failed to parse %q: %s", *replayTo, err)
}
if !tTo.After(tFrom) {
return fmt.Errorf("replay.timeTo must be bigger than replay.timeFrom")
}
labels := make(map[string]string)
for _, s := range *externalLabels {
if len(s) == 0 {
continue
}
n := strings.IndexByte(s, '=')
if n < 0 {
return fmt.Errorf("missing '=' in `-label`. It must contain label in the form `name=value`; got %q", s)
}
labels[s[:n]] = s[n+1:]
}
fmt.Printf("Replay mode:"+
"\nfrom: \t%v "+
"\nto: \t%v "+
"\nmax data points per request: %d\n",
tFrom, tTo, *replayMaxDatapoints)
var total int
for _, cfg := range groupsCfg {
ng := newGroup(cfg, qb, *evaluationInterval, labels)
total += ng.replay(tFrom, tTo, rw)
}
logger.Infof("replay finished! Imported %d samples", total)
if rw != nil {
return rw.Close()
}
return nil
}
func (g *Group) replay(start, end time.Time, rw *remotewrite.Client) int {
var total int
step := g.Interval * time.Duration(*replayMaxDatapoints)
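// e.g. with a 1m group interval and the default 1000 max data points per query,
// each QueryRange request covers up to ~16.7h of data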
ri := rangeIterator{start: start, end: end, step: step}
iterations := int(end.Sub(start)/step) + 1
fmt.Printf("\nGroup %q"+
"\ninterval: \t%v"+
"\nrequests to make: \t%d"+
"\nmax range per request: \t%v\n",
g.Name, g.Interval, iterations, step)
for _, rule := range g.Rules {
fmt.Printf("> Rule %q (ID: %d)\n", rule, rule.ID())
bar := pb.StartNew(iterations)
ri.reset()
for ri.next() {
n, err := replayRule(rule, ri.s, ri.e, rw)
if err != nil {
logger.Fatalf("rule %q: %s", rule, err)
}
total += n
bar.Increment()
}
bar.Finish()
// sleep to let the remote storage flush data on disk,
// so chained rules can be calculated correctly
time.Sleep(*replayRulesDelay)
}
return total
}
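// replayRule executes the given rule on the [start, end] range, retrying on
// errors up to -replay.ruleRetryAttempts times with a one-second pause between
// attempts, pushes the resulting series to rw and returns the number of
// pushed samples.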
func replayRule(rule Rule, start, end time.Time, rw *remotewrite.Client) (int, error) {
var err error
var tss []prompbmarshal.TimeSeries
for i := 0; i < *replayRuleRetryAttempts; i++ {
tss, err = rule.ExecRange(context.Background(), start, end)
if err == nil {
break
}
logger.Errorf("attempt %d to execute rule %q failed: %s", i+1, rule, err)
time.Sleep(time.Second)
}
if err != nil { // means all attempts failed
return 0, err
}
if len(tss) < 1 {
return 0, nil
}
var n int
for _, ts := range tss {
if err := rw.Push(ts); err != nil {
return n, fmt.Errorf("remote write failure: %s", err)
}
n += len(ts.Samples)
}
return n, nil
}
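// rangeIterator splits the [start, end] time range into consecutive
// sub-ranges of at most `step` duration each.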
type rangeIterator struct {
step time.Duration
start, end time.Time
iter int
s, e time.Time
}
func (ri *rangeIterator) reset() {
ri.iter = 0
ri.s, ri.e = time.Time{}, time.Time{}
}
func (ri *rangeIterator) next() bool {
ri.s = ri.start.Add(ri.step * time.Duration(ri.iter))
if !ri.end.After(ri.s) {
return false
}
ri.e = ri.s.Add(ri.step)
if ri.e.After(ri.end) {
ri.e = ri.end
}
ri.iter++
return true
}
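// For example, a rangeIterator with start=12:00, end=12:30 and step=10m
// yields [12:00, 12:10], [12:10, 12:20], [12:20, 12:30], after which
// next() returns false.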

249
app/vmalert/replay_test.go Normal file
View file

@ -0,0 +1,249 @@
package main
import (
"context"
"fmt"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
)
type fakeReplayQuerier struct {
fakeQuerier
registry map[string]map[string]struct{}
}
func (fr *fakeReplayQuerier) BuildWithParams(_ datasource.QuerierParams) datasource.Querier {
return fr
}
func (fr *fakeReplayQuerier) QueryRange(_ context.Context, q string, from, to time.Time) ([]datasource.Metric, error) {
key := fmt.Sprintf("%s+%s", from.Format("15:04:05"), to.Format("15:04:05"))
dps, ok := fr.registry[q]
if !ok {
return nil, fmt.Errorf("unexpected query received: %q", q)
}
_, ok = dps[key]
if !ok {
return nil, fmt.Errorf("unexpected time range received: %q", key)
}
delete(dps, key)
if len(fr.registry[q]) < 1 {
delete(fr.registry, q)
}
return nil, nil
}
func TestReplay(t *testing.T) {
testCases := []struct {
name string
from, to string
maxDP int
cfg []config.Group
qb *fakeReplayQuerier
}{
{
name: "one rule + one response",
from: "2021-01-01T12:00:00.000Z",
to: "2021-01-01T12:02:00.000Z",
maxDP: 10,
cfg: []config.Group{
{Rules: []config.Rule{{Record: "foo", Expr: "sum(up)"}}},
},
qb: &fakeReplayQuerier{
registry: map[string]map[string]struct{}{
"sum(up)": {"12:00:00+12:02:00": {}},
},
},
},
{
name: "one rule + multiple responses",
from: "2021-01-01T12:00:00.000Z",
to: "2021-01-01T12:02:30.000Z",
maxDP: 1,
cfg: []config.Group{
{Rules: []config.Rule{{Record: "foo", Expr: "sum(up)"}}},
},
qb: &fakeReplayQuerier{
registry: map[string]map[string]struct{}{
"sum(up)": {
"12:00:00+12:01:00": {},
"12:01:00+12:02:00": {},
"12:02:00+12:02:30": {},
},
},
},
},
{
name: "datapoints per step",
from: "2021-01-01T12:00:00.000Z",
to: "2021-01-01T15:02:30.000Z",
maxDP: 60,
cfg: []config.Group{
{Interval: time.Minute, Rules: []config.Rule{{Record: "foo", Expr: "sum(up)"}}},
},
qb: &fakeReplayQuerier{
registry: map[string]map[string]struct{}{
"sum(up)": {
"12:00:00+13:00:00": {},
"13:00:00+14:00:00": {},
"14:00:00+15:00:00": {},
"15:00:00+15:02:30": {},
},
},
},
},
{
name: "multiple recording rules + multiple responses",
from: "2021-01-01T12:00:00.000Z",
to: "2021-01-01T12:02:30.000Z",
maxDP: 1,
cfg: []config.Group{
{Rules: []config.Rule{{Record: "foo", Expr: "sum(up)"}}},
{Rules: []config.Rule{{Record: "bar", Expr: "max(up)"}}},
},
qb: &fakeReplayQuerier{
registry: map[string]map[string]struct{}{
"sum(up)": {
"12:00:00+12:01:00": {},
"12:01:00+12:02:00": {},
"12:02:00+12:02:30": {},
},
"max(up)": {
"12:00:00+12:01:00": {},
"12:01:00+12:02:00": {},
"12:02:00+12:02:30": {},
},
},
},
},
{
name: "multiple alerting rules + multiple responses",
from: "2021-01-01T12:00:00.000Z",
to: "2021-01-01T12:02:30.000Z",
maxDP: 1,
cfg: []config.Group{
{Rules: []config.Rule{{Alert: "foo", Expr: "sum(up) > 1"}}},
{Rules: []config.Rule{{Alert: "bar", Expr: "max(up) < 1"}}},
},
qb: &fakeReplayQuerier{
registry: map[string]map[string]struct{}{
"sum(up) > 1": {
"12:00:00+12:01:00": {},
"12:01:00+12:02:00": {},
"12:02:00+12:02:30": {},
},
"max(up) < 1": {
"12:00:00+12:01:00": {},
"12:01:00+12:02:00": {},
"12:02:00+12:02:30": {},
},
},
},
},
}
from, to, maxDP := *replayFrom, *replayTo, *replayMaxDatapoints
retries, delay := *replayRuleRetryAttempts, *replayRulesDelay
defer func() {
*replayFrom, *replayTo = from, to
*replayMaxDatapoints, *replayRuleRetryAttempts = maxDP, retries
*replayRulesDelay = delay
}()
*replayRuleRetryAttempts = 1
*replayRulesDelay = time.Millisecond
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
*replayFrom = tc.from
*replayTo = tc.to
*replayMaxDatapoints = tc.maxDP
if err := replay(tc.cfg, tc.qb, nil); err != nil {
t.Fatalf("replay failed: %s", err)
}
if len(tc.qb.registry) > 0 {
t.Fatalf("not all requests were sent: %#v", tc.qb.registry)
}
})
}
}
func TestRangeIterator(t *testing.T) {
testCases := []struct {
ri rangeIterator
result [][2]time.Time
}{
{
ri: rangeIterator{
start: parseTime(t, "2021-01-01T12:00:00.000Z"),
end: parseTime(t, "2021-01-01T12:30:00.000Z"),
step: 5 * time.Minute,
},
result: [][2]time.Time{
{parseTime(t, "2021-01-01T12:00:00.000Z"), parseTime(t, "2021-01-01T12:05:00.000Z")},
{parseTime(t, "2021-01-01T12:05:00.000Z"), parseTime(t, "2021-01-01T12:10:00.000Z")},
{parseTime(t, "2021-01-01T12:10:00.000Z"), parseTime(t, "2021-01-01T12:15:00.000Z")},
{parseTime(t, "2021-01-01T12:15:00.000Z"), parseTime(t, "2021-01-01T12:20:00.000Z")},
{parseTime(t, "2021-01-01T12:20:00.000Z"), parseTime(t, "2021-01-01T12:25:00.000Z")},
{parseTime(t, "2021-01-01T12:25:00.000Z"), parseTime(t, "2021-01-01T12:30:00.000Z")},
},
},
{
ri: rangeIterator{
start: parseTime(t, "2021-01-01T12:00:00.000Z"),
end: parseTime(t, "2021-01-01T12:30:00.000Z"),
step: 45 * time.Minute,
},
result: [][2]time.Time{
{parseTime(t, "2021-01-01T12:00:00.000Z"), parseTime(t, "2021-01-01T12:30:00.000Z")},
{parseTime(t, "2021-01-01T12:30:00.000Z"), parseTime(t, "2021-01-01T12:30:00.000Z")},
},
},
{
ri: rangeIterator{
start: parseTime(t, "2021-01-01T12:00:12.000Z"),
end: parseTime(t, "2021-01-01T12:00:17.000Z"),
step: time.Second,
},
result: [][2]time.Time{
{parseTime(t, "2021-01-01T12:00:12.000Z"), parseTime(t, "2021-01-01T12:00:13.000Z")},
{parseTime(t, "2021-01-01T12:00:13.000Z"), parseTime(t, "2021-01-01T12:00:14.000Z")},
{parseTime(t, "2021-01-01T12:00:14.000Z"), parseTime(t, "2021-01-01T12:00:15.000Z")},
{parseTime(t, "2021-01-01T12:00:15.000Z"), parseTime(t, "2021-01-01T12:00:16.000Z")},
{parseTime(t, "2021-01-01T12:00:16.000Z"), parseTime(t, "2021-01-01T12:00:17.000Z")},
},
},
}
for i, tc := range testCases {
t.Run(fmt.Sprintf("case %d", i), func(t *testing.T) {
var j int
for tc.ri.next() {
if len(tc.result) < j+1 {
t.Fatalf("unexpected result for iterator on step %d: %v - %v",
j, tc.ri.s, tc.ri.e)
}
s, e := tc.ri.s, tc.ri.e
expS, expE := tc.result[j][0], tc.result[j][1]
if s != expS {
t.Fatalf("expected to get start=%v; got %v", expS, s)
}
if e != expE {
t.Fatalf("expected to get end=%v; got %v", expE, e)
}
j++
}
})
}
}
func parseTime(t *testing.T, s string) time.Time {
t.Helper()
tt, err := time.Parse("2006-01-02T15:04:05.000Z", s)
if err != nil {
t.Fatal(err)
}
return tt
}

View file

@ -3,21 +3,21 @@ package main
import ( import (
"context" "context"
"errors" "errors"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal" "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"time"
) )
// Rule represents alerting or recording rule // Rule represents alerting or recording rule
// that has unique ID, can be Executed and // that has unique ID, can be Executed and
// updated with other Rule. // updated with other Rule.
type Rule interface { type Rule interface {
// Returns unique ID that may be used for // ID returns unique ID that may be used for
// identifying this Rule among others. // identifying this Rule among others.
ID() uint64 ID() uint64
// Exec executes the rule with given context // Exec executes the rule with given context
// and Querier. If returnSeries is true, Exec Exec(ctx context.Context) ([]prompbmarshal.TimeSeries, error)
// may return TimeSeries as result of execution // ExecRange executes the rule on the given time range
Exec(ctx context.Context, returnSeries bool) ([]prompbmarshal.TimeSeries, error) ExecRange(ctx context.Context, start, end time.Time) ([]prompbmarshal.TimeSeries, error)
// UpdateWith performs modification of current Rule // UpdateWith performs modification of current Rule
// with fields of the given Rule. // with fields of the given Rule.
UpdateWith(Rule) error UpdateWith(Rule) error

View file

@ -7,17 +7,21 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal" "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
) )
func newTimeSeries(value float64, labels map[string]string, timestamp time.Time) prompbmarshal.TimeSeries { func newTimeSeries(values []float64, timestamps []int64, labels map[string]string) prompbmarshal.TimeSeries {
ts := prompbmarshal.TimeSeries{} ts := prompbmarshal.TimeSeries{
ts.Samples = append(ts.Samples, prompbmarshal.Sample{ Samples: make([]prompbmarshal.Sample, len(values)),
Value: value, }
Timestamp: timestamp.UnixNano() / 1e6, for i := range values {
}) ts.Samples[i] = prompbmarshal.Sample{
Value: values[i],
Timestamp: time.Unix(timestamps[i], 0).UnixNano() / 1e6,
}
}
keys := make([]string, 0, len(labels)) keys := make([]string, 0, len(labels))
for k := range labels { for k := range labels {
keys = append(keys, k) keys = append(keys, k)
} }
sort.Strings(keys) sort.Strings(keys) // make order deterministic
for _, key := range keys { for _, key := range keys {
ts.Labels = append(ts.Labels, prompbmarshal.Label{ ts.Labels = append(ts.Labels, prompbmarshal.Label{
Name: key, Name: key,

View file

@ -1,8 +1,8 @@
# vmauth # vmauth
`vmauth` is a simple auth proxy and router for [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics). `vmauth` is a simple auth proxy, router and [load balancer](#load-balancing) for [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics).
It reads username and password from [Basic Auth headers](https://en.wikipedia.org/wiki/Basic_access_authentication), It reads auth credentials from the `Authorization` http header ([Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication) and `Bearer token` are supported),
matches them against configs pointed by `-auth.config` command-line flag and proxies incoming HTTP requests to the configured per-user `url_prefix` on successful match. matches them against configs pointed by [-auth.config](#auth-config) command-line flag and proxies incoming HTTP requests to the configured per-user `url_prefix` on successful match.
## Quick start ## Quick start
@ -27,9 +27,14 @@ Feel free [contacting us](mailto:info@victoriametrics.com) if you need customize
accounting and rate limiting such as [vmgateway](https://docs.victoriametrics.com/vmgateway.html). accounting and rate limiting such as [vmgateway](https://docs.victoriametrics.com/vmgateway.html).
## Load balancing
Each `url_prefix` in the [-auth.config](#auth-config) may contain either a single url or a list of urls. In the latter case `vmauth` balances load among the configured urls in a round-robin manner. This feature is useful for balancing the load among multiple `vmselect` and/or `vminsert` nodes in [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html).
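A minimal sketch of such a config (a hypothetical user and placeholder `vmselect-*` hostnames):
```yml
users:
  # Requests from "reader" are balanced between the two vmselect nodes below
  # in a round-robin manner.
- username: "reader"
  password: "***"
  url_prefix:
  - "http://vmselect-1:8481/select/0/prometheus"
  - "http://vmselect-2:8481/select/0/prometheus"
```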
## Auth config ## Auth config
Auth config is represented in the following simple `yml` format: The file pointed to by `-auth.config` has the following simple `yml` format:
```yml ```yml
@ -61,31 +66,47 @@ users:
# The user for querying account 123 in VictoriaMetrics cluster # The user for querying account 123 in VictoriaMetrics cluster
# See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format # See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format
# All the requests to http://vmauth:8427 with the given Basic Auth (username:password) # All the requests to http://vmauth:8427 with the given Basic Auth (username:password)
# will be proxied to http://vmselect:8481/select/123/prometheus . # will be load-balanced among http://vmselect1:8481/select/123/prometheus and http://vmselect2:8481/select/123/prometheus
# For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect:8481/select/123/prometheus/api/v1/select # For example, http://vmauth:8427/api/v1/query is proxied to the following urls in a round-robin manner:
# - http://vmselect1:8481/select/123/prometheus/api/v1/select
# - http://vmselect2:8481/select/123/prometheus/api/v1/select
- username: "cluster-select-account-123" - username: "cluster-select-account-123"
password: "***" password: "***"
url_prefix: "http://vmselect:8481/select/123/prometheus" url_prefix:
- "http://vmselect1:8481/select/123/prometheus"
- "http://vmselect2:8481/select/123/prometheus"
# The user for inserting Prometheus data into VictoriaMetrics cluster under account 42 # The user for inserting Prometheus data into VictoriaMetrics cluster under account 42
# See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format # See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format
# All the requests to http://vmauth:8427 with the given Basic Auth (username:password) # All the requests to http://vmauth:8427 with the given Basic Auth (username:password)
# will be proxied to http://vminsert:8480/insert/42/prometheus . # will be load-balanced between http://vminsert1:8480/insert/42/prometheus and http://vminsert2:8480/insert/42/prometheus
# For example, http://vmauth:8427/api/v1/write is proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write # For example, http://vmauth:8427/api/v1/write is proxied to the following urls in a round-robin manner:
# - http://vminsert1:8480/insert/42/prometheus/api/v1/write
# - http://vminsert2:8480/insert/42/prometheus/api/v1/write
- username: "cluster-insert-account-42" - username: "cluster-insert-account-42"
password: "***" password: "***"
url_prefix: "http://vminsert:8480/insert/42/prometheus" url_prefix:
- "http://vminsert1:8480/insert/42/prometheus"
- "http://vminsert2:8480/insert/42/prometheus"
# A single user for querying and inserting data: # A single user for querying and inserting data:
# - Requests to http://vmauth:8427/api/v1/query, http://vmauth:8427/api/v1/query_range # - Requests to http://vmauth:8427/api/v1/query, http://vmauth:8427/api/v1/query_range
# and http://vmauth:8427/api/v1/label/<label_name>/values are proxied to http://vmselect:8481/select/42/prometheus. # and http://vmauth:8427/api/v1/label/<label_name>/values are proxied to the following urls in a round-robin manner:
# For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect:8480/select/42/prometheus/api/v1/query # - http://vmselect1:8481/select/42/prometheus
# - http://vmselect2:8481/select/42/prometheus
# For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect1:8480/select/42/prometheus/api/v1/query
# or to http://vmselect2:8480/select/42/prometheus/api/v1/query .
# - Requests to http://vmauth:8427/api/v1/write are proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write # - Requests to http://vmauth:8427/api/v1/write are proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write
- username: "foobar" - username: "foobar"
url_map: url_map:
- src_paths: ["/api/v1/query", "/api/v1/query_range", "/api/v1/label/[^/]+/values"] - src_paths:
url_prefix: "http://vmselect:8481/select/42/prometheus" - "/api/v1/query"
- "/api/v1/query_range"
- "/api/v1/label/[^/]+/values"
url_prefix:
- "http://vmselect1:8481/select/42/prometheus"
- "http://vmselect2:8481/select/42/prometheus"
- src_paths: ["/api/v1/write"] - src_paths: ["/api/v1/write"]
url_prefix: "http://vminsert:8480/insert/42/prometheus" url_prefix: "http://vminsert:8480/insert/42/prometheus"
``` ```

View file

@ -8,6 +8,7 @@ import (
"net/url" "net/url"
"os" "os"
"regexp" "regexp"
"strconv"
"strings" "strings"
"sync" "sync"
"sync/atomic" "sync/atomic"
@ -31,11 +32,11 @@ type AuthConfig struct {
// UserInfo is user information read from authConfigPath // UserInfo is user information read from authConfigPath
type UserInfo struct { type UserInfo struct {
BearerToken string `yaml:"bearer_token"` BearerToken string `yaml:"bearer_token"`
Username string `yaml:"username"` Username string `yaml:"username"`
Password string `yaml:"password"` Password string `yaml:"password"`
URLPrefix *yamlURL `yaml:"url_prefix"` URLPrefix *URLPrefix `yaml:"url_prefix"`
URLMap []URLMap `yaml:"url_map"` URLMap []URLMap `yaml:"url_map"`
requests *metrics.Counter requests *metrics.Counter
} }
@ -43,7 +44,7 @@ type UserInfo struct {
// URLMap is a mapping from source paths to target urls. // URLMap is a mapping from source paths to target urls.
type URLMap struct { type URLMap struct {
SrcPaths []*SrcPath `yaml:"src_paths"` SrcPaths []*SrcPath `yaml:"src_paths"`
URLPrefix *yamlURL `yaml:"url_prefix"` URLPrefix *URLPrefix `yaml:"url_prefix"`
} }
// SrcPath represents an src path // SrcPath represents an src path
@ -52,25 +53,74 @@ type SrcPath struct {
re *regexp.Regexp re *regexp.Regexp
} }
type yamlURL struct { // URLPrefix represents parsed `url_prefix`
u *url.URL type URLPrefix struct {
n uint32
urls []*url.URL
} }
func (yu *yamlURL) UnmarshalYAML(f func(interface{}) error) error { func (up *URLPrefix) getNextURL() *url.URL {
var s string n := atomic.AddUint32(&up.n, 1)
if err := f(&s); err != nil { idx := n % uint32(len(up.urls))
return up.urls[idx]
}
// UnmarshalYAML unmarshals up from yaml.
func (up *URLPrefix) UnmarshalYAML(f func(interface{}) error) error {
var v interface{}
if err := f(&v); err != nil {
return err return err
} }
u, err := url.Parse(s) var urls []string
if err != nil { switch x := v.(type) {
return fmt.Errorf("cannot unmarshal %q into url: %w", s, err) case string:
urls = []string{x}
case []interface{}:
if len(x) == 0 {
return fmt.Errorf("`url_prefix` must contain at least a single url")
}
us := make([]string, len(x))
for i, xx := range x {
s, ok := xx.(string)
if !ok {
return fmt.Errorf("`url_prefix` must contain array of strings; got %T", xx)
}
us[i] = s
}
urls = us
default:
return fmt.Errorf("unexpected type for `url_prefix`: %T; want string or []string", v)
} }
yu.u = u pus := make([]*url.URL, len(urls))
for i, u := range urls {
pu, err := url.Parse(u)
if err != nil {
return fmt.Errorf("cannot unmarshal %q into url: %w", u, err)
}
pus[i] = pu
}
up.urls = pus
return nil return nil
} }
func (yu *yamlURL) MarshalYAML() (interface{}, error) { // MarshalYAML marshals up to yaml.
return yu.u.String(), nil func (up *URLPrefix) MarshalYAML() (interface{}, error) {
var b []byte
if len(up.urls) == 1 {
u := up.urls[0].String()
b = strconv.AppendQuote(b, u)
return string(b), nil
}
b = append(b, '[')
for i, pu := range up.urls {
u := pu.String()
b = strconv.AppendQuote(b, u)
if i+1 < len(up.urls) {
b = append(b, ',')
}
}
b = append(b, ']')
return string(b), nil
} }
func (sp *SrcPath) match(s string) bool { func (sp *SrcPath) match(s string) bool {
@ -201,11 +251,9 @@ func parseAuthConfig(data []byte) (map[string]*UserInfo, error) {
return nil, fmt.Errorf("duplicate auth token found for bearer_token=%q, username=%q: %q", authToken, ui.BearerToken, ui.Username) return nil, fmt.Errorf("duplicate auth token found for bearer_token=%q, username=%q: %q", authToken, ui.BearerToken, ui.Username)
} }
if ui.URLPrefix != nil { if ui.URLPrefix != nil {
urlPrefix, err := sanitizeURLPrefix(ui.URLPrefix.u) if err := ui.URLPrefix.sanitize(); err != nil {
if err != nil {
return nil, err return nil, err
} }
ui.URLPrefix.u = urlPrefix
} }
for _, e := range ui.URLMap { for _, e := range ui.URLMap {
if len(e.SrcPaths) == 0 { if len(e.SrcPaths) == 0 {
@ -214,11 +262,9 @@ func parseAuthConfig(data []byte) (map[string]*UserInfo, error) {
if e.URLPrefix == nil { if e.URLPrefix == nil {
return nil, fmt.Errorf("missing `url_prefix` in `url_map`") return nil, fmt.Errorf("missing `url_prefix` in `url_map`")
} }
urlPrefix, err := sanitizeURLPrefix(e.URLPrefix.u) if err := e.URLPrefix.sanitize(); err != nil {
if err != nil {
return nil, err return nil, err
} }
e.URLPrefix.u = urlPrefix
} }
if len(ui.URLMap) == 0 && ui.URLPrefix == nil { if len(ui.URLMap) == 0 && ui.URLPrefix == nil {
return nil, fmt.Errorf("missing `url_prefix`") return nil, fmt.Errorf("missing `url_prefix`")
@ -248,6 +294,17 @@ func getAuthToken(bearerToken, username, password string) string {
return "Basic " + token64 return "Basic " + token64
} }
func (up *URLPrefix) sanitize() error {
for i, pu := range up.urls {
puNew, err := sanitizeURLPrefix(pu)
if err != nil {
return err
}
up.urls[i] = puNew
}
return nil
}
func sanitizeURLPrefix(urlPrefix *url.URL) (*url.URL, error) { func sanitizeURLPrefix(urlPrefix *url.URL) (*url.URL, error) {
// Remove trailing '/' from urlPrefix // Remove trailing '/' from urlPrefix
for strings.HasSuffix(urlPrefix.Path, "/") { for strings.HasSuffix(urlPrefix.Path, "/") {

View file

@ -59,7 +59,21 @@ users:
f(` f(`
users: users:
- username: foo - username: foo
url_prefix: [bar] url_prefix:
bar: baz
`)
f(`
users:
- username: foo
url_prefix:
- [foo]
`)
// empty url_prefix
f(`
users:
- username: foo
url_prefix: []
`) `)
// Username and bearer_token in a single config // Username and bearer_token in a single config
@ -117,6 +131,15 @@ users:
url_prefix: foo.bar url_prefix: foo.bar
`) `)
// empty url_prefix in url_map
f(`
users:
- username: a
url_map:
- src_paths: ['/foo/bar']
url_prefix: []
`)
// Missing src_paths in url_map // Missing src_paths in url_map
f(` f(`
users: users:
@ -162,6 +185,25 @@ users:
}, },
}) })
// Multiple url_prefix entries
f(`
users:
- username: foo
password: bar
url_prefix:
- http://node1:343/bbb
- http://node2:343/bbb
`, map[string]*UserInfo{
getAuthToken("", "foo", "bar"): {
Username: "foo",
Password: "bar",
URLPrefix: mustParseURLs([]string{
"http://node1:343/bbb",
"http://node2:343/bbb",
}),
},
})
// Multiple users // Multiple users
f(` f(`
users: users:
@ -188,7 +230,7 @@ users:
- src_paths: ["/api/v1/query","/api/v1/query_range","/api/v1/label/[^./]+/.+"] - src_paths: ["/api/v1/query","/api/v1/query_range","/api/v1/label/[^./]+/.+"]
url_prefix: http://vmselect/select/0/prometheus url_prefix: http://vmselect/select/0/prometheus
- src_paths: ["/api/v1/write"] - src_paths: ["/api/v1/write"]
url_prefix: http://vminsert/insert/0/prometheus url_prefix: ["http://vminsert1/insert/0/prometheus","http://vminsert2/insert/0/prometheus"]
`, map[string]*UserInfo{ `, map[string]*UserInfo{
getAuthToken("foo", "", ""): { getAuthToken("foo", "", ""): {
BearerToken: "foo", BearerToken: "foo",
@ -198,8 +240,11 @@ users:
URLPrefix: mustParseURL("http://vmselect/select/0/prometheus"), URLPrefix: mustParseURL("http://vmselect/select/0/prometheus"),
}, },
{ {
SrcPaths: getSrcPaths([]string{"/api/v1/write"}), SrcPaths: getSrcPaths([]string{"/api/v1/write"}),
URLPrefix: mustParseURL("http://vminsert/insert/0/prometheus"), URLPrefix: mustParseURLs([]string{
"http://vminsert1/insert/0/prometheus",
"http://vminsert2/insert/0/prometheus",
}),
}, },
}, },
}, },
@ -238,12 +283,20 @@ func areEqualConfigs(a, b map[string]*UserInfo) error {
return nil return nil
} }
func mustParseURL(u string) *yamlURL { func mustParseURL(u string) *URLPrefix {
pu, err := url.Parse(u) return mustParseURLs([]string{u})
if err != nil { }
panic(fmt.Errorf("BUG: cannot parse %q: %w", u, err))
func mustParseURLs(us []string) *URLPrefix {
pus := make([]*url.URL, len(us))
for i, u := range us {
pu, err := url.Parse(u)
if err != nil {
panic(fmt.Errorf("BUG: cannot parse %q: %w", u, err))
}
pus[i] = pu
} }
return &yamlURL{ return &URLPrefix{
u: pu, urls: pus,
} }
} }

View file

@ -26,30 +26,46 @@ users:
# The user for querying account 123 in VictoriaMetrics cluster # The user for querying account 123 in VictoriaMetrics cluster
# See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format # See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format
# All the requests to http://vmauth:8427 with the given Basic Auth (username:password) # All the requests to http://vmauth:8427 with the given Basic Auth (username:password)
# will be proxied to http://vmselect:8481/select/123/prometheus . # will be load-balanced among http://vmselect1:8481/select/123/prometheus and http://vmselect2:8481/select/123/prometheus
# For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect:8481/select/123/prometheus/api/v1/select # For example, http://vmauth:8427/api/v1/query is proxied to the following urls in a round-robin manner:
# - http://vmselect1:8481/select/123/prometheus/api/v1/select
# - http://vmselect2:8481/select/123/prometheus/api/v1/select
- username: "cluster-select-account-123" - username: "cluster-select-account-123"
password: "***" password: "***"
url_prefix: "http://vmselect:8481/select/123/prometheus" url_prefix:
- "http://vmselect1:8481/select/123/prometheus"
- "http://vmselect2:8481/select/123/prometheus"
# The user for inserting Prometheus data into VictoriaMetrics cluster under account 42 # The user for inserting Prometheus data into VictoriaMetrics cluster under account 42
# See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format # See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format
# All the requests to http://vmauth:8427 with the given Basic Auth (username:password) # All the requests to http://vmauth:8427 with the given Basic Auth (username:password)
# will be proxied to http://vminsert:8480/insert/42/prometheus . # will be load-balanced between http://vminsert1:8480/insert/42/prometheus and http://vminsert2:8480/insert/42/prometheus
# For example, http://vmauth:8427/api/v1/write is proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write # For example, http://vmauth:8427/api/v1/write is proxied to the following urls in a round-robin manner:
# - http://vminsert1:8480/insert/42/prometheus/api/v1/write
# - http://vminsert2:8480/insert/42/prometheus/api/v1/write
- username: "cluster-insert-account-42" - username: "cluster-insert-account-42"
password: "***" password: "***"
url_prefix: "http://vminsert:8480/insert/42/prometheus" url_prefix:
- "http://vminsert1:8480/insert/42/prometheus"
- "http://vminsert2:8480/insert/42/prometheus"
# A single user for querying and inserting data: # A single user for querying and inserting data:
# - Requests to http://vmauth:8427/api/v1/query, http://vmauth:8427/api/v1/query_range # - Requests to http://vmauth:8427/api/v1/query, http://vmauth:8427/api/v1/query_range
# and http://vmauth:8427/api/v1/label/<label_name>/values are proxied to http://vmselect:8481/select/42/prometheus. # and http://vmauth:8427/api/v1/label/<label_name>/values are proxied to the following urls in a round-robin manner:
# For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect:8480/select/42/prometheus/api/v1/query # - http://vmselect1:8481/select/42/prometheus
# - http://vmselect2:8481/select/42/prometheus
# For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect1:8480/select/42/prometheus/api/v1/query
# or to http://vmselect2:8480/select/42/prometheus/api/v1/query .
# - Requests to http://vmauth:8427/api/v1/write are proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write # - Requests to http://vmauth:8427/api/v1/write are proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write
- username: "foobar" - username: "foobar"
url_map: url_map:
- src_paths: ["/api/v1/query", "/api/v1/query_range", "/api/v1/label/[^/]+/values"] - src_paths:
url_prefix: "http://vmselect:8481/select/42/prometheus" - "/api/v1/query"
- "/api/v1/query_range"
- "/api/v1/label/[^/]+/values"
url_prefix:
- "http://vmselect1:8481/select/42/prometheus"
- "http://vmselect2:8481/select/42/prometheus"
- src_paths: ["/api/v1/write"] - src_paths: ["/api/v1/write"]
url_prefix: "http://vminsert:8480/insert/42/prometheus" url_prefix: "http://vminsert:8480/insert/42/prometheus"

View file

@ -7,6 +7,11 @@ import (
"strings" "strings"
) )
func (up *URLPrefix) mergeURLs(requestURI *url.URL) *url.URL {
pu := up.getNextURL()
return mergeURLs(pu, requestURI)
}
func mergeURLs(uiURL, requestURI *url.URL) *url.URL { func mergeURLs(uiURL, requestURI *url.URL) *url.URL {
targetURL := *uiURL targetURL := *uiURL
targetURL.Path += requestURI.Path targetURL.Path += requestURI.Path
@ -40,12 +45,12 @@ func createTargetURL(ui *UserInfo, uOrig *url.URL) (*url.URL, error) {
for _, e := range ui.URLMap { for _, e := range ui.URLMap {
for _, sp := range e.SrcPaths { for _, sp := range e.SrcPaths {
if sp.match(u.Path) { if sp.match(u.Path) {
return mergeURLs(e.URLPrefix.u, &u), nil return e.URLPrefix.mergeURLs(&u), nil
} }
} }
} }
if ui.URLPrefix != nil { if ui.URLPrefix != nil {
return mergeURLs(ui.URLPrefix.u, &u), nil return ui.URLPrefix.mergeURLs(&u), nil
} }
return nil, fmt.Errorf("missing route for %q", u.String()) return nil, fmt.Errorf("missing route for %q", u.String())
} }

View file

@ -14,8 +14,12 @@ import (
"github.com/VictoriaMetrics/metrics" "github.com/VictoriaMetrics/metrics"
) )
var relabelConfig = flag.String("relabelConfig", "", "Optional path to a file with relabeling rules, which are applied to all the ingested metrics. "+ var (
"See https://docs.victoriametrics.com/#relabeling for details") relabelConfig = flag.String("relabelConfig", "", "Optional path to a file with relabeling rules, which are applied to all the ingested metrics. "+
"See https://docs.victoriametrics.com/#relabeling for details")
relabelDebug = flag.Bool("relabelDebug", false, "Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, "+
"then the metrics aren't sent to storage. This is useful for debugging the relabeling configs")
)
// Init must be called after flag.Parse and before using the relabel package. // Init must be called after flag.Parse and before using the relabel package.
func Init() { func Init() {
@ -52,7 +56,7 @@ func loadRelabelConfig() (*promrelabel.ParsedConfigs, error) {
if len(*relabelConfig) == 0 { if len(*relabelConfig) == 0 {
return nil, nil return nil, nil
} }
pcs, err := promrelabel.LoadRelabelConfigs(*relabelConfig) pcs, err := promrelabel.LoadRelabelConfigs(*relabelConfig, *relabelDebug)
if err != nil { if err != nil {
return nil, fmt.Errorf("error when reading -relabelConfig=%q: %w", *relabelConfig, err) return nil, fmt.Errorf("error when reading -relabelConfig=%q: %w", *relabelConfig, err)
} }

View file

@ -517,7 +517,7 @@ func (rc *rollupConfig) doInternal(dstValues []float64, tsm *timeseriesMap, valu
if window <= 0 { if window <= 0 {
window = rc.Step window = rc.Step
if rc.CanDropLastSample && rc.LookbackDelta > 0 && window > rc.LookbackDelta { if rc.CanDropLastSample && rc.LookbackDelta > 0 && window > rc.LookbackDelta {
// Implicitly window exceeds -search.maxStalenessInterval, so limit it to -search.maxStalenessInterval // Implicit window exceeds -search.maxStalenessInterval, so limit it to -search.maxStalenessInterval
// according to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/784 // according to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/784
window = rc.LookbackDelta window = rc.LookbackDelta
} }

View file

@ -4,7 +4,7 @@ DOCKER_NAMESPACE := victoriametrics
ROOT_IMAGE ?= alpine:3.13.5 ROOT_IMAGE ?= alpine:3.13.5
CERTS_IMAGE := alpine:3.13.5 CERTS_IMAGE := alpine:3.13.5
GO_BUILDER_IMAGE := golang:1.16.4 GO_BUILDER_IMAGE := golang:1.16.5
BUILDER_IMAGE := local/builder:2.0.0-$(shell echo $(GO_BUILDER_IMAGE) | tr : _) BUILDER_IMAGE := local/builder:2.0.0-$(shell echo $(GO_BUILDER_IMAGE) | tr : _)
BASE_IMAGE := local/base:1.1.3-$(shell echo $(ROOT_IMAGE) | tr : _)-$(shell echo $(CERTS_IMAGE) | tr : _) BASE_IMAGE := local/base:1.1.3-$(shell echo $(ROOT_IMAGE) | tr : _)-$(shell echo $(CERTS_IMAGE) | tr : _)

View file

@ -2,7 +2,7 @@
# The alerts below are just recommendations and may require some updates # The alerts below are just recommendations and may require some updates
# and threshold calibration according to every specific setup. # and threshold calibration according to every specific setup.
groups: groups:
- name: serviceHealth - name: vm-health
# note the `job` filter and update it according to your setup # note the `job` filter and update it according to your setup
rules: rules:
# note the `job` filter and update it according to your setup # note the `job` filter and update it according to your setup
@ -177,6 +177,18 @@ groups:
description: "Exhausting OS file descriptors limit can cause severe degradation of the process. description: "Exhausting OS file descriptors limit can cause severe degradation of the process.
Consider to increase the limit as fast as possible." Consider to increase the limit as fast as possible."
- alert: LabelsLimitExceededOnIngestion
expr: sum(increase(vm_metrics_with_dropped_labels_total[5m])) by (instance) > 0
for: 15m
labels:
severity: warning
annotations:
dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=74&var-instance={{ $labels.instance }}"
summary: "Metrics ingested in ({{ $labels.instance }}) are exceeding labels limit"
description: "VictoriaMetrics limits the number of labels per each metric with `-maxLabelsPerTimeseries` command-line flag.\n
This prevents from ingesting metrics with too many labels. Please verify that `-maxLabelsPerTimeseries` is configured
correctly or that clients which send these metrics aren't misbehaving."
# Alerts group for vmagent assumes that Grafana dashboard # Alerts group for vmagent assumes that Grafana dashboard
# https://grafana.com/grafana/dashboards/12683 is installed. # https://grafana.com/grafana/dashboards/12683 is installed.
# Pls update the `dashboard` annotation according to your setup. # Pls update the `dashboard` annotation according to your setup.

View file

@ -39,7 +39,7 @@ services:
restart: always restart: always
grafana: grafana:
container_name: grafana container_name: grafana
image: grafana/grafana:7.5.2 image: grafana/grafana:8.0.0
depends_on: depends_on:
- "victoriametrics" - "victoriametrics"
ports: ports:

View file

@ -11,6 +11,7 @@ sort: 16
* [Observations on Better Resource Usage with Percona Monitoring and Management v2.12.0](https://www.percona.com/blog/2020/12/23/observations-on-better-resource-usage-with-percona-monitoring-and-management-v2-12-0/) * [Observations on Better Resource Usage with Percona Monitoring and Management v2.12.0](https://www.percona.com/blog/2020/12/23/observations-on-better-resource-usage-with-percona-monitoring-and-management-v2-12-0/)
* [Better Prometheus rate() function with VictoriaMetrics](https://www.percona.com/blog/2020/02/28/better-prometheus-rate-function-with-victoriametrics/) * [Better Prometheus rate() function with VictoriaMetrics](https://www.percona.com/blog/2020/02/28/better-prometheus-rate-function-with-victoriametrics/)
* [Percona monitoring and management migration from Prometheus to VictoriaMetrics FAQ](https://www.percona.com/blog/2020/12/16/percona-monitoring-and-management-migration-from-prometheus-to-victoriametrics-faq/) * [Percona monitoring and management migration from Prometheus to VictoriaMetrics FAQ](https://www.percona.com/blog/2020/12/16/percona-monitoring-and-management-migration-from-prometheus-to-victoriametrics-faq/)
* [Compiling a Percona Monitoring and Management v2 Client in ARM: Raspberry Pi 3 Reprise](https://www.percona.com/blog/2021/05/26/compiling-a-percona-monitoring-and-management-v2-client-in-arm-raspberry-pi-3/)
* [Making peace with Prometheus rate()](https://blog.doit-intl.com/making-peace-with-prometheus-rate-43a3ea75c4cf) * [Making peace with Prometheus rate()](https://blog.doit-intl.com/making-peace-with-prometheus-rate-43a3ea75c4cf)
* [Infrastructure monitoring with Prometheus at Zerodha](https://zerodha.tech/blog/infra-monitoring-at-zerodha/) * [Infrastructure monitoring with Prometheus at Zerodha](https://zerodha.tech/blog/infra-monitoring-at-zerodha/)
* [Sismology: Iguana Solutions Monitoring System](https://medium.com/@IG1.com/sismology-iguana-solutions-monitoring-system-f46e4170447f) * [Sismology: Iguana Solutions Monitoring System](https://medium.com/@IG1.com/sismology-iguana-solutions-monitoring-system-f46e4170447f)
@ -32,7 +33,7 @@ sort: 16
* [Observability, Availability & DORAs Research Program](https://medium.com/alteos-tech-blog/observability-availability-and-doras-research-program-85deb6680e78) * [Observability, Availability & DORAs Research Program](https://medium.com/alteos-tech-blog/observability-availability-and-doras-research-program-85deb6680e78)
* [Tame Kubernetes Costs with Percona Monitoring and Management and Prometheus Operator](https://www.percona.com/blog/2021/02/12/tame-kubernetes-costs-with-percona-monitoring-and-management-and-prometheus-operator/) * [Tame Kubernetes Costs with Percona Monitoring and Management and Prometheus Operator](https://www.percona.com/blog/2021/02/12/tame-kubernetes-costs-with-percona-monitoring-and-management-and-prometheus-operator/)
* [Prometheus VictoriaMetrics On AWS ECS](https://dalefro.medium.com/prometheus-victoria-metrics-on-aws-ecs-62448e266090) * [Prometheus VictoriaMetrics On AWS ECS](https://dalefro.medium.com/prometheus-victoria-metrics-on-aws-ecs-62448e266090)
* [Monitoring with Prometheus, Grafana, AlertManager and VictoriaMetrics](https://www.sensedia.com/post/monitoring-with-prometheus-alertmanager) * [API Monitoring With Prometheus, Grafana, AlertManager and VictoriaMetrics](https://nordicapis.com/api-monitoring-with-prometheus-grafana-alertmanager-and-victoriametrics/)
* [Solving Metrics at scale with VictoriaMetrics](https://www.youtube.com/watch?v=QgLMztnj7-8) * [Solving Metrics at scale with VictoriaMetrics](https://www.youtube.com/watch?v=QgLMztnj7-8)
* [Monitoring Kubernetes clusters with VictoriaMetrics and Grafana](https://blog.cybozu.io/entry/2021/03/18/115743) * [Monitoring Kubernetes clusters with VictoriaMetrics and Grafana](https://blog.cybozu.io/entry/2021/03/18/115743)
* [Multi-tenancy monitoring system for Kubernetes cluster using VictoriaMetrics and operators](https://blog.kintone.io/entry/2021/03/31/175256) * [Multi-tenancy monitoring system for Kubernetes cluster using VictoriaMetrics and operators](https://blog.kintone.io/entry/2021/03/31/175256)

View file

@ -7,6 +7,23 @@ sort: 15
## tip ## tip
## [v1.61.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.61.0)
* FEATURE: vmalert: add support for backfilling (aka replay) of recording and alerting rules. See [these docs](https://docs.victoriametrics.com/vmalert.html#rules-backfilling) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/836).
* FEATURE: vmalert: add a command-line flag `-rule.configCheckInterval` for automatic re-reading of `-rule` files without the need to send SIGHUP signal. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/512).
* FEATURE: vmagent: respect the `sample_limit` and `-promscrape.maxScrapeSize` values when scraping targets in [stream parsing mode](https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode). See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/1331).
* FEATURE: vmauth: add ability to specify multiple `url_prefix` entries for balancing the load among multiple `vmselect` and/or `vminsert` nodes in a cluster. See [these docs](https://docs.victoriametrics.com/vmauth.html#load-balancing).
* FEATURE: vminsert: add `-disableRerouting` command-line flag for forcibly disabling rerouting. This should help resolve [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/791) and [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1054) issues.
* FEATURE: vminsert: reduce the probability of global re-routing storm if all the vmstorage nodes cannot keep up with the given ingestion rate for some time. This should improve cluster stability in such cases. See [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/791) and [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1054) issues.
* FEATURE: allow building VictoriaMetrics components for Solaris / SmartOS. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1322).
* FEATURE: vmagent: add ability to debug relabeling rules. See [these docs](https://docs.victoriametrics.com/vmagent.html#relabeling) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1343).
* BUGFIX: reduce CPU usage by up to 2x when querying a database with a big number of active daily time series. The issue was introduced in `v1.59.0`.
* BUGFIX: vmagent: properly apply auth and tls configs in `eureka_sd_configs`. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/1350).
* BUGFIX: vmauth: do not panic on aborted http requests. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1353).
* BUGFIX: properly generate `target` property for `*Series(foo.*.bar)` responses returned from [Graphite Render API](https://docs.victoriametrics.com/#graphite-render-api-usage). Previously the `target` contained the expanded list of series for `foo.*.bar`, e.g. `sumSeries(foo.a.bar,foo.b.bar,...foo.z.bar)`. Now VictoriaMetrics returns `sumSeries(foo.*.bar)` as a target in the same way as Graphite does.
## [v1.60.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.60.0) ## [v1.60.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.60.0)
* FEATURE: add ability to limit the number of unique time series, which can be added to storage per hour and per day. This can help dealing with high cardinality and high churn rate issues. See [these docs](https://docs.victoriametrics.com/#cardinality-limiter). * FEATURE: add ability to limit the number of unique time series, which can be added to storage per hour and per day. This can help dealing with high cardinality and high churn rate issues. See [these docs](https://docs.victoriametrics.com/#cardinality-limiter).

View file

@ -1,5 +1,5 @@
--- ---
sort: 10 sort: 2
--- ---
# Cluster version # Cluster version
@ -138,7 +138,7 @@ A minimal cluster must contain the following nodes:
It is recommended to run at least two nodes for each service It is recommended to run at least two nodes for each service
for high availability purposes. for high availability purposes.
An http load balancer such as `nginx` must be put in front of `vminsert` and `vmselect` nodes: An http load balancer such as [vmauth](https://docs.victoriametrics.com/vmauth.html) or `nginx` must be put in front of `vminsert` and `vmselect` nodes:
- requests starting with `/insert` must be routed to port `8480` on `vminsert` nodes. - requests starting with `/insert` must be routed to port `8480` on `vminsert` nodes.
- requests starting with `/select` must be routed to port `8481` on `vmselect` nodes. - requests starting with `/select` must be routed to port `8481` on `vmselect` nodes.
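For example, a minimal `nginx` sketch of this routing (node addresses are placeholders):

```
events {}
http {
  upstream vminsert_nodes {
    server vminsert-1:8480;
    server vminsert-2:8480;
  }
  upstream vmselect_nodes {
    server vmselect-1:8481;
    server vmselect-2:8481;
  }
  server {
    listen 80;
    # data ingestion goes to vminsert nodes
    location /insert/ {
      proxy_pass http://vminsert_nodes;
    }
    # queries go to vmselect nodes
    location /select/ {
      proxy_pass http://vmselect_nodes;
    }
  }
}
```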

View file

@ -90,6 +90,17 @@ and [Remote Write Storage Wars](https://promcon.io/2019-munich/talks/remote-writ
VictoriaMetrics also [uses less RAM than Thanos components](https://github.com/thanos-io/thanos/issues/448). VictoriaMetrics also [uses less RAM than Thanos components](https://github.com/thanos-io/thanos/issues/448).
### What is the difference between VictoriaMetrics and [QuestDB](https://questdb.io/)?
- QuestDB needs more than 20x more storage space than VictoriaMetrics. This translates to higher storage costs and slower queries over historical data, which must be read from disk.
- QuestDB is much harder to set up and operate than VictoriaMetrics. Compare [setup instructions for QuestDB](https://questdb.io/docs/get-started/binaries) to [setup instructions for VictoriaMetrics](https://docs.victoriametrics.com/#how-to-start-victoriametrics).
- VictoriaMetrics provides the [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html) query language, which is better suited for typical queries over time series data than the SQL-like query language provided by QuestDB. See [this article](https://valyala.medium.com/promql-tutorial-for-beginners-9ab455142085) for details.
- Thanks to PromQL support, VictoriaMetrics [can be used as a drop-in replacement for Prometheus in Grafana](https://docs.victoriametrics.com/#grafana-setup), while QuestDB needs a full rewrite of existing dashboards in Grafana.
- Thanks to Prometheus remote_write API support, VictoriaMetrics can be used as long-term storage for Prometheus or for [vmagent](https://docs.victoriametrics.com/vmagent.html), while QuestDB has no integration with Prometheus.
- QuestDB [supports a smaller range of popular data ingestion protocols](https://questdb.io/docs/develop/insert-data) compared to VictoriaMetrics (compare to [the list of supported data ingestion protocols for VictoriaMetrics](https://docs.victoriametrics.com/#how-to-import-time-series-data)).
- [VictoriaMetrics supports backfilling (e.g. storing historical data) out of the box](https://docs.victoriametrics.com/#backfilling), while QuestDB provides [very limited support for backfilling](https://questdb.io/blog/2021/05/10/questdb-release-6-0-tsbs-benchmark#the-problem-with-out-of-order-data).
### What is the difference between VictoriaMetrics and [Cortex](https://github.com/cortexproject/cortex)? ### What is the difference between VictoriaMetrics and [Cortex](https://github.com/cortexproject/cortex)?
VictoriaMetrics is similar to Cortex in the following aspects: VictoriaMetrics is similar to Cortex in the following aspects:
@ -142,7 +153,8 @@ The main differences between Cortex and VictoriaMetrics:
### How does VictoriaMetrics compare to [InfluxDB](https://www.influxdata.com/time-series-platform/influxdb/)? ### How does VictoriaMetrics compare to [InfluxDB](https://www.influxdata.com/time-series-platform/influxdb/)?
- VictoriaMetrics requires [10x less RAM](https://medium.com/@valyala/insert-benchmarks-with-inch-influxdb-vs-victoriametrics-e31a41ae2893) and it [works faster](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae). - VictoriaMetrics requires [10x less RAM](https://medium.com/@valyala/insert-benchmarks-with-inch-influxdb-vs-victoriametrics-e31a41ae2893) and it [works faster](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae).
- VictoriaMetrics provides [better query language](https://medium.com/@valyala/promql-tutorial-for-beginners-9ab455142085) than InfluxQL or Flux. - VictoriaMetrics needs less storage space than InfluxDB for production data.
- VictoriaMetrics provides better query language - [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html) - than InfluxQL or Flux. See [this tutorial](https://medium.com/@valyala/promql-tutorial-for-beginners-9ab455142085) for details.
- VictoriaMetrics accepts data in multiple popular data ingestion protocols in addition to InfluxDB - Prometheus remote_write, OpenTSDB, Graphite, CSV, JSON, native binary. - VictoriaMetrics accepts data in multiple popular data ingestion protocols in addition to InfluxDB - Prometheus remote_write, OpenTSDB, Graphite, CSV, JSON, native binary.
See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-import-time-series-data) for details. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-import-time-series-data) for details.
@ -151,6 +163,7 @@ The main differences between Cortex and VictoriaMetrics:
- TimescaleDB insists on using SQL as a query language. While SQL is more powerful than PromQL, this power is rarely required during typical TSDB usage. Real-world queries usually [look clearer and simpler when written in PromQL than in SQL](https://medium.com/@valyala/promql-tutorial-for-beginners-9ab455142085). - TimescaleDB insists on using SQL as a query language. While SQL is more powerful than PromQL, this power is rarely required during typical TSDB usage. Real-world queries usually [look clearer and simpler when written in PromQL than in SQL](https://medium.com/@valyala/promql-tutorial-for-beginners-9ab455142085).
- VictoriaMetrics requires [up to 70x less storage space compared to TimescaleDB](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4) for storing the same amount of time series data. The gap in storage space usage can be lowered from 70x to 3x if [compression in TimescaleDB is properly configured](https://docs.timescale.com/latest/using-timescaledb/compression) (it isn't an easy task in the general case :)). - VictoriaMetrics requires [up to 70x less storage space compared to TimescaleDB](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4) for storing the same amount of time series data. The gap in storage space usage can be lowered from 70x to 3x if [compression in TimescaleDB is properly configured](https://docs.timescale.com/latest/using-timescaledb/compression) (it isn't an easy task in the general case :)).
- TimescaleDB is [harder to setup, configure and operate](https://docs.timescale.com/timescaledb/latest/how-to-guides/install-timescaledb/self-hosted/ubuntu/installation-apt-ubuntu/) than VictoriaMetrics (see [how to run VictoriaMetrics](https://docs.victoriametrics.com/#how-to-start-victoriametrics)).
- VictoriaMetrics accepts data in multiple popular data ingestion protocols - InfluxDB, OpenTSDB, Graphite, CSV, while TimescaleDB supports only SQL inserts. - VictoriaMetrics accepts data in multiple popular data ingestion protocols - InfluxDB, OpenTSDB, Graphite, CSV, while TimescaleDB supports only SQL inserts.

View file

@ -1,18 +0,0 @@
---
sort: 21
---
# Docs
* [Quick start](Quick-Start)
* [`WITH` templates playground](https://play.victoriametrics.com/promql/expand-with-exprs)
* [Grafana playground](http://play-grafana.victoriametrics.com:3000/d/4ome8yJmz/node-exporter-on-victoriametrics-demo)
* [MetricsQL](MetricsQL)
* [Single-node version](Single-server-VictoriaMetrics)
* [FAQ](FAQ)
* [Cluster version](Cluster-VictoriaMetrics)
* [Articles](Articles)
* [Case Studies](CaseStudies)
* [vmbackup](vmbackup)
* [vmrestore](vmrestore)
* [vmagent](vmagent)

View file

@ -13,6 +13,7 @@ If you are unfamiliar with PromQL, then it is suggested reading [this tutorial f
The following functionality is implemented differently in MetricsQL comparing to PromQL in order to improve user experience: The following functionality is implemented differently in MetricsQL comparing to PromQL in order to improve user experience:
* MetricsQL takes into account the previous point before the window in square brackets for range functions such as `rate` and `increase`. * MetricsQL takes into account the previous point before the window in square brackets for range functions such as `rate` and `increase`.
It also doesn't extrapolate range function results. This addresses [this issue from Prometheus](https://github.com/prometheus/prometheus/issues/3746). It also doesn't extrapolate range function results. This addresses [this issue from Prometheus](https://github.com/prometheus/prometheus/issues/3746).
See technical details about VictoriaMetrics and Prometheus calculations for `rate()` and `increase()` [here](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1215#issuecomment-850305711).
* MetricsQL returns the expected non-empty responses for requests with `step` values smaller than scrape interval. This addresses [this issue from Grafana](https://github.com/grafana/grafana/issues/11451). * MetricsQL returns the expected non-empty responses for requests with `step` values smaller than scrape interval. This addresses [this issue from Grafana](https://github.com/grafana/grafana/issues/11451).
* MetricsQL treats `scalar` type the same as `instant vector` without labels, since the subtle difference between these types usually confuses users. * MetricsQL treats `scalar` type the same as `instant vector` without labels, since the subtle difference between these types usually confuses users.
See [the corresponding Prometheus docs](https://prometheus.io/docs/prometheus/latest/querying/basics/#expression-language-data-types) for details. See [the corresponding Prometheus docs](https://prometheus.io/docs/prometheus/latest/querying/basics/#expression-language-data-types) for details.
@ -67,7 +68,7 @@ This functionality can be tried at [an editable Grafana dashboard](http://play-g
- `label_del(q, label1, ... labelN)` for deleting the given labels from `q`. For example, `label_del(foo, "bar")` would delete `bar` label from all the `foo` series. - `label_del(q, label1, ... labelN)` for deleting the given labels from `q`. For example, `label_del(foo, "bar")` would delete `bar` label from all the `foo` series.
- `label_keep(q, label1, ... labelN)` for deleting all the labels except the given labels from `q`. For example, `label_keep(foo, "bar")` would delete all the labels except `bar` from `foo` series. - `label_keep(q, label1, ... labelN)` for deleting all the labels except the given labels from `q`. For example, `label_keep(foo, "bar")` would delete all the labels except `bar` from `foo` series.
- `label_copy(q, src_label1, dst_label1, ... src_labelN, dst_labelN)` for copying label values from `src_*` to `dst_*`. If `src_label` is empty, then `dst_label` is left untouched. For example, `label_copy(foo, "bar", baz")` would transform `foo{bar="x"}` to `foo{bar="x",baz="x"}`. - `label_copy(q, src_label1, dst_label1, ... src_labelN, dst_labelN)` for copying label values from `src_*` to `dst_*`. If `src_label` is empty, then `dst_label` is left untouched. For example, `label_copy(foo, "bar", baz")` would transform `foo{bar="x"}` to `foo{bar="x",baz="x"}`.
- `label_move(q, src_label1, dst_label1, ... src_labelN, dst_labelN)` for moving label values from `src_*` to `dst_*`. If `src_label` is empty, then `dst_label` is left untouched. For example, `label_move(foo, 'bar", "baz")` would transform `foo{bar="x"}` to `foo{baz="x"}`. - `label_move(q, src_label1, dst_label1, ... src_labelN, dst_labelN)` for moving label values from `src_*` to `dst_*`. If `src_label` is empty, then `dst_label` is left untouched. For example, `label_move(foo, "bar", "baz")` would transform `foo{bar="x"}` to `foo{baz="x"}`.
- `label_transform(q, label, regexp, replacement)` for replacing all the `regexp` occurrences with `replacement` in the `label` values from `q`. For example, `label_transform(foo, "bar", "-", "_")` would transform `foo{bar="a-b-c"}` to `foo{bar="a_b_c"}`. - `label_transform(q, label, regexp, replacement)` for replacing all the `regexp` occurrences with `replacement` in the `label` values from `q`. For example, `label_transform(foo, "bar", "-", "_")` would transform `foo{bar="a-b-c"}` to `foo{bar="a_b_c"}`.
- `label_value(q, label)` - returns numeric values for the given `label` from `q`. For example, if `label_value(foo, "bar")` is applied to `foo{bar="1.234"}`, then it will return a time series `foo{bar="1.234"}` with `1.234` value. - `label_value(q, label)` - returns numeric values for the given `label` from `q`. For example, if `label_value(foo, "bar")` is applied to `foo{bar="1.234"}`, then it will return a time series `foo{bar="1.234"}` with `1.234` value.
- `label_match(q, label, regexp)` and `label_mismatch(q, label, regexp)` for filtering time series with labels matching (or not matching) the given regexps. - `label_match(q, label, regexp)` and `label_mismatch(q, label, regexp)` for filtering time series with labels matching (or not matching) the given regexps.
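For example, a hypothetical query which combines two of the functions above - copying `instance` into `host`, then stripping the port from `host` (the metric name is illustrative):

```
label_transform(
  label_copy(node_cpu_seconds_total, "instance", "host"),
  "host", ":[0-9]+", ""
)
```

Here `node_cpu_seconds_total{instance="10.0.0.1:9100"}` would get an additional `host="10.0.0.1"` label.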

View file

@ -463,11 +463,7 @@ The `/api/v1/export` endpoint should return the following response:
Data sent to VictoriaMetrics via `Graphite plaintext protocol` may be read via the following APIs: Data sent to VictoriaMetrics via `Graphite plaintext protocol` may be read via the following APIs:
* [Graphite API](#graphite-api-usage) * [Graphite API](#graphite-api-usage)
* [Prometheus querying API](#prometheus-querying-api-usage). Graphite metric names may special chars such as `-`, which may clash * [Prometheus querying API](#prometheus-querying-api-usage). VictoriaMetrics supports `__graphite__` pseudo-label for selecting time series with Graphite-compatible filters in [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html). For example, `{__graphite__="foo.*.bar"}` is equivalent to `{__name__=~"foo[.][^.]*[.]bar"}`, but it works faster and it is easier to use when migrating from Graphite to VictoriaMetrics.
with [MetricsQL operations](https://docs.victoriametrics.com/MetricsQL.html). Such metrics can be queries via `{__name__="foo-bar.baz"}`.
VictoriaMetrics supports `__graphite__` pseudo-label for selecting time series with Graphite-compatible filters in [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html).
For example, `{__graphite__="foo.*.bar"}` is equivalent to `{__name__=~"foo[.][^.]*[.]bar"}`, but it works faster
and it is easier to use when migrating from Graphite to VictoriaMetrics.
* [go-graphite/carbonapi](https://github.com/go-graphite/carbonapi/blob/main/cmd/carbonapi/carbonapi.example.victoriametrics.yaml) * [go-graphite/carbonapi](https://github.com/go-graphite/carbonapi/blob/main/cmd/carbonapi/carbonapi.example.victoriametrics.yaml)
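The `__graphite__` pseudo-label mentioned above can be tried with a single request to the Prometheus querying API. A minimal sketch, assuming single-node VictoriaMetrics listening on `localhost:8428`:

```
curl http://localhost:8428/api/v1/query -d 'query={__graphite__="foo.*.bar"}'
```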
## How to send data from OpenTSDB-compatible agents ## How to send data from OpenTSDB-compatible agents
@ -1770,6 +1766,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
Whether to suppress scrape errors logging. The last error for each target is always available at '/targets' page even if scrape errors logging is suppressed Whether to suppress scrape errors logging. The last error for each target is always available at '/targets' page even if scrape errors logging is suppressed
-relabelConfig string -relabelConfig string
Optional path to a file with relabeling rules, which are applied to all the ingested metrics. See https://docs.victoriametrics.com/#relabeling for details Optional path to a file with relabeling rules, which are applied to all the ingested metrics. See https://docs.victoriametrics.com/#relabeling for details
-relabelDebug
Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, then the metrics aren't sent to storage. This is useful for debugging the relabeling configs
-retentionPeriod value -retentionPeriod value
Data with timestamps outside the retentionPeriod is automatically deleted Data with timestamps outside the retentionPeriod is automatically deleted
The following optional suffixes are supported: h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 1) The following optional suffixes are supported: h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 1)

View file

@ -1,5 +1,5 @@
--- ---
sort: 2 sort: 3
--- ---
# vmagent # vmagent
@ -223,10 +223,10 @@ and also provides the following actions:
The relabeling can be defined in the following places: The relabeling can be defined in the following places:
* At the `scrape_config -> relabel_configs` section in `-promscrape.config` file. This relabeling is applied to target labels. * At the `scrape_config -> relabel_configs` section in `-promscrape.config` file. This relabeling is applied to target labels. This relabeling can be debugged by passing `relabel_debug: true` option to the corresponding `scrape_config` section. In this case `vmagent` logs target labels before and after the relabeling and then drops the logged target.
* At the `scrape_config -> metric_relabel_configs` section in `-promscrape.config` file. This relabeling is applied to all the scraped metrics in the given `scrape_config`. * At the `scrape_config -> metric_relabel_configs` section in `-promscrape.config` file. This relabeling is applied to all the scraped metrics in the given `scrape_config`. This relabeling can be debugged by passing `metric_relabel_debug: true` option to the corresponding `scrape_config` section. In this case `vmagent` logs metrics before and after the relabeling and then drops the logged metrics.
* At the `-remoteWrite.relabelConfig` file. This relabeling is applied to all the collected metrics before sending them to remote storage. * At the `-remoteWrite.relabelConfig` file. This relabeling is applied to all the collected metrics before sending them to remote storage. This relabeling can be debugged by passing `-remoteWrite.relabelDebug` command-line option to `vmagent`. In this case `vmagent` logs metrics before and after the relabeling and then drops all the logged metrics instead of sending them to remote storage.
* At the `-remoteWrite.urlRelabelConfig` files. This relabeling is applied to metrics before sending them to the corresponding `-remoteWrite.url`. * At the `-remoteWrite.urlRelabelConfig` files. This relabeling is applied to metrics before sending them to the corresponding `-remoteWrite.url`. This relabeling can be debugged by passing `-remoteWrite.urlRelabelDebug` command-line options to `vmagent`. In this case `vmagent` logs metrics before and after the relabeling and then drops all the logged metrics instead of sending them to the corresponding `-remoteWrite.url`.
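For instance, a minimal sketch of the first option above, with a placeholder job name and target:

```
scrape_configs:
- job_name: relabel-debug-demo
  # vmagent logs target labels before and after relabeling, then drops the target
  relabel_debug: true
  static_configs:
  - targets: ["host123:9100"]
  relabel_configs:
  - action: labeldrop
    regex: "foo_.*"
```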
You can read more about relabeling in the following articles: You can read more about relabeling in the following articles:
@ -256,13 +256,13 @@ By default `vmagent` reads the full response from scrape target into memory, the
'match[]': ['{__name__!=""}'] 'match[]': ['{__name__!=""}']
``` ```
Note that `sample_limit` option doesn't work if stream parsing is enabled because the parsed data is pushed to remote storage as soon as it is parsed. Therefore the `sample_limit` option doesn't make sense during stream parsing. Note that the `sample_limit` option doesn't prevent data from being pushed to remote storage when stream parsing is enabled, because the parsed data is pushed to remote storage as soon as it is parsed.
## Scraping big number of targets ## Scraping big number of targets
A single `vmagent` instance can scrape tens of thousands of scrape targets. Sometimes this isn't enough due to limitations on CPU, network, RAM, etc. A single `vmagent` instance can scrape tens of thousands of scrape targets. Sometimes this isn't enough due to limitations on CPU, network, RAM, etc.
In this case scrape targets can be split among multiple `vmagent` instances (aka `vmagent` horizontal scaling and clustering). In this case scrape targets can be split among multiple `vmagent` instances (aka `vmagent` horizontal scaling, sharding and clustering).
Each `vmagent` instance in the cluster must use identical `-promscrape.config` files with distinct `-promscrape.cluster.memberNum` values. Each `vmagent` instance in the cluster must use identical `-promscrape.config` files with distinct `-promscrape.cluster.memberNum` values.
The flag value must be in the range `0 ... N-1`, where `N` is the number of `vmagent` instances in the cluster. The flag value must be in the range `0 ... N-1`, where `N` is the number of `vmagent` instances in the cluster.
The number of `vmagent` instances in the cluster must be passed to `-promscrape.cluster.membersCount` command-line flag. For example, the following commands The number of `vmagent` instances in the cluster must be passed to `-promscrape.cluster.membersCount` command-line flag. For example, the following commands
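A minimal sketch of such a pair of commands, with placeholder paths and URLs:

```
/path/to/vmagent -promscrape.config=/path/to/prometheus.yml -promscrape.cluster.membersCount=2 -promscrape.cluster.memberNum=0 -remoteWrite.url=http://victoria-metrics:8428/api/v1/write
/path/to/vmagent -promscrape.config=/path/to/prometheus.yml -promscrape.cluster.membersCount=2 -promscrape.cluster.memberNum=1 -remoteWrite.url=http://victoria-metrics:8428/api/v1/write
```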
@ -725,6 +725,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
Supports array of values separated by comma or specified via multiple flags. Supports array of values separated by comma or specified via multiple flags.
-remoteWrite.relabelConfig string -remoteWrite.relabelConfig string
Optional path to file with relabel_config entries. These entries are applied to all the metrics before sending them to -remoteWrite.url. See https://docs.victoriametrics.com/vmagent.html#relabeling for details Optional path to file with relabel_config entries. These entries are applied to all the metrics before sending them to -remoteWrite.url. See https://docs.victoriametrics.com/vmagent.html#relabeling for details
-remoteWrite.relabelDebug
Whether to log metrics before and after relabeling with -remoteWrite.relabelConfig. If the -remoteWrite.relabelDebug is enabled, then the metrics aren't sent to remote storage. This is useful for debugging the relabeling configs
-remoteWrite.roundDigits array -remoteWrite.roundDigits array
Round metric values to this number of decimal digits after the point before writing them to remote storage. Examples: -remoteWrite.roundDigits=2 would round 1.236 to 1.24, while -remoteWrite.roundDigits=-1 would round 126.78 to 130. By default digits rounding is disabled. Set it to 100 for disabling it for a particular remote storage. This option may be used for improving data compression for the stored metrics Round metric values to this number of decimal digits after the point before writing them to remote storage. Examples: -remoteWrite.roundDigits=2 would round 1.236 to 1.24, while -remoteWrite.roundDigits=-1 would round 126.78 to 130. By default digits rounding is disabled. Set it to 100 for disabling it for a particular remote storage. This option may be used for improving data compression for the stored metrics
Supports array of values separated by comma or specified via multiple flags. Supports array of values separated by comma or specified via multiple flags.
@ -759,6 +761,9 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
-remoteWrite.urlRelabelConfig array -remoteWrite.urlRelabelConfig array
Optional path to relabel config for the corresponding -remoteWrite.url Optional path to relabel config for the corresponding -remoteWrite.url
Supports an array of values separated by comma or specified via multiple flags. Supports an array of values separated by comma or specified via multiple flags.
-remoteWrite.urlRelabelDebug array
Whether to log metrics before and after relabeling with -remoteWrite.urlRelabelConfig. If the -remoteWrite.urlRelabelDebug is enabled, then the metrics aren't sent to the corresponding -remoteWrite.url. This is useful for debugging the relabeling configs
Supports array of values separated by comma or specified via multiple flags.
-sortLabels -sortLabels
Whether to sort labels for incoming samples before writing them to all the configured remote storage systems. This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}Enabled sorting for labels can slow down ingestion performance a bit Whether to sort labels for incoming samples before writing them to all the configured remote storage systems. This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}Enabled sorting for labels can slow down ingestion performance a bit
-tls -tls

View file

@ -1,5 +1,5 @@
--- ---
sort: 3 sort: 4
--- ---
# vmalert # vmalert
@ -16,7 +16,8 @@ rules against configured address.
support; support;
* Integration with [Alertmanager](https://github.com/prometheus/alertmanager); * Integration with [Alertmanager](https://github.com/prometheus/alertmanager);
* Keeps the alerts [state on restarts](#alerts-state-on-restarts); * Keeps the alerts [state on restarts](#alerts-state-on-restarts);
* Graphite datasource can be used for alerting and recording rules. See [these docs](#graphite) for details. * Graphite datasource can be used for alerting and recording rules. See [these docs](#graphite);
* Recording and Alerting rules backfilling (aka `replay`). See [these docs](#rules-backfilling);
* Lightweight without extra dependencies. * Lightweight without extra dependencies.
## Limitations ## Limitations
@ -231,194 +232,296 @@ implements [Graphite Render API](https://graphite.readthedocs.io/en/stable/rende
When using vmalert with both `graphite` and `prometheus` rules configured against the cluster version of VM, do not forget When using vmalert with both `graphite` and `prometheus` rules configured against the cluster version of VM, do not forget
to set `-datasource.appendTypePrefix` flag to `true`, so vmalert can adjust URL prefix automatically based on query type. to set `-datasource.appendTypePrefix` flag to `true`, so vmalert can adjust URL prefix automatically based on query type.
## Rules backfilling
vmalert supports alerting and recording rules backfilling (aka `replay`). In replay mode vmalert
reads the same rules configuration as usual, evaluates the rules on the given time range and backfills
the results via remote write to the configured storage. vmalert supports any PromQL/MetricsQL compatible
data source for backfilling.
### How it works
In `replay` mode vmalert works as a CLI tool and exits immediately after the work is done.
To run vmalert in `replay` mode:
```
./bin/vmalert -rule=path/to/your.rules \ # path to files with rules you usually use with vmalert
-datasource.url=http://localhost:8428 \ # PromQL/MetricsQL compatible datasource
-remoteWrite.url=http://localhost:8428 \ # remote write compatible storage to persist results
    -replay.timeFrom=2021-05-11T07:21:43Z \ # time to start the replay from
    -replay.timeTo=2021-05-29T18:40:43Z # time to finish the replay at
```
The output of the command will look like the following:
```
Replay mode:
from: 2021-05-11 07:21:43 +0000 UTC # set by -replay.timeFrom
to: 2021-05-29 18:40:43 +0000 UTC # set by -replay.timeTo
max data points per request: 1000 # set by -replay.maxDatapointsPerQuery
Group "ReplayGroup"
interval: 1m0s
requests to make: 27
max range per request: 16h40m0s
> Rule "type:vm_cache_entries:rate5m" (ID: 1792509946081842725)
27 / 27 [----------------------------------------------------------------------------------------------------] 100.00% 78 p/s
> Rule "go_cgo_calls_count:rate5m" (ID: 17958425467471411582)
27 / 27 [-----------------------------------------------------------------------------------------------------] 100.00% ? p/s
Group "vmsingleReplay"
interval: 30s
requests to make: 54
max range per request: 8h20m0s
> Rule "RequestErrorsToAPI" (ID: 17645863024999990222)
54 / 54 [-----------------------------------------------------------------------------------------------------] 100.00% ? p/s
> Rule "TooManyLogs" (ID: 9042195394653477652)
54 / 54 [-----------------------------------------------------------------------------------------------------] 100.00% ? p/s
2021-06-07T09:59:12.098Z info app/vmalert/replay.go:68 replay finished! Imported 511734 samples
```
In `replay` mode all groups are executed sequentially one-by-one. Rules within a group are
executed sequentially as well (the `concurrency` setting is ignored). vmalert sends the rule's expression
to the [/query_range](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries) endpoint
of the configured `-datasource.url`. The returned data is then processed according to the rule type and
backfilled to `-remoteWrite.url` via the [Remote Write protocol](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations).
vmalert respects the `evaluationInterval` value set by flag or per-group during the replay.
#### Recording rules
The result of recording rules `replay` should match the results of normal rules evaluation.
#### Alerting rules
The result of alerting rules `replay` is a time series reflecting the [alert's state](#alerts-state-on-restarts).
To see whether a `replayed` alert fired in the past, use the following PromQL/MetricsQL expression:
```
ALERTS{alertname="your_alertname", alertstate="firing"}
```
Execute the query against the storage which was used for `-remoteWrite.url` during the `replay`.
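For example, assuming the `replay` results were written to single-node VictoriaMetrics listening on `localhost:8428`:

```
curl http://localhost:8428/api/v1/query -d 'query=ALERTS{alertname="your_alertname", alertstate="firing"}'
```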
### Additional configuration
The following optional `replay` flags are supported:

* `-replay.maxDatapointsPerQuery` - the max number of data points expected to be received in one request.
In other words, it affects the max time range for every `/query_range` request. The higher the value,
the fewer requests will be issued during `replay`.
* `-replay.ruleRetryAttempts` - if the datasource fails to respond, vmalert will make this number of retries
per rule before giving up.
* `-replay.rulesDelay` - delay between sequential rules execution. Important when there are chained rules
(rules which depend on each other). It is expected that the remote storage will be able to persist
previously accepted data during the delay, so the data will be available for subsequent queries.
Keep it equal to or bigger than `-remoteWrite.flushInterval`.
See full description for these flags in `./vmalert --help`.
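For illustration, a `replay` run combining these optional flags might look as follows (paths and URLs are placeholders):

```
./bin/vmalert -rule=path/to/your.rules \
    -datasource.url=http://localhost:8428 \
    -remoteWrite.url=http://localhost:8428 \
    -replay.timeFrom=2021-05-11T07:21:43Z \
    -replay.timeTo=2021-05-29T18:40:43Z \
    -replay.maxDatapointsPerQuery=5000 \
    -replay.ruleRetryAttempts=3 \
    -replay.rulesDelay=5s
```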
### Limitations
* Graphite engine isn't supported yet;
* `query` template function is disabled for performance reasons (might be changed in the future);
## Configuration ## Configuration
The shortlist of configuration flags is the following: The shortlist of configuration flags is the following:
``` ```
-datasource.appendTypePrefix -datasource.appendTypePrefix
Whether to add type prefix to -datasource.url based on the query type. Set to true if sending different query types to the vmselect URL. Whether to add type prefix to -datasource.url based on the query type. Set to true if sending different query types to the vmselect URL.
-datasource.basicAuth.password string -datasource.basicAuth.password string
Optional basic auth password for -datasource.url Optional basic auth password for -datasource.url
-datasource.basicAuth.username string -datasource.basicAuth.username string
Optional basic auth username for -datasource.url Optional basic auth username for -datasource.url
-datasource.lookback duration -datasource.lookback duration
Lookback defines how far into the past to look when evaluating queries. For example, if the datasource.lookback=5m then param "time" with value now()-5m will be added to every query. Lookback defines how far into the past to look when evaluating queries. For example, if the datasource.lookback=5m then param "time" with value now()-5m will be added to every query.
-datasource.maxIdleConnections int -datasource.maxIdleConnections int
Defines the number of idle (keep-alive connections) to each configured datasource. Consider setting this value equal to the value: groups_total * group.concurrency. Too low a value may result in a high number of sockets in TIME_WAIT state. (default 100) Defines the number of idle (keep-alive connections) to each configured datasource. Consider setting this value equal to the value: groups_total * group.concurrency. Too low a value may result in a high number of sockets in TIME_WAIT state. (default 100)
-datasource.queryStep duration -datasource.queryStep duration
queryStep defines how far a value can fallback to when evaluating queries. For example, if datasource.queryStep=15s then param "step" with value "15s" will be added to every query.If queryStep isn't specified, rule's evaluationInterval will be used instead. queryStep defines how far a value can fallback to when evaluating queries. For example, if datasource.queryStep=15s then param "step" with value "15s" will be added to every query.If queryStep isn't specified, rule's evaluationInterval will be used instead.
-datasource.roundDigits int -datasource.roundDigits int
Adds "round_digits" GET param to datasource requests. In VM "round_digits" limits the number of digits after the decimal point in response values. Adds "round_digits" GET param to datasource requests. In VM "round_digits" limits the number of digits after the decimal point in response values.
-datasource.tlsCAFile string -datasource.tlsCAFile string
Optional path to TLS CA file to use for verifying connections to -datasource.url. By default, system CA is used Optional path to TLS CA file to use for verifying connections to -datasource.url. By default, system CA is used
-datasource.tlsCertFile string -datasource.tlsCertFile string
Optional path to client-side TLS certificate file to use when connecting to -datasource.url Optional path to client-side TLS certificate file to use when connecting to -datasource.url
-datasource.tlsInsecureSkipVerify -datasource.tlsInsecureSkipVerify
Whether to skip tls verification when connecting to -datasource.url Whether to skip tls verification when connecting to -datasource.url
-datasource.tlsKeyFile string -datasource.tlsKeyFile string
Optional path to client-side TLS certificate key to use when connecting to -datasource.url Optional path to client-side TLS certificate key to use when connecting to -datasource.url
-datasource.tlsServerName string -datasource.tlsServerName string
Optional TLS server name to use for connections to -datasource.url. By default, the server name from -datasource.url is used Optional TLS server name to use for connections to -datasource.url. By default, the server name from -datasource.url is used
-datasource.url string -datasource.url string
VictoriaMetrics or vmselect url. Required parameter. E.g. http://127.0.0.1:8428 VictoriaMetrics or vmselect url. Required parameter. E.g. http://127.0.0.1:8428
-dryRun -rule -dryRun -rule
Whether to check only config files without running vmalert. The rules file are validated. The -rule flag must be specified. Whether to check only config files without running vmalert. The rules file are validated. The -rule flag must be specified.
-enableTCP6 -enableTCP6
Whether to enable IPv6 for listening and dialing. By default only IPv4 TCP and UDP is used Whether to enable IPv6 for listening and dialing. By default only IPv4 TCP and UDP is used
-envflag.enable -envflag.enable
Whether to enable reading flags from environment variables additionally to command line. Command line flag values have priority over values from environment vars. Flags are read only from command line if this flag isn't set Whether to enable reading flags from environment variables additionally to command line. Command line flag values have priority over values from environment vars. Flags are read only from command line if this flag isn't set
-envflag.prefix string -envflag.prefix string
Prefix for environment variables if -envflag.enable is set Prefix for environment variables if -envflag.enable is set
-evaluationInterval duration -evaluationInterval duration
How often to evaluate the rules (default 1m0s) How often to evaluate the rules (default 1m0s)
-external.alert.source string -external.alert.source string
External Alert Source allows to override the Source link for alerts sent to AlertManager for cases where you want to build a custom link to Grafana, Prometheus or any other service. External Alert Source allows to override the Source link for alerts sent to AlertManager for cases where you want to build a custom link to Grafana, Prometheus or any other service.
eg. 'explore?orgId=1&left=[\"now-1h\",\"now\",\"VictoriaMetrics\",{\"expr\": \"{{$expr|quotesEscape|crlfEscape|queryEscape}}\"},{\"mode\":\"Metrics\"},{\"ui\":[true,true,true,\"none\"]}]'.If empty '/api/v1/:groupID/alertID/status' is used eg. 'explore?orgId=1&left=[\"now-1h\",\"now\",\"VictoriaMetrics\",{\"expr\": \"{{$expr|quotesEscape|crlfEscape|queryEscape}}\"},{\"mode\":\"Metrics\"},{\"ui\":[true,true,true,\"none\"]}]'.If empty '/api/v1/:groupID/alertID/status' is used
-external.label array -external.label array
Optional label in the form 'name=value' to add to all generated recording rules and alerts. Pass multiple -label flags in order to add multiple label sets. Optional label in the form 'name=value' to add to all generated recording rules and alerts. Pass multiple -label flags in order to add multiple label sets.
Supports an array of values separated by comma or specified via multiple flags. Supports an array of values separated by comma or specified via multiple flags.
-external.url string -external.url string
External URL is used as alert's source for sent alerts to the notifier External URL is used as alert's source for sent alerts to the notifier
-fs.disableMmap -fs.disableMmap
Whether to use pread() instead of mmap() for reading data files. By default mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread() Whether to use pread() instead of mmap() for reading data files. By default mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread()
-http.connTimeout duration -http.connTimeout duration
Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s) Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s)
-http.disableResponseCompression -http.disableResponseCompression
Disable compression of HTTP responses to save CPU resources. By default compression is enabled to save network bandwidth Disable compression of HTTP responses to save CPU resources. By default compression is enabled to save network bandwidth
-http.idleConnTimeout duration -http.idleConnTimeout duration
Timeout for incoming idle http connections (default 1m0s) Timeout for incoming idle http connections (default 1m0s)
-http.maxGracefulShutdownDuration duration -http.maxGracefulShutdownDuration duration
The maximum duration for a graceful shutdown of the HTTP server. A highly loaded server may require increased value for a graceful shutdown (default 7s) The maximum duration for a graceful shutdown of the HTTP server. A highly loaded server may require increased value for a graceful shutdown (default 7s)
-http.pathPrefix string -http.pathPrefix string
An optional prefix to add to all the paths handled by http server. For example, if '-http.pathPrefix=/foo/bar' is set, then all the http requests will be handled on '/foo/bar/*' paths. This may be useful for proxied requests. See https://www.robustperception.io/using-external-urls-and-proxies-with-prometheus An optional prefix to add to all the paths handled by http server. For example, if '-http.pathPrefix=/foo/bar' is set, then all the http requests will be handled on '/foo/bar/*' paths. This may be useful for proxied requests. See https://www.robustperception.io/using-external-urls-and-proxies-with-prometheus
-http.shutdownDelay duration -http.shutdownDelay duration
Optional delay before http server shutdown. During this delay, the server returns non-OK responses from /health page, so load balancers can route new requests to other servers Optional delay before http server shutdown. During this delay, the server returns non-OK responses from /health page, so load balancers can route new requests to other servers
-httpAuth.password string -httpAuth.password string
Password for HTTP Basic Auth. The authentication is disabled if -httpAuth.username is empty Password for HTTP Basic Auth. The authentication is disabled if -httpAuth.username is empty
-httpAuth.username string -httpAuth.username string
Username for HTTP Basic Auth. The authentication is disabled if empty. See also -httpAuth.password Username for HTTP Basic Auth. The authentication is disabled if empty. See also -httpAuth.password
-httpListenAddr string -httpListenAddr string
Address to listen for http connections (default ":8880") Address to listen for http connections (default ":8880")
-loggerDisableTimestamps -loggerDisableTimestamps
Whether to disable writing timestamps in logs Whether to disable writing timestamps in logs
-loggerErrorsPerSecondLimit int -loggerErrorsPerSecondLimit int
Per-second limit on the number of ERROR messages. If more than the given number of errors are emitted per second, the remaining errors are suppressed. Zero values disable the rate limit Per-second limit on the number of ERROR messages. If more than the given number of errors are emitted per second, the remaining errors are suppressed. Zero values disable the rate limit
-loggerFormat string -loggerFormat string
Format for logs. Possible values: default, json (default "default") Format for logs. Possible values: default, json (default "default")
-loggerLevel string -loggerLevel string
Minimum level of errors to log. Possible values: INFO, WARN, ERROR, FATAL, PANIC (default "INFO") Minimum level of errors to log. Possible values: INFO, WARN, ERROR, FATAL, PANIC (default "INFO")
-loggerOutput string -loggerOutput string
Output for the logs. Supported values: stderr, stdout (default "stderr") Output for the logs. Supported values: stderr, stdout (default "stderr")
-loggerTimezone string -loggerTimezone string
Timezone to use for timestamps in logs. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 or Local (default "UTC") Timezone to use for timestamps in logs. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 or Local (default "UTC")
-loggerWarnsPerSecondLimit int -loggerWarnsPerSecondLimit int
Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit
-memory.allowedBytes size -memory.allowedBytes size
Allowed size of system memory VictoriaMetrics caches may occupy. This option overrides -memory.allowedPercent if set to a non-zero value. Too low a value may increase the cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache resulting in higher disk IO usage Allowed size of system memory VictoriaMetrics caches may occupy. This option overrides -memory.allowedPercent if set to a non-zero value. Too low a value may increase the cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache resulting in higher disk IO usage
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0) Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
-memory.allowedPercent float -memory.allowedPercent float
	Allowed percent of system memory VictoriaMetrics caches may occupy. See also -memory.allowedBytes. Too low a value may increase cache miss rate, usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from the OS page cache, which will result in higher disk IO usage (default 60)
  -metricsAuthKey string
	Auth key for /metrics. It overrides httpAuth settings
  -notifier.basicAuth.password array
	Optional basic auth password for -notifier.url
	Supports an array of values separated by comma or specified via multiple flags.
  -notifier.basicAuth.username array
	Optional basic auth username for -notifier.url
	Supports an array of values separated by comma or specified via multiple flags.
  -notifier.tlsCAFile array
	Optional path to TLS CA file to use for verifying connections to -notifier.url. By default system CA is used
	Supports an array of values separated by comma or specified via multiple flags.
  -notifier.tlsCertFile array
	Optional path to client-side TLS certificate file to use when connecting to -notifier.url
	Supports an array of values separated by comma or specified via multiple flags.
  -notifier.tlsInsecureSkipVerify array
	Whether to skip TLS verification when connecting to -notifier.url
	Supports an array of values separated by comma or specified via multiple flags.
  -notifier.tlsKeyFile array
	Optional path to client-side TLS certificate key to use when connecting to -notifier.url
	Supports an array of values separated by comma or specified via multiple flags.
  -notifier.tlsServerName array
	Optional TLS server name to use for connections to -notifier.url. By default the server name from -notifier.url is used
	Supports an array of values separated by comma or specified via multiple flags.
  -notifier.url array
	Prometheus alertmanager URL. Required parameter. e.g. http://127.0.0.1:9093
	Supports an array of values separated by comma or specified via multiple flags.
  -pprofAuthKey string
	Auth key for /debug/pprof. It overrides httpAuth settings
  -remoteRead.basicAuth.password string
	Optional basic auth password for -remoteRead.url
  -remoteRead.basicAuth.username string
	Optional basic auth username for -remoteRead.url
  -remoteRead.ignoreRestoreErrors
	Whether to ignore errors from remote storage when restoring alerts state on startup. (default true)
  -remoteRead.lookback duration
	Lookback defines how far into the past to look for alerts timeseries. For example, if lookback=1h then the range from now() to now()-1h will be scanned. (default 1h0m0s)
  -remoteRead.tlsCAFile string
	Optional path to TLS CA file to use for verifying connections to -remoteRead.url. By default system CA is used
  -remoteRead.tlsCertFile string
	Optional path to client-side TLS certificate file to use when connecting to -remoteRead.url
  -remoteRead.tlsInsecureSkipVerify
	Whether to skip TLS verification when connecting to -remoteRead.url
  -remoteRead.tlsKeyFile string
	Optional path to client-side TLS certificate key to use when connecting to -remoteRead.url
  -remoteRead.tlsServerName string
	Optional TLS server name to use for connections to -remoteRead.url. By default the server name from -remoteRead.url is used
  -remoteRead.url vmalert
	Optional URL to VictoriaMetrics or vmselect that will be used to restore alerts state. This configuration makes sense only if vmalert was configured with `remoteWrite.url` before and has successfully persisted its state. E.g. http://127.0.0.1:8428
  -remoteWrite.basicAuth.password string
	Optional basic auth password for -remoteWrite.url
  -remoteWrite.basicAuth.username string
	Optional basic auth username for -remoteWrite.url
  -remoteWrite.concurrency int
	Defines the number of writers for concurrent writing into remote querier (default 1)
  -remoteWrite.flushInterval duration
	Defines the interval of flushes to the remote write endpoint (default 5s)
  -remoteWrite.maxBatchSize int
	Defines the max number of timeseries to be flushed at once (default 1000)
  -remoteWrite.maxQueueSize int
	Defines the max number of pending datapoints to the remote write endpoint (default 100000)
  -remoteWrite.tlsCAFile string
	Optional path to TLS CA file to use for verifying connections to -remoteWrite.url. By default system CA is used
  -remoteWrite.tlsCertFile string
	Optional path to client-side TLS certificate file to use when connecting to -remoteWrite.url
  -remoteWrite.tlsInsecureSkipVerify
	Whether to skip TLS verification when connecting to -remoteWrite.url
  -remoteWrite.tlsKeyFile string
	Optional path to client-side TLS certificate key to use when connecting to -remoteWrite.url
  -remoteWrite.tlsServerName string
	Optional TLS server name to use for connections to -remoteWrite.url. By default the server name from -remoteWrite.url is used
  -remoteWrite.url string
	Optional URL to VictoriaMetrics or vminsert where to persist alerts state and recording rules results in form of timeseries. E.g. http://127.0.0.1:8428
  -replay.maxDatapointsPerQuery int
	Max number of data points expected in one request. The higher the value, the fewer requests will be made during replay. (default 1000)
  -replay.ruleRetryAttempts int
	Defines how many retries to make before giving up on a rule if the request for it returns an error. (default 5)
  -replay.rulesDelay duration
	Delay between rules evaluation within the group. Could be important if there are chained rules inside the group and processing needs to wait for previous rule results to be persisted by remote storage before evaluating the next rule. Keep it equal to or bigger than -remoteWrite.flushInterval. (default 1s)
  -replay.timeFrom string
	The time filter in RFC3339 format to select time series with timestamps equal to or higher than the provided value. E.g. '2020-01-01T20:07:00Z'
  -replay.timeTo string
	The time filter in RFC3339 format to select time series with timestamps equal to or lower than the provided value. E.g. '2020-01-01T20:07:00Z'
  -rule array
	Path to the file with alert rules.
	Supports patterns. Flag can be specified multiple times.
	Examples:
	 -rule="/path/to/file". Path to a single file with alerting rules
	 -rule="dir/*.yaml" -rule="/*.yaml". Relative path to all .yaml files in "dir" folder,
	 absolute path to all .yaml files in root.
	Rule files may contain %{ENV_VAR} placeholders, which are substituted by the corresponding env vars.
	Supports an array of values separated by comma or specified via multiple flags.
  -rule.configCheckInterval duration
	Interval for checking for changes in '-rule' files. By default the checking is disabled. Send SIGHUP signal in order to force config check for changes
  -rule.validateExpressions
	Whether to validate rules expressions via MetricsQL engine (default true)
  -rule.validateTemplates
	Whether to validate annotation and label templates (default true)
  -tls
	Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
  -tlsCertFile string
	Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs, as RSA certs are slower
  -tlsKeyFile string
	Path to file with TLS key. Used only if -tls is set
  -version
	Show VictoriaMetrics version
```
Pass `-help` to `vmalert` in order to see the full list of supported
command-line flags with their descriptions.

`vmalert` supports "hot" config reload via the following methods (a small trigger sketch follows the list):
* send SIGHUP signal to the `vmalert` process;
* send a GET request to the `/-/reload` endpoint;
* configure the `-rule.configCheckInterval` flag for periodic reload on config change.
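For instance, a reload can be triggered from a deploy script or a sidecar. Below is a minimal Go sketch of both triggers; the `localhost:8880` address assumes vmalert's default `-httpListenAddr`, and the PID is a placeholder:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"syscall"
)

func main() {
	// Option 1: hit the reload endpoint (assumes vmalert listens on
	// localhost:8880, its default -httpListenAddr).
	if resp, err := http.Get("http://localhost:8880/-/reload"); err == nil {
		resp.Body.Close()
		fmt.Println("reload via HTTP:", resp.Status)
		return
	}
	// Option 2: send SIGHUP to the vmalert process; 12345 is a placeholder PID.
	if p, err := os.FindProcess(12345); err == nil {
		_ = p.Signal(syscall.SIGHUP)
	}
}
```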
## Contributing

@@ -1,12 +1,12 @@
---
sort: 5
---
# vmauth

`vmauth` is a simple auth proxy, router and [load balancer](#load-balancing) for [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics).
It reads auth credentials from the `Authorization` http header ([Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication) and `Bearer token` are supported),
matches them against configs pointed to by the [-auth.config](#auth-config) command-line flag and proxies incoming HTTP requests to the configured per-user `url_prefix` on successful match.
## Quick start

@@ -31,9 +31,14 @@ Feel free [contacting us](mailto:info@victoriametrics.com) if you need customize
accounting and rate limiting such as [vmgateway](https://docs.victoriametrics.com/vmgateway.html).

## Load balancing

Each `url_prefix` in the [-auth.config](#auth-config) may contain either a single url or a list of urls. In the latter case `vmauth` balances load among the configured urls in a round-robin manner, as the sketch below illustrates. This feature is useful for balancing the load among multiple `vmselect` and/or `vminsert` nodes in the [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html).
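The round-robin idea itself fits in a few lines. The following Go sketch is illustrative only (the `rrBalancer` type and its names are not vmauth's actual implementation):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// rrBalancer cycles through a fixed list of upstream urls.
type rrBalancer struct {
	urls []string
	n    uint64 // atomically incremented request counter
}

// next returns the url to use for the next request in round-robin order.
func (b *rrBalancer) next() string {
	i := atomic.AddUint64(&b.n, 1)
	return b.urls[i%uint64(len(b.urls))]
}

func main() {
	b := &rrBalancer{urls: []string{
		"http://vmselect1:8481/select/123/prometheus",
		"http://vmselect2:8481/select/123/prometheus",
	}}
	// Requests alternate between vmselect1 and vmselect2.
	for i := 0; i < 4; i++ {
		fmt.Println(b.next())
	}
}
```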
## Auth config

`-auth.config` is represented in the following simple `yml` format:
```yml
users:
  # The user for querying account 123 in VictoriaMetrics cluster
  # See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format
  # All the requests to http://vmauth:8427 with the given Basic Auth (username:password)
  # will be load-balanced among http://vmselect1:8481/select/123/prometheus and http://vmselect2:8481/select/123/prometheus
  # For example, http://vmauth:8427/api/v1/query is proxied to the following urls in a round-robin manner:
  #   - http://vmselect1:8481/select/123/prometheus/api/v1/select
  #   - http://vmselect2:8481/select/123/prometheus/api/v1/select
- username: "cluster-select-account-123"
  password: "***"
  url_prefix:
  - "http://vmselect1:8481/select/123/prometheus"
  - "http://vmselect2:8481/select/123/prometheus"

  # The user for inserting Prometheus data into VictoriaMetrics cluster under account 42
  # See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format
  # All the requests to http://vmauth:8427 with the given Basic Auth (username:password)
  # will be load-balanced between http://vminsert1:8480/insert/42/prometheus and http://vminsert2:8480/insert/42/prometheus
  # For example, http://vmauth:8427/api/v1/write is proxied to the following urls in a round-robin manner:
  #   - http://vminsert1:8480/insert/42/prometheus/api/v1/write
  #   - http://vminsert2:8480/insert/42/prometheus/api/v1/write
- username: "cluster-insert-account-42"
  password: "***"
  url_prefix:
  - "http://vminsert1:8480/insert/42/prometheus"
  - "http://vminsert2:8480/insert/42/prometheus"

  # A single user for querying and inserting data:
  # - Requests to http://vmauth:8427/api/v1/query, http://vmauth:8427/api/v1/query_range
  #   and http://vmauth:8427/api/v1/label/<label_name>/values are proxied to the following urls in a round-robin manner:
  #   - http://vmselect1:8481/select/42/prometheus
  #   - http://vmselect2:8481/select/42/prometheus
  #   For example, http://vmauth:8427/api/v1/query is proxied to http://vmselect1:8480/select/42/prometheus/api/v1/query
  #   or to http://vmselect2:8480/select/42/prometheus/api/v1/query .
  # - Requests to http://vmauth:8427/api/v1/write are proxied to http://vminsert:8480/insert/42/prometheus/api/v1/write
- username: "foobar"
  url_map:
  - src_paths:
    - "/api/v1/query"
    - "/api/v1/query_range"
    - "/api/v1/label/[^/]+/values"
    url_prefix:
    - "http://vmselect1:8481/select/42/prometheus"
    - "http://vmselect2:8481/select/42/prometheus"
  - src_paths: ["/api/v1/write"]
    url_prefix: "http://vminsert:8480/insert/42/prometheus"
```
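To make the `url_map` semantics concrete, here is an illustrative Go sketch (again, not vmauth's actual code) of matching a request path against `src_paths` regexps and prepending a `url_prefix`:

```go
package main

import (
	"fmt"
	"regexp"
)

// target returns the rewritten url for path, or "" if no src_path matches.
// A real proxy would also rotate among several prefixes; here we take the first.
func target(path string, srcPaths []*regexp.Regexp, prefixes []string) string {
	for _, re := range srcPaths {
		if re.MatchString(path) {
			return prefixes[0] + path
		}
	}
	return ""
}

func main() {
	srcPaths := []*regexp.Regexp{
		regexp.MustCompile(`^/api/v1/query$`),
		regexp.MustCompile(`^/api/v1/label/[^/]+/values$`),
	}
	prefixes := []string{"http://vmselect1:8481/select/42/prometheus"}
	fmt.Println(target("/api/v1/query", srcPaths, prefixes))
	// Output: http://vmselect1:8481/select/42/prometheus/api/v1/query
}
```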
@@ -1,5 +1,5 @@
---
sort: 6
---

# vmbackup

@@ -1,5 +1,5 @@
---
sort: 10
---

## vmbackupmanager

@@ -1,5 +1,5 @@
---
sort: 8
---

# vmctl

@@ -1,5 +1,5 @@
---
sort: 9
---

# vmgateway

@@ -1,5 +1,5 @@
---
sort: 7
---

# vmrestore
go.mod
@@ -1,9 +1,8 @@
module github.com/VictoriaMetrics/VictoriaMetrics

require (
	cloud.google.com/go/storage v1.15.0
	github.com/VictoriaMetrics/fastcache v1.6.0
	// Do not use the original github.com/valyala/fasthttp because of issues
	// like https://github.com/valyala/fasthttp/commit/996610f021ff45fdc98c2ce7884d5fa4e7f9199b

@@ -11,18 +10,20 @@ require (
	github.com/VictoriaMetrics/metrics v1.17.2
	github.com/VictoriaMetrics/metricsql v0.15.0
	github.com/VividCortex/ewma v1.2.0 // indirect
	github.com/aws/aws-sdk-go v1.38.56
	github.com/cespare/xxhash/v2 v2.1.1
	github.com/cheggaaa/pb/v3 v3.0.8
	github.com/cpuguy83/go-md2man/v2 v2.0.0 // indirect
	github.com/fatih/color v1.12.0 // indirect
	github.com/go-kit/kit v0.10.0
	github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
	github.com/golang/snappy v0.0.3
	github.com/influxdata/influxdb v1.9.1
	github.com/klauspost/compress v1.13.0
	github.com/mattn/go-isatty v0.0.13 // indirect
	github.com/mattn/go-runewidth v0.0.13 // indirect
	github.com/oklog/ulid v1.3.1
	github.com/prometheus/common v0.28.0 // indirect
	github.com/prometheus/prometheus v1.8.2-0.20201119142752-3ad25a6dc3d9
	github.com/russross/blackfriday/v2 v2.1.0 // indirect
	github.com/urfave/cli/v2 v2.3.0

@@ -32,12 +33,11 @@ require (
	github.com/valyala/gozstd v1.11.0
	github.com/valyala/histogram v1.1.2
	github.com/valyala/quicktemplate v1.6.3
	golang.org/x/net v0.0.0-20210525063256-abc453219eb5
	golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c
	golang.org/x/sys v0.0.0-20210608053332-aa57babbf139
	google.golang.org/api v0.48.0
	google.golang.org/genproto v0.0.0-20210607140030-00d4fb20b1ae // indirect
	gopkg.in/yaml.v2 v2.4.0
)
go.sum
@@ -20,8 +20,8 @@ cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmW
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
cloud.google.com/go v0.83.0 h1:bAMqZidYkmIsUqe6PtkEPT7Q+vfizScn+jfNA6jwK9c=
cloud.google.com/go v0.83.0/go.mod h1:Z7MJUsANfY0pYPdw0lbnivPx4/vhy/e2FEkSkF7vAVY=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=

@@ -96,8 +96,8 @@ github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdko
github.com/SAP/go-hdb v0.14.1/go.mod h1:7fdQLVC2lER3urZLjZCm0AuMQfApof92n3aylBPEkMo=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/VictoriaMetrics/fastcache v1.6.0 h1:C/3Oi3EiBCqufydp1neRZkqcwmEiuRT9c3fqvvgKm5o=
github.com/VictoriaMetrics/fastcache v1.6.0/go.mod h1:0qHz5QP0GMX4pfmMA/zt5RgfNuXJrTP0zS7DqpHGGTw=
github.com/VictoriaMetrics/fasthttp v1.0.15 h1:UaX6kOxcQRtwMWBCX5avt2d1IzHp8qK8OUpUswz5akQ=
github.com/VictoriaMetrics/fasthttp v1.0.15/go.mod h1:s9o5H4T58Kt4CTrdyJp4RorBKCwY7gRVS3N2JAUJ9jw=
github.com/VictoriaMetrics/metrics v1.12.2/go.mod h1:Z1tSfPfngDn12bTfZSCqArT3OPY3u88J12hSoOhuiRE=

@@ -145,8 +145,8 @@ github.com/aws/aws-sdk-go v1.29.16/go.mod h1:1KvfttTE3SPKMpo8g2c6jL3ZKfXtFvKscTg
github.com/aws/aws-sdk-go v1.30.12/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.38.56 h1:JI5bnuDfjVLgnBaDHeZO5btxGbYCQ5QA3P0maYtwPQw=
github.com/aws/aws-sdk-go v1.38.56/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/benbjohnson/immutable v0.2.1/go.mod h1:uc6OHo6PN2++n98KHLxW8ef4W42ylHiQSENghE1ezxI=
github.com/benbjohnson/tmpl v1.0.0/go.mod h1:igT620JFIi44B6awvU9IsDhR77IXWtFigTLil/RPdps=

@@ -234,8 +234,9 @@ github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLi
github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/fatih/color v1.10.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM=
github.com/fatih/color v1.12.0 h1:mRhaKNwANqRgUBGKmnI5ZxEk7QXmjQeCcuYFMX2bfcc=
github.com/fatih/color v1.12.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM=
github.com/fogleman/gg v1.2.1-0.20190220221249-0403632d5b90/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/foxcpp/go-mockdns v0.0.0-20201212160233-ede2f9158d15/go.mod h1:tPg4cp4nseejPd+UKxtCVQ2hUxNTZ7qQZJa7CLriIeo=

@@ -257,6 +258,7 @@ github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.10.0 h1:dXFJfIHVvUcpSgDOV+Ne6t7jXri8Tfv2uOLHUZ2XNuo=
github.com/go-kit/kit v0.10.0/go.mod h1:xUsJbQ/Fp4kEt7AFgCuvyX4a71u8h9jB8tj/ORgOZ7o=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0 h1:TrB8swr/68K7m9CcGut2g3UOihhbcbiMAYiuTXdEih4=

@@ -430,16 +432,18 @@ github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.2.1 h1:d8MncMlErDFTwQGBK1xhv026j9kqhvw1Qv9IbWT1VLQ=
github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=

@@ -453,7 +457,7 @@ github.com/google/pprof v0.0.0-20201117184057-ae444373da19/go.mod h1:kpwsk12EmLe
github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=

@@ -532,8 +536,8 @@ github.com/influxdata/flux v0.113.0/go.mod h1:3TJtvbm/Kwuo5/PEo5P6HUzwVg4bXWkb2w
github.com/influxdata/httprouter v1.3.1-0.20191122104820-ee83e2772f69/go.mod h1:pwymjR6SrP3gD3pRj9RJwdl1j5s3doEEV8gS4X9qSzA=
github.com/influxdata/influxdb v1.8.0/go.mod h1:SIzcnsjaHRFpmlxpJ4S3NT64qtEKYweNTUMb/vh0OMQ=
github.com/influxdata/influxdb v1.8.3/go.mod h1:JugdFhsvvI8gadxOI6noqNeeBHvWNTbfYGtiAn+2jhI=
github.com/influxdata/influxdb v1.9.1 h1:YdRsjmSF+RbxdSuTVC1GkVHYaLjW2y6ojUD5lZ0omDM=
github.com/influxdata/influxdb v1.9.1/go.mod h1:UEe3MeD9AaP5rlPIes102IhYua3FhIWZuOXNHxDjSrI=
github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=
github.com/influxdata/influxql v1.1.0/go.mod h1:KpVI7okXjK6PRi3Z5B+mtKZli+R1DnZgb3N+tzevNgo=
github.com/influxdata/influxql v1.1.1-0.20200828144457-65d3ef77d385/go.mod h1:gHp9y86a/pxhjJ+zMjNXiQAA197Xk9wLxaz+fGG+kWk=

@@ -562,6 +566,7 @@ github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/u
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1 h1:6QPYqodiu3GuPL+7mfx+NwDdp2eTkp9IfEUpgAwUN0o=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=

@@ -581,8 +586,9 @@ github.com/klauspost/compress v1.4.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0
github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.10.7/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.11.0/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.12.2/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
github.com/klauspost/compress v1.13.0 h1:2T7tUoQrQT+fQWdaY5rjWztFGAFwbGD04iPJg90ZiOs=
github.com/klauspost/compress v1.13.0/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
github.com/klauspost/cpuid v0.0.0-20170728055534-ae7887de9fa5/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/crc32 v0.0.0-20161016154125-cb6bfca970f6/go.mod h1:+ZoRqAPRLkC4NPOvfYeR5KNOrY6TD+/sAC3HXPZgDYg=
github.com/klauspost/pgzip v1.0.2-0.20170402124221-0bf5dcad4ada/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=

@@ -624,12 +630,14 @@ github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNx
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.13 h1:qdl+GuBjcsKKDco5BsxPJlId98mSWNKqYA+Co0SC1yA=
github.com/mattn/go-isatty v0.0.13/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.3/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.12/go.mod h1:RAqKPSqVFrSLVXbA8x7dzmKdmGzieGRCM46jaSJTDAk=
github.com/mattn/go-runewidth v0.0.13 h1:lTGmDsbAYt5DmK6OnoV7EuIF1wEIFAcxld6ypU4OSgU=
github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-sqlite3 v1.11.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/mattn/go-tty v0.0.0-20180907095812-13ff1204f104/go.mod h1:XPvLUNfbS4fJH25nqRHfWLMa1ONC8Amw+mIA639KxkE=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=

@@ -738,8 +746,8 @@ github.com/prometheus/client_golang v1.5.1/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3O
github.com/prometheus/client_golang v1.6.0/go.mod h1:ZLOG9ck3JLRdB5MgO8f+lLTe83AXG6ro35rLTxvnIl4=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.8.0/go.mod h1:O9VU6huf47PktckDQfMTX0Y8tY0/7TSWwj+ITvv0TnM=
github.com/prometheus/client_golang v1.11.0 h1:HNkLOAEQMIDv/K+04rukrLx6ch7msSRwf3/SASFAGtQ=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=

@@ -755,9 +763,9 @@ github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8b
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.14.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/common v0.15.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.28.0 h1:vGVfV9KrDTvWt5boZO0I19g2E3CsWfpPPKZM9dt3mEw=
github.com/prometheus/common v0.28.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=

@@ -1030,8 +1038,8 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5 h1:wjuX4b5yYQnEQHzd+CBcrcC6OVR2J1CN6mUy0oSxIPo=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=

@@ -1044,7 +1052,6 @@ golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210413134643-5e61552d6c78/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c h1:pkQiBZBvdos9qq4wBAHqlzuZHEXo07pqV06ef90u1WI=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=

@@ -1129,17 +1136,19 @@ golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210309074719-68d13333faf2/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210412220455-f1c623a9e750/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210503080704-8803ae5d1324/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210608053332-aa57babbf139 h1:C+AwYEtBp/VQwoLntUmQ/yx3MS9vmZaKNdw5eOpoQe8=
golang.org/x/sys v0.0.0-20210608053332-aa57babbf139/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=

@@ -1231,8 +1240,9 @@ golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.2 h1:kRBLX7v7Af8W7Gdbbc908OJcdgtK8bOz9Uaj8/F1ACA=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

@@ -1267,9 +1277,9 @@ google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjR
google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
google.golang.org/api v0.45.0/go.mod h1:ISLIJCedJolbZvDfAk+Ctuq5hf+aJ33WgtUsfyFoLXA=
google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo=
google.golang.org/api v0.48.0 h1:RDAPWfNFY06dffEXfn7hZF5Fr1ZbnChzfQZAPyBd1+I=
google.golang.org/api v0.48.0/go.mod h1:71Pr1vy+TAZRPkPs/xlCf5SsU8WjuAWv1Pfjbtukyy4=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=

@@ -1326,11 +1336,11 @@ google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
google.golang.org/genproto v0.0.0-20210413151531-c14fb6ef47c3/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
google.golang.org/genproto v0.0.0-20210420162539-3c870d7478d2/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210604141403-392c879c8b08/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210607140030-00d4fb20b1ae h1:2dB4bZ/B7RJdKuvHk3mKTzL2xwrikb+Y/QQy7WdyBPk=
google.golang.org/genproto v0.0.0-20210607140030-00d4fb20b1ae/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=

@@ -1361,6 +1371,7 @@ google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQ
google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.38.0 h1:/9BgsAsa5nWe26HqOlvlgJnqBuktYOLCgjCPqsa56W0=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=

View file

@ -0,0 +1,30 @@
package filestream
import (
"fmt"
"golang.org/x/sys/unix"
)
func (st *streamTracker) adviseDontNeed(n int, fdatasync bool) error {
st.length += uint64(n)
if st.fd == 0 {
return nil
}
if st.length < dontNeedBlockSize {
return nil
}
blockSize := st.length - (st.length % dontNeedBlockSize)
if fdatasync {
if err := unix.Fsync(int(st.fd)); err != nil {
return fmt.Errorf("unix.Fsync error: %w", err)
}
}
st.offset += blockSize
st.length -= blockSize
return nil
}
func (st *streamTracker) close() error {
return nil
}
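For context on the flushing above: `adviseDontNeed` accumulates written bytes and, once at least `dontNeedBlockSize` bytes are pending, fsyncs and advances the offset by a whole number of blocks. A minimal sketch of the alignment arithmetic (the constant's value here is an assumption for illustration, not taken from this diff):

```
package main

import "fmt"

// Assumed block size for illustration only; the real constant lives in lib/filestream.
const dontNeedBlockSize = 16 * 1024 * 1024

// alignedLen rounds length down to a multiple of dontNeedBlockSize;
// the remainder stays pending until the next write.
func alignedLen(length uint64) uint64 {
	return length - (length % dontNeedBlockSize)
}

func main() {
	fmt.Println(alignedLen(40 << 20)) // 33554432: two full 16 MiB blocks
}
```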

View file

@ -0,0 +1,8 @@
package fs
import "os"
func fadviseSequentialRead(f *os.File, prefetch bool) error {
// TODO: implement this properly
return nil
}

lib/fs/fs_solaris.go Normal file
View file

@ -0,0 +1,68 @@
package fs
import (
"fmt"
"os"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"golang.org/x/sys/unix"
)
func mmap(fd int, length int) (data []byte, err error) {
return unix.Mmap(fd, 0, length, unix.PROT_READ, unix.MAP_SHARED)
}
func mUnmap(data []byte) error {
return unix.Munmap(data)
}
func mustSyncPath(path string) {
d, err := os.Open(path)
if err != nil {
logger.Panicf("FATAL: cannot open %q: %s", path, err)
}
if err := d.Sync(); err != nil {
_ = d.Close()
logger.Panicf("FATAL: cannot flush %q to storage: %s", path, err)
}
if err := d.Close(); err != nil {
logger.Panicf("FATAL: cannot close %q: %s", path, err)
}
}
func createFlockFile(flockFile string) (*os.File, error) {
flockF, err := os.Create(flockFile)
if err != nil {
return nil, fmt.Errorf("cannot create lock file %q: %w", flockFile, err)
}
flock := unix.Flock_t{
Type: unix.F_WRLCK,
Start: 0,
Len: 0,
Whence: 0,
}
if err := unix.FcntlFlock(flockF.Fd(), unix.F_SETLK, &flock); err != nil {
return nil, fmt.Errorf("cannot acquire lock on file %q: %w", flockFile, err)
}
return flockF, nil
}
func mustGetFreeSpace(path string) uint64 {
d, err := os.Open(path)
if err != nil {
logger.Panicf("FATAL: cannot determine free disk space on %q: %s", path, err)
}
defer MustClose(d)
fd := d.Fd()
var stat unix.Statvfs_t
if err := unix.Fstatvfs(int(fd), &stat); err != nil {
logger.Panicf("FATAL: cannot determine free disk space on %q: %s", path, err)
}
return freeSpace(stat)
}
func freeSpace(stat unix.Statvfs_t) uint64 {
return uint64(stat.Bavail) * uint64(stat.Bsize)
}
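Two Solaris-specific details above are worth calling out: the lock file uses `fcntl(F_SETLK)` via `unix.FcntlFlock`, since Solaris has no BSD-style `flock`, and free disk space comes from `fstatvfs` as available blocks times block size. A small sketch of the latter arithmetic, with made-up numbers:

```
package main

import "fmt"

func main() {
	// freeSpace(stat) above computes uint64(stat.Bavail) * uint64(stat.Bsize).
	// With illustrative values:
	var bavail uint64 = 1_000_000 // blocks available to unprivileged users
	var bsize uint64 = 4096       // filesystem block size in bytes
	fmt.Println(bavail * bsize)   // 4096000000 bytes, roughly 3.8 GiB
}
```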

View file

@ -216,7 +216,9 @@ func handlerWrapper(s *server, w http.ResponseWriter, r *http.Request, rh Reques
// The following recover() code works around this by explicitly stopping the process after logging the panic. // The following recover() code works around this by explicitly stopping the process after logging the panic.
// See https://github.com/golang/go/issues/16542#issuecomment-246549902 for details. // See https://github.com/golang/go/issues/16542#issuecomment-246549902 for details.
defer func() { defer func() {
if err := recover(); err != nil { // Skip http.ErrAbortHandler, which net/http uses to abort a handler.
// https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1353
if err := recover(); err != nil && err != http.ErrAbortHandler {
buf := make([]byte, 1<<20) buf := make([]byte, 1<<20)
n := runtime.Stack(buf, false) n := runtime.Stack(buf, false)
fmt.Fprintf(os.Stderr, "panic: %v\n\n%s", err, buf[:n]) fmt.Fprintf(os.Stderr, "panic: %v\n\n%s", err, buf[:n])
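`http.ErrAbortHandler` is the sentinel panic value that `net/http` itself uses to abort a handler (for example, `httputil.ReverseProxy` panics with it when the client goes away mid-response), so a recover wrapper should not report it as a crash. A minimal sketch of the pattern, independent of this server:

```
package main

import (
	"log"
	"net/http"
)

func recoverable(h http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			// Let the stdlib sentinel pass; log only genuine bugs.
			if err := recover(); err != nil && err != http.ErrAbortHandler {
				log.Printf("panic in handler: %v", err)
			}
		}()
		h(w, r)
	}
}

func main() {
	http.Handle("/", recoverable(func(w http.ResponseWriter, r *http.Request) {
		panic(http.ErrAbortHandler) // recovered silently above, not logged as a bug
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```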

View file

@ -0,0 +1,20 @@
package memory
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"golang.org/x/sys/unix"
)
const PHYS_PAGES = 0x1f4 // 0x1f4 == 500, the Solaris value for _SC_PHYS_PAGES
func sysTotalMemory() int {
memPageSize := unix.Getpagesize()
// https://man7.org/linux/man-pages/man3/sysconf.3.html
// _SC_PHYS_PAGES
memPagesCnt, err := unix.Sysconf(PHYS_PAGES)
if err != nil {
logger.Panicf("FATAL: error in unix.Sysconf: %s", err)
}
return memPageSize * int(memPagesCnt)
}
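`unix.Sysconf` here is the `golang.org/x/sys/unix` wrapper around `sysconf(3C)`, so total RAM is simply page size times physical page count. With illustrative numbers:

```
package main

import "fmt"

func main() {
	pageSize := 4096     // bytes per page, as returned by Getpagesize()
	physPages := 2097152 // _SC_PHYS_PAGES result; illustrative value
	fmt.Println(pageSize * physPages) // 8589934592 bytes = 8 GiB
}
```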

View file

@ -25,7 +25,8 @@ type RelabelConfig struct {
// ParsedConfigs represents parsed relabel configs. // ParsedConfigs represents parsed relabel configs.
type ParsedConfigs struct { type ParsedConfigs struct {
prcs []*parsedRelabelConfig prcs []*parsedRelabelConfig
relabelDebug bool
} }
// Len returns the number of relabel configs in pcs. // Len returns the number of relabel configs in pcs.
@ -43,19 +44,20 @@ func (pcs *ParsedConfigs) String() string {
} }
var sb strings.Builder var sb strings.Builder
for _, prc := range pcs.prcs { for _, prc := range pcs.prcs {
fmt.Fprintf(&sb, "%s", prc.String()) fmt.Fprintf(&sb, "%s,", prc.String())
} }
fmt.Fprintf(&sb, "relabelDebug=%v", pcs.relabelDebug)
return sb.String() return sb.String()
} }
// LoadRelabelConfigs loads relabel configs from the given path. // LoadRelabelConfigs loads relabel configs from the given path.
func LoadRelabelConfigs(path string) (*ParsedConfigs, error) { func LoadRelabelConfigs(path string, relabelDebug bool) (*ParsedConfigs, error) {
data, err := ioutil.ReadFile(path) data, err := ioutil.ReadFile(path)
if err != nil { if err != nil {
return nil, fmt.Errorf("cannot read `relabel_configs` from %q: %w", path, err) return nil, fmt.Errorf("cannot read `relabel_configs` from %q: %w", path, err)
} }
data = envtemplate.Replace(data) data = envtemplate.Replace(data)
pcs, err := ParseRelabelConfigsData(data) pcs, err := ParseRelabelConfigsData(data, relabelDebug)
if err != nil { if err != nil {
return nil, fmt.Errorf("cannot unmarshal `relabel_configs` from %q: %w", path, err) return nil, fmt.Errorf("cannot unmarshal `relabel_configs` from %q: %w", path, err)
} }
@ -63,16 +65,16 @@ func LoadRelabelConfigs(path string) (*ParsedConfigs, error) {
} }
// ParseRelabelConfigsData parses relabel configs from the given data. // ParseRelabelConfigsData parses relabel configs from the given data.
func ParseRelabelConfigsData(data []byte) (*ParsedConfigs, error) { func ParseRelabelConfigsData(data []byte, relabelDebug bool) (*ParsedConfigs, error) {
var rcs []RelabelConfig var rcs []RelabelConfig
if err := yaml.UnmarshalStrict(data, &rcs); err != nil { if err := yaml.UnmarshalStrict(data, &rcs); err != nil {
return nil, err return nil, err
} }
return ParseRelabelConfigs(rcs) return ParseRelabelConfigs(rcs, relabelDebug)
} }
// ParseRelabelConfigs parses rcs to dst. // ParseRelabelConfigs parses rcs to dst.
func ParseRelabelConfigs(rcs []RelabelConfig) (*ParsedConfigs, error) { func ParseRelabelConfigs(rcs []RelabelConfig, relabelDebug bool) (*ParsedConfigs, error) {
if len(rcs) == 0 { if len(rcs) == 0 {
return nil, nil return nil, nil
} }
@ -85,7 +87,8 @@ func ParseRelabelConfigs(rcs []RelabelConfig) (*ParsedConfigs, error) {
prcs[i] = prc prcs[i] = prc
} }
return &ParsedConfigs{ return &ParsedConfigs{
prcs: prcs, prcs: prcs,
relabelDebug: relabelDebug,
}, nil }, nil
} }
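With the new `relabelDebug` parameter threaded through, callers opt into debug logging at parse time. A hedged usage sketch (the rule content is illustrative; the function signatures match the diff above):

```
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
)

func main() {
	data := []byte(`
- action: drop
  source_labels: [env]
  regex: staging
`)
	// The second argument enables per-series "Relabel In/Out" logging.
	pcs, err := promrelabel.ParseRelabelConfigsData(data, true)
	if err != nil {
		panic(err)
	}
	fmt.Println(pcs.String()) // ends with "relabelDebug=true" per String() above
}
```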

View file

@ -7,7 +7,7 @@ import (
func TestLoadRelabelConfigsSuccess(t *testing.T) { func TestLoadRelabelConfigsSuccess(t *testing.T) {
path := "testdata/relabel_configs_valid.yml" path := "testdata/relabel_configs_valid.yml"
pcs, err := LoadRelabelConfigs(path) pcs, err := LoadRelabelConfigs(path, false)
if err != nil { if err != nil {
t.Fatalf("cannot load relabel configs from %q: %s", path, err) t.Fatalf("cannot load relabel configs from %q: %s", path, err)
} }
@ -19,7 +19,7 @@ func TestLoadRelabelConfigsSuccess(t *testing.T) {
func TestLoadRelabelConfigsFailure(t *testing.T) { func TestLoadRelabelConfigsFailure(t *testing.T) {
f := func(path string) { f := func(path string) {
t.Helper() t.Helper()
rcs, err := LoadRelabelConfigs(path) rcs, err := LoadRelabelConfigs(path, false)
if err == nil { if err == nil {
t.Fatalf("expecting non-nil error") t.Fatalf("expecting non-nil error")
} }
@ -38,7 +38,7 @@ func TestLoadRelabelConfigsFailure(t *testing.T) {
func TestParseRelabelConfigsSuccess(t *testing.T) { func TestParseRelabelConfigsSuccess(t *testing.T) {
f := func(rcs []RelabelConfig, pcsExpected *ParsedConfigs) { f := func(rcs []RelabelConfig, pcsExpected *ParsedConfigs) {
t.Helper() t.Helper()
pcs, err := ParseRelabelConfigs(rcs) pcs, err := ParseRelabelConfigs(rcs, false)
if err != nil { if err != nil {
t.Fatalf("unexected error: %s", err) t.Fatalf("unexected error: %s", err)
} }
@ -72,7 +72,7 @@ func TestParseRelabelConfigsSuccess(t *testing.T) {
func TestParseRelabelConfigsFailure(t *testing.T) { func TestParseRelabelConfigsFailure(t *testing.T) {
f := func(rcs []RelabelConfig) { f := func(rcs []RelabelConfig) {
t.Helper() t.Helper()
pcs, err := ParseRelabelConfigs(rcs) pcs, err := ParseRelabelConfigs(rcs, false)
if err == nil { if err == nil {
t.Fatalf("expecting non-nil error") t.Fatalf("expecting non-nil error")
} }

View file

@ -41,11 +41,20 @@ func (prc *parsedRelabelConfig) String() string {
// //
// The returned labels at labels[labelsOffset:] are sorted. // The returned labels at labels[labelsOffset:] are sorted.
func (pcs *ParsedConfigs) Apply(labels []prompbmarshal.Label, labelsOffset int, isFinalize bool) []prompbmarshal.Label { func (pcs *ParsedConfigs) Apply(labels []prompbmarshal.Label, labelsOffset int, isFinalize bool) []prompbmarshal.Label {
var inStr string
relabelDebug := false
if pcs != nil { if pcs != nil {
relabelDebug = pcs.relabelDebug
if relabelDebug {
inStr = labelsToString(labels[labelsOffset:])
}
for _, prc := range pcs.prcs { for _, prc := range pcs.prcs {
tmp := prc.apply(labels, labelsOffset) tmp := prc.apply(labels, labelsOffset)
if len(tmp) == labelsOffset { if len(tmp) == labelsOffset {
// All the labels have been removed. // All the labels have been removed.
if pcs.relabelDebug {
logger.Infof("\nRelabel In: %s\nRelabel Out: DROPPED - all labels removed", inStr)
}
return tmp return tmp
} }
labels = tmp labels = tmp
@ -56,6 +65,20 @@ func (pcs *ParsedConfigs) Apply(labels []prompbmarshal.Label, labelsOffset int,
labels = FinalizeLabels(labels[:labelsOffset], labels[labelsOffset:]) labels = FinalizeLabels(labels[:labelsOffset], labels[labelsOffset:])
} }
SortLabels(labels[labelsOffset:]) SortLabels(labels[labelsOffset:])
if relabelDebug {
if len(labels) == labelsOffset {
logger.Infof("\nRelabel In: %s\nRelabel Out: DROPPED - all labels removed", inStr)
return labels
}
outStr := labelsToString(labels[labelsOffset:])
if inStr == outStr {
logger.Infof("\nRelabel In: %s\nRelabel Out: KEPT AS IS - no change", inStr)
} else {
logger.Infof("\nRelabel In: %s\nRelabel Out: %s", inStr, outStr)
}
// Drop the labels in debug mode, so debug-only relabeling doesn't forward data further.
labels = labels[:labelsOffset]
}
return labels return labels
} }
@ -412,3 +435,33 @@ func CleanLabels(labels []prompbmarshal.Label) {
label.Value = "" label.Value = ""
} }
} }
func labelsToString(labels []prompbmarshal.Label) string {
labelsCopy := append([]prompbmarshal.Label{}, labels...)
SortLabels(labelsCopy)
mname := ""
for _, label := range labelsCopy {
if label.Name == "__name__" {
mname = label.Value
break
}
}
if mname != "" && len(labelsCopy) <= 1 {
return mname
}
b := []byte(mname)
b = append(b, '{')
for i, label := range labelsCopy {
if label.Name == "__name__" {
continue
}
b = append(b, label.Name...)
b = append(b, '=')
b = strconv.AppendQuote(b, label.Value)
if i+1 < len(labelsCopy) {
b = append(b, ',')
}
}
b = append(b, '}')
return string(b)
}

View file

@ -7,10 +7,57 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal" "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
) )
func TestLabelsToString(t *testing.T) {
f := func(labels []prompbmarshal.Label, sExpected string) {
t.Helper()
s := labelsToString(labels)
if s != sExpected {
t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", s, sExpected)
}
}
f(nil, "{}")
f([]prompbmarshal.Label{
{
Name: "__name__",
Value: "foo",
},
}, "foo")
f([]prompbmarshal.Label{
{
Name: "foo",
Value: "bar",
},
}, `{foo="bar"}`)
f([]prompbmarshal.Label{
{
Name: "foo",
Value: "bar",
},
{
Name: "a",
Value: "bc",
},
}, `{a="bc",foo="bar"}`)
f([]prompbmarshal.Label{
{
Name: "foo",
Value: "bar",
},
{
Name: "__name__",
Value: "xxx",
},
{
Name: "a",
Value: "bc",
},
}, `xxx{a="bc",foo="bar"}`)
}
func TestApplyRelabelConfigs(t *testing.T) { func TestApplyRelabelConfigs(t *testing.T) {
f := func(config string, labels []prompbmarshal.Label, isFinalize bool, resultExpected []prompbmarshal.Label) { f := func(config string, labels []prompbmarshal.Label, isFinalize bool, resultExpected []prompbmarshal.Label) {
t.Helper() t.Helper()
pcs, err := ParseRelabelConfigsData([]byte(config)) pcs, err := ParseRelabelConfigsData([]byte(config), false)
if err != nil { if err != nil {
t.Fatalf("cannot parse %q: %s", config, err) t.Fatalf("cannot parse %q: %s", config, err)
} }

View file

@ -840,7 +840,7 @@ func BenchmarkApplyRelabelConfigs(b *testing.B) {
} }
func mustParseRelabelConfigs(config string) *ParsedConfigs { func mustParseRelabelConfigs(config string) *ParsedConfigs {
pcs, err := ParseRelabelConfigsData([]byte(config)) pcs, err := ParseRelabelConfigsData([]byte(config), false)
if err != nil { if err != nil {
panic(fmt.Errorf("unexpected error: %w", err)) panic(fmt.Errorf("unexpected error: %w", err))
} }

View file

@ -192,8 +192,10 @@ func (c *client) GetStreamReader() (*streamReader, error) {
} }
scrapesOK.Inc() scrapesOK.Inc()
return &streamReader{ return &streamReader{
r: resp.Body, r: resp.Body,
cancel: cancel, cancel: cancel,
scrapeURL: c.scrapeURL,
maxBodySize: int64(c.hc.MaxResponseBodySize),
}, nil }, nil
} }
@ -328,14 +330,20 @@ func doRequestWithPossibleRetry(hc *fasthttp.HostClient, req *fasthttp.Request,
} }
type streamReader struct { type streamReader struct {
r io.ReadCloser r io.ReadCloser
cancel context.CancelFunc cancel context.CancelFunc
bytesRead int64 bytesRead int64
scrapeURL string
maxBodySize int64
} }
func (sr *streamReader) Read(p []byte) (int, error) { func (sr *streamReader) Read(p []byte) (int, error) {
n, err := sr.r.Read(p) n, err := sr.r.Read(p)
sr.bytesRead += int64(n) sr.bytesRead += int64(n)
if err == nil && sr.bytesRead > sr.maxBodySize {
err = fmt.Errorf("the response from %q exceeds -promscrape.maxScrapeSize=%d; "+
"either reduce the response size for the target or increase -promscrape.maxScrapeSize", sr.scrapeURL, sr.maxBodySize)
}
return n, err return n, err
} }
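The `streamReader` change enforces `-promscrape.maxScrapeSize` while the body is still streaming, instead of after buffering it. The same guard in isolation, as a generic `io.Reader` wrapper (names here are illustrative, not the scrape client API):

```
package main

import (
	"fmt"
	"io"
	"strings"
)

type cappedReader struct {
	r       io.Reader
	read    int64 // bytes consumed so far
	maxSize int64
}

func (cr *cappedReader) Read(p []byte) (int, error) {
	n, err := cr.r.Read(p)
	cr.read += int64(n)
	if err == nil && cr.read > cr.maxSize {
		err = fmt.Errorf("response body exceeds %d bytes", cr.maxSize)
	}
	return n, err
}

func main() {
	cr := &cappedReader{r: strings.NewReader("0123456789"), maxSize: 4}
	_, err := io.ReadAll(cr)
	fmt.Println(err) // response body exceeds 4 bytes
}
```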

View file

@ -118,6 +118,8 @@ type ScrapeConfig struct {
GCESDConfigs []gce.SDConfig `yaml:"gce_sd_configs,omitempty"` GCESDConfigs []gce.SDConfig `yaml:"gce_sd_configs,omitempty"`
// These options are supported only by lib/promscrape. // These options are supported only by lib/promscrape.
RelabelDebug bool `yaml:"relabel_debug,omitempty"`
MetricRelabelDebug bool `yaml:"metric_relabel_debug,omitempty"`
DisableCompression bool `yaml:"disable_compression,omitempty"` DisableCompression bool `yaml:"disable_compression,omitempty"`
DisableKeepAlive bool `yaml:"disable_keepalive,omitempty"` DisableKeepAlive bool `yaml:"disable_keepalive,omitempty"`
StreamParse bool `yaml:"stream_parse,omitempty"` StreamParse bool `yaml:"stream_parse,omitempty"`
@ -573,11 +575,11 @@ func getScrapeWorkConfig(sc *ScrapeConfig, baseDir string, globalCfg *GlobalConf
if err != nil { if err != nil {
return nil, fmt.Errorf("cannot parse proxy auth config for `job_name` %q: %w", jobName, err) return nil, fmt.Errorf("cannot parse proxy auth config for `job_name` %q: %w", jobName, err)
} }
relabelConfigs, err := promrelabel.ParseRelabelConfigs(sc.RelabelConfigs) relabelConfigs, err := promrelabel.ParseRelabelConfigs(sc.RelabelConfigs, sc.RelabelDebug)
if err != nil { if err != nil {
return nil, fmt.Errorf("cannot parse `relabel_configs` for `job_name` %q: %w", jobName, err) return nil, fmt.Errorf("cannot parse `relabel_configs` for `job_name` %q: %w", jobName, err)
} }
metricRelabelConfigs, err := promrelabel.ParseRelabelConfigs(sc.MetricRelabelConfigs) metricRelabelConfigs, err := promrelabel.ParseRelabelConfigs(sc.MetricRelabelConfigs, sc.MetricRelabelDebug)
if err != nil { if err != nil {
return nil, fmt.Errorf("cannot parse `metric_relabel_configs` for `job_name` %q: %w", jobName, err) return nil, fmt.Errorf("cannot parse `metric_relabel_configs` for `job_name` %q: %w", jobName, err)
} }

View file

@ -17,7 +17,7 @@ const appsAPIPath = "/apps"
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka // See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka
type SDConfig struct { type SDConfig struct {
Server string `yaml:"server,omitempty"` Server string `yaml:"server,omitempty"`
HTTPClientConfig promauth.HTTPClientConfig `ymal:",inline"` HTTPClientConfig promauth.HTTPClientConfig `yaml:",inline"`
ProxyURL proxy.URL `yaml:"proxy_url,omitempty"` ProxyURL proxy.URL `yaml:"proxy_url,omitempty"`
ProxyClientConfig promauth.ProxyClientConfig `yaml:",inline"` ProxyClientConfig promauth.ProxyClientConfig `yaml:",inline"`
// RefreshInterval time.Duration `yaml:"refresh_interval"` // RefreshInterval time.Duration `yaml:"refresh_interval"`

View file

@ -305,6 +305,8 @@ func (sw *scrapeWork) scrapeInternal(scrapeTimestamp, realTimestamp int64) error
wc.resetNoRows() wc.resetNoRows()
up = 0 up = 0
scrapesSkippedBySampleLimit.Inc() scrapesSkippedBySampleLimit.Inc()
err = fmt.Errorf("the response from %q exceeds sample_limit=%d; "+
"either reduce the sample count for the target or increase sample_limit", sw.Config.ScrapeURL, sw.Config.SampleLimit)
} }
sw.updateSeriesAdded(wc) sw.updateSeriesAdded(wc)
seriesAdded := sw.finalizeSeriesAdded(samplesPostRelabeling) seriesAdded := sw.finalizeSeriesAdded(samplesPostRelabeling)
@ -348,6 +350,12 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
// after returning from the callback - this will result in data race. // after returning from the callback - this will result in data race.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825#issuecomment-723198247 // See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825#issuecomment-723198247
samplesPostRelabeling += len(wc.writeRequest.Timeseries) samplesPostRelabeling += len(wc.writeRequest.Timeseries)
if sw.Config.SampleLimit > 0 && samplesPostRelabeling > sw.Config.SampleLimit {
wc.resetNoRows()
scrapesSkippedBySampleLimit.Inc()
return fmt.Errorf("the response from %q exceeds sample_limit=%d; "+
"either reduce the sample count for the target or increase sample_limit", sw.Config.ScrapeURL, sw.Config.SampleLimit)
}
sw.updateSeriesAdded(wc) sw.updateSeriesAdded(wc)
startTime := time.Now() startTime := time.Now()
sw.PushData(&wc.writeRequest) sw.PushData(&wc.writeRequest)
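Stream-parsing mode previously bypassed `sample_limit`; the new check applies the limit to the running post-relabeling total across callback invocations. The guard itself, as a minimal sketch with illustrative names:

```
package main

import "fmt"

// checkSampleLimit mirrors the guard above: a zero limit means "unlimited".
func checkSampleLimit(samplesPostRelabeling, sampleLimit int) error {
	if sampleLimit > 0 && samplesPostRelabeling > sampleLimit {
		return fmt.Errorf("scrape exceeds sample_limit=%d; got %d samples after relabeling",
			sampleLimit, samplesPostRelabeling)
	}
	return nil
}

func main() {
	fmt.Println(checkSampleLimit(1500, 1000)) // scrape exceeds sample_limit=1000 ...
	fmt.Println(checkSampleLimit(1500, 0))    // <nil>
}
```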

View file

@ -115,7 +115,9 @@ func TestScrapeWorkScrapeInternalSuccess(t *testing.T) {
timestamp := int64(123000) timestamp := int64(123000)
if err := sw.scrapeInternal(timestamp, timestamp); err != nil { if err := sw.scrapeInternal(timestamp, timestamp); err != nil {
t.Fatalf("unexpected error: %s", err) if !strings.Contains(err.Error(), "sample_limit") {
t.Fatalf("unexpected error: %s", err)
}
} }
if pushDataErr != nil { if pushDataErr != nil {
t.Fatalf("unexpected error: %s", pushDataErr) t.Fatalf("unexpected error: %s", pushDataErr)
@ -433,7 +435,7 @@ func timeseriesToString(ts *prompbmarshal.TimeSeries) string {
} }
func mustParseRelabelConfigs(config string) *promrelabel.ParsedConfigs { func mustParseRelabelConfigs(config string) *promrelabel.ParsedConfigs {
pcs, err := promrelabel.ParseRelabelConfigsData([]byte(config)) pcs, err := promrelabel.ParseRelabelConfigsData([]byte(config), false)
if err != nil { if err != nil {
panic(fmt.Errorf("cannot parse %q: %w", config, err)) panic(fmt.Errorf("cannot parse %q: %w", config, err))
} }

View file

@ -2554,10 +2554,10 @@ func (is *indexSearch) updateMetricIDsForOrSuffixesNoFilter(tf *tagFilter, metri
kb.B = append(kb.B, orSuffix...) kb.B = append(kb.B, orSuffix...)
kb.B = append(kb.B, tagSeparatorChar) kb.B = append(kb.B, tagSeparatorChar)
lc, err := is.updateMetricIDsForOrSuffixNoFilter(kb.B, metricIDs, maxMetrics, maxLoopsCount-loopsCount) lc, err := is.updateMetricIDsForOrSuffixNoFilter(kb.B, metricIDs, maxMetrics, maxLoopsCount-loopsCount)
loopsCount += lc
if err != nil { if err != nil {
return loopsCount, err return loopsCount, err
} }
loopsCount += lc
if metricIDs.Len() >= maxMetrics { if metricIDs.Len() >= maxMetrics {
return loopsCount, nil return loopsCount, nil
} }
@ -2575,10 +2575,10 @@ func (is *indexSearch) updateMetricIDsForOrSuffixesWithFilter(tf *tagFilter, met
kb.B = append(kb.B, orSuffix...) kb.B = append(kb.B, orSuffix...)
kb.B = append(kb.B, tagSeparatorChar) kb.B = append(kb.B, tagSeparatorChar)
lc, err := is.updateMetricIDsForOrSuffixWithFilter(kb.B, metricIDs, sortedFilter, tf.isNegative, maxLoopsCount-loopsCount) lc, err := is.updateMetricIDsForOrSuffixWithFilter(kb.B, metricIDs, sortedFilter, tf.isNegative, maxLoopsCount-loopsCount)
loopsCount += lc
if err != nil { if err != nil {
return loopsCount, err return loopsCount, err
} }
loopsCount += lc
} }
return loopsCount, nil return loopsCount, nil
} }
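The two hunks above are the same one-line fix: `loopsCount += lc` now runs before the error check, so loops consumed by a call that ultimately fails are still charged against the caller's budget. The pattern in isolation (a hypothetical sketch, not the indexdb API):

```
package main

import (
	"errors"
	"fmt"
)

var errBudget = errors.New("loops budget exceeded")

// doWork spends up to maxLoops loops and reports how many it actually used.
func doWork(want, maxLoops int64) (int64, error) {
	if want > maxLoops {
		return maxLoops, errBudget // partial work still counts
	}
	return want, nil
}

func scan(maxLoopsCount int64) (int64, error) {
	var loopsCount int64
	for _, want := range []int64{30, 40} {
		lc, err := doWork(want, maxLoopsCount-loopsCount)
		loopsCount += lc // charge the loops before checking err (the fix)
		if err != nil {
			return loopsCount, err
		}
	}
	return loopsCount, nil
}

func main() {
	fmt.Println(scan(50)) // 50 loops budget exceeded: 30 used, then 20 of 40 before failing
}
```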

View file

@ -1898,24 +1898,28 @@ type dateMetricIDCache struct {
byDate atomic.Value byDate atomic.Value
// Contains mutable map protected by mu // Contains mutable map protected by mu
byDateMutable *byDateMetricIDMap byDateMutable *byDateMetricIDMap
lastSyncTime uint64 nextSyncDeadline uint64
mu sync.Mutex mu sync.Mutex
} }
func newDateMetricIDCache() *dateMetricIDCache { func newDateMetricIDCache() *dateMetricIDCache {
var dmc dateMetricIDCache var dmc dateMetricIDCache
dmc.Reset() dmc.resetLocked()
return &dmc return &dmc
} }
func (dmc *dateMetricIDCache) Reset() { func (dmc *dateMetricIDCache) Reset() {
dmc.mu.Lock() dmc.mu.Lock()
dmc.resetLocked()
dmc.mu.Unlock()
}
func (dmc *dateMetricIDCache) resetLocked() {
// Do not reset syncsCount and resetsCount // Do not reset syncsCount and resetsCount
dmc.byDate.Store(newByDateMetricIDMap()) dmc.byDate.Store(newByDateMetricIDMap())
dmc.byDateMutable = newByDateMetricIDMap() dmc.byDateMutable = newByDateMetricIDMap()
dmc.lastSyncTime = fasttime.UnixTimestamp() dmc.nextSyncDeadline = 10 + fasttime.UnixTimestamp()
dmc.mu.Unlock()
atomic.AddUint64(&dmc.resetsCount, 1) atomic.AddUint64(&dmc.resetsCount, 1)
} }
@ -1948,20 +1952,12 @@ func (dmc *dateMetricIDCache) Has(date, metricID uint64) bool {
} }
// Slow path. Check mutable map. // Slow path. Check mutable map.
currentTime := fasttime.UnixTimestamp()
dmc.mu.Lock() dmc.mu.Lock()
v = dmc.byDateMutable.get(date) v = dmc.byDateMutable.get(date)
ok := v.Has(metricID) ok := v.Has(metricID)
mustSync := false dmc.syncLockedIfNeeded()
if currentTime-dmc.lastSyncTime > 10 {
mustSync = true
dmc.lastSyncTime = currentTime
}
dmc.mu.Unlock() dmc.mu.Unlock()
if mustSync {
dmc.sync()
}
return ok return ok
} }
@ -2000,21 +1996,47 @@ func (dmc *dateMetricIDCache) Set(date, metricID uint64) {
dmc.mu.Unlock() dmc.mu.Unlock()
} }
func (dmc *dateMetricIDCache) sync() { func (dmc *dateMetricIDCache) syncLockedIfNeeded() {
dmc.mu.Lock() currentTime := fasttime.UnixTimestamp()
if currentTime >= dmc.nextSyncDeadline {
dmc.nextSyncDeadline = currentTime + 10
dmc.syncLocked()
}
}
func (dmc *dateMetricIDCache) syncLocked() {
if len(dmc.byDateMutable.m) == 0 {
// Nothing to sync.
return
}
byDate := dmc.byDate.Load().(*byDateMetricIDMap) byDate := dmc.byDate.Load().(*byDateMetricIDMap)
for date, e := range dmc.byDateMutable.m { byDateMutable := dmc.byDateMutable
for date, e := range byDateMutable.m {
v := byDate.get(date) v := byDate.get(date)
e.v.Union(v) if v == nil {
continue
}
v = v.Clone()
v.Union(&e.v)
byDateMutable.m[date] = &byDateMetricIDEntry{
date: date,
v: *v,
}
}
for date, e := range byDate.m {
v := byDateMutable.get(date)
if v != nil {
continue
}
byDateMutable.m[date] = e
} }
dmc.byDate.Store(dmc.byDateMutable) dmc.byDate.Store(dmc.byDateMutable)
dmc.byDateMutable = newByDateMetricIDMap() dmc.byDateMutable = newByDateMetricIDMap()
dmc.mu.Unlock()
atomic.AddUint64(&dmc.syncsCount, 1) atomic.AddUint64(&dmc.syncsCount, 1)
if dmc.EntriesCount() > memory.Allowed()/128 { if dmc.EntriesCount() > memory.Allowed()/128 {
dmc.Reset() dmc.resetLocked()
} }
} }
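The reworked cache keeps a lock-free immutable map in an `atomic.Value` plus a mutex-protected mutable map, and `syncLocked` now merges by cloning per-date sets instead of mutating entries that concurrent readers may still hold (that is what the `v.Clone()` / `Union` dance does), then swaps the merged map in. A compact sketch of the same copy-on-write idea with plain maps and illustrative types:

```
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

type cowCache struct {
	immutable atomic.Value // holds map[uint64]struct{}, read lock-free
	mu        sync.Mutex
	mutable   map[uint64]struct{}
	deadline  time.Time // next sync deadline, as in syncLockedIfNeeded
}

func newCowCache() *cowCache {
	c := &cowCache{mutable: map[uint64]struct{}{}}
	c.immutable.Store(map[uint64]struct{}{})
	return c
}

func (c *cowCache) Set(k uint64) {
	c.mu.Lock()
	c.mutable[k] = struct{}{}
	c.mu.Unlock()
}

func (c *cowCache) Has(k uint64) bool {
	if _, ok := c.immutable.Load().(map[uint64]struct{})[k]; ok {
		return true // fast path: no locking
	}
	c.mu.Lock()
	_, ok := c.mutable[k]
	if time.Now().After(c.deadline) {
		c.deadline = time.Now().Add(10 * time.Second)
		c.syncLocked()
	}
	c.mu.Unlock()
	return ok
}

// syncLocked merges into a fresh map so readers of the old immutable
// map are never mutated under them.
func (c *cowCache) syncLocked() {
	old := c.immutable.Load().(map[uint64]struct{})
	merged := make(map[uint64]struct{}, len(old)+len(c.mutable))
	for k := range old {
		merged[k] = struct{}{}
	}
	for k := range c.mutable {
		merged[k] = struct{}{}
	}
	c.immutable.Store(merged)
	c.mutable = map[uint64]struct{}{}
}

func main() {
	c := newCowCache()
	c.Set(42)
	fmt.Println(c.Has(42)) // true, via the mutable map until the next sync
}
```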

View file

@ -89,7 +89,9 @@ func testDateMetricIDCache(c *dateMetricIDCache, concurrent bool) error {
return fmt.Errorf("c.Has(%d, %d) must return true, but returned false", date, metricID) return fmt.Errorf("c.Has(%d, %d) must return true, but returned false", date, metricID)
} }
if i%11234 == 0 { if i%11234 == 0 {
c.sync() c.mu.Lock()
c.syncLocked()
c.mu.Unlock()
} }
if i%34323 == 0 { if i%34323 == 0 {
c.Reset() c.Reset()
@ -103,7 +105,9 @@ func testDateMetricIDCache(c *dateMetricIDCache, concurrent bool) error {
metricID := uint64(i) % 123 metricID := uint64(i) % 123
c.Set(date, metricID) c.Set(date, metricID)
} }
c.sync() c.mu.Lock()
c.syncLocked()
c.mu.Unlock()
for i := 0; i < 1e5; i++ { for i := 0; i < 1e5; i++ {
date := uint64(i) % 3 date := uint64(i) % 3
metricID := uint64(i) % 123 metricID := uint64(i) % 123

View file

@ -79,9 +79,7 @@ func (s *Set) SizeBytes() uint64 {
} }
n := uint64(unsafe.Sizeof(*s)) n := uint64(unsafe.Sizeof(*s))
for i := range s.buckets { for i := range s.buckets {
b32 := &s.buckets[i] n += s.buckets[i].sizeBytes()
n += uint64(unsafe.Sizeof(b32))
n += b32.sizeBytes()
} }
return n return n
} }
@ -411,7 +409,7 @@ type bucket32 struct {
b16his []uint16 b16his []uint16
// buckets are sorted by b16his // buckets are sorted by b16his
buckets []bucket16 buckets []*bucket16
} }
func (b *bucket32) getLen() int { func (b *bucket32) getLen() int {
@ -434,7 +432,7 @@ func (b *bucket32) union(a *bucket32, mayOwn bool) {
for j < len(a.b16his) { for j < len(a.b16his) {
b16 := b.addBucket16(a.b16his[j]) b16 := b.addBucket16(a.b16his[j])
if mayOwn { if mayOwn {
*b16 = a.buckets[j] *b16 = *a.buckets[j]
} else { } else {
a.buckets[j].copyTo(b16) a.buckets[j].copyTo(b16)
} }
@ -445,7 +443,7 @@ func (b *bucket32) union(a *bucket32, mayOwn bool) {
for j < len(a.b16his) && a.b16his[j] < b.b16his[i] { for j < len(a.b16his) && a.b16his[j] < b.b16his[i] {
b16 := b.addBucket16(a.b16his[j]) b16 := b.addBucket16(a.b16his[j])
if mayOwn { if mayOwn {
*b16 = a.buckets[j] *b16 = *a.buckets[j]
} else { } else {
a.buckets[j].copyTo(b16) a.buckets[j].copyTo(b16)
} }
@ -455,7 +453,7 @@ func (b *bucket32) union(a *bucket32, mayOwn bool) {
break break
} }
if b.b16his[i] == a.b16his[j] { if b.b16his[i] == a.b16his[j] {
b.buckets[i].union(&a.buckets[j]) b.buckets[i].union(a.buckets[j])
i++ i++
j++ j++
} }
@ -481,7 +479,7 @@ func (b *bucket32) intersect(a *bucket32) {
j := 0 j := 0
for { for {
for i < len(b.b16his) && j < len(a.b16his) && b.b16his[i] < a.b16his[j] { for i < len(b.b16his) && j < len(a.b16his) && b.b16his[i] < a.b16his[j] {
b.buckets[i] = bucket16{} *b.buckets[i] = bucket16{}
i++ i++
} }
if i >= len(b.b16his) { if i >= len(b.b16his) {
@ -492,13 +490,13 @@ func (b *bucket32) intersect(a *bucket32) {
} }
if j >= len(a.b16his) { if j >= len(a.b16his) {
for i < len(b.b16his) { for i < len(b.b16his) {
b.buckets[i] = bucket16{} *b.buckets[i] = bucket16{}
i++ i++
} }
break break
} }
if b.b16his[i] == a.b16his[j] { if b.b16his[i] == a.b16his[j] {
b.buckets[i].intersect(&a.buckets[j]) b.buckets[i].intersect(a.buckets[j])
i++ i++
j++ j++
} }
@ -506,16 +504,15 @@ func (b *bucket32) intersect(a *bucket32) {
// Remove zero buckets // Remove zero buckets
b16his := b.b16his[:0] b16his := b.b16his[:0]
bs := b.buckets[:0] bs := b.buckets[:0]
for i := range b.buckets { for i, b16 := range b.buckets {
b32 := &b.buckets[i] if b16.isZero() {
if b32.isZero() {
continue continue
} }
b16his = append(b16his, b.b16his[i]) b16his = append(b16his, b.b16his[i])
bs = append(bs, *b32) bs = append(bs, b16)
} }
for i := len(bs); i < len(b.buckets); i++ { for i := len(bs); i < len(b.buckets); i++ {
b.buckets[i] = bucket16{} b.buckets[i] = nil
} }
b.hint = 0 b.hint = 0
b.b16his = b16his b.b16his = b16his
@ -525,9 +522,9 @@ func (b *bucket32) intersect(a *bucket32) {
func (b *bucket32) forEach(f func(part []uint64) bool) bool { func (b *bucket32) forEach(f func(part []uint64) bool) bool {
xbuf := partBufPool.Get().(*[]uint64) xbuf := partBufPool.Get().(*[]uint64)
buf := *xbuf buf := *xbuf
for i := range b.buckets { for i, b16 := range b.buckets {
hi16 := b.b16his[i] hi16 := b.b16his[i]
buf = b.buckets[i].appendTo(buf[:0], b.hi, hi16) buf = b16.appendTo(buf[:0], b.hi, hi16)
if !f(buf) { if !f(buf) {
return false return false
} }
@ -547,9 +544,7 @@ var partBufPool = &sync.Pool{
func (b *bucket32) sizeBytes() uint64 { func (b *bucket32) sizeBytes() uint64 {
n := uint64(unsafe.Sizeof(*b)) n := uint64(unsafe.Sizeof(*b))
n += 2 * uint64(len(b.b16his)) n += 2 * uint64(len(b.b16his))
for i := range b.buckets { for _, b16 := range b.buckets {
b16 := &b.buckets[i]
n += uint64(unsafe.Sizeof(b16))
n += b16.sizeBytes() n += b16.sizeBytes()
} }
return n return n
@ -561,9 +556,11 @@ func (b *bucket32) copyTo(dst *bucket32) {
// Do not reuse dst.buckets, since it may be used in other places. // Do not reuse dst.buckets, since it may be used in other places.
dst.buckets = nil dst.buckets = nil
if len(b.buckets) > 0 { if len(b.buckets) > 0 {
dst.buckets = make([]bucket16, len(b.buckets)) dst.buckets = make([]*bucket16, len(b.buckets))
for i := range b.buckets { for i, b16 := range b.buckets {
b.buckets[i].copyTo(&dst.buckets[i]) b16Dst := &bucket16{}
b16.copyTo(b16Dst)
dst.buckets[i] = b16Dst
} }
} }
} }
@ -617,7 +614,7 @@ func (b *bucket32) getOrCreateBucket16(hi uint16) *bucket16 {
if n < 0 || n >= len(his) || his[n] != hi { if n < 0 || n >= len(his) || his[n] != hi {
return b.addBucketAtPos(hi, n) return b.addBucketAtPos(hi, n)
} }
return &bs[n] return bs[n]
} }
func (b *bucket32) addSlow(hi, lo uint16) bool { func (b *bucket32) addSlow(hi, lo uint16) bool {
@ -635,8 +632,8 @@ func (b *bucket32) addSlow(hi, lo uint16) bool {
func (b *bucket32) addBucket16(hi uint16) *bucket16 { func (b *bucket32) addBucket16(hi uint16) *bucket16 {
b.b16his = append(b.b16his, hi) b.b16his = append(b.b16his, hi)
b.buckets = append(b.buckets, bucket16{}) b.buckets = append(b.buckets, &bucket16{})
return &b.buckets[len(b.buckets)-1] return b.buckets[len(b.buckets)-1]
} }
func (b *bucket32) addBucketAtPos(hi uint16, pos int) *bucket16 { func (b *bucket32) addBucketAtPos(hi uint16, pos int) *bucket16 {
@ -650,8 +647,8 @@ func (b *bucket32) addBucketAtPos(hi uint16, pos int) *bucket16 {
b.b16his = append(b.b16his[:pos+1], b.b16his[pos:]...) b.b16his = append(b.b16his[:pos+1], b.b16his[pos:]...)
b.b16his[pos] = hi b.b16his[pos] = hi
b.buckets = append(b.buckets[:pos+1], b.buckets[pos:]...) b.buckets = append(b.buckets[:pos+1], b.buckets[pos:]...)
b16 := &b.buckets[pos] b16 := &bucket16{}
*b16 = bucket16{} b.buckets[pos] = b16
return b16 return b16
} }
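Switching `bucket32.buckets` from `[]bucket16` to `[]*bucket16` means `addBucketAtPos` shifts 8-byte pointers rather than whole `bucket16` values, and `union` with `mayOwn` can take over a bucket by copying one struct instead of deep-copying its contents. A sketch of the insertion pattern (the payload size is illustrative, not the real `bucket16` layout):

```
package main

import "fmt"

type bucket16 struct{ words [1024]uint64 } // illustrative 8 KiB payload

// insertAt mirrors addBucketAtPos: shifting []*bucket16 moves pointers only.
func insertAt(bs []*bucket16, pos int) []*bucket16 {
	b16 := &bucket16{}
	bs = append(bs[:pos+1], bs[pos:]...)
	bs[pos] = b16
	return bs
}

func main() {
	bs := []*bucket16{{}, {}}
	bs = insertAt(bs, 1)
	fmt.Println(len(bs)) // 3; only pointers were shifted, not 8 KiB structs
}
```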

View file

@ -51,7 +51,7 @@ confinement: strict # use 'strict' once you have the right plugs and slots
parts: parts:
build: build:
plugin: go plugin: go
go-channel: 1.15/stable go-channel: 1.16/stable
go-importpath: github.com/VictoriaMetrics/VictoriaMetrics go-importpath: github.com/VictoriaMetrics/VictoriaMetrics
source: . source: .
source-type: local source-type: local

View file

@ -1,5 +1,20 @@
# Changes # Changes
## [0.83.0](https://www.github.com/googleapis/google-cloud-go/compare/v0.82.0...v0.83.0) (2021-06-02)
### Features
* **dialogflow:** added a field in the query result to indicate whether slot filling is cancelled. ([f9cda8f](https://www.github.com/googleapis/google-cloud-go/commit/f9cda8fb6c3d76a062affebe6649f0a43aeb96f3))
* **essentialcontacts:** start generating apiv1 ([#4118](https://www.github.com/googleapis/google-cloud-go/issues/4118)) ([fe14afc](https://www.github.com/googleapis/google-cloud-go/commit/fe14afcf74e09089b22c4f5221cbe37046570fda))
* **gsuiteaddons:** start generating apiv1 ([#4082](https://www.github.com/googleapis/google-cloud-go/issues/4082)) ([6de5c99](https://www.github.com/googleapis/google-cloud-go/commit/6de5c99173c4eeaf777af18c47522ca15637d232))
* **osconfig:** OSConfig: add ExecResourceOutput and per step error message. ([f9cda8f](https://www.github.com/googleapis/google-cloud-go/commit/f9cda8fb6c3d76a062affebe6649f0a43aeb96f3))
* **osconfig:** start generating apiv1alpha ([#4119](https://www.github.com/googleapis/google-cloud-go/issues/4119)) ([8ad471f](https://www.github.com/googleapis/google-cloud-go/commit/8ad471f26087ec076460df6dcf27769ffe1b8834))
* **privatecatalog:** start generating apiv1beta1 ([500c1a6](https://www.github.com/googleapis/google-cloud-go/commit/500c1a6101f624cb6032f0ea16147645a02e7076))
* **serviceusage:** start generating apiv1 ([#4120](https://www.github.com/googleapis/google-cloud-go/issues/4120)) ([e4531f9](https://www.github.com/googleapis/google-cloud-go/commit/e4531f93cfeb6388280bb253ef6eb231aba37098))
* **shell:** start generating apiv1 ([500c1a6](https://www.github.com/googleapis/google-cloud-go/commit/500c1a6101f624cb6032f0ea16147645a02e7076))
* **vpcaccess:** start generating apiv1 ([500c1a6](https://www.github.com/googleapis/google-cloud-go/commit/500c1a6101f624cb6032f0ea16147645a02e7076))
## [0.82.0](https://www.github.com/googleapis/google-cloud-go/compare/v0.81.0...v0.82.0) (2021-05-17) ## [0.82.0](https://www.github.com/googleapis/google-cloud-go/compare/v0.81.0...v0.82.0) (2021-05-17)

View file

@ -136,6 +136,9 @@ As part of the setup that follows, the following variables will be configured:
- `GCLOUD_TESTS_GOLANG_KEYRING`: The full name of the keyring for the tests, - `GCLOUD_TESTS_GOLANG_KEYRING`: The full name of the keyring for the tests,
in the form in the form
"projects/P/locations/L/keyRings/R". The creation of this is described below. "projects/P/locations/L/keyRings/R". The creation of this is described below.
- `GCLOUD_TESTS_BIGTABLE_KEYRING`: The full name of the keyring for the bigtable tests,
in the form
"projects/P/locations/L/keyRings/R". The creation of this is described below. Expected to be single region.
- `GCLOUD_TESTS_GOLANG_ZONE`: Compute Engine zone. - `GCLOUD_TESTS_GOLANG_ZONE`: Compute Engine zone.
Install the [gcloud command-line tool][gcloudcli] to your machine and use it to Install the [gcloud command-line tool][gcloudcli] to your machine and use it to
@ -172,6 +175,7 @@ $ gcloud beta spanner instances create go-integration-test --config regional-us-
$ export MY_KEYRING=some-keyring-name $ export MY_KEYRING=some-keyring-name
$ export MY_LOCATION=global $ export MY_LOCATION=global
$ export MY_SINGLE_LOCATION=us-central1
# Creates a KMS keyring, in the same location as the default location for your # Creates a KMS keyring, in the same location as the default location for your
# project's buckets. # project's buckets.
$ gcloud kms keyrings create $MY_KEYRING --location $MY_LOCATION $ gcloud kms keyrings create $MY_KEYRING --location $MY_LOCATION
@ -182,10 +186,15 @@ $ gcloud kms keys create key2 --keyring $MY_KEYRING --location $MY_LOCATION --pu
$ export GCLOUD_TESTS_GOLANG_KEYRING=projects/$GCLOUD_TESTS_GOLANG_PROJECT_ID/locations/$MY_LOCATION/keyRings/$MY_KEYRING $ export GCLOUD_TESTS_GOLANG_KEYRING=projects/$GCLOUD_TESTS_GOLANG_PROJECT_ID/locations/$MY_LOCATION/keyRings/$MY_KEYRING
# Authorizes Google Cloud Storage to encrypt and decrypt using key1. # Authorizes Google Cloud Storage to encrypt and decrypt using key1.
$ gsutil kms authorize -p $GCLOUD_TESTS_GOLANG_PROJECT_ID -k $GCLOUD_TESTS_GOLANG_KEYRING/cryptoKeys/key1 $ gsutil kms authorize -p $GCLOUD_TESTS_GOLANG_PROJECT_ID -k $GCLOUD_TESTS_GOLANG_KEYRING/cryptoKeys/key1
# Creates a KMS key in a single region for the Bigtable tests
$ gcloud kms keys create key1 --keyring $MY_KEYRING --location $MY_SINGLE_LOCATION --purpose encryption
# Sets the GCLOUD_TESTS_BIGTABLE_KEYRING environment variable.
$ export GCLOUD_TESTS_BIGTABLE_KEYRING=projects/$GCLOUD_TESTS_GOLANG_PROJECT_ID/locations/$MY_SINGLE_LOCATION/keyRings/$MY_KEYRING
# Authorizes Google Cloud Bigtable to encrypt and decrypt using key1 # Authorizes Google Cloud Bigtable to encrypt and decrypt using key1
$ gcloud kms keys add-iam-policy-binding key1 \ $ gcloud kms keys add-iam-policy-binding key1 \
--keyring $MY_KEYRING \ --keyring $MY_KEYRING \
--location $MY_LOCATION \ --location $MY_SINGLE_LOCATION \
--role roles/cloudkms.cryptoKeyEncrypterDecrypter \ --role roles/cloudkms.cryptoKeyEncrypterDecrypter \
--member "${GCLOUD_TESTS_GOLANG_PROJECT_ID}@${GCLOUD_TESTS_GOLANG_PROJECT_ID}.iam.gserviceaccount.com" \ --member "${GCLOUD_TESTS_GOLANG_PROJECT_ID}@${GCLOUD_TESTS_GOLANG_PROJECT_ID}.iam.gserviceaccount.com" \
--project $GCLOUD_TESTS_GOLANG_PROJECT_ID --project $GCLOUD_TESTS_GOLANG_PROJECT_ID

vendor/cloud.google.com/go/go.mod generated vendored
View file

@ -6,18 +6,18 @@ require (
cloud.google.com/go/storage v1.10.0 cloud.google.com/go/storage v1.10.0
github.com/golang/mock v1.5.0 github.com/golang/mock v1.5.0
github.com/golang/protobuf v1.5.2 github.com/golang/protobuf v1.5.2
github.com/google/go-cmp v0.5.5 github.com/google/go-cmp v0.5.6
github.com/google/martian/v3 v3.1.0 github.com/google/martian/v3 v3.2.1
github.com/google/pprof v0.0.0-20210506205249-923b5ab0fc1a github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22
github.com/googleapis/gax-go/v2 v2.0.5 github.com/googleapis/gax-go/v2 v2.0.5
github.com/jstemmer/go-junit-report v0.9.1 github.com/jstemmer/go-junit-report v0.9.1
go.opencensus.io v0.23.0 go.opencensus.io v0.23.0
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 golang.org/x/lint v0.0.0-20210508222113-6edffad5e616
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c
golang.org/x/text v0.3.6 golang.org/x/text v0.3.6
golang.org/x/tools v0.1.1 golang.org/x/tools v0.1.2
google.golang.org/api v0.46.0 google.golang.org/api v0.47.0
google.golang.org/genproto v0.0.0-20210517163617-5e0236093d7a google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c
google.golang.org/grpc v1.37.1 google.golang.org/grpc v1.38.0
google.golang.org/protobuf v1.26.0
) )

vendor/cloud.google.com/go/go.sum generated vendored
View file

@ -90,6 +90,8 @@ github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaS
github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM= github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw= github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/snappy v0.0.3 h1:fHPg5GQYlCeLIPB9BZqMVR5nR9A+IM5zcgeTdjMYmLA=
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
@ -102,13 +104,15 @@ github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no= github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0= github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.1.0 h1:wCKgOCHuUEVfsaQLpPSJb7VdYCdTVZQAuOdYm1yc/60=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0= github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.2.1 h1:d8MncMlErDFTwQGBK1xhv026j9kqhvw1Qv9IbWT1VLQ=
github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
@ -120,8 +124,8 @@ github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLe
github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210506205249-923b5ab0fc1a h1:jmAp/2PZAScNd62lTD3Mcb0Ey9FvIIJtLohPhtxZJ+Q= github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22 h1:ub2sxhs2A0HRa2dWHavvmWxiVGXNfE9wI+gcTMwED8A=
github.com/google/pprof v0.0.0-20210506205249-923b5ab0fc1a/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
@ -247,7 +251,6 @@ golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210427180440-81ed05c6b58c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c h1:pkQiBZBvdos9qq4wBAHqlzuZHEXo07pqV06ef90u1WI= golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c h1:pkQiBZBvdos9qq4wBAHqlzuZHEXo07pqV06ef90u1WI=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -299,9 +302,9 @@ golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210503080704-8803ae5d1324/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007 h1:gG67DSER+11cZvqIMb8S8bt0vZtiN6xWYARwirrOSfE=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015 h1:hZR0X1kPW+nwyJ9xRxqZk1vx5RUObAPBdKVvXPDUH/E=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@ -361,8 +364,9 @@ golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0= golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.1 h1:wGiQel/hW0NnEkJUk8lbzkX2gFJU6PFxf1v5OlCfuOs=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.2 h1:kRBLX7v7Af8W7Gdbbc908OJcdgtK8bOz9Uaj8/F1ACA=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@ -389,8 +393,8 @@ google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34q
google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8= google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8=
google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU= google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94= google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
google.golang.org/api v0.46.0 h1:jkDWHOBIoNSD0OQpq4rtBVu+Rh325MPjXG1rakAp8JU= google.golang.org/api v0.47.0 h1:sQLWZQvP6jPGIP4JGPkJu4zHswrv81iobiyszr3b/0I=
google.golang.org/api v0.46.0/go.mod h1:ceL4oozhkAiTID8XMmJBsIxID/9wMXJVVFXPg4ylg3I= google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@ -438,9 +442,9 @@ google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A= google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
google.golang.org/genproto v0.0.0-20210429181445-86c259c2b4ab/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
google.golang.org/genproto v0.0.0-20210517163617-5e0236093d7a h1:VA0wtJaR+W1I11P2f535J7D/YxyvEFMTMvcmyeZ9FBE= google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c h1:wtujag7C+4D6KMoulW9YauvK2lgdvCMS260jsqqBXr0=
google.golang.org/genproto v0.0.0-20210517163617-5e0236093d7a/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
@ -460,8 +464,10 @@ google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAG
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU= google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU= google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.37.1 h1:ARnQJNWxGyYJpdf/JXscNlQr/uv607ZPU9Z7ogHi+iI=
google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.38.0 h1:/9BgsAsa5nWe26HqOlvlgJnqBuktYOLCgjCPqsa56W0=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=

View file

@ -485,6 +485,15 @@
"release_level": "beta", "release_level": "beta",
"library_type": "" "library_type": ""
}, },
"cloud.google.com/go/essentialcontacts/apiv1": {
"distribution_name": "cloud.google.com/go/essentialcontacts/apiv1",
"description": "Essential Contacts API",
"language": "Go",
"client_library_type": "generated",
"docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/essentialcontacts/apiv1",
"release_level": "beta",
"library_type": ""
},
"cloud.google.com/go/firestore": { "cloud.google.com/go/firestore": {
"distribution_name": "cloud.google.com/go/firestore", "distribution_name": "cloud.google.com/go/firestore",
"description": "Cloud Firestore API", "description": "Cloud Firestore API",
@ -557,6 +566,15 @@
"release_level": "beta", "release_level": "beta",
"library_type": "" "library_type": ""
}, },
"cloud.google.com/go/gsuiteaddons/apiv1": {
"distribution_name": "cloud.google.com/go/gsuiteaddons/apiv1",
"description": "Google Workspace Add-ons API",
"language": "Go",
"client_library_type": "generated",
"docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/gsuiteaddons/apiv1",
"release_level": "beta",
"library_type": ""
},
"cloud.google.com/go/iam": { "cloud.google.com/go/iam": {
"distribution_name": "cloud.google.com/go/iam", "distribution_name": "cloud.google.com/go/iam",
"description": "Cloud IAM", "description": "Cloud IAM",
@ -604,7 +622,7 @@
}, },
"cloud.google.com/go/language/apiv1beta2": { "cloud.google.com/go/language/apiv1beta2": {
"distribution_name": "cloud.google.com/go/language/apiv1beta2", "distribution_name": "cloud.google.com/go/language/apiv1beta2",
"description": "Cloud Natural Language API", "description": "Google Cloud Natural Language API",
"language": "Go", "language": "Go",
"client_library_type": "generated", "client_library_type": "generated",
"docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/language/apiv1beta2", "docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/language/apiv1beta2",
@ -773,6 +791,15 @@
"release_level": "ga", "release_level": "ga",
"library_type": "" "library_type": ""
}, },
"cloud.google.com/go/osconfig/apiv1alpha": {
"distribution_name": "cloud.google.com/go/osconfig/apiv1alpha",
"description": "OS Config API",
"language": "Go",
"client_library_type": "generated",
"docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/osconfig/apiv1alpha",
"release_level": "alpha",
"library_type": ""
},
"cloud.google.com/go/osconfig/apiv1beta": { "cloud.google.com/go/osconfig/apiv1beta": {
"distribution_name": "cloud.google.com/go/osconfig/apiv1beta", "distribution_name": "cloud.google.com/go/osconfig/apiv1beta",
"description": "Cloud OS Config API", "description": "Cloud OS Config API",
@ -818,6 +845,15 @@
"release_level": "ga", "release_level": "ga",
"library_type": "" "library_type": ""
}, },
"cloud.google.com/go/privatecatalog/apiv1beta1": {
"distribution_name": "cloud.google.com/go/privatecatalog/apiv1beta1",
"description": "Cloud Private Catalog API",
"language": "Go",
"client_library_type": "generated",
"docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/privatecatalog/apiv1beta1",
"release_level": "beta",
"library_type": ""
},
"cloud.google.com/go/profiler": { "cloud.google.com/go/profiler": {
"distribution_name": "cloud.google.com/go/profiler", "distribution_name": "cloud.google.com/go/profiler",
"description": "Cloud Profiler", "description": "Cloud Profiler",
@ -1088,6 +1124,24 @@
"release_level": "ga", "release_level": "ga",
"library_type": "" "library_type": ""
}, },
"cloud.google.com/go/serviceusage/apiv1": {
"distribution_name": "cloud.google.com/go/serviceusage/apiv1",
"description": "Service Usage API",
"language": "Go",
"client_library_type": "generated",
"docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/serviceusage/apiv1",
"release_level": "beta",
"library_type": ""
},
"cloud.google.com/go/shell/apiv1": {
"distribution_name": "cloud.google.com/go/shell/apiv1",
"description": "Cloud Shell API",
"language": "Go",
"client_library_type": "generated",
"docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/shell/apiv1",
"release_level": "beta",
"library_type": ""
},
"cloud.google.com/go/spanner": { "cloud.google.com/go/spanner": {
"distribution_name": "cloud.google.com/go/spanner", "distribution_name": "cloud.google.com/go/spanner",
"description": "Cloud Spanner", "description": "Cloud Spanner",
@ -1250,6 +1304,15 @@
"release_level": "beta", "release_level": "beta",
"library_type": "" "library_type": ""
}, },
"cloud.google.com/go/vpcaccess/apiv1": {
"distribution_name": "cloud.google.com/go/vpcaccess/apiv1",
"description": "Serverless VPC Access API",
"language": "Go",
"client_library_type": "generated",
"docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/latest/vpcaccess/apiv1",
"release_level": "beta",
"library_type": ""
},
"cloud.google.com/go/webrisk/apiv1": { "cloud.google.com/go/webrisk/apiv1": {
"distribution_name": "cloud.google.com/go/webrisk/apiv1", "distribution_name": "cloud.google.com/go/webrisk/apiv1",
"description": "Web Risk API", "description": "Web Risk API",

View file

@@ -8,4 +8,5 @@ require (
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/golang/snappy v0.0.3
github.com/stretchr/testify v1.3.0 // indirect
golang.org/x/sys v0.0.0-20210324051608-47abb6519492
)

View file

@@ -12,3 +12,5 @@ github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZN
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
golang.org/x/sys v0.0.0-20210324051608-47abb6519492 h1:Paq34FxTluEPvVyayQqMPgHm+vTOrIifmcYxFBx9TLg=
golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=

View file

@@ -5,8 +5,9 @@ package fastcache
import (
"fmt"
"sync"
"syscall"
"unsafe"
"golang.org/x/sys/unix"
)
const chunksPerAlloc = 1024
@@ -21,7 +22,7 @@ func getChunk() []byte {
if len(freeChunks) == 0 {
// Allocate offheap memory, so GOGC won't take into account cache size.
// This should reduce free memory waste.
data, err := syscall.Mmap(-1, 0, chunkSize*chunksPerAlloc, syscall.PROT_READ|syscall.PROT_WRITE, syscall.MAP_ANON|syscall.MAP_PRIVATE)
data, err := unix.Mmap(-1, 0, chunkSize*chunksPerAlloc, unix.PROT_READ|unix.PROT_WRITE, unix.MAP_ANON|unix.MAP_PRIVATE)
if err != nil {
panic(fmt.Errorf("cannot allocate %d bytes via mmap: %s", chunkSize*chunksPerAlloc, err))
}
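The hunk above swaps the deprecated syscall.Mmap for unix.Mmap from golang.org/x/sys, matching the dependency added to go.mod earlier in this commit. A minimal standalone sketch of the same off-heap allocation pattern (the 64 KiB chunk size is illustrative, not necessarily fastcache's actual constant):

```
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

const chunkSize = 64 * 1024
const chunksPerAlloc = 1024

func main() {
	// An anonymous, private mapping is not backed by a file and is invisible
	// to the Go GC, so GOGC pacing ignores the cache's memory.
	data, err := unix.Mmap(-1, 0, chunkSize*chunksPerAlloc,
		unix.PROT_READ|unix.PROT_WRITE, unix.MAP_ANON|unix.MAP_PRIVATE)
	if err != nil {
		panic(fmt.Errorf("cannot allocate %d bytes via mmap: %s", chunkSize*chunksPerAlloc, err))
	}
	defer unix.Munmap(data)
	fmt.Println("allocated", len(data), "bytes off-heap")
}
```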

View file

@@ -837,6 +837,16 @@ var awsPartition = partition{
"us-west-2": endpoint{},
},
},
"apprunner": service{
Endpoints: endpoints{
"ap-northeast-1": endpoint{},
"eu-west-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-2": endpoint{},
},
},
"appstream2": service{ "appstream2": service{
Defaults: endpoint{ Defaults: endpoint{
Protocols: []string{"https"}, Protocols: []string{"https"},
@ -2857,6 +2867,7 @@ var awsPartition = partition{
"eu-west-1": endpoint{}, "eu-west-1": endpoint{},
"eu-west-2": endpoint{}, "eu-west-2": endpoint{},
"eu-west-3": endpoint{}, "eu-west-3": endpoint{},
"sa-east-1": endpoint{},
"us-east-1": endpoint{}, "us-east-1": endpoint{},
"us-east-2": endpoint{}, "us-east-2": endpoint{},
"us-west-1": endpoint{}, "us-west-1": endpoint{},
@@ -3176,9 +3187,27 @@ var awsPartition = partition{
"ap-southeast-2": endpoint{},
"eu-central-1": endpoint{},
"eu-west-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-2": endpoint{},
"fips-us-east-1": endpoint{
Hostname: "forecast-fips.us-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-1",
},
},
"fips-us-east-2": endpoint{
Hostname: "forecast-fips.us-east-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-2",
},
},
"fips-us-west-2": endpoint{
Hostname: "forecast-fips.us-west-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-2",
},
},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-2": endpoint{},
},
},
"forecastquery": service{
@@ -3191,9 +3220,27 @@ var awsPartition = partition{
"ap-southeast-2": endpoint{},
"eu-central-1": endpoint{},
"eu-west-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-2": endpoint{},
"fips-us-east-1": endpoint{
Hostname: "forecastquery-fips.us-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-1",
},
},
"fips-us-east-2": endpoint{
Hostname: "forecastquery-fips.us-east-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-2",
},
},
"fips-us-west-2": endpoint{
Hostname: "forecastquery-fips.us-west-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-2",
},
},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-2": endpoint{},
},
},
"fsx": service{
@@ -4084,6 +4131,7 @@ var awsPartition = partition{
"ap-southeast-2": endpoint{},
"ca-central-1": endpoint{},
"eu-central-1": endpoint{},
"eu-north-1": endpoint{},
"eu-west-1": endpoint{},
"eu-west-2": endpoint{},
"eu-west-3": endpoint{},
@@ -5059,6 +5107,7 @@ var awsPartition = partition{
"ap-northeast-1": endpoint{},
"ap-southeast-1": endpoint{},
"ap-southeast-2": endpoint{},
"ca-central-1": endpoint{},
"eu-central-1": endpoint{},
"eu-west-2": endpoint{},
"us-east-1": endpoint{},
@@ -5423,6 +5472,7 @@ var awsPartition = partition{
"ap-east-1": endpoint{},
"ap-northeast-1": endpoint{},
"ap-northeast-2": endpoint{},
"ap-northeast-3": endpoint{},
"ap-south-1": endpoint{},
"ap-southeast-1": endpoint{},
"ap-southeast-2": endpoint{},
@@ -6124,6 +6174,61 @@ var awsPartition = partition{
},
},
},
"servicecatalog-appregistry": service{
Endpoints: endpoints{
"af-south-1": endpoint{},
"ap-east-1": endpoint{},
"ap-northeast-1": endpoint{},
"ap-northeast-2": endpoint{},
"ap-south-1": endpoint{},
"ap-southeast-1": endpoint{},
"ap-southeast-2": endpoint{},
"ca-central-1": endpoint{},
"eu-central-1": endpoint{},
"eu-north-1": endpoint{},
"eu-south-1": endpoint{},
"eu-west-1": endpoint{},
"eu-west-2": endpoint{},
"eu-west-3": endpoint{},
"fips-ca-central-1": endpoint{
Hostname: "servicecatalog-appregistry-fips.ca-central-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ca-central-1",
},
},
"fips-us-east-1": endpoint{
Hostname: "servicecatalog-appregistry-fips.us-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-1",
},
},
"fips-us-east-2": endpoint{
Hostname: "servicecatalog-appregistry-fips.us-east-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-2",
},
},
"fips-us-west-1": endpoint{
Hostname: "servicecatalog-appregistry-fips.us-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-1",
},
},
"fips-us-west-2": endpoint{
Hostname: "servicecatalog-appregistry-fips.us-west-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-2",
},
},
"me-south-1": endpoint{},
"sa-east-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-1": endpoint{},
"us-west-2": endpoint{},
},
},
"servicediscovery": service{ "servicediscovery": service{
Endpoints: endpoints{ Endpoints: endpoints{
@ -6192,9 +6297,27 @@ var awsPartition = partition{
"ap-southeast-2": endpoint{}, "ap-southeast-2": endpoint{},
"eu-central-1": endpoint{}, "eu-central-1": endpoint{},
"eu-west-1": endpoint{}, "eu-west-1": endpoint{},
"us-east-1": endpoint{}, "fips-us-east-1": endpoint{
"us-east-2": endpoint{}, Hostname: "session.qldb-fips.us-east-1.amazonaws.com",
"us-west-2": endpoint{}, CredentialScope: credentialScope{
Region: "us-east-1",
},
},
"fips-us-east-2": endpoint{
Hostname: "session.qldb-fips.us-east-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-2",
},
},
"fips-us-west-2": endpoint{
Hostname: "session.qldb-fips.us-west-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-2",
},
},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-2": endpoint{},
},
},
"shield": service{
@@ -9831,6 +9954,25 @@ var awsusgovPartition = partition{
},
},
},
"servicecatalog-appregistry": service{
Endpoints: endpoints{
"fips-us-gov-east-1": endpoint{
Hostname: "servicecatalog-appregistry.us-gov-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-east-1",
},
},
"fips-us-gov-west-1": endpoint{
Hostname: "servicecatalog-appregistry.us-gov-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-west-1",
},
},
"us-gov-east-1": endpoint{},
"us-gov-west-1": endpoint{},
},
},
"servicequotas": service{ "servicequotas": service{
Defaults: endpoint{ Defaults: endpoint{
Protocols: []string{"https"}, Protocols: []string{"https"},
@ -10470,6 +10612,12 @@ var awsisoPartition = partition{
"us-iso-east-1": endpoint{}, "us-iso-east-1": endpoint{},
}, },
}, },
"ram": service{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
},
},
"rds": service{ "rds": service{
Endpoints: endpoints{ Endpoints: endpoints{
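The table entries added above (including FIPS pseudo-regions such as fips-us-east-1) are consumed by the SDK's endpoint resolver. A hedged sketch of resolving one of them; the exact URL returned depends on the embedded table version:

```
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/endpoints"
)

func main() {
	// Resolve the regular and FIPS endpoints for Amazon Forecast.
	resolver := endpoints.DefaultResolver()
	for _, region := range []string{"us-east-1", "fips-us-east-1"} {
		ep, err := resolver.EndpointFor("forecast", region)
		if err != nil {
			fmt.Println(region, "error:", err)
			continue
		}
		fmt.Println(region, "->", ep.URL)
	}
}
```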

View file

@@ -129,12 +129,27 @@ func New(cfg aws.Config, clientInfo metadata.ClientInfo, handlers Handlers,
httpReq, _ := http.NewRequest(method, "", nil)
var err error
httpReq.URL, err = url.Parse(clientInfo.Endpoint + operation.HTTPPath)
httpReq.URL, err = url.Parse(clientInfo.Endpoint)
if err != nil {
httpReq.URL = &url.URL{}
err = awserr.New("InvalidEndpointURL", "invalid endpoint uri", err)
}
if len(operation.HTTPPath) != 0 {
opHTTPPath := operation.HTTPPath
var opQueryString string
if idx := strings.Index(opHTTPPath, "?"); idx >= 0 {
opQueryString = opHTTPPath[idx+1:]
opHTTPPath = opHTTPPath[:idx]
}
if strings.HasSuffix(httpReq.URL.Path, "/") && strings.HasPrefix(opHTTPPath, "/") {
opHTTPPath = opHTTPPath[1:]
}
httpReq.URL.Path += opHTTPPath
httpReq.URL.RawQuery = opQueryString
}
r := &Request{ r := &Request{
Config: cfg, Config: cfg,
ClientInfo: clientInfo, ClientInfo: clientInfo,
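With this change, New parses the endpoint alone and then appends the operation's HTTP path, preserving any base path on the endpoint and splitting an embedded query string into RawQuery. A self-contained sketch of that joining logic; joinOperationPath is a hypothetical helper name, not part of the SDK:

```
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// joinOperationPath mirrors the logic above: the operation's HTTP path is
// appended to any base path already on the endpoint, avoiding a doubled
// slash, and an embedded query string is moved into URL.RawQuery.
func joinOperationPath(endpoint, httpPath string) (*url.URL, error) {
	u, err := url.Parse(endpoint)
	if err != nil {
		return nil, err
	}
	if len(httpPath) != 0 {
		opPath := httpPath
		var opQuery string
		if idx := strings.Index(opPath, "?"); idx >= 0 {
			opQuery = opPath[idx+1:]
			opPath = opPath[:idx]
		}
		if strings.HasSuffix(u.Path, "/") && strings.HasPrefix(opPath, "/") {
			opPath = opPath[1:]
		}
		u.Path += opPath
		u.RawQuery = opQuery
	}
	return u, nil
}

func main() {
	u, _ := joinOperationPath("https://example.amazonaws.com/base/", "/objects?list-type=2")
	fmt.Println(u.String()) // https://example.amazonaws.com/base/objects?list-type=2
}
```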

View file

@@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go"
// SDKVersion is the version of this SDK
const SDKVersion = "1.38.43"
const SDKVersion = "1.38.56"

View file

@@ -48,6 +48,10 @@ func ParseResource(s string, resParser ResourceParser) (resARN Resource, err err
return nil, InvalidARNError{ARN: a, Reason: "service is not supported"}
}
if strings.HasPrefix(a.Region, "fips-") || strings.HasSuffix(a.Region, "-fips") {
return nil, InvalidARNError{ARN: a, Reason: "FIPS region not allowed in ARN"}
}
if len(a.Resource) == 0 {
return nil, InvalidARNError{ARN: a, Reason: "resource not set"}
}
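The new guard rejects ARNs whose region component is a FIPS pseudo-region. A minimal sketch of the same check in isolation; isFIPSPseudoRegion is a hypothetical name for illustration:

```
package main

import (
	"fmt"
	"strings"
)

// isFIPSPseudoRegion reports whether an ARN region component names a FIPS
// pseudo-region (e.g. "fips-us-east-1" or "us-east-1-fips"), which the
// updated parser refuses to accept.
func isFIPSPseudoRegion(region string) bool {
	return strings.HasPrefix(region, "fips-") || strings.HasSuffix(region, "-fips")
}

func main() {
	for _, r := range []string{"us-east-1", "fips-us-east-1", "us-gov-west-1-fips"} {
		fmt.Printf("%s -> FIPS pseudo-region: %v\n", r, isFIPSPseudoRegion(r))
	}
}
```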

View file

@@ -71,6 +71,8 @@ func NewInvalidARNWithUnsupportedPartitionError(resource arn.Resource, err error
}
// NewInvalidARNWithFIPSError ARN not supported for FIPS region
//
// Deprecated: FIPS will not appear in the ARN region component.
func NewInvalidARNWithFIPSError(resource arn.Resource, err error) InvalidARNError {
return InvalidARNError{
message: "resource ARN not supported for FIPS region",
@@ -155,6 +157,17 @@ func NewClientConfiguredForFIPSError(resource arn.Resource, clientPartitionID, c
}
}
// NewFIPSConfigurationError denotes a configuration error when a client or request is configured for FIPS
func NewFIPSConfigurationError(resource arn.Resource, clientPartitionID, clientRegion string, err error) ConfigurationError {
return ConfigurationError{
message: "use of ARN is not supported when client or request is configured for FIPS",
origErr: err,
resource: resource,
clientPartitionID: clientPartitionID,
clientRegion: clientRegion,
}
}
// NewClientConfiguredForAccelerateError denotes client config error for unsupported S3 accelerate
func NewClientConfiguredForAccelerateError(resource arn.Resource, clientPartitionID, clientRegion string, err error) ConfigurationError {
return ConfigurationError{

View file

@@ -31,6 +31,8 @@ func (r ResourceRequest) UseFIPS() bool {
}
// ResourceConfiguredForFIPS returns true if resource ARNs region is FIPS
//
// Deprecated: FIPS pseudo-regions will not be in the ARN
func (r ResourceRequest) ResourceConfiguredForFIPS() bool {
return IsFIPS(r.ARN().Region)
}

View file

@@ -356,9 +356,8 @@ func (c *S3) CopyObjectRequest(input *CopyObjectInput) (req *request.Request, ou
// use the s3:x-amz-metadata-directive condition key to enforce certain metadata
// behavior when objects are uploaded. For more information, see Specifying
// Conditions in a Policy (https://docs.aws.amazon.com/AmazonS3/latest/dev/amazon-s3-policy-keys.html)
// in the Amazon S3 Developer Guide. For a complete list of Amazon S3-specific
// condition keys, see Actions, Resources, and Condition Keys for Amazon S3
// (https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html).
// in the Amazon S3 User Guide. For a complete list of Amazon S3-specific condition
// keys, see Actions, Resources, and Condition Keys for Amazon S3 (https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html).
//
// x-amz-copy-source-if Headers
//
@@ -422,7 +421,7 @@ func (c *S3) CopyObjectRequest(input *CopyObjectInput) (req *request.Request, ou
// You can use the CopyObject action to change the storage class of an object
// that is already stored in Amazon S3 using the StorageClass parameter. For
// more information, see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html)
// in the Amazon S3 Service Developer Guide.
// in the Amazon S3 User Guide.
//
// Versioning
//
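As the comment notes, CopyObject can change the storage class of an object already in S3. A hedged sketch with aws-sdk-go; bucket, key, and region are placeholders:

```
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")})))

	// Rewrite the object in place with a new storage class; copying an
	// object onto itself is allowed when something (here, StorageClass) changes.
	_, err := svc.CopyObject(&s3.CopyObjectInput{
		Bucket:       aws.String("my-bucket"),        // placeholder
		Key:          aws.String("data/archive.bin"), // placeholder
		CopySource:   aws.String("my-bucket/data/archive.bin"),
		StorageClass: aws.String(s3.StorageClassStandardIa),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("storage class updated")
}
```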
@@ -535,7 +534,7 @@ func (c *S3) CreateBucketRequest(input *CreateBucketInput) (req *request.Request
// become the bucket owner.
//
// Not every string is an acceptable bucket name. For information about bucket
// naming restrictions, see Working with Amazon S3 buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html).
// naming restrictions, see Bucket naming rules (https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html).
//
// If you want to create an Amazon S3 on Outposts bucket, see Create Bucket
// (https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateBucket.html).
@@ -723,10 +722,11 @@ func (c *S3) CreateMultipartUploadRequest(input *CreateMultipartUploadInput) (re
// by using CreateMultipartUpload.
//
// To perform a multipart upload with encryption using an AWS KMS CMK, the requester
// must have permission to the kms:Encrypt, kms:Decrypt, kms:ReEncrypt*, kms:GenerateDataKey*,
// and kms:DescribeKey actions on the key. These permissions are required because
// Amazon S3 must decrypt and read data from the encrypted file parts before
// it completes the multipart upload.
// must have permission to the kms:Decrypt and kms:GenerateDataKey* actions
// on the key. These permissions are required because Amazon S3 must decrypt
// and read data from the encrypted file parts before it completes the multipart
// upload. For more information, see Multipart upload API and permissions (https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html#mpuAndPermissions)
// in the Amazon S3 User Guide.
//
// If your AWS Identity and Access Management (IAM) user or role is in the same
// AWS account as the AWS KMS CMK, then you must have these permissions on the
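The revised comment narrows the required KMS permissions to kms:Decrypt and kms:GenerateDataKey*. A hedged sketch of starting an SSE-KMS multipart upload with aws-sdk-go; bucket, key, and KMS key ARN are placeholders:

```
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")})))

	// Start a multipart upload encrypted with a customer managed KMS key.
	// The caller needs kms:Decrypt and kms:GenerateDataKey* on that key.
	out, err := svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
		Bucket:               aws.String("my-bucket"),              // placeholder
		Key:                  aws.String("backups/archive.tar.gz"), // placeholder
		ServerSideEncryption: aws.String(s3.ServerSideEncryptionAwsKms),
		SSEKMSKeyId:          aws.String("arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"), // placeholder
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("upload ID:", aws.StringValue(out.UploadId))
}
```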
@@ -1835,7 +1835,7 @@ func (c *S3) DeleteBucketReplicationRequest(input *DeleteBucketReplicationInput)
// propagate.
//
// For information about replication configuration, see Replication (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
//
// The following operations are related to DeleteBucketReplication:
//
@@ -6497,12 +6497,13 @@ func (c *S3) ListObjectsV2Request(input *ListObjectsV2Input) (req *request.Reque
// ListObjectsV2 API operation for Amazon Simple Storage Service.
//
// Returns some or all (up to 1,000) of the objects in a bucket. You can use
// the request parameters as selection criteria to return a subset of the objects
// in a bucket. A 200 OK response can contain valid or invalid XML. Make sure
// to design your application to parse the contents of the response and handle
// it appropriately. Objects are returned sorted in an ascending order of the
// respective key names in the list.
// Returns some or all (up to 1,000) of the objects in a bucket with each request.
// You can use the request parameters as selection criteria to return a subset
// of the objects in a bucket. A 200 OK response can contain valid or invalid
// XML. Make sure to design your application to parse the contents of the response
// and handle it appropriately. Objects are returned sorted in an ascending
// order of the respective key names in the list. For more information about
// listing objects, see Listing object keys programmatically (https://docs.aws.amazon.com/AmazonS3/latest/userguide/ListingKeysUsingAPIs.html)
//
// To use this operation, you must have READ access to the bucket.
//
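Since each ListObjectsV2 call returns at most 1,000 keys, callers typically paginate. A hedged sketch using the SDK's built-in paginator; bucket and prefix are placeholders:

```
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")})))

	// ListObjectsV2Pages issues successive requests until all keys are seen.
	err := svc.ListObjectsV2Pages(&s3.ListObjectsV2Input{
		Bucket: aws.String("my-bucket"), // placeholder
		Prefix: aws.String("logs/"),     // placeholder
	}, func(page *s3.ListObjectsV2Output, lastPage bool) bool {
		for _, obj := range page.Contents {
			fmt.Println(aws.StringValue(obj.Key))
		}
		return true // keep fetching pages
	})
	if err != nil {
		panic(err)
	}
}
```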
@@ -7816,7 +7817,7 @@ func (c *S3) PutBucketLifecycleConfigurationRequest(input *PutBucketLifecycleCon
//
// Creates a new lifecycle configuration for the bucket or replaces an existing
// lifecycle configuration. For information about lifecycle configuration, see
// Managing Access Permissions to Your Amazon S3 Resources (https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html).
// Managing your storage lifecycle (https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html).
//
// Bucket lifecycle configuration now supports specifying a lifecycle rule using
// an object key name prefix, one or more object tags, or a combination of both.
@@ -8587,7 +8588,7 @@ func (c *S3) PutBucketReplicationRequest(input *PutBucketReplicationInput) (req
//
// Creates a replication configuration or replaces an existing one. For more
// information, see Replication (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
//
// To perform this operation, the user or role performing the action must have
// the iam:PassRole (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html)
@@ -8814,11 +8815,12 @@ func (c *S3) PutBucketTaggingRequest(input *PutBucketTaggingInput) (req *request
// according to resources with the same tag key values. For example, you can
// tag several resources with a specific application name, and then organize
// your billing information to see the total cost of that application across
// several services. For more information, see Cost Allocation and Tagging (https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html).
// several services. For more information, see Cost Allocation and Tagging (https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html)
// and Using Cost Allocation in Amazon S3 Bucket Tags (https://docs.aws.amazon.com/AmazonS3/latest/dev/CostAllocTagging.html).
//
// Within a bucket, if you add a tag that has the same key as an existing tag,
// the new value overwrites the old value. For more information, see Using Cost
// Allocation in Amazon S3 Bucket Tags (https://docs.aws.amazon.com/AmazonS3/latest/dev/CostAllocTagging.html).
// When this operation sets the tags for a bucket, it will overwrite any current
// tags the bucket already has. You cannot use this operation to add tags to
// an existing list of tags.
//
// To use this operation, you must have permissions to perform the s3:PutBucketTagging
// action. The bucket owner has this permission by default and can grant this
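The revised comment clarifies that PutBucketTagging replaces the whole tag set rather than appending to it. A hedged sketch; the bucket name and tags are placeholders:

```
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")})))

	// PutBucketTagging replaces the bucket's entire tag set, so include
	// every tag you want to keep; there is no append operation.
	_, err := svc.PutBucketTagging(&s3.PutBucketTaggingInput{
		Bucket: aws.String("my-bucket"), // placeholder
		Tagging: &s3.Tagging{
			TagSet: []*s3.Tag{
				{Key: aws.String("app"), Value: aws.String("billing")},
				{Key: aws.String("env"), Value: aws.String("prod")},
			},
		},
	})
	if err != nil {
		panic(err)
	}
}
```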
@@ -9229,7 +9231,7 @@ func (c *S3) PutObjectRequest(input *PutObjectInput) (req *request.Request, outp
// Depending on performance needs, you can specify a different Storage Class.
// Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information,
// see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html)
// in the Amazon S3 Service Developer Guide.
// in the Amazon S3 User Guide.
//
// Versioning
//
@@ -9339,7 +9341,7 @@ func (c *S3) PutObjectAclRequest(input *PutObjectAclInput) (req *request.Request
// have an existing application that updates a bucket ACL using the request
// body, you can continue to use that approach. For more information, see Access
// Control List (ACL) Overview (https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
//
// Access Permissions
//
@@ -10997,7 +10999,7 @@ type AbortMultipartUploadInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide.
//
// When using this action with Amazon S3 on Outposts, you must direct requests
@@ -11025,7 +11027,7 @@ type AbortMultipartUploadInput struct {
// Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects
// in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// Upload ID that identifies the multipart upload.
@@ -11242,7 +11244,7 @@ type AccessControlTranslation struct {
// Specifies the replica ownership. For default and valid values, see PUT bucket
// replication (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html)
// in the Amazon Simple Storage Service API Reference.
// in the Amazon S3 API Reference.
//
// Owner is a required field
Owner *string `type:"string" required:"true" enum:"OwnerOverride"`
@@ -11693,7 +11695,7 @@ type BucketLoggingStatus struct {
// Describes where logs are stored and the prefix that Amazon S3 assigns to
// all log object keys for a bucket. For more information, see PUT Bucket logging
// (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html)
// in the Amazon Simple Storage Service API Reference.
// in the Amazon S3 API Reference.
LoggingEnabled *LoggingEnabled `type:"structure"`
}
@@ -12168,7 +12170,7 @@ type CompleteMultipartUploadInput struct {
// Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects
// in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// ID for the initiated multipart upload.
@@ -12291,7 +12293,7 @@ type CompleteMultipartUploadOutput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide.
//
// When using this action with Amazon S3 on Outposts, you must direct requests
@@ -12577,7 +12579,7 @@ type CopyObjectInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide.
//
// When using this action with Amazon S3 on Outposts, you must direct requests
@@ -12735,7 +12737,7 @@ type CopyObjectInput struct {
// Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects
// in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// Specifies the algorithm to use to when encrypting the object (for example,
@@ -12764,7 +12766,7 @@ type CopyObjectInput struct {
// or using SigV4. For information about configuring using any of the officially
// supported AWS SDKs and AWS CLI, see Specifying the Signature Version in Request
// Authentication (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"`
// The server-side encryption algorithm used when storing this object in Amazon
@@ -12776,7 +12778,7 @@ type CopyObjectInput struct {
// Depending on performance needs, you can specify a different Storage Class.
// Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information,
// see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html)
// in the Amazon S3 Service Developer Guide.
// in the Amazon S3 User Guide.
StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"`
// The tag-set for the object destination object this value must be used in
@@ -13358,7 +13360,10 @@ type CreateBucketInput struct {
// Allows grantee to read the bucket ACL.
GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"`
// Allows grantee to create, overwrite, and delete any object in the bucket.
// Allows grantee to create new objects in the bucket.
//
// For the bucket and object owners of existing objects, also allows deletions
// and overwrites of those objects.
GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"`
// Allows grantee to write the ACL for the applicable bucket.
@@ -13494,7 +13499,7 @@ type CreateMultipartUploadInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide.
//
// When using this action with Amazon S3 on Outposts, you must direct requests
@@ -13583,7 +13588,7 @@ type CreateMultipartUploadInput struct {
// Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects
// in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// Specifies the algorithm to use to when encrypting the object (for example,
@@ -13612,7 +13617,7 @@ type CreateMultipartUploadInput struct {
// KMS will fail if not made via SSL or using SigV4. For information about configuring
// using any of the officially supported AWS SDKs and AWS CLI, see Specifying
// the Signature Version in Request Authentication (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"`
// The server-side encryption algorithm used when storing this object in Amazon
@@ -13624,7 +13629,7 @@ type CreateMultipartUploadInput struct {
// Depending on performance needs, you can specify a different Storage Class.
// Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information,
// see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html)
// in the Amazon S3 Service Developer Guide.
// in the Amazon S3 User Guide.
StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"`
// The tag-set for the object. The tag-set must be encoded as URL Query parameters.
@@ -13908,7 +13913,7 @@ type CreateMultipartUploadOutput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide.
//
// When using this action with Amazon S3 on Outposts, you must direct requests
@@ -15613,7 +15618,7 @@ type DeleteObjectInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide.
//
// When using this action with Amazon S3 on Outposts, you must direct requests
@@ -15651,7 +15656,7 @@ type DeleteObjectInput struct {
// Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects
// in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// VersionId used to reference a specific version of the object.
@@ -15819,7 +15824,7 @@ type DeleteObjectTaggingInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide.
//
// When using this action with Amazon S3 on Outposts, you must direct requests
@@ -15970,7 +15975,7 @@ type DeleteObjectsInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide.
//
// When using this action with Amazon S3 on Outposts, you must direct requests
@@ -16009,7 +16014,7 @@ type DeleteObjectsInput struct {
// Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects
// in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
}
@@ -16333,7 +16338,7 @@ type Destination struct {
// the destination bucket by specifying the AccessControlTranslation property,
// this is the account ID of the destination bucket owner. For more information,
// see Replication Additional Configuration: Changing the Replica Owner (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-change-owner.html)
// in the Amazon Simple Storage Service Developer Guide.
// in the Amazon S3 User Guide.
Account *string `type:"string"`
// The Amazon Resource Name (ARN) of the bucket where you want Amazon S3 to
@@ -16361,7 +16366,7 @@ type Destination struct {
//
// For valid values, see the StorageClass element of the PUT Bucket replication
// (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html)
// action in the Amazon Simple Storage Service API Reference.
// action in the Amazon S3 API Reference.
StorageClass *string `type:"string" enum:"StorageClass"`
}
@@ -16468,8 +16473,8 @@ type Encryption struct {
// If the encryption type is aws:kms, this optional value specifies the ID of
// the symmetric customer managed AWS KMS CMK to use for encryption of job results.
// Amazon S3 only supports symmetric CMKs. For more information, see Using Symmetric
// and Asymmetric Keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html)
// Amazon S3 only supports symmetric CMKs. For more information, see Using symmetric
// and asymmetric keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html)
// in the AWS Key Management Service Developer Guide.
KMSKeyId *string `type:"string" sensitive:"true"`
}
@@ -16520,11 +16525,11 @@ func (s *Encryption) SetKMSKeyId(v string) *Encryption {
type EncryptionConfiguration struct {
_ struct{} `type:"structure"`
// Specifies the ID (Key ARN or Alias ARN) of the customer managed customer
// master key (CMK) stored in AWS Key Management Service (KMS) for the destination
// bucket. Amazon S3 uses this key to encrypt replica objects. Amazon S3 only
// supports symmetric customer managed CMKs. For more information, see Using
// Symmetric and Asymmetric Keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html)
// Specifies the ID (Key ARN or Alias ARN) of the customer managed AWS KMS key
// stored in AWS Key Management Service (KMS) for the destination bucket. Amazon
// S3 uses this key to encrypt replica objects. Amazon S3 only supports symmetric,
// customer managed KMS keys. For more information, see Using symmetric and
// asymmetric keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html)
// in the AWS Key Management Service Developer Guide.
ReplicaKmsKeyID *string `type:"string"`
}
@@ -17035,7 +17040,7 @@ func (s *ErrorDocument) SetKey(v string) *ErrorDocument {
// Optional configuration to replicate existing source bucket objects. For more
// information, see Replicating Existing Objects (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-what-is-isnot-replicated.html#existing-object-replication)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
type ExistingObjectReplication struct {
_ struct{} `type:"structure"`
@@ -18337,7 +18342,7 @@ type GetBucketLoggingOutput struct {
// Describes where logs are stored and the prefix that Amazon S3 assigns to
// all log object keys for a bucket. For more information, see PUT Bucket logging
// (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html)
// in the Amazon Simple Storage Service API Reference.
// in the Amazon S3 API Reference.
LoggingEnabled *LoggingEnabled `type:"structure"`
}
@@ -19490,7 +19495,7 @@ type GetObjectAclInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide.
//
// Bucket is a required field
@@ -19510,7 +19515,7 @@ type GetObjectAclInput struct {
// Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects
// in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide.
// in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// VersionId used to reference a specific version of the object.
@ -19664,7 +19669,7 @@ type GetObjectInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -19720,7 +19725,7 @@ type GetObjectInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// Sets the Cache-Control header of the response. // Sets the Cache-Control header of the response.
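A sketch of the per-request response-header overrides this block documents; the bucket and key are placeholders:

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1"))))

	// Override selected response headers for this download only;
	// bucket and key are placeholders.
	out, err := svc.GetObject(&s3.GetObjectInput{
		Bucket:               aws.String("my-bucket"),
		Key:                  aws.String("docs/manual.pdf"),
		ResponseCacheControl: aws.String("no-cache"),
		ResponseContentType:  aws.String("application/pdf"),
	})
	if err != nil {
		fmt.Println("GetObject failed:", err)
		return
	}
	defer out.Body.Close()
	io.Copy(ioutil.Discard, out.Body)
	fmt.Println("served with Content-Type:", aws.StringValue(out.ContentType))
}
```

The overrides affect only the returned headers for this request; the stored object metadata is unchanged.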
@ -19964,7 +19969,7 @@ type GetObjectLegalHoldInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// Bucket is a required field // Bucket is a required field
@ -19984,7 +19989,7 @@ type GetObjectLegalHoldInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// The version ID of the object whose Legal Hold status you want to retrieve. // The version ID of the object whose Legal Hold status you want to retrieve.
@ -20119,7 +20124,7 @@ type GetObjectLockConfigurationInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// Bucket is a required field // Bucket is a required field
@ -20567,7 +20572,7 @@ type GetObjectRetentionInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// Bucket is a required field // Bucket is a required field
@ -20587,7 +20592,7 @@ type GetObjectRetentionInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// The version ID for the object whose retention settings you want to retrieve. // The version ID for the object whose retention settings you want to retrieve.
@ -20722,7 +20727,7 @@ type GetObjectTaggingInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -20750,7 +20755,7 @@ type GetObjectTaggingInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// The versionId of the object for which to get the tagging information. // The versionId of the object for which to get the tagging information.
@ -20910,7 +20915,7 @@ type GetObjectTorrentInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
} }
@ -21342,7 +21347,7 @@ type HeadBucketInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -21457,7 +21462,7 @@ type HeadObjectInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -21514,7 +21519,7 @@ type HeadObjectInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// Specifies the algorithm to use when encrypting the object (for example, // Specifies the algorithm to use when encrypting the object (for example,
@ -22417,7 +22422,7 @@ func (s *IntelligentTieringFilter) SetTag(v *Tag) *IntelligentTieringFilter {
// Specifies the inventory configuration for an Amazon S3 bucket. For more information, // Specifies the inventory configuration for an Amazon S3 bucket. For more information,
// see GET Bucket inventory (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETInventoryConfig.html) // see GET Bucket inventory (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETInventoryConfig.html)
// in the Amazon Simple Storage Service API Reference. // in the Amazon S3 API Reference.
type InventoryConfiguration struct { type InventoryConfiguration struct {
_ struct{} `type:"structure"` _ struct{} `type:"structure"`
@ -23987,7 +23992,7 @@ type ListMultipartUploadsInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -24627,7 +24632,7 @@ type ListObjectsInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -24921,7 +24926,7 @@ type ListObjectsV2Input struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -25157,7 +25162,7 @@ type ListObjectsV2Output struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -25273,7 +25278,7 @@ type ListPartsInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -25308,7 +25313,7 @@ type ListPartsInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// Upload ID identifying the multipart upload whose parts are being listed. // Upload ID identifying the multipart upload whose parts are being listed.
@ -25730,7 +25735,7 @@ func (s *Location) SetUserMetadata(v []*MetadataEntry) *Location {
// Describes where logs are stored and the prefix that Amazon S3 assigns to // Describes where logs are stored and the prefix that Amazon S3 assigns to
// all log object keys for a bucket. For more information, see PUT Bucket logging // all log object keys for a bucket. For more information, see PUT Bucket logging
// (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html) // (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html)
// in the Amazon Simple Storage Service API Reference. // in the Amazon S3 API Reference.
type LoggingEnabled struct { type LoggingEnabled struct {
_ struct{} `type:"structure"` _ struct{} `type:"structure"`
@ -25953,7 +25958,7 @@ func (s *MetricsAndOperator) SetTags(v []*Tag) *MetricsAndOperator {
// the existing metrics configuration. If you don't include the elements you // the existing metrics configuration. If you don't include the elements you
// want to keep, they are erased. For more information, see PUT Bucket metrics // want to keep, they are erased. For more information, see PUT Bucket metrics
// (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTMetricConfiguration.html) // (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTMetricConfiguration.html)
// in the Amazon Simple Storage Service API Reference. // in the Amazon S3 API Reference.
type MetricsConfiguration struct { type MetricsConfiguration struct {
_ struct{} `type:"structure"` _ struct{} `type:"structure"`
@ -26155,7 +26160,7 @@ type NoncurrentVersionExpiration struct {
// perform the associated action. For information about the noncurrent days // perform the associated action. For information about the noncurrent days
// calculations, see How Amazon S3 Calculates When an Object Became Noncurrent // calculations, see How Amazon S3 Calculates When an Object Became Noncurrent
// (https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html#non-current-days-calculations) // (https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html#non-current-days-calculations)
// in the Amazon Simple Storage Service Developer Guide. // in the Amazon S3 User Guide.
NoncurrentDays *int64 `type:"integer"` NoncurrentDays *int64 `type:"integer"`
} }
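A sketch of a lifecycle rule that uses NoncurrentDays as described above; the bucket name, rule ID, and prefix are placeholders:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1"))))

	// Expire noncurrent versions 30 days after they become noncurrent;
	// bucket name, rule ID, and prefix are placeholders.
	_, err := svc.PutBucketLifecycleConfiguration(&s3.PutBucketLifecycleConfigurationInput{
		Bucket: aws.String("my-versioned-bucket"),
		LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
			Rules: []*s3.LifecycleRule{{
				ID:     aws.String("expire-noncurrent"),
				Status: aws.String("Enabled"),
				Filter: &s3.LifecycleRuleFilter{Prefix: aws.String("logs/")},
				NoncurrentVersionExpiration: &s3.NoncurrentVersionExpiration{
					NoncurrentDays: aws.Int64(30),
				},
			}},
		},
	})
	if err != nil {
		fmt.Println("lifecycle update failed:", err)
	}
}
```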
@ -27336,7 +27341,10 @@ type PutBucketAclInput struct {
// Allows grantee to read the bucket ACL. // Allows grantee to read the bucket ACL.
GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"`
// Allows grantee to create, overwrite, and delete any object in the bucket. // Allows grantee to create new objects in the bucket.
//
// For the bucket and object owners of existing objects, also allows deletions
// and overwrites of those objects.
GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"` GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"`
// Allows grantee to write the ACL for the applicable bucket. // Allows grantee to write the ACL for the applicable bucket.
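A sketch of granting the write permission documented above via PutBucketAcl, assuming the `id="..."` grant-header syntax; the canonical user IDs are placeholders:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1"))))

	// Grant object creation (and, per the note above, overwrite/delete for
	// bucket and object owners) to one grantee; canonical IDs are placeholders.
	_, err := svc.PutBucketAcl(&s3.PutBucketAclInput{
		Bucket:     aws.String("my-bucket"),
		GrantWrite: aws.String(`id="placeholder-grantee-canonical-user-id"`),
		// Replacing an ACL replaces it wholesale, so keep the owner's access too.
		GrantFullControl: aws.String(`id="placeholder-owner-canonical-user-id"`),
	})
	if err != nil {
		fmt.Println("PutBucketAcl failed:", err)
	}
}
```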
@ -29693,7 +29701,7 @@ type PutObjectAclInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// Bucket is a required field // Bucket is a required field
@ -29720,7 +29728,10 @@ type PutObjectAclInput struct {
// This action is not supported by Amazon S3 on Outposts. // This action is not supported by Amazon S3 on Outposts.
GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"`
// Allows grantee to create, overwrite, and delete any object in the bucket. // Allows grantee to create new objects in the bucket.
//
// For the bucket and object owners of existing objects, also allows deletions
// and overwrites of those objects.
GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"` GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"`
// Allows grantee to write the ACL for the applicable bucket. // Allows grantee to write the ACL for the applicable bucket.
@ -29734,7 +29745,7 @@ type PutObjectAclInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -29752,7 +29763,7 @@ type PutObjectAclInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// VersionId used to reference a specific version of the object. // VersionId used to reference a specific version of the object.
@ -29944,7 +29955,7 @@ type PutObjectInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -30046,14 +30057,15 @@ type PutObjectInput struct {
// The Object Lock mode that you want to apply to this object. // The Object Lock mode that you want to apply to this object.
ObjectLockMode *string `location:"header" locationName:"x-amz-object-lock-mode" type:"string" enum:"ObjectLockMode"` ObjectLockMode *string `location:"header" locationName:"x-amz-object-lock-mode" type:"string" enum:"ObjectLockMode"`
// The date and time when you want this object's Object Lock to expire. // The date and time when you want this object's Object Lock to expire. Must
// be formatted as a timestamp parameter.
ObjectLockRetainUntilDate *time.Time `location:"header" locationName:"x-amz-object-lock-retain-until-date" type:"timestamp" timestampFormat:"iso8601"` ObjectLockRetainUntilDate *time.Time `location:"header" locationName:"x-amz-object-lock-retain-until-date" type:"timestamp" timestampFormat:"iso8601"`
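A sketch showing how the retain-until date is supplied as a Go time.Time, which the SDK serializes into the ISO 8601 timestamp the header expects; the bucket, key, and retention period are placeholders, and the bucket must have Object Lock enabled:

```go
package main

import (
	"fmt"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1"))))

	// Bucket, key, and the one-year retention below are placeholders; the
	// target bucket must have been created with Object Lock enabled.
	_, err := svc.PutObject(&s3.PutObjectInput{
		Bucket:                    aws.String("my-locked-bucket"),
		Key:                       aws.String("records/2021/q2.csv"),
		Body:                      strings.NewReader("id,value\n1,42\n"),
		ObjectLockMode:            aws.String(s3.ObjectLockModeGovernance),
		ObjectLockRetainUntilDate: aws.Time(time.Now().AddDate(1, 0, 0)),
	})
	if err != nil {
		fmt.Println("PutObject failed:", err)
	}
}
```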
// Confirms that the requester knows that they will be charged for the request. // Confirms that the requester knows that they will be charged for the request.
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// Specifies the algorithm to use when encrypting the object (for example, // Specifies the algorithm to use when encrypting the object (for example,
@ -30080,13 +30092,11 @@ type PutObjectInput struct {
// If x-amz-server-side-encryption is present and has the value of aws:kms, // If x-amz-server-side-encryption is present and has the value of aws:kms,
// this header specifies the ID of the AWS Key Management Service (AWS KMS) // this header specifies the ID of the AWS Key Management Service (AWS KMS)
// symmetric customer managed customer master key (CMK) that was used for // symmetric customer managed customer master key (CMK) that was used for
// the object.
//
// If the value of x-amz-server-side-encryption is aws:kms, this header specifies
// the ID of the symmetric customer managed AWS KMS CMK that will be used for
// the object. If you specify x-amz-server-side-encryption:aws:kms, but do not // the object. If you specify x-amz-server-side-encryption:aws:kms, but do not
// provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the AWS // provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the AWS
// managed CMK in AWS to protect the data. // managed CMK in AWS to protect the data. If the KMS key does not exist in
// the same account issuing the command, you must use the full ARN and not just
// the ID.
SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"` SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"`
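A sketch of the cross-account case called out above, passing the full key ARN in SSEKMSKeyId; the bucket and key are placeholders, and the ARN reuses the example key from the surrounding docs:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-2"))))

	// For a KMS key in another account, pass the full key ARN, not the bare ID;
	// bucket, key, and KMS ARN are placeholders.
	_, err := svc.PutObject(&s3.PutObjectInput{
		Bucket:               aws.String("my-bucket"),
		Key:                  aws.String("private/secrets.json"),
		Body:                 strings.NewReader(`{"ok":true}`),
		ServerSideEncryption: aws.String(s3.ServerSideEncryptionAwsKms),
		SSEKMSKeyId:          aws.String("arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"),
	})
	if err != nil {
		fmt.Println("PutObject failed:", err)
	}
}
```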
// The server-side encryption algorithm used when storing this object in Amazon // The server-side encryption algorithm used when storing this object in Amazon
@ -30098,7 +30108,7 @@ type PutObjectInput struct {
// Depending on performance needs, you can specify a different Storage Class. // Depending on performance needs, you can specify a different Storage Class.
// Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, // Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information,
// see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) // see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html)
// in the Amazon S3 Service Developer Guide. // in the Amazon S3 User Guide.
StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"`
// The tag-set for the object. The tag-set must be encoded as URL Query parameters. // The tag-set for the object. The tag-set must be encoded as URL Query parameters.
@ -30401,7 +30411,7 @@ type PutObjectLegalHoldInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// Bucket is a required field // Bucket is a required field
@ -30425,7 +30435,7 @@ type PutObjectLegalHoldInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// The version ID of the object that you want to place a Legal Hold on. // The version ID of the object that you want to place a Legal Hold on.
@ -30578,7 +30588,7 @@ type PutObjectLockConfigurationInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// A token to allow Object Lock to be enabled for an existing bucket. // A token to allow Object Lock to be enabled for an existing bucket.
@ -30831,7 +30841,7 @@ type PutObjectRetentionInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// Bucket is a required field // Bucket is a required field
@ -30855,7 +30865,7 @@ type PutObjectRetentionInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// The container element for the Object Retention configuration. // The container element for the Object Retention configuration.
@ -31007,7 +31017,7 @@ type PutObjectTaggingInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -31035,7 +31045,7 @@ type PutObjectTaggingInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// Container for the TagSet and Tag elements // Container for the TagSet and Tag elements
@ -31752,7 +31762,7 @@ type ReplicationRule struct {
// Optional configuration to replicate existing source bucket objects. For more // Optional configuration to replicate existing source bucket objects. For more
// information, see Replicating Existing Objects (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-what-is-isnot-replicated.html#existing-object-replication) // information, see Replicating Existing Objects (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-what-is-isnot-replicated.html#existing-object-replication)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
ExistingObjectReplication *ExistingObjectReplication `type:"structure"` ExistingObjectReplication *ExistingObjectReplication `type:"structure"`
// A filter that identifies the subset of objects to which the replication rule // A filter that identifies the subset of objects to which the replication rule
@ -32195,7 +32205,7 @@ type RestoreObjectInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -32223,7 +32233,7 @@ type RestoreObjectInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// Container for restore job parameters. // Container for restore job parameters.
@ -32540,8 +32550,8 @@ func (s *RoutingRule) SetRedirect(v *Redirect) *RoutingRule {
// Specifies lifecycle rules for an Amazon S3 bucket. For more information, // Specifies lifecycle rules for an Amazon S3 bucket. For more information,
// see Put Bucket Lifecycle Configuration (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlifecycle.html) // see Put Bucket Lifecycle Configuration (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlifecycle.html)
// in the Amazon Simple Storage Service API Reference. For examples, see Put // in the Amazon S3 API Reference. For examples, see Put Bucket Lifecycle Configuration
// Bucket Lifecycle Configuration Examples (https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html#API_PutBucketLifecycleConfiguration_Examples). // Examples (https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html#API_PutBucketLifecycleConfiguration_Examples).
type Rule struct { type Rule struct {
_ struct{} `type:"structure"` _ struct{} `type:"structure"`
@ -33287,17 +33297,17 @@ func (s *SelectParameters) SetOutputSerialization(v *OutputSerialization) *Selec
// bucket. If a PUT Object request doesn't specify any server-side encryption, // bucket. If a PUT Object request doesn't specify any server-side encryption,
// this default encryption will be applied. For more information, see PUT Bucket // this default encryption will be applied. For more information, see PUT Bucket
// encryption (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTencryption.html) // encryption (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTencryption.html)
// in the Amazon Simple Storage Service API Reference. // in the Amazon S3 API Reference.
type ServerSideEncryptionByDefault struct { type ServerSideEncryptionByDefault struct {
_ struct{} `type:"structure"` _ struct{} `type:"structure"`
// AWS Key Management Service (KMS) customer master key ID to use for the default // AWS Key Management Service (KMS) customer AWS KMS key ID to use for the default
// encryption. This parameter is allowed if and only if SSEAlgorithm is set // encryption. This parameter is allowed if and only if SSEAlgorithm is set
// to aws:kms. // to aws:kms.
// //
// You can specify the key ID or the Amazon Resource Name (ARN) of the CMK. // You can specify the key ID or the Amazon Resource Name (ARN) of the KMS key.
// However, if you are using encryption with cross-account operations, you must // However, if you are using encryption with cross-account operations, you must
// use a fully qualified CMK ARN. For more information, see Using encryption // use a fully qualified KMS key ARN. For more information, see Using encryption
// for cross-account operations (https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html#bucket-encryption-update-bucket-policy). // for cross-account operations (https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html#bucket-encryption-update-bucket-policy).
// //
// For example: // For example:
@ -33306,8 +33316,8 @@ type ServerSideEncryptionByDefault struct {
// //
// * Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab // * Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
// //
// Amazon S3 only supports symmetric CMKs and not asymmetric CMKs. For more // Amazon S3 only supports symmetric KMS keys and not asymmetric KMS keys. For
// information, see Using Symmetric and Asymmetric Keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) // more information, see Using symmetric and asymmetric keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html)
// in the AWS Key Management Service Developer Guide. // in the AWS Key Management Service Developer Guide.
KMSMasterKeyID *string `type:"string" sensitive:"true"` KMSMasterKeyID *string `type:"string" sensitive:"true"`
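A sketch of setting this default encryption via PutBucketEncryption with a full key ARN, per the cross-account note above; the bucket name and key ARN are placeholders:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-2"))))

	// Default all new objects to SSE-KMS with a symmetric key; use the full
	// key ARN for cross-account setups. Bucket and ARN are placeholders.
	_, err := svc.PutBucketEncryption(&s3.PutBucketEncryptionInput{
		Bucket: aws.String("my-bucket"),
		ServerSideEncryptionConfiguration: &s3.ServerSideEncryptionConfiguration{
			Rules: []*s3.ServerSideEncryptionRule{{
				ApplyServerSideEncryptionByDefault: &s3.ServerSideEncryptionByDefault{
					SSEAlgorithm:   aws.String(s3.ServerSideEncryptionAwsKms),
					KMSMasterKeyID: aws.String("arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"),
				},
			}},
		},
	})
	if err != nil {
		fmt.Println("PutBucketEncryption failed:", err)
	}
}
```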
@ -33531,7 +33541,7 @@ type SseKmsEncryptedObjects struct {
_ struct{} `type:"structure"` _ struct{} `type:"structure"`
// Specifies whether Amazon S3 replicates objects created with server-side encryption // Specifies whether Amazon S3 replicates objects created with server-side encryption
// using a customer master key (CMK) stored in AWS Key Management Service. // using an AWS KMS key stored in AWS Key Management Service.
// //
// Status is a required field // Status is a required field
Status *string `type:"string" required:"true" enum:"SseKmsEncryptedObjectsStatus"` Status *string `type:"string" required:"true" enum:"SseKmsEncryptedObjectsStatus"`
@ -34170,7 +34180,7 @@ type UploadPartCopyInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -34275,7 +34285,7 @@ type UploadPartCopyInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// Specifies the algorithm to use when encrypting the object (for example, // Specifies the algorithm to use when encrypting the object (for example,
@ -34612,7 +34622,7 @@ type UploadPartInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
@ -34655,7 +34665,7 @@ type UploadPartInput struct {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// Specifies the algorithm to use when encrypting the object (for example, // Specifies the algorithm to use when encrypting the object (for example,
@ -34919,7 +34929,7 @@ func (s *UploadPartOutput) SetServerSideEncryption(v string) *UploadPartOutput {
// Describes the versioning state of an Amazon S3 bucket. For more information, // Describes the versioning state of an Amazon S3 bucket. For more information,
// see PUT Bucket versioning (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTVersioningStatus.html) // see PUT Bucket versioning (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTVersioningStatus.html)
// in the Amazon Simple Storage Service API Reference. // in the Amazon S3 API Reference.
type VersioningConfiguration struct { type VersioningConfiguration struct {
_ struct{} `type:"structure"` _ struct{} `type:"structure"`
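A sketch of applying a VersioningConfiguration; the bucket name is a placeholder, and Status may also be "Suspended":

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1"))))

	// Turn versioning on for a placeholder bucket.
	_, err := svc.PutBucketVersioning(&s3.PutBucketVersioningInput{
		Bucket: aws.String("my-bucket"),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: aws.String(s3.BucketVersioningStatusEnabled),
		},
	})
	if err != nil {
		fmt.Println("PutBucketVersioning failed:", err)
	}
}
```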
@ -36028,6 +36038,9 @@ const (
// InventoryOptionalFieldIntelligentTieringAccessTier is a InventoryOptionalField enum value // InventoryOptionalFieldIntelligentTieringAccessTier is a InventoryOptionalField enum value
InventoryOptionalFieldIntelligentTieringAccessTier = "IntelligentTieringAccessTier" InventoryOptionalFieldIntelligentTieringAccessTier = "IntelligentTieringAccessTier"
// InventoryOptionalFieldBucketKeyStatus is a InventoryOptionalField enum value
InventoryOptionalFieldBucketKeyStatus = "BucketKeyStatus"
) )
// InventoryOptionalField_Values returns all elements of the InventoryOptionalField enum // InventoryOptionalField_Values returns all elements of the InventoryOptionalField enum
@ -36044,6 +36057,7 @@ func InventoryOptionalField_Values() []string {
InventoryOptionalFieldObjectLockMode, InventoryOptionalFieldObjectLockMode,
InventoryOptionalFieldObjectLockLegalHoldStatus, InventoryOptionalFieldObjectLockLegalHoldStatus,
InventoryOptionalFieldIntelligentTieringAccessTier, InventoryOptionalFieldIntelligentTieringAccessTier,
InventoryOptionalFieldBucketKeyStatus,
} }
} }
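A sketch requesting the newly added BucketKeyStatus optional field in a daily CSV inventory; the bucket names and configuration ID are placeholders:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1"))))

	// Include the new BucketKeyStatus column in each inventory report;
	// bucket names and the configuration ID are placeholders.
	_, err := svc.PutBucketInventoryConfiguration(&s3.PutBucketInventoryConfigurationInput{
		Bucket: aws.String("my-bucket"),
		Id:     aws.String("daily-inventory"),
		InventoryConfiguration: &s3.InventoryConfiguration{
			Id:                     aws.String("daily-inventory"),
			IsEnabled:              aws.Bool(true),
			IncludedObjectVersions: aws.String(s3.InventoryIncludedObjectVersionsCurrent),
			Schedule:               &s3.InventorySchedule{Frequency: aws.String(s3.InventoryFrequencyDaily)},
			OptionalFields: []*string{
				aws.String(s3.InventoryOptionalFieldBucketKeyStatus),
			},
			Destination: &s3.InventoryDestination{
				S3BucketDestination: &s3.InventoryS3BucketDestination{
					Bucket: aws.String("arn:aws:s3:::inventory-destination-bucket"),
					Format: aws.String(s3.InventoryFormatCsv),
				},
			},
		},
	})
	if err != nil {
		fmt.Println("inventory config failed:", err)
	}
}
```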
@ -36477,7 +36491,7 @@ func RequestCharged_Values() []string {
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
const ( const (
// RequestPayerRequester is a RequestPayer enum value // RequestPayerRequester is a RequestPayer enum value
RequestPayerRequester = "requester" RequestPayerRequester = "requester"


@ -155,8 +155,9 @@ func endpointHandler(req *request.Request) {
} }
case arn.OutpostAccessPointARN: case arn.OutpostAccessPointARN:
// outposts does not support FIPS regions // outposts does not support FIPS regions
if resReq.ResourceConfiguredForFIPS() { if resReq.UseFIPS() {
req.Error = s3shared.NewInvalidARNWithFIPSError(resource, nil) req.Error = s3shared.NewFIPSConfigurationError(resource, req.ClientInfo.PartitionID,
aws.StringValue(req.Config.Region), nil)
return return
} }
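A hedged sketch of the combination this guard rejects; the FIPS pseudo-region, account ID, and ARN below are assumptions for illustration only:

```go
import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// outpostWithFIPS resolves an Outposts access point ARN on a client that is
// pinned to a FIPS endpoint, which the hunk above now reports as a
// FIPS configuration error (Outposts does not support FIPS regions).
func outpostWithFIPS() {
	svc := s3.New(session.Must(session.NewSession(&aws.Config{
		Region: aws.String("fips-us-east-1"), // FIPS pseudo-region (assumed)
	})))
	_, err := svc.GetObject(&s3.GetObjectInput{
		Bucket: aws.String("arn:aws:s3-outposts:us-east-1:123456789012:outpost/op-01234567890123456/accesspoint/example-ap"),
		Key:    aws.String("example-key"),
	})
	fmt.Println(err) // expected: the FIPS configuration error introduced above
}
```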

View file

@ -29,7 +29,7 @@ type UploadInput struct {
// the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com.
// When using this action with an access point through the AWS SDKs, you provide // When using this action with an access point through the AWS SDKs, you provide
// the access point ARN in place of the bucket name. For more information about // the access point ARN in place of the bucket name. For more information about
// access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html)
// in the Amazon S3 User Guide. // in the Amazon S3 User Guide.
// //
// When using this action with Amazon S3 on Outposts, you must direct requests // When using this action with Amazon S3 on Outposts, you must direct requests
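A hedged sketch of the access-point usage described above, substituting the ARN for the bucket name; the account ID and names are placeholders:

```go
import (
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// uploadViaAccessPoint passes an access point ARN where a bucket name would
// normally go; the SDK resolves it to the access point hostname.
func uploadViaAccessPoint() error {
	uploader := s3manager.NewUploader(session.Must(session.NewSession()))
	_, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String("arn:aws:s3:us-west-2:123456789012:accesspoint/example-ap"),
		Key:    aws.String("example-key"),
		Body:   strings.NewReader("example body"),
	})
	return err
}
```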
@ -126,14 +126,15 @@ type UploadInput struct {
// The Object Lock mode that you want to apply to this object. // The Object Lock mode that you want to apply to this object.
ObjectLockMode *string `location:"header" locationName:"x-amz-object-lock-mode" type:"string" enum:"ObjectLockMode"` ObjectLockMode *string `location:"header" locationName:"x-amz-object-lock-mode" type:"string" enum:"ObjectLockMode"`
// The date and time when you want this object's Object Lock to expire. // The date and time when you want this object's Object Lock to expire. Must
// be formatted as a timestamp parameter.
ObjectLockRetainUntilDate *time.Time `location:"header" locationName:"x-amz-object-lock-retain-until-date" type:"timestamp" timestampFormat:"iso8601"` ObjectLockRetainUntilDate *time.Time `location:"header" locationName:"x-amz-object-lock-retain-until-date" type:"timestamp" timestampFormat:"iso8601"`
// Confirms that the requester knows that they will be charged for the request. // Confirms that the requester knows that they will be charged for the request.
// Bucket owners need not specify this parameter in their requests. For information // Bucket owners need not specify this parameter in their requests. For information
// about downloading objects from requester pays buckets, see Downloading Objects // about downloading objects from requester pays buckets, see Downloading Objects
// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) // in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
// in the Amazon S3 Developer Guide. // in the Amazon S3 User Guide.
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
// Specifies the algorithm to use when encrypting the object (for example, // Specifies the algorithm to use when encrypting the object (for example,
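As a hedged illustration of the Object Lock fields documented above (the bucket is a placeholder and must have Object Lock enabled for this to succeed):

```go
import (
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// uploadWithObjectLock sets a compliance-mode lock that expires in 30 days;
// the retain-until date is sent as the ISO 8601 timestamp described above.
func uploadWithObjectLock() error {
	uploader := s3manager.NewUploader(session.Must(session.NewSession()))
	_, err := uploader.Upload(&s3manager.UploadInput{
		Bucket:                    aws.String("example-locked-bucket"),
		Key:                       aws.String("example-key"),
		Body:                      strings.NewReader("example body"),
		ObjectLockMode:            aws.String(s3.ObjectLockModeCompliance),
		ObjectLockRetainUntilDate: aws.Time(time.Now().AddDate(0, 0, 30)),
	})
	return err
}
```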
@ -160,13 +161,11 @@ type UploadInput struct {
// If x-amz-server-side-encryption is present and has the value of aws:kms, // If x-amz-server-side-encryption is present and has the value of aws:kms,
// this header specifies the ID of the AWS Key Management Service (AWS KMS) // this header specifies the ID of the AWS Key Management Service (AWS KMS)
// symmetric customer managed customer master key (CMK) that was used for // symmetric customer managed customer master key (CMK) that was used for
// the object.
//
// If the value of x-amz-server-side-encryption is aws:kms, this header specifies
// the ID of the symmetric customer managed AWS KMS CMK that will be used for
// the object. If you specify x-amz-server-side-encryption:aws:kms, but do not // the object. If you specify x-amz-server-side-encryption:aws:kms, but do not
// provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the AWS // provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the AWS
// managed CMK in AWS to protect the data. // managed CMK in AWS to protect the data. If the KMS key does not exist in
// the same account issuing the command, you must use the full ARN and not just
// the ID.
SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"` SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"`
// The server-side encryption algorithm used when storing this object in Amazon // The server-side encryption algorithm used when storing this object in Amazon
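A hedged sketch of supplying this header through s3manager; the key ARN is a placeholder, and per the comment above the full ARN (not just the key ID) is required when the key belongs to another account:

```go
import (
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// uploadWithKMS encrypts the object with a customer managed CMK.
func uploadWithKMS() error {
	uploader := s3manager.NewUploader(session.Must(session.NewSession()))
	_, err := uploader.Upload(&s3manager.UploadInput{
		Bucket:               aws.String("example-bucket"),
		Key:                  aws.String("example-key"),
		Body:                 strings.NewReader("example body"),
		ServerSideEncryption: aws.String(s3.ServerSideEncryptionAwsKms),
		SSEKMSKeyId:          aws.String("arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"),
	})
	return err
}
```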
@ -178,7 +177,7 @@ type UploadInput struct {
// Depending on performance needs, you can specify a different Storage Class. // Depending on performance needs, you can specify a different Storage Class.
// Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, // Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information,
// see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) // see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html)
// in the Amazon S3 Service Developer Guide. // in the Amazon S3 User Guide.
StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"`
// The tag-set for the object. The tag-set must be encoded as URL Query parameters. // The tag-set for the object. The tag-set must be encoded as URL Query parameters.

View file

@ -127,14 +127,16 @@ fmt.Println("All text will now be bold magenta.")
There might be a case where you want to explicitly disable/enable color output. The There might be a case where you want to explicitly disable/enable color output. The
`go-isatty` package will automatically disable color output for non-tty output streams `go-isatty` package will automatically disable color output for non-tty output streams
(for example if the output were piped directly to `less`) (for example if the output were piped directly to `less`).
`Color` has support to disable/enable colors both globally and for single color The `color` package also disables color output if the [`NO_COLOR`](https://no-color.org) environment
definitions. For example suppose you have a CLI app and a `--no-color` bool flag. You variable is set (regardless of its value).
can easily disable the color output with:
`Color` has support to disable/enable colors programmatically both globally and
for single color definitions. For example suppose you have a CLI app and a
`--no-color` bool flag. You can easily disable the color output with:
```go ```go
var flagNoColor = flag.Bool("no-color", false, "Disable color output") var flagNoColor = flag.Bool("no-color", false, "Disable color output")
if *flagNoColor { if *flagNoColor {
@ -156,6 +158,10 @@ c.EnableColor()
c.Println("This prints again cyan...") c.Println("This prints again cyan...")
``` ```
## GitHub Actions
To output color in GitHub Actions (or other CI systems that support ANSI colors), make sure to set `color.NoColor = false` so that it bypasses the check for non-tty output streams.
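For instance, a minimal sketch (treating the presence of the standard `CI` environment variable as the signal is an assumption; verify your runner actually renders ANSI colors):

```go
import (
	"os"

	"github.com/fatih/color"
)

func init() {
	// CI runners are not TTYs, so the isatty check disables color by default;
	// override it when the environment is known to render ANSI escapes.
	if os.Getenv("CI") != "" {
		color.NoColor = false
	}
}
```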
## Todo ## Todo
* Save/Return previous values * Save/Return previous values
@ -170,4 +176,3 @@ c.Println("This prints again cyan...")
## License ## License
The MIT License (MIT) - see [`LICENSE.md`](https://github.com/fatih/color/blob/master/LICENSE.md) for more details The MIT License (MIT) - see [`LICENSE.md`](https://github.com/fatih/color/blob/master/LICENSE.md) for more details

View file

@ -15,9 +15,11 @@ import (
var ( var (
// NoColor defines if the output is colorized or not. It's dynamically set to // NoColor defines if the output is colorized or not. It's dynamically set to
// false or true based on the stdout's file descriptor referring to a terminal // false or true based on the stdout's file descriptor referring to a terminal
// or not. This is a global option and affects all colors. For more control // or not. It's also set to true if the NO_COLOR environment variable is
// over each color block use the methods DisableColor() individually. // set (regardless of its value). This is a global option and affects all
NoColor = os.Getenv("TERM") == "dumb" || // colors. For more control over each color block use the methods
// DisableColor() individually.
NoColor = noColorExists() || os.Getenv("TERM") == "dumb" ||
(!isatty.IsTerminal(os.Stdout.Fd()) && !isatty.IsCygwinTerminal(os.Stdout.Fd())) (!isatty.IsTerminal(os.Stdout.Fd()) && !isatty.IsCygwinTerminal(os.Stdout.Fd()))
// Output defines the standard output of the print functions. By default // Output defines the standard output of the print functions. By default
@ -33,6 +35,12 @@ var (
colorsCacheMu sync.Mutex // protects colorsCache colorsCacheMu sync.Mutex // protects colorsCache
) )
// noColorExists returns true if the environment variable NO_COLOR exists.
func noColorExists() bool {
_, exists := os.LookupEnv("NO_COLOR")
return exists
}
// Color defines a custom color object which is defined by SGR parameters. // Color defines a custom color object which is defined by SGR parameters.
type Color struct { type Color struct {
params []Attribute params []Attribute
@ -108,7 +116,14 @@ const (
// New returns a newly created color object. // New returns a newly created color object.
func New(value ...Attribute) *Color { func New(value ...Attribute) *Color {
c := &Color{params: make([]Attribute, 0)} c := &Color{
params: make([]Attribute, 0),
}
if noColorExists() {
c.noColor = boolPtr(true)
}
c.Add(value...) c.Add(value...)
return c return c
} }
@ -387,7 +402,7 @@ func (c *Color) EnableColor() {
} }
func (c *Color) isNoColorSet() bool { func (c *Color) isNoColorSet() bool {
// check first if we have user setted action // check first if we have user set action
if c.noColor != nil { if c.noColor != nil {
return *c.noColor return *c.noColor
} }
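Taken together, a hedged sketch of the semantics these hunks add: the mere presence of NO_COLOR, even with an empty value, disables color for newly constructed Color objects:

```go
import (
	"os"

	"github.com/fatih/color"
)

// noColorDemo exercises the per-Color path added above: New() consults
// noColorExists() at construction time, so the variable's presence (not its
// value) is what disables color.
func noColorDemo() {
	os.Setenv("NO_COLOR", "")
	c := color.New(color.FgCyan)
	c.Println("printed without ANSI escapes")
}
```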

View file

@ -118,6 +118,8 @@ the color output with:
color.NoColor = true // disables colorized output color.NoColor = true // disables colorized output
} }
You can also disable the color by setting the NO_COLOR environment variable to any value.
It also has support for single color definitions (local). You can It also has support for single color definitions (local). You can
disable/enable color output on the fly: disable/enable color output on the fly:

View file

@ -644,7 +644,7 @@ func (d *compressor) init(w io.Writer, level int) (err error) {
d.fill = (*compressor).fillBlock d.fill = (*compressor).fillBlock
d.step = (*compressor).store d.step = (*compressor).store
case level == ConstantCompression: case level == ConstantCompression:
d.w.logNewTablePenalty = 8 d.w.logNewTablePenalty = 10
d.window = make([]byte, 32<<10) d.window = make([]byte, 32<<10)
d.fill = (*compressor).fillBlock d.fill = (*compressor).fillBlock
d.step = (*compressor).storeHuff d.step = (*compressor).storeHuff
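For orientation, a sketch (assuming klauspost/compress/flate keeps the stdlib-style constructor): ConstantCompression selects the Huffman-only path whose table-switching cost the logNewTablePenalty value above tunes.

```go
import (
	"bytes"

	"github.com/klauspost/compress/flate"
)

// constantCompression compresses data on the Huffman-only (storeHuff) path.
func constantCompression(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	w, err := flate.NewWriter(&buf, flate.ConstantCompression)
	if err != nil {
		return nil, err
	}
	if _, err := w.Write(data); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```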

View file

@ -45,7 +45,7 @@ const (
bTableBits = 17 // Bits used in the big tables bTableBits = 17 // Bits used in the big tables
bTableSize = 1 << bTableBits // Size of the table bTableSize = 1 << bTableBits // Size of the table
allocHistory = maxStoreBlockSize * 10 // Size to preallocate for history. allocHistory = maxStoreBlockSize * 5 // Size to preallocate for history.
bufferReset = (1 << 31) - allocHistory - maxStoreBlockSize - 1 // Reset the buffer offset when reaching this. bufferReset = (1 << 31) - allocHistory - maxStoreBlockSize - 1 // Reset the buffer offset when reaching this.
) )

Some files were not shown because too many files have changed in this diff