Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

Aliaksandr Valialkin 2020-11-23 09:40:16 +02:00
commit f000a10cd0
20 changed files with 608 additions and 54 deletions


@@ -306,6 +306,8 @@ Currently the following [scrape_config](https://prometheus.io/docs/prometheus/la
* [dns_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config)
* [openstack_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#openstack_sd_config)
* [dockerswarm_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dockerswarm_sd_config)
* [eureka_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config)
Other `*_sd_config` types will be supported in the future.


@@ -169,6 +169,8 @@ The following scrape types in [scrape_config](https://prometheus.io/docs/prometh
[OpenStack identity API v3](https://docs.openstack.org/api-ref/identity/v3/) is supported only.
* `dockerswarm_sd_configs` - for scraping Docker Swarm targets.
See [dockerswarm_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dockerswarm_sd_config) for details.
* `eureka_sd_configs` - for scraping targets registered in [Netflix Eureka](https://github.com/Netflix/eureka).
See [eureka_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config) for details.
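A minimal scrape configuration sketch for this discovery type is shown below. The job name, server URL and relabeling rule are illustrative placeholders, not taken from the documentation; the meta label name follows the labels produced by the new `eureka` discovery package in this commit.

```yaml
scrape_configs:
  - job_name: eureka-apps                            # hypothetical job name
    eureka_sd_configs:
      - server: "http://localhost:8080/eureka/v2"    # Eureka REST API endpoint
    relabel_configs:
      # keep only instances that Eureka reports as UP
      - source_labels: [__meta_eureka_app_instance_status]
        regex: UP
        action: keep
```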
File feature requests at [our issue tracker](https://github.com/VictoriaMetrics/VictoriaMetrics/issues) if you need other service discovery mechanisms to be supported by `vmagent`.


@@ -6,7 +6,11 @@
```bash
snap install victoriametrics
```
* FEATURE: vmselect: add `-replicationFactor` command-line flag for reducing query duration when replication is enabled and some of the vmstorage nodes
are temporarily slow and/or temporarily unavailable. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/711
* FEATURE: vminsert: export `vm_rpc_vmstorage_is_reachable` metric, which can be used for monitoring reachability of vmstorage nodes from vminsert nodes.
* FEATURE: vmagent: add Netflix Eureka service discovery (aka [eureka_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config)).
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/851
* FEATURE: add `-loggerWarnsPerSecondLimit` command-line flag for rate limiting of WARN messages in logs. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/905
* FEATURE: apply `loggerErrorsPerSecondLimit` and `-loggerWarnsPerSecondLimit` rate limit per caller. I.e. log messages are suppressed if the same caller logs the same message
at the rate exceeding the given limit. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/905#issuecomment-729395855


@@ -335,8 +335,9 @@ In order to enable application-level replication, `-replicationFactor=N` command
This guarantees that all the data remains available for querying if up to `N-1` `vmstorage` nodes are unavailable.
For example, when `-replicationFactor=3` is passed to `vminsert`, then it replicates all the ingested data to 3 distinct `vmstorage` nodes.
When the replication is enabled, the `-replicationFactor=N` and `-dedup.minScrapeInterval=1ms` command-line flags must be passed to `vmselect` nodes.
The `-replicationFactor=N` flag improves query performance when some of the `vmstorage` nodes respond slowly and/or are temporarily unavailable.
The `-dedup.minScrapeInterval=1ms` flag de-duplicates replicated data during queries. It is OK if `-dedup.minScrapeInterval` exceeds 1ms
when [deduplication](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#deduplication) is used additionally to replication.
Note that [replication doesn't save from disaster](https://medium.com/@valyala/speeding-up-backups-for-big-time-series-databases-533c1a927883),


@@ -68,15 +68,15 @@ This functionality can be tried at [an editable Grafana dashboard](http://play-g
- `step()` function for returning the step in seconds used in the query.
- `start()` and `end()` functions for returning the start and end timestamps of the `[start ... end]` range used in the query.
- `integrate(m[d])` for returning integral over the given duration `d` for the given metric `m`.
- `ideriv(m[d])` - for calculating `instant` derivative for the metric `m` over the duration `d`.
- `deriv_fast(m[d])` - for calculating `fast` derivative for `m` based on the first and the last points from duration `d`.
- `running_` functions - `running_sum`, `running_min`, `running_max`, `running_avg` - for calculating [running values](https://en.wikipedia.org/wiki/Running_total) on the selected time range.
- `range_` functions - `range_sum`, `range_min`, `range_max`, `range_avg`, `range_first`, `range_last`, `range_median`, `range_quantile` - for calculating global value over the selected time range. Note that global value is based on calculated datapoints for the inner query. The calculated datapoints can differ from raw datapoints stored in the database. See [these docs](https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness) for details.
- `smooth_exponential(q, sf)` - smooths `q` using [exponential moving average](https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average) with the given smooth factor `sf`.
- `remove_resets(q)` - removes counter resets from `q`.
- `lag(m[d])` - returns lag between the current timestamp and the timestamp from the previous data point in `m` over `d`.
- `lifetime(m[d])` - returns lifetime of `m` over `d` in seconds. It is expected that `d` exceeds the lifetime of `m`.
- `scrape_interval(m[d])` - returns the average interval in seconds between data points of `m` over `d` aka `scrape interval`.
- Trigonometric functions - `sin(q)`, `cos(q)`, `asin(q)`, `acos(q)` and `pi()`.
- `range_over_time(m[d])` - returns value range for `m` over `d` time window, i.e. `max_over_time(m[d])-min_over_time(m[d])`.
- `median_over_time(m[d])` - calculates median values for `m` over `d` time window. Shorthand to `quantile_over_time(0.5, m[d])`.


@@ -306,6 +306,8 @@ Currently the following [scrape_config](https://prometheus.io/docs/prometheus/la
* [dns_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config)
* [openstack_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#openstack_sd_config)
* [dockerswarm_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dockerswarm_sd_config)
* [eureka_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config)
Other `*_sd_config` types will be supported in the future.


@@ -169,6 +169,8 @@ The following scrape types in [scrape_config](https://prometheus.io/docs/prometh
[OpenStack identity API v3](https://docs.openstack.org/api-ref/identity/v3/) is supported only.
* `dockerswarm_sd_configs` - for scraping Docker Swarm targets.
See [dockerswarm_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dockerswarm_sd_config) for details.
* `eureka_sd_configs` - for scraping targets registered in [Netflix Eureka](https://github.com/Netflix/eureka).
See [eureka_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config) for details.
File feature requests at [our issue tracker](https://github.com/VictoriaMetrics/VictoriaMetrics/issues) if you need other service discovery mechanisms to be supported by `vmagent`.

go.mod

@@ -10,7 +10,7 @@ require (
github.com/VictoriaMetrics/fasthttp v1.0.7
github.com/VictoriaMetrics/metrics v1.12.3
github.com/VictoriaMetrics/metricsql v0.7.2
github.com/aws/aws-sdk-go v1.35.33
github.com/cespare/xxhash/v2 v2.1.1
github.com/go-kit/kit v0.10.0
github.com/golang/snappy v0.0.2
@@ -24,7 +24,7 @@ require (
github.com/valyala/quicktemplate v1.6.3
golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68
golang.org/x/tools v0.0.0-20201121010211-780cb80bd7fb // indirect
google.golang.org/api v0.35.0
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20201119123407-9b1e624d6bc4 // indirect

go.sum

@@ -118,8 +118,9 @@ github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef/go.mod h1:W
github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQwij/eHl5CU=
github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.35.33 h1:8qPRZqCRok5i7VNN51k/Ky7CuyoXMdSs4mUfKyCqvPw=
github.com/aws/aws-sdk-go v1.35.33/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@@ -1027,8 +1028,8 @@ golang.org/x/tools v0.0.0-20200915173823-2db8f0ff891c/go.mod h1:z6u4i615ZeAfBE4X
golang.org/x/tools v0.0.0-20200918232735-d647fc253266/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201119054027-25dc3e1ccc3c/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201121010211-780cb80bd7fb h1:z5+u0pkAUPUWd3taoTialQ2JAMo4Wo1Z3L25U4ZV9r0=
golang.org/x/tools v0.0.0-20201121010211-780cb80bd7fb/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=


@@ -20,6 +20,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/dns"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/dockerswarm"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/ec2"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/eureka"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/gce" "github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/gce"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/kubernetes" "github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/kubernetes"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/openstack" "github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/openstack"
@ -76,6 +77,7 @@ type ScrapeConfig struct {
KubernetesSDConfigs []kubernetes.SDConfig `yaml:"kubernetes_sd_configs,omitempty"` KubernetesSDConfigs []kubernetes.SDConfig `yaml:"kubernetes_sd_configs,omitempty"`
OpenStackSDConfigs []openstack.SDConfig `yaml:"openstack_sd_configs,omitempty"` OpenStackSDConfigs []openstack.SDConfig `yaml:"openstack_sd_configs,omitempty"`
ConsulSDConfigs []consul.SDConfig `yaml:"consul_sd_configs,omitempty"` ConsulSDConfigs []consul.SDConfig `yaml:"consul_sd_configs,omitempty"`
EurekaSDConfigs []eureka.SDConfig `yaml:"eureka_sd_configs,omitempty"`
DockerSwarmConfigs []dockerswarm.SDConfig `yaml:"dockerswarm_sd_configs,omitempty"`
DNSSDConfigs []dns.SDConfig `yaml:"dns_sd_configs,omitempty"`
EC2SDConfigs []ec2.SDConfig `yaml:"ec2_sd_configs,omitempty"`
@@ -293,6 +295,34 @@ func (cfg *Config) getConsulSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
return dst
}
// getEurekaSDScrapeWork returns `eureka_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getEurekaSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
swsPrevByJob := getSWSByJob(prev)
dst := make([]ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst)
ok := true
for j := range sc.EurekaSDConfigs {
sdc := &sc.EurekaSDConfigs[j]
var okLocal bool
dst, okLocal = appendEurekaScrapeWork(dst, sdc, cfg.baseDir, sc.swc)
if ok {
ok = okLocal
}
}
if ok {
continue
}
swsPrev := swsPrevByJob[sc.swc.jobName]
if len(swsPrev) > 0 {
logger.Errorf("there were errors when discovering eureka targets for job %q, so preserving the previous targets", sc.swc.jobName)
dst = append(dst[:dstLen], swsPrev...)
}
}
return dst
}
// getDNSSDScrapeWork returns `dns_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getDNSSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
swsPrevByJob := getSWSByJob(prev)
@@ -537,6 +567,15 @@ func appendConsulScrapeWork(dst []ScrapeWork, sdc *consul.SDConfig, baseDir stri
return appendScrapeWorkForTargetLabels(dst, swc, targetLabels, "consul_sd_config"), true
}
func appendEurekaScrapeWork(dst []ScrapeWork, sdc *eureka.SDConfig, baseDir string, swc *scrapeWorkConfig) ([]ScrapeWork, bool) {
targetLabels, err := eureka.GetLabels(sdc, baseDir)
if err != nil {
logger.Errorf("error when discovering eureka targets for `job_name` %q: %s; skipping it", swc.jobName, err)
return dst, false
}
return appendScrapeWorkForTargetLabels(dst, swc, targetLabels, "eureka_sd_config"), true
}
func appendDNSScrapeWork(dst []ScrapeWork, sdc *dns.SDConfig, swc *scrapeWorkConfig) ([]ScrapeWork, bool) {
targetLabels, err := dns.GetLabels(sdc)
if err != nil {


@@ -0,0 +1,75 @@
package eureka
import (
"encoding/xml"
"fmt"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
)
var configMap = discoveryutils.NewConfigMap()
type apiConfig struct {
client *discoveryutils.Client
}
func newAPIConfig(sdc *SDConfig, baseDir string) (*apiConfig, error) {
token := ""
if sdc.Token != nil {
token = *sdc.Token
}
var ba *promauth.BasicAuthConfig
if len(sdc.Username) > 0 {
ba = &promauth.BasicAuthConfig{
Username: sdc.Username,
Password: sdc.Password,
}
token = ""
}
ac, err := promauth.NewConfig(baseDir, ba, token, "", sdc.TLSConfig)
if err != nil {
return nil, fmt.Errorf("cannot parse auth config: %w", err)
}
apiServer := sdc.Server
if apiServer == "" {
apiServer = "localhost:8080/eureka/v2"
}
if !strings.Contains(apiServer, "://") {
scheme := sdc.Scheme
if scheme == "" {
scheme = "http"
}
apiServer = scheme + "://" + apiServer
}
client, err := discoveryutils.NewClient(apiServer, ac)
if err != nil {
return nil, fmt.Errorf("cannot create HTTP client for %q: %w", apiServer, err)
}
cfg := &apiConfig{
client: client,
}
return cfg, nil
}
func getAPIConfig(sdc *SDConfig, baseDir string) (*apiConfig, error) {
v, err := configMap.Get(sdc, func() (interface{}, error) { return newAPIConfig(sdc, baseDir) })
if err != nil {
return nil, err
}
return v.(*apiConfig), nil
}
func getAPIResponse(cfg *apiConfig, path string) ([]byte, error) {
return cfg.client.GetAPIResponse(path)
}
func parseAPIResponse(data []byte) (*applications, error) {
var apps applications
if err := xml.Unmarshal(data, &apps); err != nil {
return nil, fmt.Errorf("failed parse eureka api response: %q, err: %w", data, err)
}
return &apps, nil
}


@@ -0,0 +1,107 @@
package eureka
import (
"reflect"
"testing"
)
func Test_parseAPIResponse(t *testing.T) {
type args struct {
data []byte
}
tests := []struct {
name string
args args
want *applications
wantErr bool
}{
{
name: "parse ok 1 app with instance",
args: args{
data: []byte(`<applications>
<versions__delta>1</versions__delta>
<apps__hashcode>UP_1_</apps__hashcode>
<application>
<name>HELLO-NETFLIX-OSS</name>
<instance>
<hostName>98de25ebef42</hostName>
<app>HELLO-NETFLIX-OSS</app>
<ipAddr>10.10.0.3</ipAddr>
<status>UP</status>
<overriddenstatus>UNKNOWN</overriddenstatus>
<port enabled="true">8080</port>
<securePort enabled="false">443</securePort>
<countryId>1</countryId>
<dataCenterInfo class="com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo">
<name>MyOwn</name>
</dataCenterInfo>
<leaseInfo>
<renewalIntervalInSecs>30</renewalIntervalInSecs>
<durationInSecs>90</durationInSecs>
<registrationTimestamp>1605757726477</registrationTimestamp>
<lastRenewalTimestamp>1605759135484</lastRenewalTimestamp>
<evictionTimestamp>0</evictionTimestamp>
<serviceUpTimestamp>1605757725913</serviceUpTimestamp>
</leaseInfo>
<metadata class="java.util.Collections$EmptyMap"/>
<appGroupName>UNKNOWN</appGroupName>
<homePageUrl>http://98de25ebef42:8080/</homePageUrl>
<statusPageUrl>http://98de25ebef42:8080/Status</statusPageUrl>
<healthCheckUrl>http://98de25ebef42:8080/healthcheck</healthCheckUrl>
<vipAddress>HELLO-NETFLIX-OSS</vipAddress>
<isCoordinatingDiscoveryServer>false</isCoordinatingDiscoveryServer>
<lastUpdatedTimestamp>1605757726478</lastUpdatedTimestamp>
<lastDirtyTimestamp>1605757725753</lastDirtyTimestamp>
<actionType>ADDED</actionType>
</instance>
</application>
</applications>`),
},
want: &applications{
Applications: []Application{
{
Name: "HELLO-NETFLIX-OSS",
Instances: []Instance{
{
HostName: "98de25ebef42",
HomePageURL: "http://98de25ebef42:8080/",
StatusPageURL: "http://98de25ebef42:8080/Status",
HealthCheckURL: "http://98de25ebef42:8080/healthcheck",
App: "HELLO-NETFLIX-OSS",
IPAddr: "10.10.0.3",
VipAddress: "HELLO-NETFLIX-OSS",
SecureVipAddress: "",
Status: "UP",
Port: Port{
Enabled: true,
Port: 8080,
},
SecurePort: Port{
Port: 443,
},
DataCenterInfo: DataCenterInfo{
Name: "MyOwn",
},
Metadata: MetaData{},
CountryID: 1,
InstanceID: "",
},
},
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := parseAPIResponse(tt.args.data)
if (err != nil) != tt.wantErr {
t.Errorf("parseAPIResponse() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("unxpected response for parseAPIResponse() \ngot = %v, \nwant %v", got, tt.want)
}
})
}
}


@@ -0,0 +1,150 @@
package eureka
import (
"encoding/xml"
"fmt"
"strconv"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
)
const appsAPIPath = "/apps"
// SDConfig represents service discovery config for eureka.
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka
type SDConfig struct {
Server string `yaml:"server,omitempty"`
Token *string `yaml:"token"`
Datacenter string `yaml:"datacenter"`
Scheme string `yaml:"scheme,omitempty"`
Username string `yaml:"username"`
Password string `yaml:"password"`
TLSConfig *promauth.TLSConfig `yaml:"tls_config,omitempty"`
// RefreshInterval time.Duration `yaml:"refresh_interval"`
// refresh_interval is obtained from the `-promscrape.eurekaSDCheckInterval` command-line option.
Port *int `yaml:"port,omitempty"`
}
type applications struct {
Applications []Application `xml:"application"`
}
// Application - eureka application https://github.com/Netflix/eureka/wiki/Eureka-REST-operations/
type Application struct {
Name string `xml:"name"`
Instances []Instance `xml:"instance"`
}
// Port - eureka instance port.
type Port struct {
Port int `xml:",chardata"`
Enabled bool `xml:"enabled,attr"`
}
// Instance - eureka instance https://github.com/Netflix/eureka/wiki/Eureka-REST-operations
type Instance struct {
HostName string `xml:"hostName"`
HomePageURL string `xml:"homePageUrl"`
StatusPageURL string `xml:"statusPageUrl"`
HealthCheckURL string `xml:"healthCheckUrl"`
App string `xml:"app"`
IPAddr string `xml:"ipAddr"`
VipAddress string `xml:"vipAddress"`
SecureVipAddress string `xml:"secureVipAddress"`
Status string `xml:"status"`
Port Port `xml:"port"`
SecurePort Port `xml:"securePort"`
DataCenterInfo DataCenterInfo `xml:"dataCenterInfo"`
Metadata MetaData `xml:"metadata"`
CountryID int `xml:"countryId"`
InstanceID string `xml:"instanceId"`
}
// MetaData - eureka objects metadata.
type MetaData struct {
Items []Tag `xml:",any"`
}
// Tag - eureka metadata tag - list of k/v values.
type Tag struct {
XMLName xml.Name
Content string `xml:",innerxml"`
}
// DataCenterInfo - eureka datacenter metadata.
type DataCenterInfo struct {
Name string `xml:"name"`
Metadata MetaData `xml:"metadata"`
}
// GetLabels returns Eureka labels according to sdc.
func GetLabels(sdc *SDConfig, baseDir string) ([]map[string]string, error) {
cfg, err := getAPIConfig(sdc, baseDir)
if err != nil {
return nil, fmt.Errorf("cannot get API config: %w", err)
}
data, err := getAPIResponse(cfg, appsAPIPath)
if err != nil {
return nil, err
}
apps, err := parseAPIResponse(data)
if err != nil {
return nil, err
}
port := 80
if sdc.Port != nil {
port = *sdc.Port
}
return addInstanceLabels(apps, port), nil
}
func addInstanceLabels(apps *applications, port int) []map[string]string {
var ms []map[string]string
for _, app := range apps.Applications {
for _, instance := range app.Instances {
instancePort := port
if instance.Port.Port != 0 {
instancePort = instance.Port.Port
}
targetAddress := discoveryutils.JoinHostPort(instance.HostName, instancePort)
m := map[string]string{
"__address__": targetAddress,
"instance": instance.InstanceID,
"__meta_eureka_app_name": app.Name,
"__meta_eureka_app_instance_hostname": instance.HostName,
"__meta_eureka_app_instance_homepage_url": instance.HomePageURL,
"__meta_eureka_app_instance_statuspage_url": instance.StatusPageURL,
"__meta_eureka_app_instance_healthcheck_url": instance.HealthCheckURL,
"__meta_eureka_app_instance_ip_addr": instance.IPAddr,
"__meta_eureka_app_instance_vip_address": instance.VipAddress,
"__meta_eureka_app_instance_secure_vip_address": instance.SecureVipAddress,
"__meta_eureka_app_instance_status": instance.Status,
"__meta_eureka_app_instance_country_id": strconv.Itoa(instance.CountryID),
"__meta_eureka_app_instance_id": instance.InstanceID,
}
if instance.Port.Port != 0 {
m["__meta_eureka_app_instance_port"] = strconv.Itoa(instance.Port.Port)
m["__meta_eureka_app_instance_port_enabled"] = strconv.FormatBool(instance.Port.Enabled)
}
if instance.SecurePort.Port != 0 {
m["__meta_eureka_app_instance_secure_port"] = strconv.Itoa(instance.SecurePort.Port)
m["__meta_eureka_app_instance_secure_port_enabled"] = strconv.FormatBool(instance.SecurePort.Enabled)
}
if len(instance.DataCenterInfo.Name) > 0 {
m["__meta_eureka_app_instance_datacenterinfo_name"] = instance.DataCenterInfo.Name
for _, tag := range instance.DataCenterInfo.Metadata.Items {
m["__meta_eureka_app_instance_datacenterinfo_metadata_"+discoveryutils.SanitizeLabelName(tag.XMLName.Local)] = tag.Content
}
}
for _, tag := range instance.Metadata.Items {
m["__meta_eureka_app_instance_metadata_"+discoveryutils.SanitizeLabelName(tag.XMLName.Local)] = tag.Content
}
ms = append(ms, m)
}
}
return ms
}
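For reference, the `SDConfig` fields above map to a YAML block of roughly the following shape. This is a sketch with placeholder values: when `server` is omitted it falls back to `localhost:8080/eureka/v2`, `scheme` is applied only when `server` carries no scheme, `username`/`password` take precedence over `token`, and `port` is the fallback for instances that report no port of their own (see `newAPIConfig` and `GetLabels`). The `tls_config` sub-key shown is assumed to follow the usual Prometheus-style fields of `promauth.TLSConfig`.

```yaml
eureka_sd_configs:
  - server: "http://eureka.internal:8080/eureka/v2"  # hypothetical Eureka endpoint
    scheme: http                                     # used only when server has no scheme
    username: monitor                                # hypothetical basic-auth credentials
    password: secret
    # token: "..."                                   # alternatively, a bearer token
    tls_config:
      insecure_skip_verify: true
    port: 9100                                       # fallback port for instances without one
```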


@@ -0,0 +1,84 @@
package eureka
import (
"reflect"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
)
func Test_addInstanceLabels(t *testing.T) {
type args struct {
applications *applications
port int
}
tests := []struct {
name string
args args
want [][]prompbmarshal.Label
}{
{
name: "1 application",
args: args{
port: 9100,
applications: &applications{
Applications: []Application{
{
Name: "test-app",
Instances: []Instance{
{
Status: "Ok",
HealthCheckURL: "some-url",
HomePageURL: "some-home-url",
StatusPageURL: "some-status-url",
HostName: "host-1",
IPAddr: "10.15.11.11",
CountryID: 5,
VipAddress: "10.15.11.11",
InstanceID: "some-id",
Metadata: MetaData{Items: []Tag{
{
Content: "value-1",
XMLName: struct{ Space, Local string }{Local: "key-1"},
},
}},
},
},
},
},
},
},
want: [][]prompbmarshal.Label{
discoveryutils.GetSortedLabels(map[string]string{
"__address__": "host-1:9100",
"instance": "some-id",
"__meta_eureka_app_instance_hostname": "host-1",
"__meta_eureka_app_name": "test-app",
"__meta_eureka_app_instance_healthcheck_url": "some-url",
"__meta_eureka_app_instance_ip_addr": "10.15.11.11",
"__meta_eureka_app_instance_vip_address": "10.15.11.11",
"__meta_eureka_app_instance_secure_vip_address": "",
"__meta_eureka_app_instance_country_id": "5",
"__meta_eureka_app_instance_homepage_url": "some-home-url",
"__meta_eureka_app_instance_statuspage_url": "some-status-url",
"__meta_eureka_app_instance_id": "some-id",
"__meta_eureka_app_instance_metadata_key_1": "value-1",
"__meta_eureka_app_instance_status": "Ok",
}),
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := addInstanceLabels(tt.args.applications, tt.args.port)
var sortedLabelss [][]prompbmarshal.Label
for _, labels := range got {
sortedLabelss = append(sortedLabelss, discoveryutils.GetSortedLabels(labels))
}
if !reflect.DeepEqual(sortedLabelss, tt.want) {
t.Fatalf("unexpected labels \ngot : %v, \nwant: %v", got, tt.want)
}
})
}
}


@@ -28,6 +28,9 @@ var (
consulSDCheckInterval = flag.Duration("promscrape.consulSDCheckInterval", 30*time.Second, "Interval for checking for changes in consul. "+
"This works only if `consul_sd_configs` is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config for details")
eurekaSDCheckInterval = flag.Duration("promscrape.eurekaSDCheckInterval", 30*time.Second, "Interval for checking for changes in eureka. "+
"This works only if `eureka_sd_configs` is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config for details")
dnsSDCheckInterval = flag.Duration("promscrape.dnsSDCheckInterval", 30*time.Second, "Interval for checking for changes in dns. "+
"This works only if `dns_sd_configs` is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config for details")
@@ -99,6 +102,7 @@ func runScraper(configFile string, pushData func(wr *prompbmarshal.WriteRequest)
scs.add("kubernetes_sd_configs", *kubernetesSDCheckInterval, func(cfg *Config, swsPrev []ScrapeWork) []ScrapeWork { return cfg.getKubernetesSDScrapeWork(swsPrev) })
scs.add("openstack_sd_configs", *openstackSDCheckInterval, func(cfg *Config, swsPrev []ScrapeWork) []ScrapeWork { return cfg.getOpenStackSDScrapeWork(swsPrev) })
scs.add("consul_sd_configs", *consulSDCheckInterval, func(cfg *Config, swsPrev []ScrapeWork) []ScrapeWork { return cfg.getConsulSDScrapeWork(swsPrev) })
scs.add("eureka_sd_configs", *eurekaSDCheckInterval, func(cfg *Config, swsPrev []ScrapeWork) []ScrapeWork { return cfg.getEurekaSDScrapeWork(swsPrev) })
scs.add("dns_sd_configs", *dnsSDCheckInterval, func(cfg *Config, swsPrev []ScrapeWork) []ScrapeWork { return cfg.getDNSSDScrapeWork(swsPrev) }) scs.add("dns_sd_configs", *dnsSDCheckInterval, func(cfg *Config, swsPrev []ScrapeWork) []ScrapeWork { return cfg.getDNSSDScrapeWork(swsPrev) })
scs.add("ec2_sd_configs", *ec2SDCheckInterval, func(cfg *Config, swsPrev []ScrapeWork) []ScrapeWork { return cfg.getEC2SDScrapeWork(swsPrev) }) scs.add("ec2_sd_configs", *ec2SDCheckInterval, func(cfg *Config, swsPrev []ScrapeWork) []ScrapeWork { return cfg.getEC2SDScrapeWork(swsPrev) })
scs.add("gce_sd_configs", *gceSDCheckInterval, func(cfg *Config, swsPrev []ScrapeWork) []ScrapeWork { return cfg.getGCESDScrapeWork(swsPrev) }) scs.add("gce_sd_configs", *gceSDCheckInterval, func(cfg *Config, swsPrev []ScrapeWork) []ScrapeWork { return cfg.getGCESDScrapeWork(swsPrev) })


@@ -2,6 +2,7 @@ package promscrape
import (
"context"
"fmt"
"net" "net"
"sync" "sync"
"sync/atomic" "sync/atomic"
@ -52,6 +53,9 @@ func statDial(addr string) (conn net.Conn, err error) {
dialsTotal.Inc() dialsTotal.Inc()
if err != nil { if err != nil {
dialErrors.Inc() dialErrors.Inc()
if !netutil.TCP6Enabled() {
err = fmt.Errorf("%w; try -enableTCP6 command-line flag", err)
}
return nil, err
}
conns.Inc()


@@ -1430,8 +1430,10 @@ var awsPartition = partition{
"ap-southeast-2": endpoint{},
"ca-central-1": endpoint{},
"eu-central-1": endpoint{},
"eu-north-1": endpoint{},
"eu-west-1": endpoint{}, "eu-west-1": endpoint{},
"eu-west-2": endpoint{}, "eu-west-2": endpoint{},
"eu-west-3": endpoint{},
"fips-us-east-1": endpoint{ "fips-us-east-1": endpoint{
Hostname: "cognito-identity-fips.us-east-1.amazonaws.com", Hostname: "cognito-identity-fips.us-east-1.amazonaws.com",
CredentialScope: credentialScope{ CredentialScope: credentialScope{
@ -1465,8 +1467,10 @@ var awsPartition = partition{
"ap-southeast-2": endpoint{}, "ap-southeast-2": endpoint{},
"ca-central-1": endpoint{}, "ca-central-1": endpoint{},
"eu-central-1": endpoint{}, "eu-central-1": endpoint{},
"eu-north-1": endpoint{},
"eu-west-1": endpoint{}, "eu-west-1": endpoint{},
"eu-west-2": endpoint{}, "eu-west-2": endpoint{},
"eu-west-3": endpoint{},
"fips-us-east-1": endpoint{ "fips-us-east-1": endpoint{
Hostname: "cognito-idp-fips.us-east-1.amazonaws.com", Hostname: "cognito-idp-fips.us-east-1.amazonaws.com",
CredentialScope: credentialScope{ CredentialScope: credentialScope{
@ -1576,6 +1580,7 @@ var awsPartition = partition{
"config": service{ "config": service{
Endpoints: endpoints{ Endpoints: endpoints{
"af-south-1": endpoint{},
"ap-east-1": endpoint{}, "ap-east-1": endpoint{},
"ap-northeast-1": endpoint{}, "ap-northeast-1": endpoint{},
"ap-northeast-2": endpoint{}, "ap-northeast-2": endpoint{},
@ -1585,15 +1590,40 @@ var awsPartition = partition{
"ca-central-1": endpoint{}, "ca-central-1": endpoint{},
"eu-central-1": endpoint{}, "eu-central-1": endpoint{},
"eu-north-1": endpoint{}, "eu-north-1": endpoint{},
"eu-south-1": endpoint{},
"eu-west-1": endpoint{}, "eu-west-1": endpoint{},
"eu-west-2": endpoint{}, "eu-west-2": endpoint{},
"eu-west-3": endpoint{}, "eu-west-3": endpoint{},
"me-south-1": endpoint{}, "fips-us-east-1": endpoint{
"sa-east-1": endpoint{}, Hostname: "config-fips.us-east-1.amazonaws.com",
"us-east-1": endpoint{}, CredentialScope: credentialScope{
"us-east-2": endpoint{}, Region: "us-east-1",
"us-west-1": endpoint{}, },
"us-west-2": endpoint{}, },
"fips-us-east-2": endpoint{
Hostname: "config-fips.us-east-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-2",
},
},
"fips-us-west-1": endpoint{
Hostname: "config-fips.us-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-1",
},
},
"fips-us-west-2": endpoint{
Hostname: "config-fips.us-west-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-2",
},
},
"me-south-1": endpoint{},
"sa-east-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-1": endpoint{},
"us-west-2": endpoint{},
},
},
"connect": service{
@@ -1888,6 +1918,12 @@ var awsPartition = partition{
Region: "eu-west-3",
},
},
"sa-east-1": endpoint{
Hostname: "rds.sa-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "sa-east-1",
},
},
"us-east-1": endpoint{ "us-east-1": endpoint{
Hostname: "rds.us-east-1.amazonaws.com", Hostname: "rds.us-east-1.amazonaws.com",
CredentialScope: credentialScope{ CredentialScope: credentialScope{
@ -7977,6 +8013,18 @@ var awsusgovPartition = partition{
"config": service{ "config": service{
Endpoints: endpoints{ Endpoints: endpoints{
"fips-us-gov-east-1": endpoint{
Hostname: "config.us-gov-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-east-1",
},
},
"fips-us-gov-west-1": endpoint{
Hostname: "config.us-gov-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-west-1",
},
},
"us-gov-east-1": endpoint{}, "us-gov-east-1": endpoint{},
"us-gov-west-1": endpoint{}, "us-gov-west-1": endpoint{},
}, },
@ -8349,12 +8397,25 @@ var awsusgovPartition = partition{
Protocols: []string{"https"}, Protocols: []string{"https"},
}, },
Endpoints: endpoints{ Endpoints: endpoints{
"dataplane-us-gov-east-1": endpoint{
Hostname: "greengrass-ats.iot.us-gov-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-east-1",
},
},
"dataplane-us-gov-west-1": endpoint{ "dataplane-us-gov-west-1": endpoint{
Hostname: "greengrass-ats.iot.us-gov-west-1.amazonaws.com", Hostname: "greengrass-ats.iot.us-gov-west-1.amazonaws.com",
CredentialScope: credentialScope{ CredentialScope: credentialScope{
Region: "us-gov-west-1", Region: "us-gov-west-1",
}, },
}, },
"fips-us-gov-east-1": endpoint{
Hostname: "greengrass-fips.us-gov-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-east-1",
},
},
"us-gov-east-1": endpoint{},
"us-gov-west-1": endpoint{ "us-gov-west-1": endpoint{
Hostname: "greengrass.us-gov-west-1.amazonaws.com", Hostname: "greengrass.us-gov-west-1.amazonaws.com",
CredentialScope: credentialScope{ CredentialScope: credentialScope{
@ -8370,6 +8431,12 @@ var awsusgovPartition = partition{
}, },
Endpoints: endpoints{ Endpoints: endpoints{
"us-gov-east-1": endpoint{}, "us-gov-east-1": endpoint{},
"us-gov-east-1-fips": endpoint{
Hostname: "guardduty.us-gov-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-east-1",
},
},
"us-gov-west-1": endpoint{}, "us-gov-west-1": endpoint{},
"us-gov-west-1-fips": endpoint{ "us-gov-west-1-fips": endpoint{
Hostname: "guardduty.us-gov-west-1.amazonaws.com", Hostname: "guardduty.us-gov-west-1.amazonaws.com",
@ -9692,6 +9759,12 @@ var awsisobPartition = partition{
"us-isob-east-1": endpoint{}, "us-isob-east-1": endpoint{},
}, },
}, },
"codedeploy": service{
Endpoints: endpoints{
"us-isob-east-1": endpoint{},
},
},
"config": service{ "config": service{
Endpoints: endpoints{ Endpoints: endpoints{


@@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go"
// SDKVersion is the version of this SDK
const SDKVersion = "1.35.33"


@@ -4513,10 +4513,10 @@ func (c *S3) GetObjectRequest(input *GetObjectInput) (req *request.Request, outp
// For more information about returning the ACL of an object, see GetObjectAcl
// (https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html).
//
// If the object you are retrieving is stored in the S3 Glacier or S3 Glacier
// Deep Archive storage class, or S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering
// Deep Archive tiers, before you can retrieve the object you must first restore
// a copy using RestoreObject (https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html).
// Otherwise, this operation returns an InvalidObjectStateError error. For information
// about restoring archived objects, see Restoring Archived Objects (https://docs.aws.amazon.com/AmazonS3/latest/dev/restoring-objects.html).
//
@@ -8346,6 +8346,10 @@ func (c *S3) PutBucketOwnershipControlsRequest(input *PutBucketOwnershipControls
output = &PutBucketOwnershipControlsOutput{}
req = c.newRequest(op, input, output)
req.Handlers.Unmarshal.Swap(restxml.UnmarshalHandler.Name, protocol.UnmarshalDiscardBodyHandler)
req.Handlers.Build.PushBackNamed(request.NamedHandler{
Name: "contentMd5Handler",
Fn: checksum.AddBodyContentMD5Handler,
})
return
}
@@ -8567,12 +8571,9 @@ func (c *S3) PutBucketReplicationRequest(input *PutBucketReplicationInput) (req
// When you add the Filter element in the configuration, you must also add the
// following elements: DeleteMarkerReplication, Status, and Priority.
//
// If you are using an earlier version of the replication configuration, Amazon
// S3 handles replication of delete markers differently. For more information,
// see Backward Compatibility (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-add-config.html#replication-backward-compat-considerations).
//
// For information about enabling versioning on a bucket, see Using Versioning
// (https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html).
@@ -9999,16 +10000,17 @@ func (c *S3) RestoreObjectRequest(input *RestoreObjectInput) (req *request.Reque
// * Amazon S3 accepts a select request even if the object has already been
// restored. A select request doesn't return error response 409.
//
// Restoring objects
//
// Objects that you archive to the S3 Glacier or S3 Glacier Deep Archive storage
// class, and S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep
// Archive tiers are not accessible in real time. For objects in Archive Access
// or Deep Archive Access tiers you must first initiate a restore request, and
// then wait until the object is moved into the Frequent Access tier. For objects
// in S3 Glacier or S3 Glacier Deep Archive storage classes you must first initiate
// a restore request, and then wait until a temporary copy of the object is
// available. To access an archived object, you must restore the object for
// the duration (number of days) that you specify.
//
// To restore a specific object version, you can provide a version ID. If you
// don't provide a version ID, Amazon S3 restores the current version.
@@ -10018,31 +10020,31 @@ func (c *S3) RestoreObjectRequest(input *RestoreObjectInput) (req *request.Reque
// request body:
//
// * Expedited - Expedited retrievals allow you to quickly access your data
// stored in the S3 Glacier storage class or S3 Intelligent-Tiering Archive
// tier when occasional urgent requests for a subset of archives are required.
// For all but the largest archived objects (250 MB+), data accessed using
// Expedited retrievals is typically made available within 1-5 minutes.
// Provisioned capacity ensures that retrieval capacity for Expedited retrievals
// is available when you need it. Expedited retrievals and provisioned capacity
// are not available for objects stored in the S3 Glacier Deep Archive storage
// class or S3 Intelligent-Tiering Deep Archive tier.
//
// * Standard - Standard retrievals allow you to access any of your archived
// objects within several hours. This is the default option for retrieval
// requests that do not specify the retrieval option. Standard retrievals
// typically finish within 3-5 hours for objects stored in the S3 Glacier
// storage class or S3 Intelligent-Tiering Archive tier. They typically finish
// within 12 hours for objects stored in the S3 Glacier Deep Archive storage
// class or S3 Intelligent-Tiering Deep Archive tier. Standard retrievals
// are free for objects stored in S3 Intelligent-Tiering.
//
// * Bulk - Bulk retrievals are the lowest-cost retrieval option in S3 Glacier,
// enabling you to retrieve large amounts, even petabytes, of data inexpensively.
// Bulk retrievals typically finish within 5-12 hours for objects stored
// in the S3 Glacier storage class or S3 Intelligent-Tiering Archive tier.
// They typically finish within 48 hours for objects stored in the S3 Glacier
// Deep Archive storage class or S3 Intelligent-Tiering Deep Archive tier.
// Bulk retrievals are free for objects stored in S3 Intelligent-Tiering.
//
// For more information about archive retrieval options and provisioned capacity
// for Expedited data access, see Restoring Archived Objects (https://docs.aws.amazon.com/AmazonS3/latest/dev/restoring-objects.html)
@@ -33478,9 +33480,11 @@ type Tiering struct {
// AccessTier is a required field
AccessTier *string `type:"string" required:"true" enum:"IntelligentTieringAccessTier"`
// The number of consecutive days of no access after which an object will be
// eligible to be transitioned to the corresponding tier. The minimum number
// of days specified for Archive Access tier must be at least 90 days and Deep
// Archive Access tier must be at least 180 days. The maximum can be up to 2
// years (730 days).
//
// Days is a required field
Days *int64 `type:"integer" required:"true"`

vendor/modules.txt

@@ -19,7 +19,7 @@ github.com/VictoriaMetrics/metrics
# github.com/VictoriaMetrics/metricsql v0.7.2
github.com/VictoriaMetrics/metricsql
github.com/VictoriaMetrics/metricsql/binaryop
# github.com/aws/aws-sdk-go v1.35.33
github.com/aws/aws-sdk-go/aws
github.com/aws/aws-sdk-go/aws/arn
github.com/aws/aws-sdk-go/aws/awserr
@@ -205,7 +205,7 @@ golang.org/x/text/secure/bidirule
golang.org/x/text/transform
golang.org/x/text/unicode/bidi
golang.org/x/text/unicode/norm
# golang.org/x/tools v0.0.0-20201121010211-780cb80bd7fb
golang.org/x/tools/cmd/goimports
golang.org/x/tools/go/ast/astutil
golang.org/x/tools/go/gcexportdata